# Data-Driven Output Regulation using Single-Gain Tuning Regulators

Liangjie Chen, John W. Simpson-Porco (2023-03-31, http://arxiv.org/abs/2304.00169v1)
###### Abstract
Current approaches to data-driven control are geared towards optimal performance, and often integrate aspects of machine learning and large-scale convex optimization, leading to complex implementations. In many applications, it may be preferable to sacrifice performance to obtain significantly simpler controller designs. We focus here on the problem of output regulation for linear systems, and revisit the so-called tuning regulator of E. J. Davison as a minimal-order data-driven design for tracking and disturbance rejection. Our proposed modification of the tuning regulator relies only on samples of the open-loop plant frequency response for design, is tuned online by adjusting a single scalar gain, and comes with a guaranteed margin of stability; this provides a faithful extension of tuning procedures for SISO integral controllers to MIMO systems with mixed constant and harmonic disturbances. The results are illustrated via application to a four-tank water control process.
## I Introduction
Many multivariable controller design methods require, as a starting point, knowledge of a reasonably accurate parametric system model. Presently however, motivated by high-complexity and/or large-scale control problems where building or fitting a parametric model is prohibitively expensive, interest in direct model-free or _data-driven_ multivariable controller design methods is steadily increasing. Established control problems such as the LQR problem [1, 2, 3, 4, 5] and MPC [6, 7] have recently been investigated from a learning-based or data-based perspective. This paper instead examines a data-driven design method for the classical output regulation problem [8, 9].
Popular technical approaches for data-based control include reinforcement learning [10], deep learning [5], and behavioural systems theory [2, 6, 7, 11]. Many of these approaches are geared towards obtaining optimal performance, and either incorporate machine learning modules or require the solution of other large convex optimization problems parameterized by collected data. For both practical and theoretical reasons, it is important to consider the possibility of trading off performance for increased simplicity in both design and implementation of data-driven controllers. Perhaps the simplest and most successful controller of all, the proportional-integral (PI) controller, may be tuned in a learning-based fashion via the Ziegler-Nichols procedure, and involves no optimization or machine learning. This suggests that revisiting traditional control paradigms from decades past will shed light on the complexity-performance trade-off for data-driven control.
Motivated precisely by the model-independence and online tuning success of PI control, E. J. Davison in 1976 introduced the _multivariable tuning regulator_, a minimal-order controller which solves the error-feedback output regulation problem for stable multi-input multi-output (MIMO) linear time-invariant (LTI) systems [12][13, Chp. 4]; see also [14, 15, 16, 17] for related work. The design asymptotically rejects any combination of polynomial and harmonic disturbances, and enjoys two remarkable features: (i) it is inherently data-driven, as the feedback gain matrices depend _only_ on frequency response data, and (ii) it can be systematically tuned online with a guarantee of closed-loop stability. Recently, similar properties have been established for integral control of nonlinear systems [18, 19, 20, 21], and have found use in online feedback-based optimization [22, 23, 24].
Like the PI controller, and in contrast to most current data-driven control approaches, the tuning regulator favors simplicity over optimality. Unfortunately, the original tuning regulator suffers from two major design drawbacks; as these are technical in nature and require further background on the tuning regulator concept to explain, we defer further discussion on them to Section II. Our objective here is to revisit the tuning regulator as a simple canonical data-driven design for MIMO output regulation, address several deficiencies in the original design methodology, and lay groundwork for further exploration of the complexity-performance trade-off curve in data-driven control.
_Contributions:_ We propose and analyze the _single-gain tuning regulator_ (SGTR), a simple data-driven output-regulating controller for stable LTI systems. The SGTR improves upon the original tuning regulator design in two ways. First, the SGTR design relies only on _open loop_ frequency response data from the plant (Theorem 2), which can be determined via simple experiments [12]; the original tuning regulator requires repeated reidentification during the tuning process. Second, the SGTR can be tuned online by adjusting a single scalar \(\epsilon>0\), while the original design of [12] requires tuning of (in general) many scalar gains. In contrast with [12], our design comes with a stability certificate (Lemma 3) that the dominant closed-loop eigenvalues have a stability margin of \(\mathcal{O}(\epsilon)\). In this sense, the design provides a true extension of classical data-driven SISO integral controller tuning procedures to the multivariable case with mixed constant and harmonic disturbances. We illustrate our design on a problem of disturbance rejection for the four-tank control process of [25].
_Notation:_ For a matrix \(A\in\mathbb{C}^{m\times n}\), \(\mathrm{conj}(A)\) denotes its element-wise complex conjugate, \(A^{*}\) denotes its Hermitian transpose, \(A^{\mathsf{T}}\) is its transpose without conjugation, \(\mathrm{vec}(A)\) denotes its column-wise vectorization, and \(A^{\dagger}\) denotes its
Moore-Penrose pseudoinverse. If \(A\) is square, then \(\mathrm{eig}(A)\) denotes the set of all distinct eigenvalues of \(A\). The symbol \(\otimes\) is the Kronecker product. Given row vectors \(x_{1},\ldots,x_{n}\) of size \(m\), \(\mathrm{col}(x_{1},\ldots,x_{n})\) denotes the associated \(n\times m\) stacked column matrix. Finally, we say \(F:\mathbb{R}_{\geq 0}\to\mathbb{R}^{n\times m}\) is \(\mathcal{O}(\epsilon)\) as \(\epsilon\to 0^{+}\) if \(\lim_{\epsilon\to 0^{+}}\|F(\epsilon)\|/\epsilon<\infty\).
## II Review: Linear Output Regulation
### _General Problem Formulation_
Consider the finite-dimensional causal LTI plant
\[\mathcal{P}:\quad\begin{aligned} \dot{x}&=Ax+Bu+B_{d}d, \qquad x(0)\in\mathbb{R}^{n}\\ e&=Cx+Du+D_{d}d\end{aligned} \tag{1}\]
with state \(x(t)\in\mathbb{R}^{n}\) and control input \(u(t)\in\mathbb{R}^{m}\). The output \(e\in\mathbb{R}^{r}\) with \(r\leq m\) is a set of measurable _error_ variables (e.g., tracking errors) to be regulated to zero. We assume throughout this work that \(A\) is Hurwitz stable, but that the matrices \((A,B,B_{d},C,D,D_{d})\) are otherwise unknown, as is the order \(n\) of the plant. The problem of regulating a stable but otherwise unknown system arises frequently in applications, and examples include large-scale power systems frequency control [26], active noise cancellation [27], and chemical process control [28].
The exogenous input signal \(d\in\mathbb{R}^{n_{d}}\) models disturbances to be rejected and reference signals to be tracked, and is assumed to be generated by the LTI _exosystem_
\[\dot{w}=Sw\,,\qquad d=Ew,\qquad w(0)\in\mathbb{R}^{n_{w}}, \tag{2}\]
with state \(w\in\mathbb{R}^{n_{w}}\). We assume here that \(S\in\mathbb{R}^{n_{w}\times n_{w}}\) has only semisimple eigenvalues on the imaginary axis, and let
\[\mu_{S}(s) =s(s^{2}+\omega_{1}^{2})\cdots(s^{2}+\omega_{\ell}^{2}) \tag{3a}\] \[=(s-\lambda_{0})(s-\lambda_{1})(s-\lambda_{1}^{*})\cdots \tag{3b}\]
denote the minimal polynomial of \(S\), of degree \(q\triangleq 2\ell+1\), where \(0<\omega_{1}<\omega_{2}<\cdots<\omega_{\ell}\). Note that \(\mu_{S}(s)\) has one root at \(\lambda_{0}\triangleq 0\), and for \(k\in\{1,\ldots,\ell\}\), one complex conjugate pair of roots at \(\lambda_{k}=\mathsf{j}\omega_{k}\) and \(\lambda_{k}^{*}=-\mathsf{j}\omega_{k}\). We let \(\hat{P}(s)=C(sI_{n}-A)^{-1}B+D\) denote the \(r\times m\) transfer matrix of (1) from \(u\) to \(e\). For \(k\in\{0,\ldots,\ell\}\), \(\hat{P}(\lambda_{k})\) is the frequency response of the plant \(\mathcal{P}\) on the \(u\mapsto e\) channel evaluated at the \(k\)th exosystem eigenvalue.
The problem of _error-feedback output regulation_ is that of designing a dynamic controller for (1), processing \(e(t)\) and producing \(u(t)\), such that the closed-loop system is internally exponentially stable when \(d\equiv 0\), and such that \(\lim_{t\to\infty}e(t)=0\) for all initial conditions and for all exogenous input signals \(d(t)\) generated by (2). Achieving regulation _robustly_ with respect to variations in the plant data requires a canonical two-piece construction of the controller, consisting of an error-processing subsystem (the _internal model_ or _servocompensator_), and a stabilizing compensator. Our preferred construction of the servocompensator follows [12], and has the advantage that the states of the resulting servocompensator are easily interpreted in relation to the exosystem dynamics. We refer the reader to [9, Chapter 4.4] for another common alternative construction of the servocompensator.
Based on (3), define \(\phi_{0}\triangleq 0\), \(g_{0}=1\), with \(\Phi_{0}\triangleq\phi_{0}\otimes I_{r}=\mathbb{0}_{r\times r}\), \(G_{0}\triangleq g_{0}\otimes I_{r}=I_{r}\). For \(k\in\{1,\ldots,\ell\}\), similarly define
\[\phi_{k}\triangleq\begin{bmatrix}0&1\\ -\omega_{k}^{2}&0\end{bmatrix},\quad g_{k}\triangleq\begin{bmatrix}0\\ 1\end{bmatrix} \tag{4}\]
with \(\Phi_{k}\triangleq\phi_{k}\otimes I_{r}\) and \(G_{k}\triangleq g_{k}\otimes I_{r}\). Finally, we let
\[\phi \triangleq\mathrm{blkdiag}(\phi_{0},\phi_{1},\ldots,\phi_{\ell}) \in\mathbb{R}^{q\times q},\qquad\Phi\triangleq\phi\otimes I_{r}\] \[g \triangleq\mathrm{col}(g_{0},g_{1},\ldots,g_{\ell})\in\mathbb{R}^{q}, \qquad\qquad\qquad G\triangleq g\otimes I_{r}.\]
By construction, \(\mathrm{eig}(\Phi)=\mathrm{eig}(S)\), and \((\Phi,G)\) is controllable. The servocompensator (i.e., internal model) is
\[\dot{\eta}=\Phi\eta+Ge,\qquad\eta(0)\in\mathbb{R}^{rq}, \tag{5}\]
which processes the error signal \(e\). Consider now the cascaded system consisting of (1) and (5), with input \(u\) and outputs \((e,\eta)\). The cascade is stabilizable and detectable -- and hence, there exists a compensator stabilizing the cascaded system and solving the regulation problem -- if and only if [29]
\[\mathrm{rank}\,\begin{bmatrix}A-\lambda I_{n}&B\\ C&D\end{bmatrix}=n+r,\quad\text{for all }\lambda\in\mathrm{eig}(S). \tag{6}\]
Since \(A\) is Hurwitz, by row operations (6) is equivalent to
\[\mathrm{rank}\,\hat{P}(\lambda)=r\qquad\text{for all }\lambda\in\mathrm{eig}(S). \tag{7}\]
The "non-resonance" condition (7) stipulates that the transmission zeros of the plant \(\mathcal{P}\) on the \(u\mapsto e\) channel are disjoint from the poles of the servocompensator.
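To make the construction of this subsection concrete, the following minimal Python sketch (our own illustration; the helper names and the frequency response samples `P_samples` are hypothetical) assembles \((\phi,g,\Phi,G)\) from the exosystem frequencies and checks the non-resonance condition (7).

```python
import numpy as np
from scipy.linalg import block_diag

def servocompensator(omegas, r):
    """Build (phi, g, Phi, G) from frequencies 0 < w_1 < ... < w_ell, per (4).

    The 1x1 zero block phi_0 handles constant disturbances; each omega gets
    a 2x2 resonant block. r is the number of regulated error outputs.
    """
    phi = block_diag(*([np.zeros((1, 1))] +
                       [np.array([[0.0, 1.0], [-w**2, 0.0]]) for w in omegas]))
    g = np.concatenate([np.ones(1)] +
                       [np.array([0.0, 1.0]) for _ in omegas]).reshape(-1, 1)
    # Phi = phi (x) I_r and G = g (x) I_r, as in Section II-A
    return phi, g, np.kron(phi, np.eye(r)), np.kron(g, np.eye(r))

def nonresonance_holds(P_samples, r, tol=1e-9):
    """Check (7): rank P_hat(lambda_k) = r at every exosystem eigenvalue.

    P_samples: [P_hat(0), P_hat(j*w_1), ..., P_hat(j*w_ell)] as r x m arrays.
    """
    return all(np.linalg.matrix_rank(P, tol=tol) == r for P in P_samples)
```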
### _Davison's Tuning Regulator_
In [12], E. J. Davison posed an important special case of the design approach in Section II-A, inspired by classical online tuning approaches for integral controllers. As motivation, consider the SISO integral controller \(\dot{\eta}=e\), \(u=-\epsilon\eta\), where \(\epsilon\in\mathbb{R}\) is the gain. For stable SISO LTI processes, the online tuning procedure is to select \(\epsilon\) such that \(\mathrm{sign}(\epsilon)=\mathrm{sign}(\hat{P}(0))\), and slowly increase the magnitude of \(\epsilon\) from a small value until the desired tracking performance is achieved.1 This approach has three key characteristics:
Footnote 1: If satisfactory performance cannot be achieved, then the plant requires additional stabilizing pre-compensation before tuning of the integral loop.
(C1) only the DC gain of the _open-loop plant_ is required;
(C2) a stable closed-loop system can be systematically obtained through tuning of a _single_ scalar parameter, and the dominant pole of the closed-loop system has a negative real part which is of \(\mathcal{O}(\epsilon)\) as \(\epsilon\to 0^{+}\)[19];
(C3) the control implementation is simple and practical.
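As a concrete illustration of (C1)-(C3), the short sketch below (plant numbers are made up for illustration) closes an integral loop around a stable SISO plant and shows the dominant closed-loop pole scaling like \(-\mathcal{O}(\epsilon)\) as the gain is increased from a small value.

```python
import numpy as np

# Toy stable SISO plant (hypothetical numbers): xdot = A x + B u, e = C x
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

P0 = (C @ np.linalg.solve(-A, B)).item()   # DC gain: P_hat(0) = -C A^{-1} B

# Integral loop eta_dot = e, u = -eps*eta, with sign(eps) = sign(P_hat(0))
for eps in (1e-3, 1e-2, 1e-1):
    k = np.sign(P0) * eps
    Acl = np.block([[A, -k * B], [C, np.zeros((1, 1))]])
    dom = max(np.linalg.eigvals(Acl).real)
    print(f"eps = {eps:.0e}: dominant pole real part = {dom:.5f}")
```

For small \(\epsilon\) the printed dominant pole sits at roughly \(-\epsilon\hat{P}(0)\), consistent with (C2).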
The so-called multivariable tuning regulator of [12] was Davison's effort to mirror the characteristics (C1)-(C3) in the MIMO LTI case, and for more general reference/disturbance signals generated by (2), with the following design procedure. For the exogenous input signals \(d\triangleq\sum_{i=0}^{\ell}d_{i}\), let \(d_{i}\) be a constant signal if \(i=0\), and a harmonic signal with frequency \(\omega_{i}\) otherwise. Then, we require an integral controller
\[\mathcal{C}_{0}:\quad\dot{\eta}_{0}=e,\quad u_{0}=-\epsilon_{0}F_{0}\eta_{0}\]
to reject \(d_{0}\), and a resonant controller
\[\mathcal{C}_{k}:\quad\dot{\eta}_{k}=\Phi_{k}\eta_{k}+G_{k}e,\quad u_{k}=-\epsilon_ {k}F_{k}\eta_{k}\]
to reject \(d_{k}\) for \(k\in\{1,\ldots,\ell\}\), where \((\Phi_{k},G_{k})\) are as defined in (4) and \(\epsilon_{k}\) are tuning parameters. The matrix gains \(F_{k}\) are constructed as follows.
**Lemma 1** (Lemma 3, [12]).: _Suppose that \(d=d_{0}\) and the DC gain satisfies \(\operatorname{rank}\hat{P}(0)=r\). If \(F_{0}=\hat{P}(0)^{\dagger}\), then there exists an \(\epsilon^{\star}\) such that for all \(\epsilon_{0}\in(0,\epsilon^{\star}]\), the closed-loop system with \(\mathcal{P}\) and \(\mathcal{C}_{0}\) is internally exponentially stable._
**Lemma 2** (Lemma 4, [12]).: _Suppose that \(d=d_{k}\) is harmonic with frequency \(\omega_{k}\) and the frequency response satisfies \(\operatorname{rank}\hat{P}(\mathbf{j}\omega_{k})=r\). Let \(F_{k}\triangleq\begin{bmatrix}F_{k}^{1}&F_{k}^{2}\end{bmatrix}\), where \(F_{k}^{1}\triangleq 2\omega_{k}\mathrm{Im}[\hat{P}(\mathbf{j}\omega_{k})]^{\dagger}\) and \(F_{k}^{2}\triangleq 2\mathrm{Re}[\hat{P}(\mathbf{j}\omega_{k})]^{\dagger}\). Then, there exists an \(\epsilon^{\star}\) such that for all \(\epsilon_{k}\in(0,\epsilon^{\star}]\), the closed-loop system with \(\mathcal{P}\) and \(\mathcal{C}_{k}\) is internally exponentially stable._
Lemma 1 allows us to construct the controller \(\mathcal{C}_{0}\) and tune \(\epsilon_{0}\) so that the closed-loop system performance is satisfactory, while temporarily disregarding the effects of the harmonic exogenous signals \(d_{1},\ldots,d_{\ell}\). Similarly, Lemma 2 allows us to construct \(\mathcal{C}_{k}\) and tune \(\epsilon_{k}\) while temporarily disregarding the effects of the other harmonic exogenous signals \(\left\{d_{i}\right\}_{i\neq k}\) and the constant \(d_{0}\). For more general exogenous disturbances with constant and \(\ell\) harmonic components, the design process requires the sequential application of Lemma 1, followed by Lemma 2 applied \(\ell\) times. For \(k\in\{1,\ldots,\ell\}\), constructing the gain matrix \(F_{k}\) thus requires the frequency response data of the closed-loop system consisting of \(\mathcal{P},\mathcal{C}_{0},\ldots,\mathcal{C}_{k-1}\). Evidently, as \(\ell\) increases, the implementation of Davison's regulator becomes more cumbersome, and we can conclude that it does not in fact possess the characteristics (C1)-(C3). Moreover, while the design procedure produces a stable closed-loop system, no results have been reported regarding the margin of stability.
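For reference, the gain constructions of Lemmas 1 and 2 can be written down directly from frequency response samples, as in the sketch below. We read the bracketing in Lemma 2 as the real and imaginary parts of the pseudoinverse; this reading is our assumption. Note that each harmonic sample must be re-measured from the partially closed loop, which is exactly the burden the SGTR removes.

```python
import numpy as np

def davison_gains(P0, harmonics):
    """Gain matrices for Davison's tuning regulator, per Lemmas 1-2.

    P0        : r x m DC gain P_hat(0)
    harmonics : list of (omega_k, Pk), where Pk = P_hat(j*omega_k) must be
                measured with the loops C_0, ..., C_{k-1} already closed.
    """
    F0 = np.linalg.pinv(P0)                   # Lemma 1: F_0 = P_hat(0)^+
    Fks = []
    for w, Pk in harmonics:
        Pk_pinv = np.linalg.pinv(Pk)          # our reading of Lemma 2:
        F1 = 2.0 * w * Pk_pinv.imag           # imaginary / real parts of
        F2 = 2.0 * Pk_pinv.real               # the pseudoinverse
        Fks.append(np.hstack([F1, F2]))       # F_k = [F_k^1  F_k^2]
    return F0, Fks
```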
## III The Single-Gain Tuning Regulator
### _Problem Statement_
Our objective is to remedy the tuning and commissioning issues present in the original tuning regulator proposal, resulting in a procedure more directly analogous to the SISO tuning of integral loops described in Section II-B. Thus, our new tuning procedure should (i) produce a direct mapping from (samples of) open-loop plant frequency response data to some fixed controller gains, and (ii) the number of online tuning parameters should be reduced to a single scalar \(\epsilon>0\). To this end, consider the _single-gain tuning regulator_ (SGTR)
\[\boxed{\dot{\eta}=\Phi\eta+Ge,\qquad u=-F(\epsilon)\eta,} \tag{8}\]
where \((\Phi,G)\) are as defined in Section II-A. The feedback gain \(F:\mathbb{R}_{\geq 0}\to\mathbb{R}^{m\times rq}\) belongs to the class \(\mathcal{F}\) of continuous mappings which are \(\mathcal{O}(\epsilon)\) as \(\epsilon\to 0^{+}\). In particular, note that \(F\) need not be a linear function of \(\epsilon\).
The architecture is shown in Figure 1. Combining the SGTR (8) with the plant \(\mathcal{P}\) in (1), the closed-loop system takes the form
\[\begin{bmatrix}\dot{x}\\ \dot{\eta}\end{bmatrix} =\underbrace{\begin{bmatrix}A&-BF(\epsilon)\\ GC&\Phi-GDF(\epsilon)\end{bmatrix}}_{\triangleq\mathcal{A}(\epsilon)}\begin{bmatrix} x\\ \eta\end{bmatrix}+\begin{bmatrix}B_{d}E\\ GD_{d}E\end{bmatrix}w \tag{9}\] \[e =\begin{bmatrix}C&-DF(\epsilon)\end{bmatrix}\begin{bmatrix}x\\ \eta\end{bmatrix}+\begin{bmatrix}D_{d}E\end{bmatrix}w\]
with \(w\) generated by (2). The presence of the servocompensator ensures that output regulation will be achieved if the closed-loop system is exponentially stable; we will omit the standard invariant subspace analysis [9]. The specific stability property we will seek to impose is inspired by the characteristic (C2) of SISO integral control loops, as discussed in Section II-B. We let \(\alpha(A)\triangleq\max_{\lambda\in\text{eig}(A)}\mathrm{Re}[\lambda]\) denote the spectral abscissa of a square matrix \(A\).
**Definition 1** (Low-gain Hurwitz stability).: A continuous matrix-valued function \(\mathcal{A}:\mathbb{R}_{\geq 0}\to\mathbb{R}^{n\times n}\) is _low-gain Hurwitz stable_ if there exist constants \(c,\epsilon^{\star}>0\) such that \(\alpha(\mathcal{A}(\epsilon))\leq-c\epsilon\) for all \(\epsilon\in[0,\epsilon^{\star})\).
Definition 1 is stronger than the Hurwitz stability of \(\mathcal{A}(\epsilon)\) for each \(\epsilon\in(0,\epsilon^{\star})\), as the dominant eigenvalue of \(\mathcal{A}(\epsilon)\) is additionally required to be \(\mathcal{O}(\epsilon)\) away from the imaginary axis for sufficiently small values of \(\epsilon\). A Lyapunov characterization of low-gain Hurwitz stability is provided in Appendix I. We can now state our design problem.
**Problem 1** (Single-gain tuning regulator).: Given the minimal polynomial (3) of the exosystem (2) and the plant frequency response samples \(\hat{P}(\lambda_{k})\) for \(k\in\{0,\ldots,\ell\}\), design a feedback \(F\in\mathcal{F}\) such that the closed-loop system matrix in (9) is low-gain Hurwitz stable.
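Definition 1 can be probed numerically: given any parameterized closed-loop matrix, one can check the spectral-abscissa bound \(\alpha(\mathcal{A}(\epsilon))\leq-c\epsilon\) on a grid of small gains. A minimal sketch follows; the constant \(c\) and the grid are arbitrary illustrative choices.

```python
import numpy as np

def spectral_abscissa(A):
    """alpha(A): the largest real part over eig(A)."""
    return float(np.max(np.linalg.eigvals(A).real))

def low_gain_check(A_of_eps, eps_grid, c=0.5):
    """Empirical check of Definition 1: alpha(A(eps)) <= -c*eps on a grid."""
    return all(spectral_abscissa(A_of_eps(e)) <= -c * e for e in eps_grid)
```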
### _Stability Analysis_
We begin by developing a reduced characterization for low-gain Hurwitz stability of the closed-loop matrix \(\mathcal{A}(\epsilon)\) from (9). Given \(A\) and \(\Phi\), define the Sylvester operator
\[\mathrm{Syl}_{\Phi,A}:\mathbb{R}^{n\times rq}\to\mathbb{R}^{n\times rq},\ \mathrm{Syl}_{\Phi,A}(\Pi)\triangleq\Pi\Phi-A\Pi. \tag{10}\]
Since \(\mathrm{eig}(\Phi)\) are imaginary and \(A\) is Hurwitz, it is a standard result that \(\mathrm{Syl}_{\Phi,A}\) is bijective [9, Cor. A.1], and we define an associated linear operator \(\mathscr{L}:\mathbb{R}^{m\times rq}\to\mathbb{R}^{r\times rq}\) as
\[\mathscr{L}(F)=C\,\mathrm{Syl}_{\Phi,A}^{-1}(BF)+DF. \tag{11}\]
Put differently, \(\mathscr{L}(F)=C\Pi+DF\), where \(\Pi\in\mathbb{R}^{n\times rq}\) is the unique solution to \(\Pi\Phi-A\Pi=BF\). We call \(\mathscr{L}\) the _steady-state loop gain (SSLG)_ operator of the system (1) with respect to the exosystem (2). Our first key result is the following.
Fig. 1: The single-gain tuning regulator.
**Lemma 3** (**Reduction of SGTR stability analysis problem)**.: _The closed-loop system matrix \(\mathcal{A}(\epsilon)\) in (9) is low-gain Hurwitz stable if_
\[\mathcal{A}_{\rm red}(\epsilon)\triangleq\Phi-G\mathscr{L}(F(\epsilon))\]
_is low-gain Hurwitz stable._
_Proof:_ Consider the Sylvester equation
\[\mathrm{Syl}_{\Phi,A}(\Pi)=\Pi\Phi-A\Pi=-\tfrac{1}{\epsilon}BF(\epsilon), \tag{12}\]
where \(\Pi\in\mathbb{R}^{n\times rq}\), with unique solution
\[\Pi(\epsilon)=\mathrm{Syl}_{\Phi,A}^{-1}(-\tfrac{1}{\epsilon}BF(\epsilon))=- \tfrac{1}{\epsilon}\mathrm{Syl}_{\Phi,A}^{-1}(BF(\epsilon)),\]
where in the second equality we have used linearity of \(\mathrm{Syl}_{\Phi,A}\). Since \(\epsilon\mapsto F(\epsilon)\) is continuous and is \(\mathcal{O}(\epsilon)\) as \(\epsilon\to 0^{+}\), we conclude that \(\epsilon\mapsto\Pi(\epsilon)\) is continuous and is \(\mathcal{O}(1)\) as \(\epsilon\to 0^{+}\). Consider now the transformation matrix
\[\mathscr{T}=\begin{bmatrix}I_{n}&-\epsilon\Pi(\epsilon)\\ 0&\epsilon I_{rq}\end{bmatrix},\quad\mathscr{T}^{-1}=\begin{bmatrix}I_{n}& \Pi(\epsilon)\\ 0&\frac{1}{\epsilon}I_{rq}\end{bmatrix},\]
which defines the change of state variables \((x^{\prime},\eta^{\prime})=(x-\epsilon\Pi(\epsilon)\eta,\epsilon\eta)\). Direct computation shows that the system matrix \(\mathcal{A}(\epsilon)\) from (9) transforms into
\[\tilde{\mathcal{A}}(\epsilon)=\begin{bmatrix}A-\epsilon\Pi(\epsilon)GC&\Pi( \epsilon)G\mathscr{L}(F(\epsilon))\\ \epsilon GC&\mathcal{A}_{\rm red}(\epsilon)\end{bmatrix}.\]
As \(\mathcal{A}_{\rm red}(\epsilon)\) is low-gain Hurwitz stable, by Proposition 2 there exist constants \(c_{2}^{\prime},\epsilon^{*}>0\) and a continuous \(P_{2}(\epsilon)\) which for all \(\epsilon\in(0,\epsilon^{*})\) satisfies \(0\prec P_{2}(\epsilon)\preceq c_{2}^{\prime}I_{rq}\) and
\[\mathcal{A}_{\rm red}(\epsilon)^{\mathsf{T}}P_{2}(\epsilon)+P_{2}(\epsilon) \mathcal{A}_{\rm red}(\epsilon)=-\epsilon I_{rq}.\]
Additionally, since \(A\) is Hurwitz, there exists \(P_{1}\succ 0\) such that \(A^{\mathsf{T}}P_{1}+P_{1}A=-I_{n}\). Let \(\tilde{\mathcal{P}}(\epsilon)=\mathrm{blkdiag}(P_{1},P_{2}(\epsilon))\). Direct calculation then shows that for all \(\epsilon\in(0,\epsilon^{*})\),
\[\tilde{\mathcal{A}}(\epsilon)^{\mathsf{T}}\tilde{\mathcal{P}}(\epsilon)+ \tilde{\mathcal{P}}(\epsilon)\tilde{\mathcal{A}}(\epsilon)=-\underbrace{ \begin{bmatrix}I_{n}+M_{1}(\epsilon)&-M_{2}(\epsilon)\\ -M_{2}(\epsilon)^{\mathsf{T}}&\epsilon I_{rq}\end{bmatrix}}_{\tilde{ \mathcal{Q}}(\epsilon)},\]
where
\[M_{1}(\epsilon) =\epsilon P_{1}\Pi(\epsilon)GC+\epsilon(P_{1}\Pi(\epsilon)GC)^{ \mathsf{T}}\] \[M_{2}(\epsilon) =P_{1}\Pi(\epsilon)G\mathscr{L}(F(\epsilon))+\epsilon(P_{2}GC)^{ \mathsf{T}}\]
are both \(\mathcal{O}(\epsilon)\) as \(\epsilon\to 0^{+}\). It is clear that \(\tilde{\mathcal{Q}}(\epsilon)\) is continuous, so again by Proposition 2, it remains only to show that \(\lambda_{\min}(\tilde{\mathcal{Q}}(\epsilon))\) is positive and \(\mathcal{O}(\epsilon)\) as \(\epsilon\to 0^{+}\). A direct argument using Schur complements quickly establishes this, and hence \(\tilde{\mathcal{A}}(\epsilon)\) is low-gain Hurwitz stable. \(\square\)
Lemma 3 is effectively a time-scale separation result: for small \(\epsilon\), the closed-loop eigenvalues decouple into two groups, the first being the eigenvalues of the open-loop plant, and the second being the eigenvalues of \(\mathcal{A}_{\rm red}(\epsilon)\); see [30, Chapter 2] for detailed discussion on this point. The result implies that we may focus our attention on low-gain stability of the matrix \(\mathcal{A}_{\rm red}(\epsilon)=\Phi-G\mathscr{L}(F(\epsilon))\).2 We next consider what properties the pair \((\Phi,G)\) should possess; see Definition 2.
Footnote 2: The converse result of Lemma 3 in fact holds as well, but will be of no use for us here.
**Definition 2** (**Low-gain stabilizability)**.: Let \(\mathsf{A}\in\mathbb{R}^{n\times n}\) and \(\mathsf{B}\in\mathbb{R}^{n\times m}\). The pair \((\mathsf{A},\mathsf{B})\) is _low-gain stabilizable_ if there exists a feedback \(\mathsf{K}\in\mathcal{F}\) such that \(\mathsf{A}-\mathsf{BK}(\epsilon)\) is low-gain Hurwitz stable.
This stabilizability property is characterized as follows; the proof can be found in the appendix.
**Lemma 4** (**Low-gain stabilizability)**.: _A pair \((\mathsf{A},\mathsf{B})\) is low-gain stabilizable if and only if \((\mathsf{A},\mathsf{B})\) is stabilizable and all eigenvalues of \(\mathsf{A}\) are contained in \(\overline{\mathbb{C}}^{-}\).3_
Footnote 3: This property is also known in the literature as _asymptotic null-controllability with bounded controls_ (ANCBC); see [31].
By the constructions in Section II-A, \((\phi,g)\) is controllable and all eigenvalues of \(\phi\) are on the imaginary axis. We therefore conclude from Lemma 4 that \((\phi,g)\) is low-gain stabilizable. It follows that there always exists \(z\in\mathcal{F}\) such that \(\phi-gz(\epsilon)\) is low-gain Hurwitz stable, and with \(Z(\epsilon)=z(\epsilon)\otimes I_{r}\), we immediately have that \(\Phi-GZ(\epsilon)\) is low-gain Hurwitz stable. Comparing to \(\mathcal{A}_{\rm red}(\epsilon)\) as defined in Lemma 3, we see that the question now becomes whether the linear operator equation \(Z(\epsilon)=\mathscr{L}(F(\epsilon))\) can be solved for \(F(\epsilon)\). If so, then we can first compute \(Z(\epsilon)\), then recover a feedback gain \(F(\epsilon)\) for use in (8). We summarize in a lemma, and move next to the study of the SSLG operator \(\mathscr{L}\).
**Lemma 5**.: _Let \(Z\in\mathcal{F}\) be such that \(\Phi-GZ(\epsilon)\) is low-gain Hurwitz stable. If \(Z(\epsilon)=\mathscr{L}(F(\epsilon))\) is solvable for \(F\in\mathcal{F}\), then \(F\) solves the SGTR problem._
### _Computation of the Controller Gain \(F(\epsilon)\)_
Given \(Z\), our goal is now to solve the operator equation \(Z=\mathscr{L}(F)\) for \(F\); indeed, this is always possible under (7).
**Proposition 1** (**Surjectivity of SSLG operator)**.: _The SSLG operator \(\mathscr{L}\) defined in (11) is surjective if and only if (7) holds. If in addition \(r=m\), then \(\mathscr{L}\) is invertible._
_Proof:_ The operator \(\mathscr{L}\) is surjective if for any \(Z\in\mathbb{R}^{r\times rq}\) there exists a solution \((\Pi,H)\) to
\[\begin{split}\Pi\Phi&=A\Pi+BH\\ Z&=C\Pi+DH\end{split} \tag{13}\]
which we can equivalently write as
\[\begin{bmatrix}0\\ Z\end{bmatrix}=\begin{bmatrix}A&B\\ C&D\end{bmatrix}\begin{bmatrix}\Pi\\ H\end{bmatrix}+\begin{bmatrix}I&0\\ 0&0\end{bmatrix}\begin{bmatrix}\Pi\\ H\end{bmatrix}(-\Phi).\]
This is a Hautus equation, and since \(\mathrm{eig}(S)=\mathrm{eig}(\Phi)\), [9, Thm. A.1] now yields that \(\mathscr{L}\) is surjective if and only if (6) holds, and hence if and only if (7) holds. \(\square\)
Combining all results thus far, we can state the following.
**Theorem 1** (**Solvability of SGTR design problem)**.: _Problem 1 is solvable if (7) holds._
While the definition of \(\mathscr{L}\) in (11) suggests that \(\mathscr{L}\) depends on _all the plant data_\((A,B,C,D)\), we will demonstrate that, in fact, \(\mathscr{L}\) depends _only_ on the frequency response samples
\(\hat{P}(\lambda_{k})\) and on the eigendecomposition of \(\phi\); this enables _gain computation based only on frequency response data_.
Recall from (3) that \(\{\lambda_{0},\lambda_{1},\lambda_{1}^{*},\ldots,\lambda_{\ell},\lambda_{\ell}^{*}\}\) denote the roots of the minimal polynomial \(\mu_{S}=\mu_{\Phi}=\mu_{\phi}\), and \(q=1+2\ell\). Since the roots are all simple and distinct, \(\phi\) admits an eigen-decomposition \(\phi=V\Lambda V^{-1}\) with eigenvalues \(\Lambda\triangleq\operatorname{diag}(\lambda_{0},\lambda_{1},\lambda_{1}^{*}, \ldots,\lambda_{\ell},\lambda_{\ell}^{*})\) and right and left eigenvectors
\[V \triangleq\begin{bmatrix}v_{0}&v_{1}&\operatorname{conj}(v_{1})& \cdots&v_{\ell}&\operatorname{conj}(v_{\ell})\end{bmatrix}\] \[V^{-1} =W \triangleq\operatorname{col}(w_{0},w_{1},\operatorname{conj}(w_{1 }),\ldots,w_{\ell},\operatorname{conj}(w_{\ell}))\]
with \(\{v_{k}\}\) being column vectors and \(\{w_{k}\}\) being row vectors. Finally, define the matrices
\[X_{k}\triangleq v_{k}w_{k},\quad\boldsymbol{X}_{k}\triangleq X_{k}\otimes I_{ r},\quad k\in\{0,\ldots,\ell\}, \tag{14}\]
and we can state the key result.
**Theorem 2** (**Characterization of SSLG operator)**.: _The SSLG operator \(\mathscr{L}\) defined in (11) is equivalently given by_
\[\mathscr{L}(F)=\hat{P}(0)F\boldsymbol{X}_{0}+2\sum\nolimits_{k=1}^{\ell} \operatorname{Re}\{\hat{P}(\mathbf{j}\omega_{k})F\boldsymbol{X}_{k}\}. \tag{15}\]
Proof.: To begin, recall the Sylvester operator defined in (10); we claim that
\[\operatorname{Syl}_{\Phi,A}^{-1}(BH)=\int_{0}^{\infty}e^{A\tau}BHe^{-\Phi\tau} \operatorname{d}\!\tau. \tag{16}\]
Since \(A\) is Hurwitz and all eigenvalues of \(\Phi\) have zero real part, all elements of \(t\mapsto e^{At}\) decay exponentially, while all elements of \(t\mapsto e^{-\Phi t}\) grow at most polynomially; it follows that all elements of \(t\mapsto e^{At}BHe^{-\Phi t}\) tend to zero exponentially fast as \(t\to\infty\), and hence the right-hand side of (16) is well-defined. Setting \(\Pi=\operatorname{Syl}_{\Phi,A}^{-1}(BH)\) we verify that
\[\Pi\Phi-A\Pi =\int_{0}^{\infty}\left(e^{A\tau}BHe^{-\Phi\tau}\Phi-Ae^{A\tau}BHe^{-\Phi\tau}\right)\operatorname{d}\!\tau\] \[=-\int_{0}^{\infty}\frac{\operatorname{d}}{\operatorname{d}\!\tau}\left(e^{A\tau}BHe^{-\Phi\tau}\right)\operatorname{d}\!\tau=BH,\]
where we have again used that \(A\) is Hurwitz. Since \(\operatorname{Syl}_{\Phi,A}\) is bijective, (16) is indeed the unique solution of \(\operatorname{Syl}_{\Phi,A}(\Pi)=BH\). Inserting (16) into (11), we find that
\[\mathscr{L}(H)=\int_{0^{-}}^{\infty}P(\tau)He^{-\Phi\tau}\operatorname{d}\!\tau, \tag{17}\]
where \(P(t)\triangleq Ce^{At}B\mathds{1}_{\geq 0}(t)+\delta(t)D\) is the causal impulse response matrix of the plant \(\mathcal{P}\) from input \(u\) to output \(e\). The integral (17) can be evaluated via Laplace transform theory and contour integration. Define the matrix-valued signal \(M(t)\triangleq P(t)He^{-\Phi t}\). The signal \(t\mapsto\int_{0^{-}}^{t}M(\tau)\operatorname{d}\!\tau\) has a Laplace transform \(\frac{1}{s}\hat{M}(s)\) which is analytic in \(\mathbb{C}_{>0}\), and the signal has a well-defined limit as \(t\to\infty\). Thus, by the final value theorem,
\[\mathscr{L}(H)=\lim_{t\to\infty}\int_{0^{-}}^{t}M(\tau)\operatorname{d}\!\tau =\lim_{s\to 0^{+}}\hat{M}(s). \tag{18}\]
Since \(M(t)\) is the product of the two causal signals \(P(t)\) and \(He^{-\Phi t}\mathds{1}_{\geq 0}\), it follows by convolution (e.g., [32, Section 11-5]) and taking the limit as \(s\to 0^{+}\) that
\[\mathscr{L}(H)=\frac{-1}{2\pi\mathbf{j}}\int_{\sigma-\mathbf{j}\infty}^{\sigma +\mathbf{j}\infty}\underbrace{\hat{P}(\xi)H(\xi I_{rq}-\Phi)^{-1}}_{\triangleq \Gamma(\xi)}\operatorname{d}\!\xi, \tag{19}\]
where \(\sigma\in\mathbb{R}\) is chosen such that the vertical line \(\{\sigma+\mathbf{j}\omega\mid\omega\in\mathbb{R}\}\) is contained within the region of convergence of the transform \(\hat{P}\) of \(P\), which is a superset of \(\{s\in\mathbb{C}\mid\operatorname{Re}(s)>\alpha(A)\}\). Select \(\sigma\in(\alpha(A),0)\), and consider the closed clockwise-oriented contour in \(\mathbb{C}\) consisting of the vertical line \(\{\sigma+\mathbf{j}\omega\mid\omega\in\mathbb{R}\}\) completed by an infinite semi-circle to the right of the vertical line. As the contour encloses only the singularities of \((\xi I_{rq}-\Phi)^{-1}\), by Jordan's Lemma and the Residue Theorem we obtain
\[\begin{split}\mathscr{L}(H)&=\operatorname{\mathsf{ Res}}_{\lambda_{0}}\{\Gamma(\xi)\}\\ &\quad+\sum_{k=1}^{\ell}\left(\operatorname{\mathsf{Res}}_{ \lambda_{k}}\{\Gamma(\xi)\}+\operatorname{\mathsf{Res}}_{\lambda_{k}^{*}}\{ \Gamma(\xi)\}\right),\end{split} \tag{20}\]
where \(\operatorname{\mathsf{Res}}_{\lambda}\{\cdot\}\) evaluates the residue at \(\xi=\lambda\). Note that
\[\begin{split}(\xi I_{rq}-\Phi)^{-1}&=V(\xi I_{q}- \Lambda)^{-1}V^{-1}\otimes I_{r}\\ &=\frac{1}{\xi-\lambda_{0}}\boldsymbol{X}_{0}+\sum_{k=1}^{\ell} \left[\frac{\boldsymbol{X}_{k}}{\xi-\lambda_{k}}+\frac{\operatorname{conj}( \boldsymbol{X}_{k})}{\xi-\lambda_{k}^{*}}\right].\end{split}\]
Since all poles of \(\hat{P}(\xi)\) belong to \(\mathbb{C}^{-}\) and all eigenvalues of \(\Phi\) are simple, the residues evaluate to
\[\begin{split}\operatorname{\mathsf{Res}}_{\lambda_{0}}\{\Gamma( \xi)\}&=\hat{P}(0)H\boldsymbol{X}_{0}\\ \operatorname{\mathsf{Res}}_{\lambda_{k}}\{\Gamma(\xi)\}& =\hat{P}(\lambda_{k})H\boldsymbol{X}_{k}\\ \operatorname{\mathsf{Res}}_{\lambda_{k}^{*}}\{\Gamma(\xi)\}& =\operatorname{conj}(\hat{P}(\lambda_{k})H\boldsymbol{X}_{k}),\end{split}\]
where we have used the fact that \(\hat{P}(\lambda_{k}^{*})=\operatorname{conj}(\hat{P}(\lambda_{k}))\). This leads immediately to (15) by combining terms.
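Theorem 2 is what makes the design data-driven: (15) evaluates \(\mathscr{L}(F)\) from frequency response samples alone. The sketch below (helper names are ours) builds the \(X_{k}\) of (14) block by block, using the fact that each block \(\phi_{k}\) has right eigenvector \((1,\mathsf{j}\omega_{k})^{\mathsf{T}}\) at \(\lambda_{k}=\mathsf{j}\omega_{k}\), and then evaluates (15). When the plant matrices happen to be known, the result can be cross-checked against the definition (11) via a Sylvester solve, as noted in the trailing comment.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def X_matrices(omegas):
    """The X_k = v_k w_k of (14), built blockwise for the phi of Section II-A."""
    q = 1 + 2 * len(omegas)
    X0 = np.zeros((q, q), dtype=complex)
    X0[0, 0] = 1.0                                 # integrator mode, lambda_0 = 0
    Xs = [X0]
    for i, w in enumerate(omegas):
        v = np.array([[1.0], [1j * w]])            # phi_k v = (j w) v
        wl = 0.5 * np.array([[1.0, 1.0 / (1j * w)]])   # left eigenvector, wl v = 1
        Xk = np.zeros((q, q), dtype=complex)
        j0 = 1 + 2 * i
        Xk[j0:j0 + 2, j0:j0 + 2] = v @ wl
        Xs.append(Xk)
    return Xs

def sslg(F, P0, P_harm, omegas, r):
    """Evaluate L(F) via (15), using only frequency response samples."""
    Xs = X_matrices(omegas)
    L = np.real(P0 @ F @ np.kron(Xs[0], np.eye(r)))
    for Pk, Xk in zip(P_harm, Xs[1:]):
        L += 2.0 * np.real(Pk @ F @ np.kron(Xk, np.eye(r)))
    return L

# Cross-check against (11) when (A, B, C, D) are known:
#   Pi solves Pi Phi - A Pi = B F, i.e. (-A) Pi + Pi Phi = B F, so with
#   Phi = kron(phi, I_r): Pi = solve_sylvester(-A, Phi, B @ F), and
#   L(F) = C @ Pi + D @ F should match sslg(F, ...) to numerical precision.
```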
### _SGTR Design Procedure_
The following three-step procedure provides a constructive solution to the design of the single-gain tuning regulator (8):
1. Design \(Z\in\mathcal{F}\) such that \(\Phi-GZ(\epsilon)\) is low-gain Hurwitz stable; such a design always exists, since \((\Phi,G)\) is low-gain stabilizable (Lemma 4). A particular approach which results in a low-dimensional design problem is to design \(z\in\mathcal{F}\) such that \(\phi-gz(\epsilon)\) is low-gain Hurwitz stable, and then simply set \(Z(\epsilon)=z(\epsilon)\otimes I_{r}\).
2. Solve the linear matrix equation \(\mathscr{L}(F(\epsilon))=Z(\epsilon)\) for \(F\in\mathcal{F}\); a solution always exists since \(\mathscr{L}\) is surjective (Proposition 1). The solution can be computed, for instance, by solving the vectorized linear system \(\boldsymbol{M}\operatorname{vec}(F(\epsilon))=\operatorname{vec}(Z(\epsilon))\), where \[\boldsymbol{M}=\boldsymbol{X}_{0}^{\mathsf{T}}\otimes\hat{P}(0)+2\sum \nolimits_{k=1}^{\ell}\operatorname{Re}\{\boldsymbol{X}_{k}^{\mathsf{T}} \otimes\hat{P}(\mathbf{j}\omega_{k})\}.\]
3. Tune \(\epsilon>0\) for performance. By construction, there exists \(\epsilon^{*}>0\) such that the closed-loop system will be internally exponentially stable for all \(\epsilon\in(0,\epsilon^{*})\).
As an example of what could be done in step 1) above, one could pursue a pole-placement design by specifying that \(\phi-gz(\epsilon)\) have a characteristic polynomial of the form
\[(s+k_{0}\epsilon)(s+k_{1}\epsilon+\mathbf{j}\omega_{1})(s+k_{1}\epsilon- \mathbf{j}\omega_{1})\cdots \tag{21}\]
for some positive constants \(k_{0},k_{1}\), and so on. This leads to an _a priori_ specified pattern of \(\mathcal{O}(\epsilon)\) eigenvalues for the reduced system matrix \(\mathcal{A}_{\mathrm{red}}(\epsilon)\) of Lemma 3. Explicit computation of feedback gains achieving desired pole placements, along with optimal designs, will be pursued in a future publication.
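Under the assumption \(r=m\) (so that \(\mathscr{L}\) is invertible by Proposition 1, making the vectorized system square), steps 1) and 2) can be sketched as follows, reusing the `servocompensator` and `X_matrices` helpers from the earlier sketches; the pole pattern follows (21), and the placement itself uses `scipy.signal.place_poles`.

```python
import numpy as np
from scipy.signal import place_poles

def sgtr_gain(eps, omegas, P0, P_harm, ks, r):
    """Steps 1)-2): compute F(eps) from open-loop frequency response samples.

    ks = [k_0, k_1, ..., k_ell] are the pole-placement constants in (21).
    Assumes r = m, so the matrix M below is square and invertible.
    """
    # Step 1: z(eps) such that eig(phi - g z) follows the pattern (21)
    phi, g, _, _ = servocompensator(omegas, 1)
    poles = [-ks[0] * eps]
    for k, w in zip(ks[1:], omegas):
        poles += [-k * eps + 1j * w, -k * eps - 1j * w]
    z = place_poles(phi, g, np.array(poles)).gain_matrix     # 1 x q
    Z = np.kron(z, np.eye(r))                                # r x rq

    # Step 2: solve M vec(F) = vec(Z), with M as given in step 2)
    Xs = X_matrices(omegas)
    Ir = np.eye(r)
    M = np.real(np.kron(np.kron(Xs[0], Ir).T, P0))
    for Pk, Xk in zip(P_harm, Xs[1:]):
        M += 2.0 * np.real(np.kron(np.kron(Xk, Ir).T, Pk))
    vecF = np.linalg.solve(M, Z.flatten(order="F"))          # column-wise vec
    return vecF.reshape(Z.shape, order="F")                  # F(eps), m x rq
```

Step 3) then reduces to slowly increasing the single scalar \(\epsilon\) online, exactly as in the SISO integral case.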
## IV Application: Four Tank Process
To illustrate the ideas and to compare our single-gain regulator to Davison's original design, we consider a problem of disturbance rejection on the four-tank system of [25], linearized at the operating point with minimum phase characteristics. The control inputs \(u(t)\in\mathbb{R}^{2}\) are the voltages applied to the two pumps, and the error output \(e(t)\in\mathbb{R}^{2}\) is the deviation in tank water level measurements from their respective operating points. The exosystem is assumed to generate a constant disturbance and harmonic disturbances at \(\omega_{1}=0.01\) rad/s and \(\omega_{2}=0.1\) rad/s; together, these model an external flow of water into tank 4. The minimal polynomial of \(S\) therefore has the form \(\mu_{S}(s)=s(s^{2}+\omega_{1}^{2})(s^{2}+\omega_{2}^{2})\).
For the SGTR design, we follow the steps laid out in Section III-D. The intermediate feedback variable \(z(\epsilon)\) is computed via pole placement such that \(\mathrm{eig}(\phi-gz(\epsilon))=\{-k_{1}\epsilon,-k_{2}\epsilon\pm\mathbf{j} \omega_{1},-k_{3}\epsilon\pm\mathbf{j}\omega_{2}\}\). We then solve for \(F(\epsilon)\) as described in the second step. Based on the trade-off between the overshoot and oscillatory behavior of the error trajectories, we select \(\epsilon=0.0002\), and \(k_{1}=6.21,k_{2}=28.42,k_{3}=30.77\). For Davison's design, we follow the sequential procedure outlined in Section II-B, including recomputation of frequency response data after each loop is closed; we emphasize that the SGTR _does not_ require this extra burden. The tuned values obtained are \(\epsilon_{0}=\epsilon_{1}=0.0025\) and \(\epsilon_{2}=0.003\).
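For concreteness, this four-tank design would correspond to a call like the following to the earlier `sgtr_gain` sketch; the call is illustrative only, and the samples `P0`, `P_jw1`, `P_jw2` would come from plant frequency response experiments.

```python
# Hypothetical usage; P0, P_jw1, P_jw2 are measured 2x2 frequency response samples.
F = sgtr_gain(eps=2e-4, omegas=[0.01, 0.1], P0=P0,
              P_harm=[P_jw1, P_jw2], ks=[6.21, 28.42, 30.77], r=2)
```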
Figure 2 shows the external flow disturbance \(d(t)\) that enters the upper tank, and the closed-loop error trajectories in the two lower tanks. Our best tuning of Davison's design leads to a slower dominant mode, as can be seen in the error response for tank 2. The sequential tuning of \(\{\epsilon_{0},\epsilon_{1},\epsilon_{2}\}\) in Davison's design leads to unnecessary performance trade-offs; for example, an increased value \(\epsilon_{0}=0.005\) provides improved step disturbance rejection, but results in a smaller range of stabilizing selections for \(\epsilon_{1}\) and worse harmonic disturbance rejection. Figure 3 shows4 the closed-loop system eigenvalues close to the imaginary axis for the two designs; the dominant eigenvalue with the SGTR is further to the left in \(\mathbb{C}^{-}\) than that with Davison's design.
Footnote 4: In practice, Figure 3 would be impossible to produce due to the unknown plant dynamics, but is useful here for ground-truth comparison of the controllers.
## V Conclusions
We have proposed and developed a design procedure for the single-gain tuning regulator, which is a simple, data-driven, and minimal-order LTI controller solving the error-feedback output regulation problem for stable LTI systems. The design is based only on samples of the open-loop frequency response, is simple to compute, tune, and implement, and comes with a guaranteed stability margin. Several important directions for future work are being pursued, including extensions of the design procedure to the case of repeated exosystem poles and unknown exosystems [33], incorporation of feedforward and proportional-derivative action, connections to more recent advances in data-driven control based on behavioral systems theory, discrete-time versions of the results, and applications in renewable energy integration problems.
---

# The afterglow of GW170817 from every angle: Prospects for detecting the afterglows of binary neutron star mergers

Brian James Morsony, Ryan De Los Santos, Rubin Hernandez, Joshua Bustamante, Brandon Yassuiae, German Astorga, Juan Parra, Jared C. Workman (2023-05-31, http://arxiv.org/abs/2306.00076v3)
###### Abstract
To date GW170817, produced by a binary neutron star (BNS) merger, is the only gravitational wave event with an electromagnetic (EM) counterpart. It was associated with a prompt short gamma-ray burst (GRB), an optical kilonova, and the afterglow of a structured, off-axis relativistic jet. We model the prospects for future mergers discovered in gravitational waves to produce detectable afterglows. Using a model fit to GW170817, we assume all BNS mergers produce jets with the same parameters, and model the afterglow luminosity for a full distribution of observer angles, ISM densities, and distances. We find that in the LIGO/Virgo O4 run, 30% - 50% of BNS mergers with a well-localized counterpart will have an afterglow detectable with current instrumentation in the X-ray, radio and optical. Without a previously detected counterpart, up to 18% will have an afterglow detectable by wide-area radio and optical surveys, compared to only about 5% of events expected to have bright (on-axis) gamma-ray emission. Therefore, most afterglows that are detected will be from off-axis jets. Further in the future, in the A+ era (O5), 50% - 60% of mergers will have afterglows detectable with next-generation X-ray and radio instruments. Future wide-area radio survey instruments, particularly DSA-2000, could detect 50% of afterglows, even without a kilonova counterpart. Finding and monitoring these afterglows will provide valuable insight into the structure and diversity of relativistic jets, the rate at which mergers produce jets, and constrain the angle of the mergers relative to our line of sight.
## 1 Introduction
GW170817 was the first BNS merger detected in gravitational waves (Abbott et al., 2017), and was followed 1.7s later by a short gamma-ray burst, GRB170817A (Abbott et al., 2017). Rapid optical followup associated this event with an optical transient in NGC4993 (Abbott et al., 2017) at a redshift of \(z=0.0098\)(Hjorth et al., 2017). The optical transient is well fit by kilonova models (e.g. Cowperthwaite et al., 2017).
Although associated with intrinsically faint gamma-ray emission, initially there was no X-ray (Margutti et al., 2017) or radio emission (see Abbott et al., 2017, and references therein) detected at the site of the kilonova that would indicate a relativistic afterglow. However, X-rays were detected by 9 days after the merger (Troja et al., 2017) and radio by 16 days (Mooley et al., 2017; Corsi et al., 2017; Hallinan et al., 2017). Afterglow luminosity increased by about a factor of 5 over the next 5 months, before beginning to decrease rapidly (\(F_{\nu}\sim t^{-1.9}\), Makhathini et al., 2021), consistent with an off-axis relativistic jet. VLBI observations on days 75 and 230 showed superluminal motion of the radio source, with an apparent velocity of \(4.1\pm 0.5\) c (Mooley et al., 2018), confirming the presence of a relativistic jet aimed about \(20^{\circ}\) away from our line of sight.
Numerous modeling efforts (e.g. Lazzati et al., 2018; Mooley et al., 2018; Margutti et al., 2018; Lamb and Kobayashi, 2018; Wu and MacFadyen, 2018, 2019; Lin et al., 2019; Ioka and Nakamura, 2019; Gill et al., 2019; Fraija et al., 2019; Troja et al., 2019; Hajela et al., 2019; Ziaeepour, 2019; Beniamini et al., 2020; Cheng et al., 2021; Li and Dai, 2021; Lamb et al., 2021; McDowell and MacFadyen, 2023), both before and after the afterglow emission began to decline, are consistent with emission from a relativistic GRB jet seen \(20^{\circ}\)-\(30^{\circ}\) off-axis. The jet has a structured energy distribution, such that it has more energy closer to the jet axis. As the jet decelerated, light from material closer to the jet axis could be seen, leading to the brightness increasing over several months. Once the center was visible, the brightness began to decrease rapidly. This is the first definitive case where a) a GRB was seen off-axis and b) the jet was definitely structured, not just a flat energy distribution with a cutoff (top-hat jet).
Constraining the angle of the jet relative to Earth, combined with GW data, allows for better determinations of the Hubble constant than is possible with GW data alone. For example, the angle limits for GW170817 allowed the measurement of \(H_{0}\) to be improved from \(70.0^{+12.0}_{-8.0}\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Abbott et al., 2017) to \(68.9^{+4.7}_{-4.6}\) km s\({}^{-1}\) Mpc\({}^{-1}\)(Hotokezaka et al., 2019). Just 16 BNS mergers with similar
quality jet angle determinations could constrain \(H_{0}\) to less than 2% (Hotokezaka et al., 2019), compared to 50 - 100 needed without afterglow measurements.
Moving forward, the range of GW detectors will significantly increase. In O4, LIGO will be able to detect BNS out to 190 Mpc; in the A+ era (O5 run) this will increase to 325 Mpc (Abbott et al., 2020). This means there will be more BNS detected, but their EM counterparts will be significantly fainter for the same luminosity. We set out to determine what fraction of these mergers will have detectable afterglows.
This paper is organized as follows: In section 2 we outline our afterglow model and fitting procedure, and provide an updated fit to the afterglow observations of GW170817. In section 3 we model the fraction of GW events that will have a detectable afterglow accounting for observer angle and ISM density. We then explore how changes to GW horizon distance, EM instrument sensitivity, ISM density distribution, observation timing, and synchrotron electron index impact the fraction of detectable afterglows. In section 4 we summarize our conclusions.
## 2 Methods
### Afterglow Model
We model GRB afterglows using the semi-analytic Trans-Relativistic Afterglow Code (TRAC). This code was first used in Morsony et al. (2016) and is described in Appendix A. TRAC is available on GitHub1 and the version used here is archived on Zenodo (Morsony, 2023). The afterglow is modeled as an impulsive explosion expanding into an ISM with a constant particle density of \(n_{\rm ISM}\). This creates a shock that is tracked smoothly from its development through the ultrarelativistic phase and into the non-relativistic phase. Emission from the shock is assumed to be synchrotron radiation (see Appendix B) with electron powerlaw index \(p\), electron energy fraction \(\epsilon_{e}\), and magnetic energy fraction \(\epsilon_{B}\), which are assumed to be the same at all positions and at all times for a given shock.
Footnote 1: TRAC codebase: [https://github.com/morsmy/TRAC](https://github.com/morsmy/TRAC).
For the relativistic jet, we use a fixed jet profile taken from Lazzati et al. (2017). This energy distribution was produced by a relativistic hydrodynamical simulation of a jet propagating in the aftermath of a neutron star merger, and was previously used to fit the afterglow of GW170817 in Lazzati et al. (2018). This jet profile provides both the energy and initial mass of the ejecta as a function of angle from the jet axis. We assume a thickness of the ejecta of \(\Delta=3\times 10^{9}\) cm (0.1 light-seconds) at all angles.
### Fitting Procedure
To fit the afterglow of GW170817, we have 5 free parameters: \(n_{\rm ISM}\), observer angle \(\theta_{obs}\), \(p\), \(\epsilon_{e}\), and \(\epsilon_{B}\). We fit to the observations of the afterglow of GW170817 from Makhathini et al. (2021), and the change in position of the afterglow from VLBI observations in Mooley et al. (2018). The change in position of our modeled afterglow is determined by creating a 2D afterglow image at the time and frequencies corresponding to the VLBI observations, then fitting a 2D Gaussian to the image, and taking the centroid position to be the location of the afterglow at that time. The difference between centroid locations is then the change in position.
We use Markov-Chain Monte Carlo (MCMC) to find the best fit of our 5 free parameters to the observations, using the emcee python package (Foreman-Mackey et al., 2013). However, running TRAC for a specific set of parameters is expensive. We therefore begin with an initial set of 90 afterglow models and interpolate between them for the MCMC fitting. The initial models cover a 4-dimensional space of \(10^{-5}\leq n_{\rm ISM}\leq 1\), \(15\leq\theta_{obs}\leq 35\), \(2.05\leq p\leq 2.35\), and \(10^{-4}\leq\epsilon_{B}\leq 10^{-1}\). By assuming none of the observations are effected by synchrotron self-absorption, \(\epsilon_{e}\) only changes the normalization of the modeled light curves. We therefore fix \(\epsilon_{e}\) to 0.02 for all of our initial models. One model is run at each corner of the 4-dimensional space (16 models), one model at the center, and the remaining 73 models randomly distributed.
For each model, the brightness of the afterglow is calculated for 25 times, log spaced between \(10^{5}\)s and \(10^{9}\)s, and for 101 frequencies at each time, log spaced between \(10^{-11}\) eV and \(10^{9}\) eV (2400 Hz to \(2.4\times 10^{14}\) GHz), as well as the change in location between VLBI observations. All models are carried out at redshift \(z=0.0098\) and luminosity distance \(d_{L}=40.4\) Mpc (Hjorth et al., 2017).
The initial models are first interpolated to the appropriate time and frequency for each observation. We can then interpolate between model parameters for each set of parameters needed for the MCMC fitting. Interpolation is carried out using Gaussian Process Regression (GPR), a machine learning technique (e.g. Rasmussen & Williams, 2006). We use the sklearn python package (Pedregosa et al., 2011) and the Matern kernel to create the GPR model. The interpolated models created with this technique are within a few percent of a full TRAC model run with the same parameters.
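A minimal sketch of the GPR interpolation step follows; the variable names and the choice \(\nu=1.5\) for the Matern kernel are our assumptions, since the text states only that the Matern kernel from sklearn is used.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def train_interpolator(theta, log_flux):
    """Fit one GPR per (time, frequency) observation point.

    theta    : (N, 4) model parameters, e.g. (log10 n_ISM, theta_obs, p, log10 eps_B)
    log_flux : (N,) log10 flux density from the N = 90 initial TRAC models
    """
    gpr = GaussianProcessRegressor(kernel=Matern(nu=1.5), normalize_y=True)
    return gpr.fit(theta, log_flux)

# Prediction at a new parameter point, for use inside the MCMC likelihood:
# log_flux_new = gpr.predict(np.atleast_2d(theta_new))
```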
Finally, best fit parameters and error distributions are found using MCMC fitting over 5 free parameters, using our GPR-interpolated models for 4 parameters with the ranges listed above, and a normalization for \(\epsilon_{e}\), limited to \(10^{-5}\leq\epsilon_{e}\leq 1\).
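The MCMC step can be sketched as below. A Gaussian likelihood with flat priors over the stated parameter ranges is our assumption of the simplest consistent choice; `predict` stands in for the GPR interpolation plus the \(\epsilon_{e}\) normalization.

```python
import numpy as np
import emcee

def log_prob(params, obs_flux, obs_err, predict):
    """Gaussian log-likelihood with flat priors over the stated ranges.

    params : (log10 n_ISM, theta_obs [deg], p, log10 eps_B, log10 eps_e)
    """
    lo = np.array([-5.0, 15.0, 2.05, -4.0, -5.0])
    hi = np.array([0.0, 35.0, 2.35, -1.0, 0.0])
    if np.any(params < lo) or np.any(params > hi):
        return -np.inf
    resid = (obs_flux - predict(params)) / obs_err
    return -0.5 * np.sum(resid**2)

# ndim, nwalkers = 5, 32
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
#                                 args=(obs_flux, obs_err, predict))
# sampler.run_mcmc(p0, 5000)   # p0: (nwalkers, ndim) initial walker positions
```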
### Best Fit for GW170817
Carrying out our fitting procedure on observations of GW170817 achieves a reduced chi-squared of 1.73. The best-fit parameters from fitting our structured jet model, and 1-\(\sigma\) errors, are shown in Table 1. These values are broadly consistent with previous fits (e.g. Lazzati et al., 2018; Mooley et al., 2018; Margutti et al., 2018; Wu & MacFadyen, 2018, 2019; Hajela et al., 2019), with a low density (\(\sim 10^{-3}\) cm\({}^{-3}\)), small \(\epsilon_{B}\)
(\(\sim 10^{-3}\)), and observer angle between 20 and 30 degrees. The inclusion of VLBI position data pushes our fit to a smaller angle. A corner plot of our fit parameters is shown in Fig. 1. There is significant degeneracy between ISM density and observer angle, with small angles and high densities both producing a bright, early peak, and between \(\epsilon_{e}\) and \(\epsilon_{B}\), with high values of either producing a brighter afterglow.
Fig. 2 compares observations of GW170817 to our best-fit model. Between days 75 and 230, our model predicts an average apparent velocity of the radio afterglow of 3.4 c, within 1.5\(\sigma\) of the observed value of \(4.1\pm 0.5\) c from Mooley et al. (2018). Our model fits the data well, particularly for the rise and fall of the light curve. However, there are some discrepancies, as should be expected for a fixed jet profile. The peak of our fitted light curve is not quite as sharp or as bright as the observed peak, particularly in the radio. This is likely because our jet model flattens in the inner few degrees. An even more sharply peaked jet is needed to produce a sharper afterglow peak. Our model also under-predicts the brightness of the first X-ray detection. This could indicate our jet model has too much mass loading off-axis (the material directed towards Earth is travelling too slowly) producing a faint initial afterglow.
## 3 Results
### Detectability of GW afterglows over angle and density
With a best-fit model for GW170817 in hand, we can now examine how likely it is that future BNS mergers will have a detectable afterglow, assuming all mergers produce a GW170817-like jet. For our standard case, we assume the jet energy distribution and all parameters are the same as our best fit for GW170817, but we vary the distance, observer angle, and ISM density. We assume mergers are randomly distributed in space and observer angle, but account for the increased sensitivity of GW detectors to more face-on vs. edge-on mergers. For the ISM density distribution, we assume densities are equally likely in log space between \(10^{-6}\) cm\({}^{-3}\) and \(10\) cm\({}^{-3}\). This is consistent with the distribution of short GRB ISM densities found in Fong et al. (2015). The effects of modifying the density distribution are explored in section 3.4. For our standard assumptions, we model a GW horizon distance for face-on mergers of 200 Mpc, approximately what will be achieved for the LIGO O4 run (Abbott et al., 2020).
To determine if the afterglow of a merger is detectable, we set a threshold detection limit, then say an afterglow is detectable if it reaches a brightness at least double this limit at any point between 1 day and 1 year after the merger. This ensures an afterglow would be detected with reasonably spaced observations, but additional faint afterglows might be detectable with high-cadence observations.
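The detectability criterion reduces to a few lines; the sketch below implements the stated rule, with the factor of two and the 1 day to 1 year window taken from the text.

```python
import numpy as np

def detectable(times_s, flux, threshold, factor=2.0):
    """True if the light curve exceeds factor x threshold at any time
    between 1 day and 1 year after the merger."""
    window = (times_s >= 86400.0) & (times_s <= 3.156e7)
    return bool(np.any(flux[window] >= factor * threshold))
```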
We model the detectability of afterglows in X-ray, radio, and optical observations. For our standard assumptions on sensitivity, we assume targeted observations of a known source location, achievable with current instrumentation. This could be a location determined by, e.g., optical observations of a kilonova or a Swift-BAT gamma-ray counterpart. Our standard detection thresholds are \(10^{-15}\) erg cm\({}^{-2}\) s\({}^{-1}\) for 0.3 - 10 keV X-ray observations (achievable with Chandra or XMM), 20 \(\mu\)Jy at 6 GHz in radio (VLA), and 27th AB-magnitude in r-band optical observations (8-meter class telescope).
Under these assumptions, the afterglow of a GW170817-like event would have a 50% chance of being detectable in X-rays, a 32% chance in radio, and a 29% chance in optical for a GW horizon of 200 Mpc (see Table 2). Figs. 3 and 4 show the probability of an afterglow being detected vs. observer angle and ISM density. Our best fit to GW170817 has a hard electron spectrum (\(p=2.127\)), making the X-ray afterglow relatively bright. Afterglows are brighter close to the jet axis and in denser environments, making the probability of being detectable higher at small angles and large densities. At the highest densities, in particular, all mergers would have a detectable afterglow, regardless of observer angle.
In Fig. 5, we show the detection probability vs. both angle and density in each band. In the angle-density plane there is a sharp transition between regions where all events within the GW horizon are detectable and regions where only a small fraction, those at close range, are detectable. Note that most mergers with a detectable afterglow will not have bright gamma-ray emission directed at Earth. Taking \(10^{\circ}\) off-axis as the limit to have bright gamma-rays, only 5% of GW BNS mergers would be accompanied by a bright, classical short GRB, compared to up to 50% with a detectable afterglow. Of those events within \(10^{\circ}\), the vast majority would have a detectable afterglow (95% in X-rays, 78% in radio, 76% in optical), with the exceptions being at very low densities.
\begin{table}
\begin{tabular}{l l l} \hline \hline Parameter & Value & 1-\(\sigma\) Error \\ \hline \(n_{\rm ISM}\) & \(8.8\times 10^{-4}\) cm\({}^{-3}\) & \([-1.7,+2.0]\times 10^{-4}\) cm\({}^{-3}\) \\ \(\theta_{\rm obs}\) & \(21.5^{\circ}\) & \(-0.5^{\circ}\), \(+0.5^{\circ}\) \\ \(p\) & \(2.127\) & \(-0.005\), \(+0.004\) \\ \(\epsilon_{e}\) & \(0.069\) & \(-0.012\), \(+0.012\) \\ \(\epsilon_{B}\) & \(8.1\times 10^{-4}\) & \([-1.5,+2.4]\times 10^{-4}\) \\ \hline \end{tabular}
\end{table}
Table 1: Best-fit parameters for the afterglow of GW170817, along with 1-\(\sigma\) error range
### Impact of GW horizon distance
The distance at which BNS mergers can be detected will affect the fraction of mergers with detectable afterglows. As the sensitivity of GW detectors increases, the total number of detectable afterglows will increase. Fig. 6 shows the number of detectable afterglows per year as a function of the GW (on-axis) horizon distance. The total number of BNS mergers within the horizon is normalized to 1.5 mergers per year within 100 Mpc (Abbott et al., 2021). The dashed red line is the total number of BNS mergers. The number of detectable afterglows, with the standard assumptions from section 3.1, increases roughly as horizon distance cubed out to a couple of hundred Mpc, then as distance squared (dotted purple line) beyond a Gpc.
Going from a horizon distance of 200 Mpc to 325 Mpc, appropriate for the LIGO A+ era (O5 run), the number of BNS mergers per year increases from 12 to 52, while the number of detectable X-ray afterglows goes from 6 to 22. The detectable fraction is about 8% lower in all bands, dropping to 42% in X-ray, 24% in radio, and 22% in optical (see Table 2).
Figure 1: Corner plot of degeneracies in the best-fit parameters for GW170817. There is degeneracy between ISM density and observer angle, and between \(\epsilon_{e}\) and \(\epsilon_{B}\).
We can also plot the detection probability as a function of GW strain, a quantity directly measurable from GW observations. Fig. 7 plots the detection probability vs. strain-distance: the distance a face-on BNS merger would be at to produce the detected strain. For example, at 43 Mpc, the approximate strain-distance of GW170817, the probability of having a detectable afterglow is 62%, 45%, and 43% in the X-ray, radio, and optical, respectively. At 200 Mpc, and at our best-fit observer angle and ISM density, GW170817 would not have had a detectable afterglow.
Figure 2: Comparison of observations of GW170817 (points with 1-\(\sigma\) error bars) and best-fit model (lines). Data is plotted for flux density in radio normalized to 6 GHz (orange), optical normalized to r-band (green), and X-ray at 1 keV (blue).
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Sensitivity} & Horizon & p-index & Density & \multicolumn{3}{c}{Detection Probability} \\ Model Name & X-ray & Radio & Optical & Distance & & Distribution & X-ray & Radio & Optical \\ & (erg cm\({}^{-2}\) s\({}^{-1}\)) & (\(\mu\)Jy) & (AB mag) & (Mpc) & & & & & \\ \hline Standard & \(1.0\times 10^{-15}\) & 20 & 27 & 200 & 2.1 & Standard & 50\% & 32\% & 29\% \\ Standard A+ & \(1.0\times 10^{-15}\) & 20 & 27 & 325 & 2.1 & Standard & 42\% & 24\% & 22\% \\ Survey & \(4.4\times 10^{-14}\)\({}^{a}\) & 250\({}^{b}\) & 24.5 & 200 & 2.1 & Standard & 11\% & 18\% & 13\% \\ Survey A+ & \(4.4\times 10^{-14}\)\({}^{a}\) & 250\({}^{b}\) & 24.5 & 325 & 2.1 & Standard & 7\% & 7\% & 8\% \\ Next Gen & \(1.0\times 10^{-16}\) & 2\({}^{b}\) & 30 & 200 & 2.1 & Standard & 65\% & 56\% & 50\% \\ Next Gen A+ & \(1.0\times 10^{-16}\) & 2\({}^{b}\) & 30 & 325 & 2.1 & Standard & 59\% & 49\% & 44\% \\ Low Dens & \(1.0\times 10^{-15}\) & 20 & 27 & 200 & 2.1 & \(n<1\) & 41\% & 21\% & 18\% \\ Truncated Dens & \(1.0\times 10^{-15}\) & 20 & 27 & 200 & 2.1 & \(10^{-4}<n<1\) & 59\% & 30\% & 27\% \\ \(p=2.5\) & \(1.0\times 10^{-15}\) & 20 & 27 & 200 & 2.5 & Standard & 27\% & 38\% & 21\% \\ \(p=2.9\) & \(1.0\times 10^{-15}\) & 20 & 27 & 200 & 2.9 & Standard & 11\% & 39\% & 13\% \\ \hline \hline \end{tabular} \({}^{a}\) between 0.5 and 2 keV
\({}^{b}\) at 1 GHz
\end{table} Table 2: Afterglow model parameters and detection probabilities. The p-index column is the electron index, with 2.1 corresponding to the best-fit value of 2.127. The standard density distribution is \(10^{-6}<n_{\rm ISM}<10\), with densities equally distributed in log space. X-ray sensitivities are between 0.3 and 10 keV, and radio sensitivities are at 6 GHz, except where noted. All optical sensitivities are in r-band.
Figure 3: The probability of detecting the afterglow of a BNS merger vs. observer angle, assuming a GW horizon of 200 Mpc and our standard sensitivities (see section 3.1). Lines correspond to X-ray (blue), radio (orange), and optical (green) detection probabilities. On-axis afterglows tend to be brighter and hence more likely to be detected.
Figure 4: Same as Fig. 3, but for detection probability vs. ISM density. High densities produce brighter afterglows, which are more likely to be detected. For the highest densities, all mergers within the GW horizon would produce a detectable afterglow.
Figure 5: Each panel shows the afterglow detection probability as a function of observer angle and ISM density for X-ray (left), radio (middle), and optical (right) observations, under our standard assumptions. Afterglows are most detectable at high densities and/or small angles, and detectability falls off sharply at low densities and large angles. Horizontal dotted grey lines represent densities of 1 cm\({}^{-3}\) and \(10^{-4}\) cm\({}^{-3}\) (see section 3.4). The vertical dotted grey line is at \(10^{\circ}\), inside which a bright GRB would nominally be expected. The red dot in each panel is at the best-fit parameters for GW170817. At 200 Mpc, GW170817 would not have had a detectable afterglow.
### Targeted vs. untargeted searches
We also explore the prospects for afterglow detection with different search sensitivities. We consider here a "survey" depth for untargeted searches and a "next gen" depth for near-future observing facilities.
For the survey sensitivity, we assume a GW detection but no kilonova or other well-localized counterpart. Although not intended for this purpose, for the X-ray we set a threshold of \(4.4\times 10^{-14}\) erg cm\({}^{-2}\) s\({}^{-1}\) from 0.5 to 2 keV for an eROSITA survey (Merloni et al., 2012). For radio, we assume a threshold of 250 \(\mu\)Jy at 1 GHz, achievable with ASKAP or Apertif. For optical, we assume a threshold of 24.5 AB-mag in r-band, for Rubin. The radio and optical surveys could be either purely serendipitous or targeted to a specific event. At the survey depths, a reasonable fraction of BNS mergers still have a detectable afterglow for a 200 Mpc GW horizon, particularly in the radio with 18% detectable (see Table 2). At larger distances, the detectable fraction falls off rapidly, to \(7-8\%\) at 325 Mpc. In Fig. 8, the detection rate in the radio cuts off rapidly beyond 200 Mpc. This is because synchrotron self-absorption is important at 1 GHz for afterglows at high densities.
For the "next gen" sensitivity, we assume well-targeted observations with future facilities. For X-rays, we use a threshold of \(10^{-16}\) erg cm\({}^{-3}\) s\({}^{-1}\), possible with missions like Athena or AXIS (Piro et al., 2022; Mushotzky et al., 2019). For radio, we use a threshold of 2 \(\mu\)Jy at 1 GHz, possible with SKA Phase 1, ngVLA, or DSA-2000 (Braun et al., 2019; Selina et al., 2018; Hallinan et al., 2019), and for optical we assume a threshold of 30 AB-mag in r-band, possible with very deep HST or ground-based observations. Deeper observations detect significantly more afterglows, even at extended range (see Table 2).
Even with a 325 Mpc GW horizon for A+, more than 44% of afterglows are detectable in every band, and 59% are detectable in X-rays, corresponding to about 30 afterglow detections per year. DSA-2000 will be able to survey large areas down to 2 \(\mu\)Jy (Hallinan et al., 2019), allowing an afterglow detection for about half of all A+ BNS mergers, even without a kilonova localization. In Fig. 9, the detection fraction remains high, \(>20\%\), even beyond 1 Gpc.
Figure 6: The total number of afterglows detectable per year, assuming our standard sensitivities, as a function of GW horizon distance. The dashed red line represents the number of BNS mergers detected per year, normalized to 1.5 mergers per year within 100 Mpc. The number of detectable afterglows increases as horizon distance cubed out to a couple hundred Mpc, then transitions to roughly distance squared (dotted purple line) at Gpc distances.
### Impact of density distribution
The density of the ISM around BNS mergers is not well known. Although the distribution we choose is consistent with Fong et al. (2015), any deviations could have a strong effect on the number of detectable afterglows. For example, at high density almost all of the afterglows are detectable. If the density distribution is truncated at 1 cm\({}^{-3}\) rather than 10 cm\({}^{-3}\), the detectability rate drops by about 10%, e.g. from 50% to 41% in X-ray (see Table 2). On the other hand, almost no afterglows are detectable at very low densities. Truncating the density distribution both below \(10^{-4}\) cm\({}^{-3}\) and above 1 cm\({}^{-3}\) (dotted lines in Fig. 5), also a plausible distribution based on Fong et al. (2015), leads to almost no change in the detectability in the radio and optical and an increase in X-ray detectability, from 50% to 59%, under our standard assumptions.
### Impact of timing of observations
Our default observing window extends from 1 day to 1 year after the BNS merger, with the requirement that the afterglow reach twice the threshold detection limit to be considered detectable. However, afterglows, particularly off-axis afterglows, are broadly peaked, so the timing of individual observations is not particularly critical. Fig. 10 shows the probability that an event will have a detectable afterglow (brighter than the threshold limit) on a given day for our standard model assumptions. This reaches up to a 44% chance of detecting an X-ray afterglow on day 106, compared to a 50% overall chance of being detectable. The radio and optical detection probabilities peak earlier in Fig. 10, because the afterglows detectable in those bands are at higher densities and/or smaller angles, meaning they peak earlier.
Due to the broad afterglow peak, the observing window can be shortened without significantly decreasing the number of detectable afterglows. For example, if the end of the observing window is shortened from 1 year to 3 months, the fraction of detectable afterglows only decreases by about 10%, e.g. from 50% to 45% for X-ray afterglows with our standard assumptions. In other words, by 3 months after the merger, 90% of all afterglows that will ever be detectable will have been bright enough to be detected. Ending observations at 6 months, the decrease in detectability is only about 5% compared to 1 year.
Delaying the start of the observing window also does not result in a significant decrease in the number of detectable afterglows. For example, delaying the start of observations from 1 day to 1 week only results in a \(\sim 1\%\) decrease. However, early observations are critical for detecting the afterglow before it reaches its peak brightness, which is needed to constrain the observer angle and other afterglow parameters. In X-rays, with our standard model parameters, about 5% of detectable afterglows have already peaked at 1 day. By 1 week, 15% of X-ray afterglows have already peaked.
Figure 7: The probability that an afterglow will be detectable, assuming our standard sensitivities, for a BNS merger at a given GW strain, represented as the distance to a face-on BNS merger of the same strain in Mpc. At 43 Mpc, the approximate strain-distance of GW170817, the detection probability is 62%, 45%, and 43% in the X-ray, radio, and optical, respectively.
The situation worsens as the fraction of detectable afterglows drops. For the “survey” sensitivity, 15% of the X-ray afterglows have passed their peak at 1 day, and 47% are past their peak by 1 week. Regardless of band, by the time an afterglow is most likely to be observable (the peak of the curves in Figs. 10, 11, and 12), two-thirds of the detectable afterglows have already passed their peak.
### Impact of p-index distribution
Our best-fit model of GW170817 has a hard electron index of \(p=2.127\). We also consider softer electron indices of \(p=2.5\) and \(p=2.9\), both in the range of observed values for short GRBs (Fong et al., 2015). As \(p\) increases, the X-ray and optical flux decreases, making the afterglow more difficult to detect in these bands (see Table 2). The radio brightness, however, increases because there are more electrons at low energies. This increases the radio detectability from 32% to 38% and 39% at \(p=2.5\) and \(p=2.9\), respectively. For our standard sensitivities and GW horizon, this means there is almost a 40% chance a merger will have a detectable afterglow, either in the X-ray or radio, regardless of electron index.
Figs. 11 and 12 show the probability of having a detectable afterglow vs. time for these softer electron indices. In both cases, the probability of radio detection peaks at about 45 days, with earlier peaks for the X-ray and optical. For \(p=2.9\) (Fig. 12), the X-ray and optical detectability drop sharply after 10 days, emphasizing the need for early observations.
Figure 8: Same as Fig. 7, but for our survey depth sensitivities. There is a sharp cutoff in radio detectability beyond 200 Mpc due to synchrotron self-absorption.
## 4 Conclusions
The afterglow evolution of GW170817 is consistent with a structured, short GRB jet seen off-axis. Our updated best-fit parameters (Table 1), using a jet from a hydrodynamical simulation of a short GRB, place the jet about \(21^{\circ}\) off-axis, in a relatively low-density environment (\(n_{\rm ISM}\sim 10^{-3}\) cm\({}^{-3}\)), consistent with previous models.
By assuming a) all BNS mergers produce a short GRB, b) all short GRB jets have the same structure, and c) all short GRB afterglows have the same shock parameters (\(\epsilon_{e},\epsilon_{B},p\)), we predict what fraction of BNS mergers will have an afterglow bright enough to be detected. We find (see Table 2) that:
* 50% of BNS mergers will have an afterglow detectable with current instrumentation in the X-ray, radio, and/or optical, if the location of the merger is known from, e.g., a kilonova localization.
* Without preexisting EM localization, afterglows could still be detected in wide-area surveys. About 13% will have an optical afterglow detectable in deep optical surveys (e.g., Rubin) and 18% would be detectable in radio surveys (e.g., ASKAP and Apertif).
* In the LIGO A+ era (O5), the probability of an afterglow being detectable will increase, even as the distance increases, as next-generation instruments come online. In particular, DSA-2000 and SKA1 will be able to detect \(\sim 50\%\) of radio afterglows, even without a prior EM localization. These facilities will be well matched by future X-ray facilities, such as Athena or AXIS.
* Changes to the assumed ISM density distribution can change the fraction of afterglows that will be detectable by about \(\pm 10\%\).
* Afterglows with a softer electron index are significantly fainter in the X-ray and optical, but brighter in the radio. The combined X-ray and radio detection fraction is close to 40% during O4, regardless of electron index.
* Afterglows are most likely to be detectable between about 10 days and 3 months after the BNS merger, depending on what fraction are ultimately detectable. By 3 months, 90% of all afterglows that will ever be detectable will have become bright enough to be detected.
* Afterglows at smaller observer angles or in high-density regions are brighter and peak earlier. Therefore, when a lower detection fraction is expected, e.g. due to less sensitive instruments or farther distances, afterglows are most likely to be detected earlier.
* Early afterglow detections, before the afterglow reaches its peak brightness, are needed to constrain the jet structure and observer angle. For example, for X-rays in O4, about 5% of detectable afterglows will have peaked by 1 day, and 15% will have peaked by 1 week.
Figure 9: Same as Fig. 7, but for our “next gen” depth sensitivities. The radio sensitivity of 2 \(\mu\)Jy is achievable for DSA-2000 over large survey areas, enabling detection and localization of afterglows for a large fraction of BNS mergers, even without prior kilonova localizations.
As the sensitivity of GW detectors, and hence the number of BNS mergers detected, increases, deep, rapid multi-wavelength follow-up will be critical for detecting relativistic jets and determining the angle between any jet and our line of sight. Even in our worst-case estimates, the number of detectable afterglows is far larger than the \(\sim 5\%\) of BNS mergers that will be seen "on-axis" (within \(10^{\circ}\)) and are expected to be associated with bright, classical short GRBs. Most relativistic jets associated with mergers will be discovered through afterglow searches for off-axis jets. Modeling the rates of detected afterglows, and modeling the light curves of individual events, will determine whether all BNS mergers make jets, the distribution of energy and energy structure of those jets, and the angle at which individual jets are seen.
## Acknowledgements
The authors thank Dr. Davide Lazzati and Isabel Rodriguez for their many useful discussions and inspiration that contributed to this project. We also thank Dr. Jared Work for his help in developing TRAC. This material is based upon work supported by the National Science Foundation under Grant No. 2218943. BJM and GA were supported in part by U.S. Department of Education PR/Award: P217A170182. This research activity is funded in part by the Stanislaus State STEM Success program through a U.S. Department of Education Title III grant #P031C160070. We gratefully acknowledge receiving support from the CSU-LSAMP Grant funded through the National Science Foundation (NSF) under grant #HRD-1826490 and the Chancellor's Office of the California State University. This work was supported in part by Stanislaus State RSCA grant awards and the Student Engagement in Research, Scholarship, and Creative Activity (SERSCA) Program.
## Data Availability
The observed afterglow brightnesses analyzed in this article were compiled in Makhathini et al. (2021) and are available at [https://github.com/Kmooley/GW170817/](https://github.com/Kmooley/GW170817/). VLBI position data can be found in Mooley et al. (2018). Afterglow models were created using the version of TRAC archived in Morsony (2023), available at [https://github.com/morsony/TRAC](https://github.com/morsony/TRAC).
Figure 10: The probability of detecting an afterglow for an observation made at a given time after merger, for our standard sensitivities and a GW horizon of 200 Mpc. The curves peak at about 106 days, 38 days, and 25 days for X-ray, radio, and optical observations, respectively. |
2309.05953 | GLAD: Content-aware Dynamic Graphs For Log Anomaly Detection | Logs play a crucial role in system monitoring and debugging by recording
valuable system information, including events and states. Although various
methods have been proposed to detect anomalies in log sequences, they often
overlook the significance of considering relations among system components,
such as services and users, which can be identified from log contents.
Understanding these relations is vital for detecting anomalies and their
underlying causes. To address this issue, we introduce GLAD, a Graph-based Log
Anomaly Detection framework designed to detect relational anomalies in system
logs. GLAD incorporates log semantics, relational patterns, and sequential
patterns into a unified framework for anomaly detection. Specifically, GLAD
first introduces a field extraction module that utilizes prompt-based few-shot
learning to identify essential fields from log contents. Then GLAD constructs
dynamic log graphs for sliding windows by interconnecting extracted fields and
log events parsed from the log parser. These graphs represent events and fields
as nodes and their relations as edges. Subsequently, GLAD utilizes a
temporal-attentive graph edge anomaly detection model for identifying anomalous
relations in these dynamic log graphs. This model employs a Graph Neural
Network (GNN)-based encoder enhanced with transformers to capture content,
structural and temporal features. We evaluate our proposed method on three
datasets, and the results demonstrate the effectiveness of GLAD in detecting
anomalies indicated by varying relational patterns. | Yufei Li, Yanchi Liu, Haoyu Wang, Zhengzhang Chen, Wei Cheng, Yuncong Chen, Wenchao Yu, Haifeng Chen, Cong Liu | 2023-09-12T04:21:30Z | http://arxiv.org/abs/2309.05953v1 | # GLAD: Content-aware Dynamic Graphs For Log Anomaly Detection
###### Abstract
Logs play a crucial role in system monitoring and debugging by recording valuable system information, including events and states. Although various methods have been proposed to detect anomalies in log sequences, they often overlook the significance of considering relations among system components, such as services and users, which can be identified from log contents. Understanding these relations is vital for detecting anomalies and their underlying causes. To address this issue, we introduce GLAD, a Graph-based Log Anomaly Detection framework designed to detect relational anomalies in system logs. GLAD incorporates log semantics, relational patterns, and sequential patterns into a unified framework for anomaly detection. Specifically, GLAD first introduces a field extraction module that utilizes prompt-based few-shot learning to identify essential fields from log contents. Then GLAD constructs dynamic log graphs for sliding windows by interconnecting extracted fields and log events parsed from the log parser. These graphs represent events and fields as nodes and their relations as edges. Subsequently, GLAD utilizes a temporal-attentive graph edge anomaly detection model for identifying anomalous relations in these dynamic log graphs. This model employs a Graph Neural Network (GNN)-based encoder enhanced with transformers to capture content, structural and temporal features. We evaluate our proposed method1 on three datasets, and the results demonstrate the effectiveness of GLAD in detecting anomalies indicated by varying relational patterns.
Footnote 1: Our code is available at [https://github.com/yul091/GraphLogAD](https://github.com/yul091/GraphLogAD)
log anomaly detection, GNN, transformer
## I Introduction
Anomaly detection is the task of identifying unusual or unexpected behaviors in a system or process. As computer systems become increasingly sophisticated due to the expansion of new communication technologies and services, they are prone to various adversarial attacks and bugs [1]. Moreover, such attacks are themselves evolving and becoming increasingly sophisticated. As a result, the difficulty of anomaly detection has increased, rendering many conventional detection approaches ineffective and requiring us to look deeper into the system, for example, at the interactions among system components.
System logs capture system states and events across time to aid process monitoring and root cause analysis of running services. These log files are ubiquitous in almost all computer systems and contain rich information, including control commands of machine systems, transactions of customer purchases, and logs of a computer program. As a result, they have proven a valuable resource for anomaly detection in both academic research and industry applications [2, 3, 4, 5, 6]. Each log message usually consists of a predefined constant key template (known as an "event", e.g., a login activity) and a few variables (known as "fields", e.g., _services_ and _users_). When the events are arranged chronologically based on the recording time, they form a discrete log sequence. Various methods have been proposed to detect anomalous sequential patterns in such sequences: (1) _Pattern recognition_ methods consider event sequences with inconsistencies beyond a certain threshold to be anomalous [7, 8, 9, 10, 11]. They treat the event alphabet as independent input dimensions and ignore the sequential patterns between events. (2) _Sequential learning_ methods analyze events sequentially with a defined sliding window in order to forecast the subsequent event based on the observation window [6, 12].
However, the relation between log events and fields, an essential indicator of system anomalies, has often been overlooked. This oversight can lead to missed detections or false alarms, as anomalies may not be apparent from individual events or isolated patterns. Different from previous methods that detect anomalous sequential patterns in log sequences, we focus on a new task that aims at detecting anomalous relational patterns between interconnected events and fields. Take, for instance, a scenario where workers receive an unbalanced number of requests from a coordinator over a period of time, or a coordinator suddenly requests connections to other workers, as illustrated in Figure 1. Traditional methods, without considering the relations, may fall short in detecting such anomalies. Apart from detecting anomalous events, understanding these anomalous relations between events can offer insightful details about the system's dynamics, for example, the underlying causes of an anomaly and its propagation over time.
Fig. 1: Two anomalous relations (anomalous edges highlighted in red): unbalanced request (left) and malicious request (right).
To achieve our goal, there are several challenges: (1) Dynamic graphs need to be built to describe the interactions between log events and fields in different time windows. (2) Instead of merely detecting anomalies at the graph level, we aim to detect anomalous edges representing the relations among nodes, which is a more challenging task. (3) In addition to relational patterns, we need to integrate log semantics and sequential patterns as a whole for anomaly detection.
To this end, we propose GLAD, a **G**raph-based **L**og **A**nomaly **D**etection framework, to extract and learn the relations among log events and fields, in addition to log semantics and sequential patterns, for system relation anomaly detection. Our approach proposes a novel method to construct dynamic graphs that describe the relations among log events and fields over time and then leverages a temporal-attentive transformer to capture the sequential patterns implicitly expressed in each time period. Specifically, a field extraction module utilizing prompt-based few-shot learning is first used to extract field information from log contents. Then, with the fields extracted and the log events parsed from a log parser, dynamic graphs can be constructed for sliding windows with events and fields as nodes and the relations between them as edges. Finally, a temporal-attentive graph edge anomaly detection method is proposed to detect anomalous relations from evolving graphs, where a Graph Neural Network (GNN)-based encoder facilitated with transformers is used to learn the structural, content, and sequential features. Experiments on real-world log datasets are conducted to demonstrate the effectiveness of GLAD.
To summarize, in this work, we propose to detect log anomalies from a novel point of view, i.e., the interaction and relation between system components leveraging system logs. In this way, we can dig into more system details and find causes and solutions to the anomalies efficiently. Our main contribution is a framework for constructing dynamic graphs from logs and capturing relational anomalies from dynamic graphs using temporal-attentive transformers, which allows for more granular and accurate log anomaly detection. We believe our proposed approach has the potential to significantly improve the effectiveness of log analysis in detecting more sophisticated anomalies in real applications.
## II Related Work
**Log Sequences Anomaly Detection.** Detecting anomalies in log sequences has recently gained substantial attention. Earlier research hinged upon similarity measurements, wherein test logs are compared with training logs to detect anomalies based on their dissimilarity [13, 14]. Subsequent methods can be categorized into three groups: _pattern frequency-based_[15], _sequence-based_ such as Hidden Markov Model (HMM) [16], and _contiguous subsequence-based_ anomaly detection such as window-based techniques [3, 17]. While certain studies utilize supervised learning for anomaly detection [18, 19, 20, 21], unsupervised learning, which observe only normal event sequences during training, has been proven to be a more efficient learning paradigm [8, 9, 10, 22, 23, 11]. Our research mainly focuses on the latter learning paradigm.
**Log Knowledge Graph Construction.** Raw log files offer a wealth of information pertaining to system states and service interconnection, e.g., whether a computing machine is running in an abnormal state or a user is a malicious attacker. To analyze such data and avoid tediously searching for clues or tracing system events across log sources, existing studies have put effort into identifying and linking entities (log fields) across log sources, thereby enriching them with knowledge graphs [24, 25, 26]. They often apply information extraction techniques such as Named Entity Recognition (NER) to identify log fields within log messages. The resulting fields are considered nodes within a knowledge graph, and rule-based relation linking is used to integrate the log fields into the knowledge graph. However, these methods require a large amount of labeled data for training, which introduces high costs in real applications. In comparison, we try to solve this problem in a few-shot setting.
**Graph-based Anomaly Detection.** GNNs have become increasingly popular due to their ability to learn relation patterns, making them favorable for anomaly detection. Leading GNN models include GCN [27], GIN [28], SAGE [29], GAT [30], and Transformer Graph (GT) [31]. Existing graph-based anomaly detection methods can be categorized into three types based on the range of anomaly detection: (1) _Node-level auto-encoders_[32, 33, 34, 35] regard nodes with atypical attribute and relation distributions as anomalies. The key idea is to use GNN-based encoder-decoders to reconstruct original graphs and calculate the reconstruction errors for each node. Nodes with above-threshold errors are detected as anomalies. Some further consider temporal relations on dynamic graphs [36, 37] to detect anomalies. (2) _Edge-level auto-encoders_[38, 39] first use graph encoders to learn node feature representations, then determine edge scores for each node pair in the graph to represent how likely it is normal. Some further consider representative structural information from the dynamic graph in each time stamp and their dependencies [36, 40, 41] to detect anomalous edges. (3) _Graph-level auto-encoders_[42, 43, 44, 45, 46] use a graph encoder to learn feature representations and aggregates all node features within each
\begin{table}
\begin{tabular}{c l} \hline Symbol & Description \\ \hline \hline \(e\) & \(e=\{x_{1},...,x_{|e|}\}\): a log message is a sequence of tokens \\ \(S\) & \(S=\{e_{1},...,e_{|S|}\}\): a log sequence is a chronological series of logs \\ \(E\) & \(E=\{ent_{1},...,ent_{|E|}\}\): sequence of entities in a log message \\ \(Y\) & \(Y=\{l_{1},...,l_{|E|}\}\): sequence of entity labels in a log message \\ \(\mathcal{S}\) & \(\mathcal{S}=\{S_{1},...,S_{N}\}\): a set of log sequences \\ \hline \(\mathcal{G}_{t}\) & the dynamic graph at time window \(t\) with \(\mathcal{V}_{t}\) and \(\mathcal{E}_{t}\) \\ \(\mathcal{V}_{t}\) & vertex set of graph \(\mathcal{G}_{t}\) \\ \(\mathcal{E}_{t}\) & edge set of graph \(\mathcal{G}_{t}\) \\ \(\mathbf{X}_{t}\) & attribute matrix of graph \(\mathcal{G}_{t}\) \\ \(\mathbf{A}_{t}\) & adjacency matrix of graph \(\mathcal{G}_{t}\) \\ \(\mathbf{W}^{(l)}\) & learnable weights in the \(l\)-th layer of a model, e.g., \(\mathbf{W}_{ner}\), \(\mathbf{W}_{g}^{(l)}\) \\ \(\mathbf{I}\) & identity matrix \\ \(\mathbf{H}_{t}\) & node representations of graph \(\mathcal{G}_{t}\) learned by GCN \\ \(N\) & total number of graphs in \(\mathcal{S}\) \\ \(\boldsymbol{\mathcal{H}}_{\mathcal{S},t}\) & long-term node representations of graph \(\mathcal{G}_{t}\) learned by transformers \\ \(\boldsymbol{\mathcal{H}}_{k,t}\) & short-term node representations of graph \(\mathcal{G}_{t}\) learned by transformers \\ \(\boldsymbol{\mathcal{H}}_{t}\) & node representations of graph \(\mathcal{G}_{t}\), concatenating \(\boldsymbol{\mathcal{H}}_{\mathcal{S},t}\) and \(\boldsymbol{\mathcal{H}}_{k,t}\) \\ \(\boldsymbol{\mathcal{R}}_{t}\) & graph representation of \(\mathcal{G}_{t}\) by maxpooling node representations \(\boldsymbol{\mathcal{H}}_{t}\) \\ \hline \(\sigma(\cdot)\) & activation function, e.g., ReLU(\(\cdot\)), Sigmoid(\(\cdot\)) \\ \(\mathcal{L}\) & loss objective, including \(\mathcal{L}_{ner}\), \(\mathcal{L}_{e}\), \(\mathcal{L}_{g}\) \\ \(\mathbf{P}\) & prompt \(\mathbf{P}=\{p_{1},...,p_{m}\}\), including \(\mathbf{P}^{+}\) and \(\mathbf{P}^{-}\) \\ \hline \end{tabular}
\end{table} TABLE I: Notation Description.
graph as the graph representation. Hypersphere learning is then applied to cluster all normal graphs into a central distribution, distinguishing them from anomalous ones.
## III Log Anomaly Detection Framework
In this section, we introduce GLAD, a graph-based framework that learns structural, content, and sequential features among logs for anomaly detection, as shown in Figure 2.
### _Preliminaries_
We first define several important terminologies pertinent to our work. The notions are summarized in Table I.
A **log** is a sequence of tokens \(e=\left\{x_{1},...,x_{|e|}\right\}\), where \(x_{i}\) denotes the \(i\)-th token and \(|e|\) is the log length.
A **log sequence** is a series of logs ordered chronologically within an observed time window \(S=\left\{e_{1},...,e_{|S|}\right\}\), where \(e_{i}\) represents the \(i\)-th log and \(|S|\) denotes the total number of logs in a time window.
For a log sequence \(S_{t}\) in time window \(t\), we construct a **dynamic graph**\(\mathcal{G}_{t}=(\mathcal{V}_{t},\mathcal{E}_{t},\mathbf{X}_{t},\mathbf{A}_{t})\), where \(\mathcal{V}_{t}\), \(\mathcal{E}_{t}\) denote the union of vertices and the union of edges, \(\mathbf{X}_{t}\in\mathbb{R}^{n\times d}\) and \(\mathbf{A}_{t}\in\mathbb{R}^{n\times n}\) are its attribute and adjacency matrices. Note that the dynamic graph used in this paper is an undirected, weighted, and attributed heterogeneous graph.
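For concreteness, one minimal way to materialize a snapshot \(\mathcal{G}_{t}\) in code is as a PyTorch Geometric `Data` object (the framework used in our implementation); the node counts and edges below are purely illustrative.

```python
import torch
from torch_geometric.data import Data

# One snapshot G_t = (V_t, E_t, X_t, A_t): n nodes with d-dimensional
# attributes, and undirected weighted edges stored in COO form
# (each undirected edge appears in both directions).
n, d = 4, 768
x = torch.randn(n, d)                              # X_t: node attribute matrix
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])    # E_t as source/target rows
edge_weight = torch.tensor([3., 3., 1., 1., 2., 2.])  # connection counts
snapshot = Data(x=x, edge_index=edge_index, edge_weight=edge_weight)
```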
### _Log Graph Construction_
To build graph representations from log sequences, we propose a prompt-based model to extract fields from log messages. The extracted fields, along with the parsed log events via a log parser, are interconnected following pre-defined principles to construct dynamic graphs. Subsequently, we employ a pre-trained Sentence-BERT [47] to capture the semantics of each node using its content information. The encoded hidden representations for each node are treated as its attributes, while the adjacency matrix represents the structure of the graph. These node attributes and adjacency matrices are collectively used to detect anomalous edges.
**Prompt-Based Few-Shot Field Extraction.** Real-world log datasets contain substantial log events and log fields with diverse syntactic formats, making manual annotations virtually infeasible. While existing off-the-shelf tools [48, 49] employ either rule-based or search-based algorithms to extract event templates and fields from raw log messages, their effectiveness is limited. They work well with fields exhibiting fixed syntax patterns, such as _IP_, _email_, and _URL_, but falter with those that have flexible syntax patterns, like _user_ and _service_.
To overcome this challenge, we approach log field extraction as a NER task and propose a prompt-based few-shot learning method using BART [50] that excels in identifying log fields in low-resource scenarios. We define 15 common log field types vital for system monitoring by referring to common log ontology [24, 25, 26]. These include _IP_, _email_, _process ID_ (_pid_), _user ID_ (_uid_), _user name_, _timestamp_, _service_, _server_, _file path_, _URL_, _port_, _session_, _duration_, _domain_, and _version_.
We frame the field extraction as a seq2seq learning process, as shown in Figure 3(a). Given a log message \(e=\left\{x_{1},...,x_{|e|}\right\}\), which contains a set of gold fields \(E=\left\{ent_{1},...,ent_{|E|}\right\}\) and a label set \(Y=\left\{l_{1},...,l_{|E|}\right\}\), we create a target sequence (prompt) \(\mathbf{P}_{l_{k},x_{i:j}}=\left\{p_{1},...,p_{m}\right\}\) for each candidate text span \(x_{i:j}\) and its label \(l_{k}\). Specifically, \(\mathbf{P}\) is a positive prompt \(\mathbf{P}^{+}\) if the text span is a gold field (\(x_{i:j}\in E\)), e.g., "\(\langle x_{i:j}\rangle\) is a/an \(\langle l_{k}\rangle\) entity"; otherwise, it is a negative prompt \(\mathbf{P}^{-}\), e.g., "\(\langle x_{i:j}\rangle\) is not a named entity".
During training, we create prompts using gold fields following [51, 52]. For each log message \(e\), we create positive pairs \((e,\mathbf{P}^{+})\) by traversing all its gold fields and negative pairs \((e,\mathbf{P}^{-})\) by randomly sampling non-entity text spans. For efficiency, we limit spans to 1\(\sim\)5-grams, i.e., roughly \(5n\) candidate negative prompts are considered for a log message of length \(n\). After sampling, the number of negative pairs is three times that of positive pairs. Given a sequence pair \((e,\mathbf{P})\), we feed the log message \(e\) to the encoder of BART, whose hidden size is \(d_{h}\), and obtain the hidden states \(\mathbf{h}^{enc}\in\mathbb{R}^{d_{h}}\):
\[\mathbf{h}^{enc}=\text{Encoder}(x_{1:|e|}) \tag{1}\]
At the \(c\)-th decoding step, \(\mathbf{h}^{enc}\) and previous output tokens \(p_{1:c-1}\) are used to generate a representation via attention [53]:
\[\mathbf{h}^{dec}_{c}=\text{Decoder}(\mathbf{h}^{enc},p_{1:c-1}) \tag{2}\]
The conditional probability of a word \(p_{c}\) is defined as:
\[P(p_{c}|p_{1:c-1},e)=\text{softmax}(\mathbf{h}^{dec}_{c}\mathbf{W}_{ner}+ \mathbf{b}_{ner}) \tag{3}\]
Fig. 3: Illustration of prompt-based few-shot field extraction.
Fig. 2: Overview of our GLAD framework. GLAD first extracts log fields and events and connects them to construct dynamic log graphs, where node features are text embeddings. These graphs, along with their sequential dependencies, are jointly encoded to identify anomalous edges.
where \(\mathbf{W}_{ner}\in\mathbb{R}^{d_{h}\times|V|}\) and \(\mathbf{b}_{ner}\in\mathbb{R}^{|V|}\). Here \(|V|\) denotes the vocab size of BART. The decoding objective is the Cross-Entropy (CE) loss for prompt with length \(m\):
\[\mathcal{L}_{ner}=-\sum_{c=1}^{m}\log P(p_{c}|p_{1:c-1},e) \tag{4}\]
During inference, we enumerate all possible 1\(\sim\)5-grams text spans \(x_{i:j}\) for a log message \(e\) and compute scores for each prompt \(\mathbf{P}_{l_{k},x_{i:j}}=\{p_{1},...,p_{m}\}\) as follows:
\[f(\mathbf{P}_{l_{k},x_{i:j}})=\sum_{c=1}^{m}\log P(p_{c}|p_{1:c-1},e) \tag{5}\]
For each traversed text span \(x_{i:j}\), we compute the score \(f(\mathbf{P}_{l_{k},x_{i:j}}^{+})\) for every entity type and \(f(\mathbf{P}_{x_{i:j}}^{-})\) for the non-entity type. The type \(l_{k}^{*}\) that garners the highest score is assigned to \(x_{i:j}\). This iterative process ensures the extraction of all relevant fields, as depicted in Figure 3(b).
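The scoring loop of Eq. 5 can be sketched with the Hugging Face `transformers` API as follows. For clarity we score only two candidate prompts for one span using the base (not fine-tuned) _facebook/bart-base_ checkpoint; in practice the fine-tuned model would be used, and the helper name `prompt_score` is ours.

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").eval()

@torch.no_grad()
def prompt_score(log_message: str, prompt: str) -> float:
    """Summed log-probability of the prompt given the log message (Eq. 5)."""
    enc = tokenizer(log_message, return_tensors="pt")
    dec = tokenizer(prompt, return_tensors="pt")
    out = model(input_ids=enc.input_ids,
                attention_mask=enc.attention_mask,
                labels=dec.input_ids)
    # out.loss is the mean token-level NLL; rescale to a summed log-prob.
    return float(-out.loss * dec.input_ids.shape[1])

log = "FAILED LOGIN for della to imap://localhost/"
candidates = {
    "user": "della is a user entity",
    "none": "della is not a named entity",
}
best_type = max(candidates, key=lambda k: prompt_score(log, candidates[k]))
print(best_type)  # the span "della" is assigned the highest-scoring type
```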
**Graph Structure Configuration.** To model the relation between fields and events across different log messages, we use a sliding window with a fixed time interval to snapshot a batch of log messages and construct a corresponding graph. Specifically, each log instance consists of a parsed event template (obtained via a log parser such as Drain [54]), e.g., "FAILED LOGIN for \(\langle*\rangle\) to \(\langle*\rangle\)", along with a list of extracted fields, e.g., ["della", "imap://localhost/"] with corresponding types, e.g., [_user_, _server_]. We then connect the event template to each extracted field to capture inherent behaviors in the log, with the number of connections as the edge weight. In the resultant undirected graph, any two log instances that share any of the defined nodes are indirectly connected, thereby indicating their implicit relations.
**Graph Node Attribute Configuration.** We define types of nodes based on the corresponding event and field types, such as _server_ for "imap://localhost/". For each node, we define its input text format and employ a pre-trained Sentence-BERT [47] to learn the sentence embedding as its attribute. Specifically, for log events, we directly use their templates as the encoder input texts, while for log fields we use our defined prompts as the input texts, e.g., "imap://localhost/ is a server entity". The output hidden states for each input text capture the node semantics and are used as node features for constructing attributed graphs.
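A sketch of this construction for one time window is given below, using `networkx` for the graph and the `sentence-transformers` package for node attributes; the `all-MiniLM-L6-v2` checkpoint is a lightweight stand-in for the Sentence-BERT encoder, and the sample log entry is illustrative.

```python
from collections import Counter
import networkx as nx
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for Sentence-BERT

# Parsed logs within one sliding window: event template plus extracted fields.
window_logs = [
    {"event": "FAILED LOGIN for <*> to <*>",
     "fields": [("della", "user"), ("imap://localhost/", "server")]},
    # ... remaining logs in the 60-second window
]

G = nx.Graph()
edge_counts = Counter()
for log in window_logs:
    event = ("event", log["event"])
    G.add_node(event, text=log["event"])  # event template as input text
    for value, ftype in log["fields"]:
        field = (ftype, value)
        G.add_node(field, text=f"{value} is a {ftype} entity")  # prompt text
        edge_counts[(event, field)] += 1
for (u, v), w in edge_counts.items():
    G.add_edge(u, v, weight=w)  # connection count as edge weight

# Node attributes: sentence embeddings of each node's input text.
nodes = list(G.nodes)
embeddings = encoder.encode([G.nodes[n]["text"] for n in nodes])
```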
### _Temporal-Attentive Graph Edge Anomaly Detection_
We now introduce our proposed temporal-attentive graph edge anomaly detection method, as illustrated in Figure 4, which operates on the dynamic graphs constructed for logs within corresponding time slots. Specifically, a Graph Convolutional Network (GCN) is first used to encode the structural information for each graph. Then, a transformer encoder is deployed to learn the temporal dependencies within the sequence of dynamic graphs. For each graph, we sample certain negative edges and compute the edge score using the learned hidden states. The process concludes by employing a pair-wise margin loss to minimize the positive edge scores and maximize the negative edge scores, in adherence with the one-class training objective.
**GCN Shared Encoder.** At time window \(t\), we receive a graph snapshot \(\mathcal{G}_{t}=(\mathcal{V}_{t},\mathcal{E}_{t},\mathbf{X}_{t},\mathbf{A}_{t})\), where \(\mathbf{X}_{t}\in\mathbb{R}^{n\times d}\) and \(\mathbf{A}_{t}\in\mathbb{R}^{n\times n}\) represent its attribute and adjacency matrices respectively. We apply GCN [27] to capture both its attribute and structural features. While there exist advanced GNNs, such as Graph Transformer (GT) [31], we found that GCN offers a blend of efficiency and competitive performance. It considers high-order node proximity when encoding the embedding representations, thereby alleviating network sparsity beyond the observed links among nodes [32]. For an \(L\)-layered GCN, each layer can be expressed with the function:
\[\begin{split}\mathbf{H}_{t}^{(l)}=f_{\sigma}(\mathbf{H}_{t}^{(l-1)},\mathbf{\hat{A}}_{t}|\mathbf{W}_{g}^{(l)})\\ f_{\sigma}(\mathbf{H}_{t}^{(l-1)},\mathbf{\hat{A}}_{t}|\mathbf{W}_{g}^{(l)})=\sigma(\mathbf{\hat{D}}_{t}^{-\frac{1}{2}}\mathbf{\hat{A}}_{t}\mathbf{\hat{D}}_{t}^{-\frac{1}{2}}\mathbf{H}_{t}^{(l-1)}\mathbf{W}_{g}^{(l)})\end{split} \tag{6}\]
where \(\mathbf{W}_{g}^{(l)}\) is a learnable weight matrix for the \(l\)-th layer, \(l\in[1,L]\). \(\mathbf{\hat{A}}_{t}=\mathbf{A}_{t}+\mathbf{I}\) denotes the adjacency matrix with added self-loops and \(\mathbf{\hat{D}}_{i,i}=\sum_{j}\hat{A}_{i,j}\) represents its diagonal degree matrix. \(\sigma(\cdot)\) is a non-linear activation function, for which we use ReLU.
We designate the attribute matrix \(\mathbf{X}_{t}\) as the initial hidden state \(\mathbf{H}_{t}^{(0)}\). The resultant embedding \(\mathbf{Z}_{t}=\mathbf{H}_{t}^{(L)}\) captures the nonlinearity of complex interactions between log entities and events within each graph. However, it is still inadequate for detecting anomalies caused by malicious relations due to neglect of temporal features across graph snapshots.
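A minimal version of this shared encoder, assuming the two-layer, 768-to-1,024 configuration reported in our implementation details, can be written with PyG as follows.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class SharedGCNEncoder(torch.nn.Module):
    """Two-layer GCN (Eq. 6), applied to each graph snapshot independently."""
    def __init__(self, in_dim: int = 768, hid_dim: int = 1024):
        super().__init__()
        # GCNConv normalizes with D^{-1/2} (A + I) D^{-1/2} internally.
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, hid_dim)

    def forward(self, x, edge_index, edge_weight=None):
        h = F.relu(self.conv1(x, edge_index, edge_weight))
        return self.conv2(h, edge_index, edge_weight)  # Z_t = H_t^{(L)}
```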
**Temporal-Attentive Transformer.** Given the chronologically generated nature of system logs, and the logical dependencies that exist between past and present log states, we employ a transformer encoder to incorporate the temporal features of the entire sequence into the latent space.
We receive a sequence of node embeddings \(\{\mathbf{Z}_{1},...,\mathbf{Z}_{N}\}\) for all graphs. Note that the nodes in each graph form an unordered set, \(\mathcal{V}_{t}=\left\{v_{1},...,v_{|\mathcal{V}_{t}|}\right\}\), rather than a sequence. We propose a Set Transformer (ST) to eliminate these order dependencies when encoding node embeddings. Specifically, we first compute position embeddings based on each graph's position in the sequence, assigning all nodes belonging to a graph the identical position embedding \(\mathbf{E}_{p}\). Subsequently, the embedding
Fig. 4: Overview of our temporal-attentive graph edge anomaly detection framework. Edges highlighted in red are negative edges. \(\mathbf{Y}_{t}\) and \(\mathbf{Y}_{t}^{neg}\) denote the aggregation of positive and negative edge scores, respectively, for a specific graph \(\mathcal{G}_{t}\).
for the graph at time \(t\) (with position \(p\)) is determined as \(\mathbf{E}_{t}=\mathbf{E}_{p}+\mathbf{Z}_{t}\), and the representation sequence as \(\mathbf{E}_{\mathcal{S}}=\{\mathbf{E}_{1},...,\mathbf{E}_{N}\}\). The representation sequence is then fed into self-attention blocks to derive long-term representations \(\boldsymbol{\mathcal{H}}_{\mathcal{S}}\):
\[\boldsymbol{\mathcal{H}}_{\mathcal{S}}^{(l+1)}=\text{FFN}(\text{Attention}( \boldsymbol{\mathcal{H}}_{\mathcal{S}}^{(l)})) \tag{7}\]
where \(l\) denotes the layer index, with the initial hidden state \(\boldsymbol{\mathcal{H}}_{\mathcal{S}}^{(0)}=\mathbf{E}_{\mathcal{S}}\). We formulate subsequences using a sliding window of size \(k\). Consequently, each subsequence comprises unique local information, pivotal in determining whether the entire sequence is anomalous. For a subsequence of graph node embeddings \(\{\mathbf{Z}_{t-k+1},...,\mathbf{Z}_{t}\}\), corresponding to graphs \(\{\mathcal{G}_{t-k+1},...,\mathcal{G}_{t}\}\), its representation can be expressed as \(\mathbf{E}_{k}=\{\mathbf{E}_{t-k+1},...,\mathbf{E}_{t}\}\). The same operations are executed to obtain short-term representations \(\boldsymbol{\mathcal{H}}_{k}\) by considering the \(k\) local graphs.
We then concatenate the encoded long-term \(\boldsymbol{\mathcal{H}}_{\mathcal{S}}\) and short-term representations \(\boldsymbol{\mathcal{H}}_{k}\) to form the final node features:
\[\boldsymbol{\mathcal{H}}=[\boldsymbol{\mathcal{H}}_{\mathcal{S}}||\boldsymbol {\mathcal{H}}_{k}]_{dim=1} \tag{8}\]
where \([\cdot||\cdot]_{dim=1}\) represents the concatenation operator of two matrices over the column-wise dimension. Consequently, the final node representations \(\boldsymbol{\mathcal{H}}_{t}\) for graph \(\mathcal{G}_{t}\) capture the structural, content, and temporal features.
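A sketch of this set-style encoder is shown below: every node of graph \(\mathcal{G}_{t}\) receives the same position embedding, and self-attention runs over the pooled node set. The layer count, head count, and the `SetTransformerEncoder` name are our illustrative choices.

```python
import torch
import torch.nn as nn

class SetTransformerEncoder(nn.Module):
    """Self-attention over node sets: one shared position embedding per graph,
    so attention is order-invariant within a graph (Eq. 7)."""
    def __init__(self, dim=1024, max_graphs=512, n_layers=2, n_heads=8):
        super().__init__()
        self.pos = nn.Embedding(max_graphs, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, z_list):  # z_list[t]: (n_t, dim) node embeddings of G_t
        tokens = torch.cat([z + self.pos.weight[t]  # E_t = Z_t + E_p
                            for t, z in enumerate(z_list)], dim=0)
        h = self.encoder(tokens.unsqueeze(0)).squeeze(0)
        return torch.split(h, [z.shape[0] for z in z_list], dim=0)

# Long-term view: encode the whole sequence; short-term view: re-encode the
# last k graphs; concatenate both views per node to form H_t (Eq. 8).
```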
**Edge-level Training objective.** Until now, we have established the hidden states of nodes \(\boldsymbol{\mathcal{H}}_{t}\) at time window \(t\). For each edge \((i,j,w)\in\mathcal{E}_{t}\) with weight \(w\), we retrieve the embeddings for the \(i\)-th and \(j\)-th node in \(\boldsymbol{\mathcal{H}}_{t}\). This allows us to calculate its anomalous score as follows:
\[f(i,j,w)=w\cdot\sigma(\mathbf{W}_{1}\mathbf{h}_{i}+\mathbf{W}_{2}\mathbf{h}_{ j}-\mu) \tag{9}\]
where \(\mathbf{h}_{i}\) and \(\mathbf{h}_{j}\) are the hidden states of the \(i\)-th and \(j\)-th node respectively, and \(\sigma(\cdot)\) is the sigmoid function. \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are the weights of two fully-connected layers, and \(\mu\) is a hyperparameter in the score function. Note that this single-layer network can be replaced by more complex networks.
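Eq. 9 amounts to a pair of linear maps and a sigmoid. A sketch is given below, assuming the node feature dimension is the 2,048-dimensional concatenation of the long- and short-term views.

```python
import torch
import torch.nn as nn

class EdgeScorer(nn.Module):
    """Anomalous edge score f(i, j, w) = w * sigmoid(W1 h_i + W2 h_j - mu)."""
    def __init__(self, dim: int = 2048, mu: float = 0.3):
        super().__init__()
        self.w1 = nn.Linear(dim, 1, bias=False)  # W_1
        self.w2 = nn.Linear(dim, 1, bias=False)  # W_2
        self.mu = mu

    def forward(self, h, edge_index, edge_weight):
        h_i, h_j = h[edge_index[0]], h[edge_index[1]]
        logits = self.w1(h_i).squeeze(-1) + self.w2(h_j).squeeze(-1) - self.mu
        return edge_weight * torch.sigmoid(logits)
```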
To overcome the scarcity of anomaly data during training, we build a model to optimize one-class (normal) data instead. In essence, this means that all edges are considered normal during training. Inspired by the sampling method proposed in [55], we apply a Bernoulli distribution with parameter \(\frac{d_{i}}{d_{i}+d_{j}}\) for sampling anomalous edges according to the node degree \(d\). In particular, for each normal edge \((i,j)\) in the graph, we generate an anomalous edge by either replacing node \(i\) with node \(i^{\prime}\) (with a probability of \(\frac{d_{i}}{d_{i}+d_{j}}\)) or replacing node \(j\) with node \(j^{\prime}\) (with a probability of \(\frac{d_{j}}{d_{i}+d_{j}}\)). Here, \(d_{i}\) and \(d_{j}\) are the degrees of the \(i\)-th and \(j\)-th node respectively. Realizing that the generated edges may still be normal [56, 41], we propose a margin-based pairwise edge loss in training rather than a strict objective function such as cross entropy, to distinguish between existing edges and generated edges:
\[\mathcal{L}_{e}=\sum_{t=1}^{N}\sum_{(i,j,w)\in\mathcal{E}_{t}}\sum_{(i^{\prime},j^{\prime},w)\notin\mathcal{E}_{t}}\max\left\{0,\gamma+f(i,j,w)-f(i^{\prime},j^{\prime},w)\right\} \tag{10}\]
where \(\gamma\in(0,1)\) is the margin between the likelihood of normal and anomalous edges, and \(f(\cdot,\cdot,\cdot)\) is the aforementioned anomalous edge score function. Minimizing the loss function \(\mathcal{L}_{e}\) results in a smaller \(f(i,j,w)\) and a larger \(f(i^{\prime},j^{\prime},w)\), thereby achieving our one-class optimization goal.
To enhance efficiency, we aim to select edges of high significance for training. Specifically, for each pair of a normal edge \((i,j,w)\) and a negatively sampled edge \((i^{\prime},j^{\prime},w)\), we discard the pair if \(f(i,j,w)>f(i^{\prime},j^{\prime},w)\) and retain it otherwise for pair-wise optimization. The intuition is that some edges in snapshots may not be entirely normal after training, and we aim to increase the reliability of the normal edges that are used to learn graph representations. This selective negative sampling paradigm bolsters the stability of GLAD in training.
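The degree-weighted corruption and the selective pairwise margin loss can be sketched as follows; the helper names are ours.

```python
import torch

def sample_negative_edges(edge_index, degrees):
    """Corrupt each edge (i, j): replace i with probability d_i/(d_i + d_j),
    otherwise replace j, drawing the substitute node uniformly at random."""
    src, dst = edge_index[0].clone(), edge_index[1].clone()
    p_src = degrees[src].float() / (degrees[src] + degrees[dst]).float()
    corrupt_src = torch.bernoulli(p_src).bool()
    substitutes = torch.randint(0, degrees.shape[0], (src.shape[0],))
    src[corrupt_src] = substitutes[corrupt_src]
    dst[~corrupt_src] = substitutes[~corrupt_src]
    return torch.stack([src, dst])

def selective_margin_loss(pos_scores, neg_scores, gamma=0.5):
    """Eq. 10 with selective sampling: drop pairs where the normal edge
    already scores higher than its corrupted counterpart."""
    keep = pos_scores <= neg_scores
    return torch.clamp(gamma + pos_scores[keep] - neg_scores[keep], min=0).sum()
```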
**Multi-granularity Learning.** Besides the margin loss that differentiates normal and anomalous edges, we introduce an ad-hoc heuristic to form a "soft-margin" decision boundary. That is, we select the graph representation whose distance to a center ranks at a specific percentile to set the decision boundary's radius [12]. To this end, we first formulate the graph representation for \(\mathcal{G}_{t}\) by maxpooling its node representations:
\[\boldsymbol{\mathcal{R}}_{t}=\text{maxpooling}(\boldsymbol{\mathcal{H}}_{t}) \tag{11}\]
At the graph-level, anomalous graphs can be detected via one-class classification training. The objective \(\mathcal{L}_{g}\) is to learn a minimized hypersphere that encloses graph representations:
\[\min_{R,\mathbf{c},\mathbf{c}}R^{2}+C\sum_{t=1}^{N}\varepsilon_{t} \tag{12}\] \[s.t.\ ||\boldsymbol{\mathcal{R}}_{t}-\mathbf{c}||^{2}\leq R^{2}+ \varepsilon_{t},\varepsilon_{t}\geq 0,\ \forall t\]
where \(\mathbf{c}\) and \(R\) are the center and radius of the hypersphere respectively, \(||\boldsymbol{\mathcal{R}}_{t}-\mathbf{c}||^{2}\) is the distance between a graph representation and the center, \(\varepsilon_{t}\) is a slack variable introduced for \(\boldsymbol{\mathcal{R}}_{t}\) to accommodate outliers during training, and \(C\) is a hyperparameter that balances the trade-off between the errors \(\varepsilon_{t}\) and the volume of the sphere. The objective defined in Eq. 12 aims to cluster all training samples within a minimum hypersphere using Lagrange multipliers, similar to SVDD [42]. We propose a multi-granularity loss function that considers both edge-level and graph-level objectives:
\[\mathcal{L}=\mathcal{L}_{e}+\alpha\mathcal{L}_{g}+\frac{\lambda}{2}\sum(|| \mathbf{W}_{g}||_{2}^{2}+||\mathbf{W}_{a}||_{2}^{2}+||\mathbf{W}_{1}||_{2}^{2}+ ||\mathbf{W}_{2}||_{2}^{2}) \tag{13}\]
where \(\mathbf{W}_{a}\) denotes the weights of temporal-attentive transformers. Hyperparameter \(\alpha\) controls the trade-off between edge-level and graph-level violations, and \(\lambda\) modulates the weight decay L2 regularizer to avoid overfitting.
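A sketch of the graph-level term and the combined objective is given below; here the radius is treated as a fixed value chosen from a percentile of training distances, one simple reading of the soft-margin heuristic, and the function names are ours.

```python
import torch

def graph_level_loss(node_reps_list, center, radius, C=1.0):
    """Soft-boundary hypersphere term (Eqs. 11-12): maxpool node features
    into graph representations R_t and penalize distances beyond the radius."""
    reps = torch.stack([h.max(dim=0).values for h in node_reps_list])  # R_t
    slack = torch.clamp((reps - center).pow(2).sum(dim=1) - radius**2, min=0)
    return radius**2 + C * slack.mean()

def total_loss(l_edge, l_graph, weight_tensors, alpha=1.0, lam=5e-7):
    """Multi-granularity objective (Eq. 13) with an L2 weight-decay term."""
    l2 = sum(w.pow(2).sum() for w in weight_tensors)
    return l_edge + alpha * l_graph + 0.5 * lam * l2
```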
## IV Experiments
### _Experimental Settings_
We evaluate our method in both the new anomalous relation detection setting and the traditional setting: (1) Edge-level detection: it aims at detecting anomalous relations in a log sequence, which are the edges in a log graph for a given time window. For each dataset, we label edges connected to the
annotated anomalous logs as anomalies under this new setting. (2) Interval-level detection: it aims at detecting anomaly time windows which contains anomalous logs. We use this setting for a fair comparison with traditional log anomaly detection methods and more recent graph-based anomaly detection methods. In this setting, we treat a time window as anomaly if it contains any labeled anomalous logs. This is equivalent to the graph-level detection in our context.
**Datasets.** Among several potential candidates, we choose three publicly available datasets or platforms that have been examined by previous research. We collect log sequences from these data sources to evaluate the effectiveness of our approach. Below we describe the details of the three datasets; their statistics are shown in Table II.
* BlueGene/L (BGL) [57]. BGL is an open dataset of logs collected from a BlueGene/L supercomputer system with 131,072 processors and 32,768GB memory. The logs can be categorized into alert (anomalous) and non-alert (normal) messages identified by alert category tags.
* Austrian Institute of Technology (AIT) [58]. AIT (v1.1) is collected from four independent testbeds. Each of the web servers runs Debian and a set of installed services such as Apache2, PHP7, Exim4, Horde, and Suricata. Furthermore, the data includes logs from 11 Ubuntu hosts on which user behaviors were simulated.
* Sock Shop Microservices [59]. Sock Shop is a test bed that can be used to illustrate microservices architectures, demonstrate platforms at talks and meetups, or serve as a training and education tool. Specifically, we deploy the platform and generate anomalous relations by adding shopping items that a customer has not browsed, or by introducing a large number of items in certain time periods.
**Baselines.** We compare the performance of GLAD with a wide range of baselines. For the edge-level setting, we consider five graph-based anomaly detection baselines. For fairness, except for AddGraph [41] that directly identifies anomalous edges, we use their reconstructed node feature vectors to compute edge scores and evaluate their edge-level performance.
* DOMINANT [32]. DOMINANT contains a GCN encoder, a structure reconstruction decoder, and an attribute reconstruction decoder. It uses weighted reconstruction errors as the node anomaly score.
* CONAD [35]. CONAD first generates augmented graphs based on prior human knowledge of anomaly types, then applies a Siamese GNN to detect node anomalies.
* AnomalyDAE [33]. AnomalyDAE uses a structure encoder-decoder to learn structure reconstruction errors and an attribute encoder-decoder to learn feature reconstruction errors. These errors are balanced to form an anomaly score for each node.
* MLPAE [60]. MLPAE applies a Multi-Layer Perceptron (MLP) autoencoder to detect anomalous nodes without considering structure information in a graph.
* AddGraph [41]. AddGraph incorporates a temporal-attentive RNN into a GCN encoder to learn structure and attribute representations in dynamic graphs. It learns edge scores according to pairwise node latent vectors and detects anomalous edges.
For the interval-level setting, existing works focusing on logs can be divided into two categories: 1) sequence-based methods, including traditional methods such as PCA [11], Isolation Forest [9], OCSVM [7] and deep learning-based methods such as DeepLog [6], LogAnomaly [61], LogBERT [12]; and 2) graph-based methods, including LogGD [62], LogFlash [63], and DeepTraLog [64].
* Principal Component Analysis (PCA) [11]. PCA builds a counting matrix according to the log event frequency and then maps the matrix into a latent space to detect anomalous sequences.
* Isolation Forest (iForest) [9]. An unsupervised learning method that represents features as tree structures for anomaly detection.
* One-class SVM (OCSVM) [7]. A well-known one-class classification method by building a feature matrix based on the norm data for anomaly detection.
* DeepLog [6]. DeepLog uses LSTM to capture patterns of normal log sequences and further identifies anomalous log sequences based on log key predictions.
* LogAnomaly [61]. LogAnomaly proposes template2vec to extract log template semantics and use LSTM to detect sequential and quantitative log anomalies.
* LogBERT [12]. LogBERT uses BERT to encode each log sequence into a feature space by self-supervision, and detect anomalous log sequences via hypersphere learning.
* LogGD [62]. LogGD constructs directed graph by connecting log templates following sequential relations, and identifies anomalies via graph classification based on Graph Transformer network.
* LogFlash [63]. LogFlash builds a time-weighted control flow graph (TCFG), where nodes are log templates and edges represent the transition between them, and compare log streams with TCFG to find deviations.
* DeepTraLog [64]. DeepTraLog constructs trace event graph (TEG) to represent various relations between the span/log events of the trace. It learns a gated GNN-based SVDD representation for each TEG and identifies anomalies via hypersphere learning.
To evaluate the impact of individual components in GLAD on the final performance, we also conduct experiments on different GLAD variants:
* GLAD\({}^{\xi}\). In this variant, GLAD applies a rule-based (instead of prompt-based) field extraction during graph
\begin{table}
\begin{tabular}{l|c c c} \hline \hline & BGL & AIT & Sock Shop \\ \hline \# Log Messages & 4,713,494 & 1,074,902 & 14,674 \\ \# Anomalies & 348,460 & 45,651 & 408 \\ \# Nodes & 4,393,108 & 1,663,188 & 10,340 \\ Avg. degree & 11.80 & 15.85 & 13.84 \\ \# Edges & 25,919,022 & 13,180,752 & 71,540 \\ \# Anomalous edges & 1,572,696 & 567,906 & 2,468 \\ \# Graphs & 36,169 & 15,464 & 270 \\ \# Anomalous graphs & 2,659 & 1,078 & 16 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Statistics of the three datasets.
configuration. This allows us to evaluate the significance of accurate identification of system entities and their interrelations with log events.
* GLAD\({}^{l}\). This version of GLAD is designed without the transformer encoder, removing its ability to capture temporal features. The purpose is to investigate the significance of temporal features.
* GLAD\({}^{\dagger}\). This variant removes multi-granularity learning from the training process of GLAD, thereby examining the importance of global features in detecting anomalies.
**Metrics.** We measure model performance on anomaly detection using three widely-used classification metrics (Precision, Recall, and F-1 score) and two ranking metrics (AUC and AUPR).
**Implementation Details.** All GNN models in our research are built on the PyTorch Geometric (PyG) framework. These models are configured with two layers, with input channels set at 768 and output channels at 1,024. For Sentence-BERT and BART, we use their pre-trained models, namely _bert-base-uncased_ and _facebook/bart-base_, from Hugging Face. For field extraction, we either fine-tune BART over 100 epochs using 10-shot training samples or use pre-defined regular expressions. For anomaly detection, we split the log sequences in a 6:1:3 ratio into training (60%), validation (10%), and test (30%) sets. We apply an unsupervised learning paradigm [61, 64, 7] where only normal log sequences are used for training, and train each model for 100 epochs. Hyperparameters are adjusted via grid search on the validation set. Specifically, we use the AdamW optimizer [65] with a learning rate of 1e-3, \(\mu\) of 0.3, \(\gamma\) of 0.5, global weight \(\alpha\) of 1, and weight decay \(\lambda\) of 5e-7. Our analysis operates with a window size of 60 seconds. Our experiments are conducted at a leading industry company using an NVIDIA RTX A4500 GPU. GLAD has been deployed to monitor internal cloud system log data for anomaly detection.
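The following sketch illustrates the reported encoder and optimizer configuration (two PyG layers, 768 input channels, 1,024 output channels, and the grid-searched AdamW settings). The class name `GLADEncoder`, the choice of `GCNConv` as the default layer, and the way the two layers share the output width are our assumptions for illustration.

```python
import torch
from torch_geometric.nn import GCNConv

class GLADEncoder(torch.nn.Module):
    """Two-layer GNN encoder with 768 input channels (Sentence-BERT
    embeddings) and 1,024 output channels, as configured above."""
    def __init__(self, in_channels=768, out_channels=1024):
        super().__init__()
        self.conv1 = GCNConv(in_channels, out_channels)
        self.conv2 = GCNConv(out_channels, out_channels)

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        return self.conv2(h, edge_index)

model = GLADEncoder()
# AdamW with the hyperparameters reported above (lr 1e-3, weight decay 5e-7).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=5e-7)
```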
### _Experimental Results_
**Edge-level Performance.** We first compare GLAD with baseline methods in terms of their edge-level performance. As shown in Table III, we observe that: (1) GLAD outperforms all baseline methods in F-1, AUC, and AUPR scores. This demonstrates the efficacy of our approach in identifying anomalous relations between log fields and log events. (2) While some baseline methods achieve high recall scores (e.g., DOMINANT, CONAD, and AddGraph on BGL, and AnomalyDAE and MLPAE on Sock Shop) or high precision scores (e.g., AnomalyDAE on AIT and AddGraph on BGL), their F-1 scores are relatively low. This suggests that they either adopt an overly cautious stance towards anomalies or produce a high number of false positives by erroneously classifying many samples as anomalies. (3) Edge-level anomaly detection is notably more challenging than interval-level anomaly detection: on all three datasets, no method achieves a precision above 60% or an F-1 score above 70%. (4) The methods optimized at the edge level, i.e., AddGraph and GLAD, achieve better precision, F-1, AUC, and AUPR scores across all datasets. In particular, they exhibit larger advantages over other methods on our generated Sock Shop dataset, which contains specific anomalous relations, substantiating our hypothesis that edge-level learning can better detect anomalous relations that are elusive to other methods. (5) Some methods excel on one dataset but flounder on others; AnomalyDAE, for instance, achieves a 61.92% F-1 score on AIT but drops to 21.90% on Sock Shop. (6) Compared to AddGraph, GLAD achieves superior performance, especially in recall, across all three datasets. This demonstrates the advantages of our graph configuration and graph-based edge-level anomaly detection method.
**Interval-level Performance.** We further evaluate GLAD under the common interval-level protocol to demonstrate its effectiveness. Due to space limits, we only present the results on the widely used BGL dataset in Table IV. We observe that: (1) Compared to the edge-level detection results, GLAD achieves much higher Precision (and F-1). Significantly, GLAD surpasses all methods with a leading F-1 of 92.66%, AUC
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline Method & Precision & Recall & F-1 & AUC & AUPR \\ \hline PCA & 9.04 & **98.12** & 16.56 & 35.64 & 9.03 \\ iForest & **100.00** & 14.74 & 25.70 & 57.37 & 21.64 \\ OCSVM & 1.09 & 12.48 & 2.00 & 28.22 & 7.32 \\ \hline DeepLog & 89.02 & 80.54 & 84.57 & 89.26 & 70.17 \\ LogAnomaly & 91.40 & 79.32 & 84.93 & 92.98 & 75.21 \\ LogBERT & 91.47 & 92.69 & 92.07 & 96.33 & 82.70 \\ \hline LogGD & 90.89 & 93.31 & 92.08 & 96.91 & 81.74 \\ LogFlash & 82.46 & 86.73 & 84.54 & 86.78 & 74.52 \\ DeepTraLog & 79.48 & 97.68 & 87.64 & 84.77 & 70.92 \\ \hline GLAD\({}^{\xi}\) & 88.35 & 89.86 & 89.10 & 95.63 & 80.44 \\ GLAD\({}^{l}\) & 89.24 & 90.18 & 89.51 & 96.07 & 79.91 \\ GLAD\({}^{\dagger}\) & 89.73 & 91.64 & 90.67 & 96.31 & 81.65 \\ GLAD & 90.82 & 94.57 & **92.66** & **98.18** & **84.69** \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Interval-level performance (%) in the BGL dataset. We ran each model 5 times to get the average results.
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c|}{BGL} & \multicolumn{5}{c|}{AIT} & \multicolumn{5}{c}{Sock Shop} \\ \cline{2-16} & Precision & Recall & F-1 & AUC & AUPR & Precision & Recall & F-1 & AUC & AUPR & Precision & Recall & F-1 & AUC & AUPR \\ \hline DOMINANT & 35.09 & 90.68 & 50.60 & 42.99 & 24.30 & 39.94 & 89.56 & 55.24 & 42.01 & 43.39 & 11.59 & **95.63** & 20.08 & 39.83 & 13.33 \\ CONAD & 32.19 & **97.83** & 48.44 & 48.59 & -- & 39.93 & 89.84 & 55.23 & 44.48 & 45.47 & 16.31 & 88.92 & 27.56 & 43.82 & 17.49 \\ AnomalyDAE & 36.34 & 88.27 & 51.48 & 45.31 & 26.03 & **55.12** & 70.63 & 61.92 & 45.49 & 45.17 & 12.40 & 93.93 & 21.90 & 39.70 & 12.53 \\ MLPAE & 35.53 & 82.41 & 49.65 & 43.76 & 24.65 & 39.94 & 89.58 & 55.25 & 43.71 & 43.60 & 11.33 & 93.87 & 20.22 & 38.22 & 11.34 \\ AddGraph & **48.21** & 66.27 & 55.82 & 54.29 & 33.97 & 48.96 & 85.92 & 62.38 & 46.16 & 49.44 & 34.49 & 84.51 & 48.99 & 51.39 & 58.52 \\ GLAD\({}^{\xi}\) & 38.94 & 74.26 & 51.09 & 50.73 & 30.06 & 44.91 & 89.87 & 59.89 & 45.16 & 46.30 & 35.79 & 80.14 & 49.88 & 50.74 & 52.88 \\ GLAD\({}^{l}\) & 40.60 & 73.31 & 52.62 & 50.34 & 30.82 & 45.17 & 90.71 & 60.31 & 45.83 & 46.12 & 32.88 & 82.70 & 46.45 & 48.57 & 52.39 \\ GLAD\({}^{\dagger}\) & 39.53 & 89.56 & 54.85 & 52.97 & 31.07 & 50.08 & **93.30** & 65.18 & 46.29 & 46.53 & 50.86 & 82.76 & 63.01 & 61.85 & 65.42 \\ GLAD & 47.09 & 86.06 & **60.87** & **56.56** & **38.99** & 54.15 & 90.81 & **67.84** & **49.09** & **48.66** & **56.02** & 91.00 & **69.35** & **61.93** & **68.37** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Edge-level performance (%) of GLAD and baseline methods. **Bold** numbers denote the best metric among all the methods. We ran each model 5 times to get the average results.
of 98.18%, and AUPR of 84.69%. This demonstrates the proficiency of GLAD in capturing conventional anomalies in addition to relational ones. (2) Traditional sequence-based methods such as PCA, iForest, and OCSVM show significantly lower performance across all metrics. For instance, PCA suffers severely in Precision (9.04%), while iForest has a notably poor Recall (14.74%). OCSVM has the lowest performance among the three, with a negligible F-1 score of 2.00%. (3) DL sequence-based methods significantly outperform the traditional ones. Among these, LogBERT outperforms DeepLog and LogAnomaly with an F-1 score of 92.07% and an AUC of 96.33%. This showcases the effectiveness of transformers in capturing both long-term dependencies and semantic relations in log sequences. (4) Among graph-based methods, LogGD shows competitive results with an F-1 score of 92.08%, even slightly better than LogBERT. This suggests that combining transformers and graph structures in hypersphere learning could benefit anomaly detection.
**Ablation Study.** With the results of the GLAD variants shown in Tables III and IV, we observe that: (1) The performance gap between GLAD\({}^{\xi}\) and GLAD--roughly a 9% F-1 improvement for edge-level detection on BGL--reveals the benefit of employing prompt-based field extraction in graph configuration, thereby enhancing the effectiveness of GLAD in detecting anomalies. (2) The difference between GLAD\({}^{l}\) and GLAD underscores the significant role of temporal-attentive transformers. With the incorporation of temporal features, GLAD gains over 8% (and 3%) F-1 increases when detecting anomalous relations (and intervals) on the BGL dataset. However, GLAD\({}^{l}\) still outperforms numerous baseline methods, asserting the robustness of our graph-based framework. (3) The comparison between GLAD\({}^{\dagger}\) and GLAD indicates the positive impact of incorporating global features in anomaly detection, as evidenced by the superior F-1, AUC, and AUPR scores of GLAD.
To further illustrate how multi-granularity learning benefits GLAD during training, we record the normalized global distance (\(\mathcal{L}_{g}\) in Eq. 12), edge loss (\(\mathcal{L}_{e}\) in Eq. 10) and validation F-1 scores after each training epoch in BGL dataset. In Figure 5, we observe that: (1) The comparison between GLAD\({}^{\dagger}\) and GLAD in terms of global distance shows that hypersphere learning effectively clusters normal samples in the graph embedding space, i.e., the converged normalized global distance of GLAD is less than half of that of GLAD\({}^{\dagger}\). (2) The comparison between GLAD\({}^{\dagger}\) and GLAD in terms of edge loss and validation F-1 score shows that hypersphere learning further improves the anomaly detection performance, i.e., the edge loss of GLAD decreases more stably and its validation F-1 outperforms that of GLAD\({}^{\dagger}\) in the later training stage.
We also analyze the impact of using different GNN encoders, i.e., GCN [27], SAGE [29], GIN [28], GAT [30], on GLAD's performance in the BGL dataset. Figure 6 reveals that the performance of GLAD remains consistently robust across diverse GNN encoders, though some models excel in specific evaluation metrics, e.g., GCN and GIN show superior performance in terms of F-1, AUC, and AUPR scores. This resilience against changes in DL models demonstrates that GLAD can be flexibly deployed using various combinations of state-of-the-art architectures.
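A minimal sketch of how the GNN backbone can be swapped among the four encoders compared in Figure 6 is given below; the factory function is ours, and the layer hyperparameters (e.g., the GIN MLP, the single GAT head) are illustrative assumptions.

```python
import torch.nn as nn
from torch_geometric.nn import GCNConv, SAGEConv, GINConv, GATConv

def make_conv(name, in_ch, out_ch):
    """Return one message-passing layer of the requested type."""
    if name == "GCN":
        return GCNConv(in_ch, out_ch)
    if name == "SAGE":
        return SAGEConv(in_ch, out_ch)
    if name == "GIN":
        # GIN wraps an MLP that transforms the aggregated neighbor features.
        return GINConv(nn.Sequential(nn.Linear(in_ch, out_ch), nn.ReLU(),
                                     nn.Linear(out_ch, out_ch)))
    if name == "GAT":
        # A single attention head keeps the output dimension at out_ch.
        return GATConv(in_ch, out_ch, heads=1)
    raise ValueError(f"unknown encoder: {name}")
```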
**Field Extraction.** To investigate the effectiveness of our field extraction method, we annotate \(n\) log messages with two prompts (\(\mathbf{P}_{1}\), \(\mathbf{P}_{2}\) in Table V) for each field type and train the corresponding field extraction models. Note that we use \(\mathbf{P}_{1}\) in GLAD for graph construction due to its slightly better anomaly detection performance. As shown in Table VI, with only 5-shot learning, our field extraction model is as competitive as hand-crafted rules in terms of F-1 score. Our method significantly outperforms the rule-based method with 10-shot learning, which explains the superiority of GLAD over GLAD\({}^{\xi}\) and suggests the practicality of our few-shot method in low-resource scenarios where annotations are limited.
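A minimal sketch of how the two prompt pairs in Table V can be instantiated for a candidate span is shown below; the function and example values are illustrative, and the scoring of the prompts with the fine-tuned BART model is omitted.

```python
def build_prompts(candidate_span, entity_type, template="P1"):
    """Instantiate the positive/negative prompt pair from Table V."""
    if template == "P1":
        pos = f"{candidate_span} is a/an {entity_type} entity"
        neg = f"{candidate_span} is not a named entity"
    else:  # template P2
        pos = f"{entity_type} = {candidate_span}"
        neg = f"{candidate_span} = none"
    return pos, neg

# Example: a span is kept as a field of the given type when the fine-tuned
# BART model scores the positive prompt above the negative one.
pos, neg = build_prompts("keven", "user")
```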
**Parameter Study.** We investigate the impact of two critical hyperparameters--window size and the number of GNN
\begin{table}
\begin{tabular}{l} \hline Prompt \(\mathbf{P}_{1}\) \\ \hline \(\mathbf{P}^{+}\): \(\langle candidate\_span\rangle\) is a/an \(\langle entity\_type\rangle\) entity \\ \(\mathbf{P}^{-}\): \(\langle candidate\_span\rangle\) is not a named entity \\ \hline Prompt \(\mathbf{P}_{2}\) \\ \hline \(\mathbf{P}^{+}\): \(\langle entity\_type\rangle\) = \(\langle candidate\_span\rangle\) \\ \(\mathbf{P}^{-}\): \(\langle candidate\_span\rangle\) = none \\ \hline \end{tabular}
\end{table} TABLE V: Two proposed prompts for field extraction.
\begin{table}
\begin{tabular}{c c|c c c} \hline \multicolumn{2}{c|}{Technique} & Pre. & Rec. & F-1 \\ \hline \multicolumn{2}{c|}{regex} & 36.48 & 44.28 & 40.00 \\ \hline \multirow{3}{*}{\(\mathbf{P}_{1}\)} & 1-shot & 16.53 & 59.34 & 25.86 \\ & 5-shot & 28.33 & 74.38 & 41.03 \\ & 10-shot & **66.28** & 85.22 & **74.57** \\ \hline \multirow{3}{*}{\(\mathbf{P}_{2}\)} & 1-shot & 17.89 & 58.14 & 27.36 \\ & 5-shot & 28.00 & 73.76 & 40.59 \\ \cline{1-1} & 10-shot & 64.68 & **87.82** & 74.49 \\ \hline \end{tabular}
\end{table} TABLE VI: Performance of rule-based field extraction v.s. prompt-based \(n\)-shot field extraction.
Fig. 5: Ablation study of multi-granularity learning.
Fig. 6: GLAD performance using different GNN encoders.
layers--on GLAD's performance in the BGL dataset. In each experiment, one hyperparameter is altered while the rest are held constant. As shown in Figure 7, the performance of GLAD, especially the F-1 score, is robust to variations in these hyperparameters. This indicates that GLAD maintains its effectiveness across a range of configurations, highlighting its suitability for deployment in real-world scenarios. Specifically, Figure 7(a) suggests that a longer monitoring period leads to higher precision but lower recall. This implies that the window size can be tuned to balance between achieving higher true positive rates and reducing false positive rates. A similar strategy (Figure 7(b)) applies to the configuration of GNN layers.
**Efficiency Analysis.** We compare the training and testing time of different methods in the BGL dataset. As shown in Table VII, traditional methods such as PCA and iForest show small overheads as they are rather simple, while OCSVM exhibits the highest overhead among the three due to the construction of a feature matrix based on the normal data for anomaly detection. DL sequence-based methods such as DeepLog, LogAnomaly, and LogBERT generally incur higher overheads due to their complex architectures. Among them, LogAnomaly has the highest overhead due to its complex template2vec learning process and low parallelism. Graph-based methods show rather lower overheads, underscoring the computational efficiency of graph structures. Interestingly, GLAD incurs training and testing overheads of 166 and 92 milliseconds per log, which is notably less than that of LogAnomaly and comparable to DeepLog. Given the sophisticated transformer and GNN architectures of GLAD, its relatively small overhead underscores our efficient design. This can be attributed to the direct application of temporal-attentive transformers on graph features, avoiding both tokenization and embedding and thereby increasing parallel computation.
**Case Study.** To provide deeper insight into the performance of our graph-based anomaly detection, we visualize two sample graphs in Figure 8. In this case, all log messages share the same event template "f49657b2", and the anomalous relation manifests as the user "keven" making frequent requests to a server (28 times) compared to other users, whose requests are considerably fewer. Existing methods that neglect system interactions cannot identify such anomalies, as they do not consider relations among system components. Our GLAD, however, successfully detects these anomalies by considering both the edge weight values and the temporal patterns in a sequence of graphs. The comparison of the two constructed graphs also demonstrates the interpretability of our graph-based approach.
## V Conclusions
In this paper, we proposed a Graph-based Log Anomaly Detection framework, GLAD, which considers relational patterns in addition to log semantics and sequential patterns for system relation anomaly detection. First, a field extraction module utilizing prompt-based few-shot learning is used to extract field information, e.g., _service_ and _user_, from log contents. Then, with the log events and fields extracted, dynamic log graphs are constructed over sliding windows, with events and fields as nodes and the relations between them as edges. Finally, a temporal-attentive graph edge anomaly detection model is introduced for detecting anomalous relations in the dynamic log graphs, where a GNN-based encoder augmented with transformers models the structural, content, and temporal features. Experiments conducted on three datasets demonstrated the effectiveness of GLAD in detecting system relation anomalies from system logs and its ability to provide deep insights into the anomalies.
|
2309.14686 | Clump-scale Gas Infall in High-mass Star Formation: a Multi-transition
View with JCMT HCN (4--3) Mapping | Gas infall motions play a crucial role in high-mass star formation and are
characterized by observable signatures in the form of blue-shifted asymmetric
spectral line profiles ("blue profiles"). However, the connection between blue
profiles and infall motions is unclear due to complex gas motions at parsec
scales. In this study, we present the results of an HCN (4-3) mapping survey
conducted with the JCMT, towards 38 massive clumps exhibiting blue profiles in
HCO+ (3-2). We extract 34 HCN cores from the 38 observed fields. The
core-averaged spectra show various line profiles, indicating that blue-profile
HCO+ (3-2) does not guarantee the same in HCN (4-3). Through non-LTE radiation
transfer calculations, we attribute the low detection rate of high-$J$ blue
profiles to a combination of insufficient HCN (4-3) opacity and intricate gas
motion across different density layers. The comparison between the MALT90 and
BGPS line surveys highlights the importance of appropriate tracers, high
spectral resolution, and column density thresholds when searching for blue
profiles. We select 11 reliable infall candidates and adopt the Hill5 model to
fit the infall velocity of 0.2-1.9 km/s, corresponding to 5% to 74% of
free-fall velocity. Assuming a spherically collapsing model, we estimate the
median and mean mass infall rates to be 4.5E-3 and 7.6E-3 Msun/year,
respectively. The consistency of the mass infall rates among different
transitions suggests a steady accretion process from the clump gas envelope to
the inner region. | Fengwei Xu, Ke Wang, Yuxin He, Jingwen Wu, Lei Zhu, Diego Mardones | 2023-09-26T05:45:19Z | http://arxiv.org/abs/2309.14686v1 | # Clump-scale Gas Infall in High-mass Star Formation: a Multi-transition View
###### Abstract
Gas infall motions play a crucial role in high-mass star formation and are characterized by observable signatures of blue-shifted asymmetric spectral line profiles ("blue profiles"). However, the connection between blue profiles and infall motions is unclear due to complex gas motions at parsec scales. In this study, we present the results of an HCN (4-3) mapping survey conducted with the JCMT, towards 38 massive clumps exhibiting blue profiles in HCO\({}^{+}\) (3-2). We extract 34 HCN cores from the 38 observed fields. The core-averaged spectra show various line profiles, indicating that blue-profile HCO\({}^{+}\) (3-2) does not guarantee the same in HCN (4-3). Through non-LTE radiation transfer calculations, we attribute the low detection rate of high-\(J\) blue profiles to a combination of insufficient HCN (4-3) opacity and the intricate gas motion across different density layers. The comparison between the MALT90 and BGPS line surveys highlights the importance of appropriate tracers, high spectral resolution, and column density thresholds when searching for blue profiles. We select 11 reliable infall candidates and adopt the Hill5 model to fit the infall velocity of 0.2-1.6 km s\({}^{-1}\), corresponding to 5% to 74% of the free-fall velocity. Assuming a spherically collapsing model, we estimate the median and mean mass infall rates to be \(4.5\times 10^{-3}\) and \(7.6\times 10^{-3}\)\(M_{\odot}\) yr\({}^{-1}\), respectively. The consistency of the mass infall rates among different transitions suggests a steady accretion process from the clump gas envelope to the inner region.
stars: formation - ISM: kinematics and dynamics - ISM: molecules - radio lines: ISM +
Footnote †: journal: ApJS
Fengwei Xu, Ke Wang, Yuxin He, Jingwen Wu, Lei Zhu, Diego Mardones
## 1 Introduction
Massive stars (\(>8\,M_{\odot}\)) play a predominant role in the energy budget of galaxies via their radiation, winds, and supernova events, but their mass assembly processes, including gas accretion and infall motions, remain unclear. On the other hand, gravitational infall is a basic step in star formation theory (Larson, 1969; Shu et al., 1987) and is expected in both "core-fed" (McLaughlin and Pudritz, 1996; McKee and Tan, 2003) and "clump-fed" massive star formation models (Bonnell et al., 2001; Wang et al., 2010; Vazquez-Semadeni et al., 2019), so identifying and studying the accretion flows that collect the material out of which stars form, either directly or indirectly, is an important aspect of understanding the mass assembly of massive stars (Fuller et al., 2005; Sun and Gao, 2009; Jackson et al., 2019). Nevertheless, massive stars form in complex environments and at large distances; thus, features
of individual cores embedded in the massive star forming clump are averaged together in the single-dish beam, making infall motions harder to observe (e.g. Reiter et al., 2011; Liu et al., 2016; Yuan et al., 2017; Pillai et al., 2019; Huang et al., 2023) and observational evidence of collapse controversial to interpret (Evans, 1991; Myers et al., 2000; Wu & Evans, 2003; Wu et al., 2007).
Self-absorbed, optically thick line profiles serve as phenomenological evidence of infall within star-forming regions. When examining the emission arising from the infalling envelope positioned on the far side of the protostar, a blue (Doppler) shift emerges, attributable to the velocity gradient toward the core. This blue-shifted emission evades absorption by foreground layers that are warmer or at a substantially different velocity (see fig. 1 in Evans, 2003), thereby leading to an excess of emission on the blueward side of the source velocity within the line profile (Walker et al., 1986, 1994; Zhou et al., 1993; Mardones et al., 1997; Evans, 2003). Notably, in instances where the source exhibits moderate optical thickness, a distinct blueward skew characterizes the line profile. Conversely, strongly self-absorbed sources manifest two discernible peaks, with the blue peak outshining the red peak to a moderate or significant degree. The depth of the self-absorption feature intensifies in the presence of substantial temperature gradients within the core, while the asymmetry of the line profile amplifies with pronounced velocity gradients (Reiter et al., 2011). This distinctive line profile, commonly referred to as a "blue asymmetric profile" or simply "blue profile", enables the measurement and quantification of infall motion.
Various molecular species with different transitions, for example CS (2-1) by Sun & Gao (2009), CS (3-2) by Zhang et al. (1998), H\({}_{2}\)CO (2-1) by Fuller et al. (2005); Yoo et al. (2018), HCN (1-0) by Yang et al. (2020), HCN (3-2) by Wu & Evans (2003), HCO\({}^{+}\) (1-0) by Wu et al. (2007); He et al. (2015, 2016); Jackson et al. (2019); Pillai et al. (2023), HCO\({}^{+}\) (3-2) by Reiter et al. (2011), HNC (1-0) by He et al. (2015, 2016); Saral et al. (2018), THz NH\({}_{3}\) by Wyrowski et al. (2012, 2016), CH\({}_{3}\)CN (19-18) by Liu et al. (2020), and CO (1-0) by Xu et al. (2021) have been utilized to search for infall signatures in various environments in massive star-forming regions. In addition, comparisons of different tracers, including multiple transitions of the same tracer, have also been made through both observations (Fuller et al., 2005; Sun & Gao, 2009; Yoo et al., 2018; Xie et al., 2021) and simulations (Chira et al., 2014), to explore which tracers are more efficient in revealing infall signatures in what kinds of sources.
Above all, two major aspects of previous studies of infall motions in massive star formation can be improved: 1) since low-\(J\) transitions can easily suffer from large optical depths and thus be limited to the low-density gas envelope (Smith et al., 2012), Chira et al. (2014) adopted radiative transfer calculations to show that high-\(J\) transitions of HCO\({}^{+}\) and HCN offer the best combination of detectability of blue line profiles and visibility above typical noise levels, the best being HCN (4-3); 2) single-pointing observations can neither rule out other possibilities that can also produce blue profiles (e.g., outflow, rotation; Wu et al., 2007) nor resolve a "true" collapsing core rather than clump-averaged collapse. Both aspects can be addressed with the JCMT heterodyne array receiver program (HARP; Buckle et al., 2009). Designed to rapidly map extensive areas, HARP operates within the 325-375 GHz frequency range and offers enhanced sensitivity for efficiently mapping HCN (4-3), providing an optimal strategy for investigating infall motions across a large sample.
Here, we present a JCMT HARP HCN (4-3) mapping survey of 38 massive clumps with known blue profiles in a pilot single-point HCO\({}^{+}\) (3-2) line survey conducted by Schlingman et al. (2011); Shirley et al. (2013). The paper is organized as follows: Section 2 describes the sample selection, JCMT HARP observations and data reduction, and clump distance estimation. Results are presented in Section 3. The discussions are followed in Section 4. Finally, we give a summary and prospectus of the survey in Section 5.
## 2 Data
### Sample Selection
The Bolocam Galactic Plane Survey (BGPS) imaged 170 deg\({}^{2}\) of sky at 1.1 mm using Bolocam (survey description in Aguirre et al., 2011) and cataloged 8358 continuum clumps (version 1.0.1 catalog; Rosolowsky et al., 2010). As follow-up work, Schlingman et al. (2011) and Shirley et al. (2013) successively performed single-pointed spectroscopic surveys towards 1882 and 4705 BGPS clumps using the 10 m Submillimeter Telescope (SMT) in HCO\({}^{+}\) (3-2) and N\({}_{2}\)H\({}^{+}\) (3-2) with a spectral resolution of 1.1 km s\({}^{-1}\). Shirley et al. (2013) then integrated and presented a complete spectroscopic catalog of HCO\({}^{+}\) (3-2) and N\({}_{2}\)H\({}^{+}\) (3-2) observations for 6194 sources in the BGPS v1.0.1 catalog between 7\(\fdg\)5\(\leq l\leq\)194\({}^{\circ}\). Among the sample, 80 show self-absorbed line profiles, where HCO\({}^{+}\) (3-2) shows two peaks and an absorption dip over the span of at least three channels (3.3 km s\({}^{-1}\)) while the N\({}_{2}\)H\({}^{+}\) (3-2) line profile has a single peak. Then, 48 are identified as having blue asymmetric profiles, by comparing the optically thick
HCO\({}^{+}\) (3-2) lines to the optically thin N\({}_{2}\)H\({}^{+}\) (3-2) lines. These sources serve as excellent high-mass large-scale collapse candidates (Shirley et al., 2013) and form the parent sample of our work. Due to the limit of observing time, a subsample of 38 clumps (including one adopted from the JCMT archive) is chosen as the target fields (fields hereafter) in this work. The entire sample selection procedure is encapsulated in Figure 1; the sample is curated while carefully avoiding biases in essential physical parameters such as distance, clump mass, or luminosity. It should be noted that the BGPS clumps provide an unbiased representation of Galactic star-forming regions, affirming that the subsample maintains representativeness and, consequently, that the outcomes of this study are representative.
All the fields are covered by the legacy surveys of _Spitzer_, _Herschel_, and ATLASGAL, enabling us to obtain their infrared properties. We first retrieve the clump parameters, including size, dust temperature, luminosity, mass, and peak column density, from Urquhart et al. (2018), which are then corrected for the updated distances (see Section 2.3). The corrected clump-scale infrared properties are summarized in columns (8)-(12) of Table 1. The sample spans a wide range in: 1) evolutionary stage, from infrared dark clouds (IRDCs) to infrared-bright UCHii regions; 2) dust temperature, from 9.7-34.4 K; 3) mass, from \(1\times 10^{2}\)-\(6\times 10^{3}\) \(M_{\odot}\).
### JCMT HARP Observations of HCN (4-3) and Data Reduction
The observations were carried out towards 38 blue-profile massive clumps with the 15 m JCMT from 2019 October 13th to 2019 December 18th and from 2022 March 15th to 2022 June 5th (Project ID: M19BP033, M22AP051; PI: Ke Wang). The observation of BG012.889+00.490 (also IRAS 18089-1732) is retrieved from the JCMT archive (Project ID: M16AP067; PI: Hyunju Yoo).
We used the 16-pixel heterodyne array receiver program (HARP) for the front-end, and the Auto-Correlation Spectrometer and Imaging System (ACSIS) for the back-end (Buckle et al., 2009). HARP is a single sideband receiver (SSB) comprised of a 16-receptor array arranged on a \(4\times 4\) grid. At the observing frequency, HARP has an angular resolution of \(14^{\prime\prime}\), and a main-beam efficiency of \(\eta_{\rm mb}=0.61\). The footprint of the full array is \(2^{\prime}\times 2^{\prime}\). "HARP5 Jiggle-Chop" scanning mode is used to fill in the \(30^{\prime\prime}\) spacing between the receptors, therefore resulting in a \(2^{\prime}\times 2^{\prime}\) map with the pixel size of \(6^{\prime\prime}\), which is slightly over Nyquist sampling. The resultant scanning coverage for each field is highlighted by the yellow frame in Figure 2. Note that two or three receptors are not operational in our observations, so the frames are usually incomplete squares except for BG012.889+00.490. ACSIS was set for a bandwidth of 250 MHz with 8194 channels, centered at the frequency of HCN (4-3) after Doppler shift. A uniform channel width of \(\sim 0.03\) MHz then leads to a velocity resolution of \(0.026\) km s\({}^{-1}\). The position-switched mode was performed when the whole telescope moves away from the source and onto the reference position, which is specially chosen for each target based on the absence of CO and dust emission. During the observations, the weather condition had a precipitable water vapour (PWV) range of 1.575-2.575 mm or a \(\tau_{\rm 225\,GHz}\)1 of 0.08-0.12 (Band-3). The typical on-source time for each map is 40 minutes, or equivalently, 1.6 minutes for each HARP pixel.
Footnote 1: The conversion from \(\tau_{\rm 225\,GHz}\) to PWV is given in Dempsey et al. (2013): \(\tau_{\rm 225\,GHz}=0.04\times{\rm PWV}+0.017\).
The data were first calibrated and reduced by the pipeline introduced by Jenness et al. (2015). The processed HARP-ACSIS data were converted into FITS format and then downloaded from the CADC's data collection2. The orientations of the maps are determined by the K-mirror rotation, which differs between observing fields depending on the elevation of the observation.
Figure 1: The workflow of sample selection. Starting from the the 8358 BGPS sources, Schlingman et al. (2011) and Shirley et al. (2013) respectively performed line surveys, finally covering a total of 6194 BGPS sources. Shirley et al. (2013) catalog 48 sources with blue profiles, of which 38 are observed by our JCMT HCN (4-3) mapping surveys.
To keep consistency, we regrid the maps so that the y-axis is aligned to the North. We convert the velocity from the barycentric frame to the local standard of rest (LSR). We smooth the velocity resolution to a uniform value of 0.2 km s\({}^{-1}\) to enhance the signal-to-noise ratio (SNR) for further spectral line analyses. The achieved RMS
Figure 2: Continued.
noise level for each field is listed in column (9) of Table 1, with an averaged value of 0.10(\(\pm\)0.02) K at a channel width of 0.2 km s\({}^{-1}\).
### Distance Estimation
Reliable velocity determination is crucial for estimating a set of other physical properties of the clumps. We take advantage of the velocity at the local standard of rest (\(V_{\rm LSR}\)) derived from N\({}_{2}\)H\({}^{+}\) (3-2) or HCO\({}^{+}\) (3-2) in Shirley et al. (2013). We follow the workflow below to obtain a distance estimate for each source. First, we check for each source whether any distance is already given in the references. If the available distance is a kinematic distance, or no distance is given at all, we update it with the parallax-based Bayesian maximum-likelihood distance estimation approach, version 2.4.1 (Reid et al., 2016, 2019). Note that if a source is located outside the solar circle (i.e., the distance from the Galactic Center \(R_{\rm gc}>8.5\) kpc) or at a tangential point, we calculate one unique distance. However, if a source is located within the solar circle (i.e., \(R_{\rm gc}<8.5\) kpc), two possible distances are obtained (one near, one far). This degeneracy is commonly referred to as the kinematic distance ambiguity (KDA). To address the KDA, we follow the methods described in Urquhart et al. (2018), which test several criteria one by one to determine the distance. First, we search the SIMBAD database for any previous distance estimate and choose the candidate closest to the value reported in the literature. If no reference is found, we check whether the source elevation (\(z\)) above the Galactic mid-plane3 at the farther distance is larger than 120 pc; if this is the case, the closer distance is adopted. After the above workflow, the distances and their references are listed in columns 5-6 of Table 1.
Footnote 3: The Sun is \(10\pm 2\) pc higher than the Galactic mid-plane (Griv et al., 2021), therefore a southward shift is included.
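The decision logic of this workflow can be summarized by the schematic sketch below. The function signature and argument names are ours; the Bayesian estimator (Reid et al., 2016, 2019) is assumed to have been applied upstream to produce the kinematic candidates, and the 120 pc criterion follows Urquhart et al. (2018).

```python
def resolve_distance(d_literature, candidates, z_far_pc):
    """Schematic distance decision of Section 2.3.

    d_literature: non-kinematic literature distance in kpc, or None
    candidates:   kinematic distances in kpc (one value outside the solar
                  circle or at a tangent point, else [near, far])
    z_far_pc:     elevation above the Galactic mid-plane (pc) at the far
                  distance, including the Sun's ~10 pc offset
    """
    if d_literature is not None:
        if len(candidates) == 2:
            # Resolve the KDA by picking the candidate closest to the
            # previously reported literature value.
            return min(candidates, key=lambda d: abs(d - d_literature))
        return d_literature
    if len(candidates) == 1:
        return candidates[0]
    near, far = sorted(candidates)
    # Reject the far distance if it places the source more than 120 pc
    # from the Galactic mid-plane; otherwise keep it.
    return near if abs(z_far_pc) > 120 else far
```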
## 3 Results
### Detection of HCN (4-3) Emission
We generate the moment zero (M0) maps to show how the HCN (4-3) emission is distributed. For each field, we first extract the velocity range \([V_{\rm LSR}-{\rm d}V,V_{\rm LSR}+{\rm d}V]\), where \({\rm d}V=10\) km s\({}^{-1}\), to cover the majority of the HCN line emission. Then we integrate the spectra within this velocity range at each pixel and obtain the M0 maps shown in Figure 2.
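A minimal sketch of this moment-zero integration, assuming the cube and velocity axis are already in the LSR frame (the array names and shapes are ours):

```python
import numpy as np

def moment_zero(cube, velocities, v_lsr, dv=10.0):
    """Integrate a (n_chan, ny, nx) spectral cube, in K, over
    [v_lsr - dv, v_lsr + dv] km/s; returns the M0 map in K km/s."""
    sel = (velocities >= v_lsr - dv) & (velocities <= v_lsr + dv)
    dv_chan = np.abs(np.median(np.diff(velocities)))  # channel width (km/s)
    return cube[sel].sum(axis=0) * dv_chan
```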
Most of the HCN emission shows core-like condensed structures, although some fields show more extended and irregular ones. We then apply an automatic source extraction algorithm, SExtractor4 (Bertin & Arnouts, 1996), to the M0 maps to extract HCN emission sources. The advantages of SExtractor in our case are: 1) it reduces background emission; 2) it supports local rms noise input to serve as pixelwise thresholds; 3) it deblends potentially blended sources in one field. The algorithm procedure and the parameter settings are described in detail in Appendix A. As a result, a total of 34 HCN sources are extracted and fitted by 2D Gaussian profiles, shown by green solid ellipses in Figure 2. Since the HCN sources have physical sizes of 0.08-0.35 pc (column 9 of Table 2), much smaller than the \(\sim 1\) pc massive clumps, we define them as HCN "cores" hereafter. For further spectral line analyses, we also assign a circle with a diameter of \(30\arcsec\) (the beam size of the SMT at 270 GHz; Shirley et al., 2013) to the six fields where no HCN was detected, shown as green dashed circles in Figure 2. The basic fitted parameters of the 34 HCN cores, including offsets (along the x and y axes) from the field center, major and minor axes (\(\theta_{\rm maj}\) and \(\theta_{\rm min}\)), position angle (PA), and peak flux (\(F_{\rm peak}\)), are listed in Columns 3-8 of Table 2.
Footnote 4: [https://sextractor.readthedocs.io/en/latest/Introduction.html](https://sextractor.readthedocs.io/en/latest/Introduction.html).
Following the method of Rosolowsky et al. (2010) and Contreras et al. (2013), the deconvolved angular radius is written as,
\[\theta_{\rm core}=\eta\left[\left(\sigma_{\rm maj}^{2}-\sigma_{\rm bm}^{2} \right)\left(\sigma_{\rm min}^{2}-\sigma_{\rm bm}^{2}\right)\right]^{1/4}, \tag{1}\]
where \(\sigma_{\rm maj}\) and \(\sigma_{\rm min}\) are calculated from \(\theta_{\rm maj}/\sqrt{8\ln 2}\) and \(\theta_{\rm min}/\sqrt{8\ln 2}\), respectively. The \(\sigma_{\rm bm}\) is the averaged dispersion size of the beam (i.e., \(\theta_{\rm bmaj}/\sqrt{8\ln 2}\), where \(\theta_{\rm bmaj}\simeq 14\arcsec\) is the JCMT beam at the frequency of HCN (4-3)). \(\eta\) is a factor that relates the dispersion size of the emission distribution to the determined angular radius of the object. We have elected to use a value of \(\eta=2.4\), which is the median value derived for a range of models consisting of a spherical, emissivity distribution (Rosolowsky et al., 2010). Therefore, the physical size of the core is derived from \(R_{\rm core}=\theta_{\rm core}\times D\) (\(D\) is the distance), which is listed in column 9 of Table 2. Some of the cores have sizes comparable to the beam size, rendering them unresolved. In these cases, Column 9 of the corresponding rows is marked with "-" as a notation.
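Eq. 1 and the distance scaling can be implemented as in the sketch below, a direct transcription of the formulas above (the arcsec-to-pc conversion uses 1 arcsec \(\approx 4.848\times 10^{-3}\) pc at 1 kpc):

```python
import numpy as np

FWHM_FACTOR = np.sqrt(8 * np.log(2))

def core_radius_pc(theta_maj, theta_min, distance_kpc,
                   theta_beam=14.0, eta=2.4):
    """Deconvolved physical radius from Eq. 1; all angles in arcsec.
    Returns np.nan when the source is unresolved (marked "-" in Table 2)."""
    s_maj = theta_maj / FWHM_FACTOR
    s_min = theta_min / FWHM_FACTOR
    s_bm = theta_beam / FWHM_FACTOR
    if s_maj <= s_bm or s_min <= s_bm:
        return np.nan
    theta_core = eta * ((s_maj**2 - s_bm**2) * (s_min**2 - s_bm**2))**0.25
    return theta_core * distance_kpc * 4.848e-3   # arcsec * kpc -> pc
```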
### Averaged Spectra from HCN Cores
The HCN (4-3) lines are extracted from the defined regions (including 34 HCN cores and 6 circles defined in Section 3.1). We first smooth the velocity resolution to a uniform value of 0.2 km s\({}^{-1}\), to enhance the signal-to-noise ratio (SNR) for further spectral line analyses. Then the baseline of spectra are subtracted and the baseline-free spectra are shown in Figure 3. The SNRs are defined as the ratio of \(T_{\rm peak}\) to \(\sigma\). In our analyses, the spectra with low SNR (\(<2\)) are
classified as non-detections of HCN emission, while the others are solid detections. We also visually double-check the spectra to exclude potential temperature jumps at bad channels. We note that although the field BG034.259+00.222 has a detection in the North, the HCN (4-3) line there has a large velocity deviation (\(\sim 40\,\mathrm{km\,s^{-1}}\)) from the systematic velocity. In addition, the detected core BG034.259+00.222C1 is near the edge of the field. We therefore assume that BG034.259+00.222C1 is not associated with the clump and exclude it from further discussion. Another note is that although BG015.123-00.558 has no detection in the field, the averaged spectrum from the central circle shows a SNR\(\sim 2\) detection of emission. The non-detection by the SExtractor algorithm should be due to extended and diluted emission. As a result, 34 of 40 spectra show solid HCN detections and six are designated as non-detection spectra.
For the HCN spectra with solid detections, we fit them with a single Gaussian model using the Python package PySpecKit; the fits are shown in the upper right corner of each panel in Figure 3. The Gaussian parameters,
Figure 3: Averaged HCN (4-3) lines from the defined HCN cores, whose names are labeled on the top left of each panel. For solid detections, the HCN (4-3) lines are fitted by Gaussian profiles. The \(2\sigma\) threshold and the best-fitting model are shown with a green dashed line and a red line, respectively. The results of the Gaussian fitting (amplitude \(A\), centroid velocity \(\Delta x\), and velocity dispersion \(\sigma\)) are shown on the top right. Non-detection spectra are not fitted, and only the baselines (green horizontal lines) are shown. The systematic velocities from previous surveys are marked with orange dashed lines.
including amplitude, centroid velocity, and linewidth, as well as their uncertainties are listed in columns (2)-(4) of Table 3. For the six non-detection spectra, columns (2)-(4) of Table 3 are filled with "-". We also flag the non-detection spectra with "N" in column (9).
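A minimal PySpecKit fitting sketch, following its documented single-Gaussian interface (the synthetic spectrum and initial guesses below are our placeholders for one real core-averaged spectrum):

```python
import numpy as np
import pyspeckit

# Synthetic stand-in for one baseline-subtracted, 0.2 km/s smoothed,
# core-averaged spectrum (replace with the real data).
velo = np.arange(30.0, 70.0, 0.2)                      # km/s
tmb = 1.0 * np.exp(-0.5 * ((velo - 50.0) / 3.0) ** 2)  # K

sp = pyspeckit.Spectrum(data=tmb, xarr=velo,
                        xarrkwargs={'unit': 'km/s'}, unit='K')
# guesses = [amplitude (K), centroid velocity (km/s), dispersion (km/s)]
sp.specfit(fittype='gaussian', guesses=[1.0, 50.0, 3.0])
amplitude, centroid, sigma = sp.specfit.modelpars
```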
### Synergy with Previous Line Surveys
Line surveys conducted by Schlingman et al. (2011) and Shirley et al. (2013) not only serve as a guide for our follow-up survey (see Section 2.1), but also provide great legacy value for the spectral analyses in our work. The N\({}_{2}\)H\({}^{+}\) (3-2) lines are observed to be optically thin (Shirley, 2015) and can therefore be used to determine the systematic velocity and velocity dispersion of the massive clumps.
Two important caveats warrant consideration in our analysis. Firstly, the N\({}_{2}\)H\({}^{+}\) (3-2) lines were observed using the SMT, whose beam size is approximately twice that of the JCMT. Consequently, the N\({}_{2}\)H\({}^{+}\) (3-2) lines may reflect the systematic velocity of the entire clump or the dense inner region within the clump, rather than the velocity of the central dense core as indicated by the HCN (4-3) lines. In essence, the coherence in velocity between parent clumps and HCN cores should underpin our discussions concerning line profiles, as discussed in Section 3.4.
Secondly, due to the larger physical area covered by the SMT beam, approximately four times that of the JCMT beam, more turbulent motion is included, and broader line widths are anticipated. This implies that the N\({}_{2}\)H\({}^{+}\) (3-2) lines should be broader than they would be if observed within the JCMT beam. When comparing with the JCMT results in Section 3.4, we should always keep in mind that the line width of the N\({}_{2}\)H\({}^{+}\) (3-2) line could be overestimated.
Figure 3: Continued.
Here, we check the consistency between the velocity derived from the HCN (4-3) lines, \(V_{\rm LSR,HCN}\), and that fitted from the optically thin N\({}_{2}\)H\({}^{+}\) (3-2) lines, \(V_{\rm LSR,thin}\), from Shirley et al. (2013). As shown in Figure 4, \(V_{\rm LSR,HCN}\) and \(V_{\rm LSR,thin}\) always share the same value within the uncertainty, indicating a good correspondence between the two surveys, which establishes the basis for the blue-profile analyses in this paper.
### Variety of Observed Spectral Line Profiles
As shown in Figure 3, the averaged HCN (4-3) line shapes differ from core to core, some showing asymmetric profiles or double-peak profiles (non-Gaussian). To distinguish line profiles and study the distribution statistically, we adopt the definition of velocity difference by Mardones et al. (1997),
\[\delta V=\frac{V_{\rm HCN,peak}-V_{\rm sys}}{{\rm d}V_{\rm thin}}, \tag{2}\]
where the difference between the peak velocity of HCN (4-3), \(V_{\rm HCN,peak}\), and the systematic velocity derived from the optically thin line, \(V_{\rm sys}\), is normalized by the FWHM of the thin line, \({\rm d}V_{\rm thin}\). The normalization makes it convenient and robust to set a uniform criterion to distinguish different line profiles, especially for a sample with a wide range of line widths.
We first calculate the velocity at the peak intensity as \(V_{\rm HCN,peak}\), which is listed in column (5) of Table 3. To obtain \(V_{\rm sys}\), we then retrieve the fitting results of the N\({}_{2}\)H\({}^{+}\) (3-2) lines from Shirley et al. (2013), where the N\({}_{2}\)H\({}^{+}\) (3-2) lines are considered optically thin and taken as tracers of the systematic velocity. \(V_{\rm sys}\) for each core is marked as an orange dashed line in Figure 3. If two cores are in one clump, they share the same \(V_{\rm sys}\) and \({\rm d}V_{\rm thin}\), which are listed in columns (6)-(7) of Table 3. By Eq. 2, the normalized \(\delta V\) is then calculated and listed in column (8). We designate those with \(\delta V<-0.25\) as significant blue profiles (denoted "BP" hereafter), those with \(\delta V>0.25\) as significant red profiles ("RP" hereafter), and those with \(-0.25<\delta V<0.25\) as single components ("S" hereafter) with no significant asymmetry. The designation is listed in column (9) of Table 3.
We note that the second caveat in Section 3.3 can cause an underestimation of \(|\delta V|\) due to the systematic overestimation of \({\rm d}V_{\rm thin}\) in Eq. 2. Consequently, there exists the possibility of bias, where the criteria for defining red or blue profiles (i.e., \(\delta V>0.25\) or \(<-0.25\)) might be more stringent than intended. This could potentially classify marginally satisfactory line profiles as non-asymmetric, resulting in a bias that reinforces the definition of pronounced line profiles but may also elevate the false negative rate for weak line profiles. To address this potential bias, a secondary assessment is conducted through visual inspection. Two instances, BG009.212-00.202C1 and BG023.968-00.110C1, exhibit blue-shifted double peaks with \(\delta V\) values of \(-0.13\) and \(-0.14\), respectively. Despite not meeting the \(\delta V\) threshold, they are designated as "BP" due to their distinctive characteristics. Additionally, BG027.317+00.175C1, while satisfying the blue-profile criterion, possesses a low signal-to-noise ratio and is therefore labeled "S". Besides, BG030.772-00.801C1 and BG049.210-00.342C1, despite having \(\delta V<-0.25\), each feature only a single peak; as a result, they are classified as "S". Consequently, the final identification designates 14 cores as "BP" (referred to as HCN-BP cores) and four cores as "RP".
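The classification of Eq. 2 reduces to the small helper below; the secondary visual inspection described above is applied on top of it.

```python
def classify_profile(v_hcn_peak, v_sys, dv_thin, threshold=0.25):
    """Classify a core-averaged line by Eq. 2: 'BP', 'RP', or 'S'."""
    delta_v = (v_hcn_peak - v_sys) / dv_thin
    if delta_v < -threshold:
        return "BP"   # significant blue profile
    if delta_v > threshold:
        return "RP"   # significant red profile
    return "S"        # no significant asymmetry
```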
### Infall Candidates Identified by Line Mapping
Statistically, infall motion is the most likely interpretation of the observed blue profiles. However, in individual cases, it is not the only possibility. Rotation and outflows can also produce blue profiles (e.g., Wu & Evans, 2003; Wu et al., 2007). Resolved mapping observations are needed to investigate the nature of blue profiles. Rotation of a core exhibits blue and red profiles at different spatial positions, which
Figure 4: Consistency between \(V_{\rm LSR,HCN}\) and \(V_{\rm LSR,thin}\). Red: red-profile spectral lines; blue: blue-profile spectral lines; black: no clear line profiles. The error bars in both directions are given by the spectral line fitting errors.
can be mistaken for blue profiles in single-pointed observations. In a similar way, outflow lobes are easily ruled out if the red-shifted emission is predominantly from an extended wing. A profile that survives these tests provides a strong indication of infall, and the source can be seen as a candidate for collapse.
To provide a better visualization of the mapping observations, we present spectral line grids for each HCN-BP core in Figure 10 to exclude other possibilities of producing blue profiles. The spectra located within the core mask are first averaged over \(2\times 2\) pix\({}^{2}\) boxes and smoothed to a velocity resolution of \(\sim 0.4\,\mathrm{km\,s}^{-1}\). Then the spectra are overlaid on the green elliptical footprints of the HCN-BP cores.
The mappings of three HCN cores, BG023.968-00.110C1, BG029.397-00.095C1, and BG030.719-00.081C1, all exhibit varied but coherent line profiles across the core. In other words, although the averaged spectrum over the core shows a significant blue profile, the individual spectra at different positions can change continuously from blue to red profiles. This variation can also be seen in the color-coded moment one (M1) maps (Figure 10). The details of the rotation axis calculation can be found in Appendix B.
Finally, a total of 11 HCN-BP cores survive the "mapping" tests and provide a strong indication of infall motion. We also check whether there are central heating sources to build the temperature gradient in those massive clumps. Although it has the lowest luminosity-to-mass ratio, approximately 0.2, among our infrared dark clouds, BG028.565-00.236 still exhibits molecular outflows and H\({}_{2}\)O/CH\({}_{3}\)OH masers at higher angular resolution (Lu et al., 2015). These findings suggest the presence of active star formation and central heating sources, not to mention the other sources with bright point-like or even extended infrared emission. The discussion in Section 4.1 will further strengthen this argument, since the high-\(J\) transition traces the denser (and therefore inner) regions where the temperature gradient is guaranteed. Therefore, the blue profiles in the 11 HCN cores are most likely induced by infall motion. This subsample hereafter serves as a set of promising infall candidates in massive star-forming regions.
## 4 Discussion
### What Leads to Variety of Line Profiles at Multi-\(J\) Transitions?
As demonstrated in Section 3.4, only 14 out of 38 clumps show blue profiles in their HCN (4-3) lines, corresponding to a profile retention rate of 36.8% from low- to high-level transitions (low-/high-\(J\) for short, where "\(J\)" represents the rotational quantum number of the transition). In addition, four other clumps show red profiles in HCN (4-3), while the others are single-peaked or even undetected. Since all the clumps have evident blue profiles in HCO\({}^{+}\) (3-2), it is natural to ask what leads to the inconsistency of line profiles between the two \(J\) transitions.
We attribute the main factor behind the inconsistency of profiles at multi-\(J\) transitions to the difference in critical densities5. In our case, the critical density of HCO\({}^{+}\) (3-2) is \(1.6\times 10^{6}\,\mathrm{cm}^{-3}\) at 10 K and \(1.4\times 10^{6}\,\mathrm{cm}^{-3}\) at 20 K. On the other hand, the critical density of HCN (4-3) is \(3.0\times 10^{7}\,\mathrm{cm}^{-3}\) at 10 K and \(2.3\times 10^{7}\,\mathrm{cm}^{-3}\) at 20 K, which is approximately twenty times higher. Thus, different infall tracers, such as the low-/high-\(J\) transitions of the HCO\({}^{+}\) or HCN species, should trace different parts or layers of dense star-forming clumps (Xie et al., 2021). As such, infall profiles are best presented when the opacity of the source and the critical density of the tracer are well matched, as argued in Wu and Evans (2003).
Footnote 5: Here, we use the same definition of critical density as Shirley (2015): the critical density \(n_{\mathrm{crit}}\) is defined as the molecular hydrogen density for which the net radiative decay rate from \(j\to k\) equals the rate of collisional depopulation out of the upper level \(j\) for a multi-level system.
#### 4.1.1 Two Possible Scenarios
Given the different critical densities between HCO\({}^{+}\) (3-2) and HCN (4-3), there are two possible scenarios for our observed variety/inconsistency of line profiles at multi-\(J\) transitions:
\(\bullet\) While gas infalls in the outer envelope of massive clumps, the bulk motion can become more complex, or even be halted at a certain density layer, due to feedback from stars, such as outflows and stellar winds, or other dynamical processes. In some cases, the motion may even be reversed, resulting in expansion. Consequently, there are multiple possibilities for the bulk motion in the layer that HCN (4-3) traces, leading to a low detection rate of blue profiles at high-\(J\) transitions.
\(\bullet\) The optical depth of molecular lines is determined primarily by the kinetic temperature \(T_{\mathrm{kin}}\) and the column density of the molecule \(N_{\mathrm{mol}}=N_{\mathrm{H_{2}}}\times X_{\mathrm{mol}}\), where \(N_{\mathrm{H_{2}}}\) represents the column density of molecular hydrogen and \(X_{\mathrm{mol}}\) denotes the abundance of the molecule. Due to variations in both \(N_{\mathrm{mol}}\) and \(T_{\mathrm{kin}}\) within our sample, the optical depth of the high-\(J\) transition \(\tau\)(HCN (4-3)) can vary significantly. Consequently, in some clumps, \(\tau\)(HCN (4-3)) may not be sufficiently high to produce asymmetric line profiles, even if there is still gas infall motion present.
To distinguish between the two scenarios, we can compare the predicted line profiles with the observed ones. For the first scenario, the fraction of distinct profiles should be determined by the likelihood of different types of bulk motions (infall, outflow/expansion, and static). For the second scenario, the detection rate of line profiles should be lower in clumps with lower column density, while the high-\(J\) transition line should maintain the same profile as the low-\(J\) transition line in clumps with a higher column density. Figure 5 displays that, with increasing peak column density, the fractions of both red profiles and non-asymmetric profiles systematically decrease while the fraction of blue profiles increases. Since all clumps have a blue-profile HCO\({}^{+}\) (3-2) line, the rising fraction of blue-profile HCN (4-3) suggests that the high-\(J\) transition still conveys the same bulk motion information, but only in high-density clumps.
_Caveats_. We acknowledge that the peak column density \(N_{\rm H_{2}}\) is based on an angular resolution of 21\({}^{\prime\prime}\), which is coarser than that of the JCMT. If the source has a centralized density distribution, the column density at the higher angular resolution should be higher than that at the lower angular resolution. Considering a Gaussian distribution of density and assuming that dust emission is optically thin, we can calculate how much column density is underestimated by \(\mathcal{R}_{N}\),
\[\mathcal{R}_{N}=\frac{\iint_{\Omega_{1}}\mathcal{G}(x,y;\sigma)\mathrm{d} \Omega}{\iint_{\Omega_{2}}\mathcal{G}(x,y;\sigma)\mathrm{d}\Omega}-1, \tag{3}\]
where \(\Omega_{1}\) and \(\Omega_{2}\) are the JCMT and ATLASGAL beam solid angles, respectively, and \(\mathcal{G}(x,y;\sigma)\) is a Gaussian density model with a dispersion of \(\sigma\). For a typical value of \(\sigma=20^{\prime\prime}\) in our sample, \(\mathcal{R}_{N}=0.31\), indicating that there can be a moderate systematic underestimation of the column density if observed with the JCMT beam. Conversely, however, we can smooth the JCMT lines to the same 21\({}^{\prime\prime}\) resolution as the column density maps. Since the profiles we discuss are from lines averaged inside the cores, where the profiles should be coherent (see Section 3.4), it is safe to compare the two in the context of 21\({}^{\prime\prime}\) resolution.
#### 4.1.2 Large Variations In Optical Depths of HCN (4-3)
The optical depth of HCN (4-3) is calculated with RADEX6, a computer program that calculates the strengths of atomic and molecular lines from interstellar clouds, which are assumed to be homogeneous (van der Tak et al., 2007). The presumed excitation conditions are: 1) a background temperature of 2.73 K; 2) a collision partner (H\({}_{2}\)) volume density of 10\({}^{5}\)-10\({}^{6}\) cm\({}^{-3}\) in the HCN cores (see details in Appendix C); 3) an HCN (4-3) line width of 8.8 km s\({}^{-1}\), which is the mean value of the observed spectra. We perform a \(100\times 100\) grid calculation of the optical depth of HCN (4-3), \(\tau_{\rm pred}\), as a function of two variables, the kinetic temperature \(T_{\rm kin}\) and the column density of the HCN molecule \(N_{\rm HCN}\), within the parameter space defined by the observed values.
Footnote 6: [https://personal.sron.nl/](https://personal.sron.nl/) vdtak/radex/index.shtml
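The grid calculation can be organized as in the sketch below. `run_radex` is a placeholder for a user-written wrapper around the RADEX executable (RADEX itself provides no such Python function), and the grid boundaries shown are illustrative stand-ins for the observed parameter ranges.

```python
import numpy as np

def run_radex(tkin, n_h2, column, linewidth=8.8, tbg=2.73):
    """Placeholder: write a RADEX input file for HCN (4-3), invoke the
    RADEX executable, and parse the returned optical depth (see the
    RADEX documentation for the I/O format)."""
    raise NotImplementedError

# 100 x 100 grid over the observed parameter space (bounds illustrative).
T_kin = np.linspace(10.0, 35.0, 100)       # kinetic temperature (K)
N_HCN = np.logspace(12.0, 15.0, 100)       # HCN column density (cm^-2)

tau_pred = np.zeros((T_kin.size, N_HCN.size))
for i, T in enumerate(T_kin):
    for j, N in enumerate(N_HCN):
        # Excitation conditions as stated in the text: T_bg = 2.73 K,
        # n(H2) = 1e5-1e6 cm^-3, dV = 8.8 km/s.
        tau_pred[i, j] = run_radex(tkin=T, n_h2=1e5, column=N)
```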
The calculation grids are presented in Figure 6, where contour levels of \(\tau=\)1 and 10 are represented by solid white lines. We utilize the sample of 38 observed massive clumps to predict the optical depths \(\tau_{\rm pred}\). In our calculations, we assume that the kinetic temperature \(T_{\rm kin}\) is equal to the dust temperature \(T_{\rm dust}\). The column density of HCN \(N_{\rm HCN}\) is determined using the relation \(N_{\rm HCN}=N_{\rm H_{2}}\times X_{\rm HCN}\), where \(N_{\rm H_{2}}\) is the H\({}_{2}\) column density and \(X_{\rm HCN}\) is the abundance of HCN relative to H\({}_{2}\), which depends on the evolutionary stage (Martinez & Paron, 2023). We assign different \(X_{\rm HCN}\) values to clumps with distinct evolutionary types based on column (7) of Table 1. For type 0, \(X_{\rm HCN}=5.6(1.1)\times 10^{-10}\); for type 1, \(X_{\rm HCN}=2.2(0.4)\times 10^{-9}\); for type 2, \(X_{\rm HCN}=5.9(0.3)\times 10^{-9}\); for type 3, \(X_{\rm HCN}=3.0(0.6)\times 10^{-9}\). Additionally, considering the observed HCN (4-3) lines exhibit various velocity widths (d\(V_{\rm HCN}\)) ranging from 5 to 15 km s\({}^{-1}\), we calibrate the \(N_{\rm HCN}\) values to account for the effect of d\(V_{\rm HCN}\) using Eq. 3 in Remijan et al. (2004).
Figure 6 illustrates that the optical depths of clumps exhibiting blue profiles (\(\tau_{\rm B}\)) consistently exceed those of non-asymmetric profiles (\(\tau_{\rm NA}\)) or red profiles (\(\tau_{\rm RP}\)). Furthermore, in the regime of \(n_{\rm H_{2}}=10^{6}\) cm\({}^{-3}\), all
Figure 5: The number distribution (histogram) and the fraction (line-connected scatter plot) of different line profiles as functions of the peak column density bins \(\log N({\rm H_{2}})\,({\rm cm^{-2}})\) at the 21\({}^{\prime\prime}\) beam. The blue, red, and grey colors stand for blue profiles, red profiles, and non-asymmetric profiles (including single-peaked and non-detection), respectively.
clumps demonstrate \(\tau_{\rm B}\gtrsim 1\), and even in the regime of \(n_{\rm H_{2}}=10^{5}\,{\rm cm^{-3}}\), the top five clumps (indicated with labels) maintain a high level of opacity. Conversely, for a significant proportion of clumps exhibiting red and non-asymmetric profiles, the optical depths do not exceed 1 under any of the given conditions. This outcome aligns with the findings of He et al. (2016), where the identified infall candidates exhibit elevated H\({}_{2}\) column densities and H\({}_{2}\) volume densities compared with the clumps where infall motions were not detected. We acknowledge that several clumps, most notably BG013.882-00.143C1, have comparable \(\tau_{\rm pred}\) but display a non-asymmetric or red profile. In other words, even with sufficient optical depth, these clumps do not show blue profiles. The observations therefore likely support hybrid scenarios, in which an adequate optical depth is necessary for inducing blue profiles but the inner motions can also be complicated. A blue profile in a low-\(J\) transition thus does not guarantee a blue profile in a high-\(J\) transition.
_Caveats_. First, \(T_{\rm kin}\) is assumed to equal the dust temperature averaged over the clump, \(T_{\rm dust}\), which is a rough estimate. As shown in Figure 6, however, \(\tau_{\rm pred}\) is much less sensitive to \(T_{\rm kin}\) than to \(N_{\rm H_{2}}\), so the potential bias arising from the uncertainty in \(T_{\rm kin}\) is mitigated. Second, there is no one-to-one correspondence between \(X_{\rm HCN}\) and each individual source. Consequently, although these caveats make the estimation of \(\tau_{\rm HCN(4-3)}\) relatively rough, the relative values of \(\tau_{\rm HCN(4-3)}\) are reliable, allowing for qualitative analysis and further investigation.
#### 4.1.3 Triple-\(J\) Transition Lines In a Subsample
We cross match the 48 blue-profile clumps (in HCO\({}^{+}\) (3-2) lines, as reported by Shirley et al., 2013) with the MALT90 survey7 (Jackson et al., 2013) for their low-\(J\) transition counterpart, the HCO\({}^{+}\) (1-0) lines (reported by He et al., 2015, 2016). Since the SMT and the Mopra are located in different hemispheres, only six sources overlap in both the \(J=1-0\) and \(J=3-2\) transitions, and these form a subsample.
Footnote 7: The MALT90 survey is a large international project that exploited the fast-mapping capability of the ATNF Mopra 22-m telescope, [http://atoa.atnf.csiro.au/MALT90](http://atoa.atnf.csiro.au/MALT90)
Table 4 provides a compilation of line profiles from a subsample consisting of sources from Shirley et al. (2013), He et al. (2015, 2016), and our work. Among the six sources with blue-profile HCO\({}^{+}\) (3-2) lines, four consistently exhibit blue-profile HCO\({}^{+}\) (1-0) lines, while the remaining two sources, BG009.212-00.202 and BG012.889+00.490, display red-profile HCO\({}^{+}\) (1-0) lines. For these two sources, the classification in column (7) of Table 1 indicates that they are both in a more evolved stage, which aligns with their extended infrared emission shown in Figure 2. BG012.889+00.490 (also I18089-1732) was reported to contain a nearly face-on disk (Sanhueza et al., 2021) with a collimated SiO (5-4) bipolar outflow (Beuther et al., 2004). If the outflow direction is perpendicular to the disk plane, then the inclination angle of the outflow axis should be small, and the outflow motion can provide sufficient expansion along the
Figure 6: The simulated grids of predicted optical depth (\(\tau_{\rm pred}\)) based on the kinetic temperature (\(T_{\rm kin}\)) and the column density of HCN (\(N_{\rm HCN}\)), with three different volume densities of collisional partner H\({}_{2}\) (\(n_{\rm H_{2}}\)). The white lines outline the contour levels of \(\tau=\)1 and 10. The 38 clumps are shown as filled circles, with blue, red, and gray colors representing blue profiles, red profiles, and non-asymmetric profiles, respectively. Five clumps with the highest \(\tau_{\rm pred}\) are labeled. The colorbars are shown in the lower right.
line of sight. This argument can be further tested in the case of BG009.212-00.202 by high-resolution observations. If confirmed, the red profiles are likely a result of outflows and bulk expansion. Previous studies have demonstrated that HCO\({}^{+}\) (3-2) lines are capable of tracing infall motion in both the early (Xie et al., 2021) and late stages of massive star-forming regions (Fuller et al., 2005; Reiter et al., 2011; Klaassen et al., 2012). However, our work, although based on a limited sample size, suggests that low-\(J\) transitions such as HCO\({}^{+}\) (1-0) may be more susceptible to contamination from other bulk motions present in the outer low-density layers. On the other hand, high-\(J\) transitions like HCO\({}^{+}\) (3-2) appear to be more reliable for tracing infall motion in the more evolved stages of high-mass star-forming regions.
For the four sources with HCN (4-3) observations, we calculate the optical depths of the HCN (4-3) lines based on input parameters including the column density of HCN (\(N_{\rm HCN}\)), the kinetic temperature (\(T_{\rm kin}\)), the line width (d\(V_{\rm HCN}\)), and the volume density of the collision partner (\(n_{\rm H_{2}}\)). The first three parameters are directly retrieved from Tables 1 and 3, while \(n_{\rm H_{2}}\) is given a set of values (10\({}^{5}\), 10\({}^{5.5}\), and 10\({}^{6}\) cm\({}^{-3}\)), similar to what is done in Figure 6. Overall, higher \(n_{\rm H_{2}}\) results in a higher level of thermalization via collisions, and thus a higher excitation temperature \(T_{\rm ex}\) and \(\tau_{\rm HCN(4-3)}\), up to the critical density of HCN (4-3), \(n_{\rm crit,HCN(4-3)}=2.3\times 10^{7}\) cm\({}^{-3}\). Although \(\tau_{\rm HCN(4-3)}\) varies by orders of magnitude with \(n_{\rm H_{2}}\), BG008.458-00.222 and BG011.083-00.536, with single-peaked profiles, have \(\tau_{\rm HCN(4-3)}\ll\) 1, while BG009.212-00.202 and BG012.889+00.490, with blue profiles, always have \(\tau_{\rm HCN(4-3)}\gtrsim 1\). Therefore, the optical depth is the main driver of the variations of the line profiles in HCN (4-3).
We note, however, that this conclusion is limited by the small sample size. Surveys of multi-\(J\) transitions in a much larger and less biased sample are therefore encouraged to test whether it holds.
#### 4.1.4 Low Detection Rate of Blue Profiles and Their Connection to Infall Motions
As summarized in Figure 7, two systematic investigations have been undertaken to identify blue profiles within massive star-forming clumps, employing the ATLASGAL and BGPS follow-up line surveys, respectively. However, the detection rate of the blue profiles in BGPS clumps is found to be more than ten times lower than that observed in ATLASGAL clumps. This substantial discrepancy can be attributed to two primary factors.
Firstly, ATLASGAL clumps were observed in low-\(J\) transitions, whereas the BGPS clumps were observed in high-\(J\) transitions. Furthermore, the ATLASGAL line survey applied a flux threshold of 0.25 Jy at 870 \(\mu\)m, chosen to ensure the inclusion of clumps with a mass of 200 M\({}_{\odot}\), assuming a distance of 10 kpc and a temperature of 10 K (Jackson et al., 2013). In contrast, the BGPS line survey did not impose any flux threshold, and therefore includes a broader range of clumps, including those with lower fluxes. Consequently, the opacity of the high-\(J\) transition line, especially in low-flux clumps, may not be high enough to produce the characteristic self-absorption signature, as discussed in Section 4.1.2, thereby diluting the overall detection rate of blue profiles in the BGPS sample. Secondly, the MALT90 line survey has a spectral resolution ten times finer than that of the BGPS line survey, and a low spectral resolution is inadequate for detecting blue profiles induced by low infall velocities. These
Figure 7: The low detection rate of blue profiles in star-forming clumps guided by ATLASGAL and BGPS. 3246 ATLASGAL clumps with 870 \(\mu\)m flux larger than 0.25 Jy were observed in HCO\({}^{+}\)/N\({}_{2}\)H\({}^{+}\) (1-0) at a spectral resolution of 0.11 km s\({}^{-1}\) by the Mopra MALT90 line survey. 732 clumps have solid detections of SNR\(>\)3, among which 231 clumps show blue profiles by the canonical criterion of \(\delta V<-0.25\); the blue-profile detection rate is then 31.6% (He et al., 2015, 2016). 6194 BGPS clumps without a flux threshold were observed in HCO\({}^{+}\)/N\({}_{2}\)H\({}^{+}\) (3-2) at a spectral resolution of 1.1 km s\({}^{-1}\) by the SMT line survey (Schlingman et al., 2011; Shirley et al., 2013). 1795 clumps have solid detections of SNR\(>\)3, among which 48 show blue profiles, giving a detection rate of 2.8% (Shirley et al., 2013).
factors contribute significantly to the disparity in the blue-profile detection rates between the two surveys. Moreover, such a comparison of detection rates also motivates further investigations with enhanced spectral resolution and appropriate transition lines.
Nevertheless, it is essential to recognize that the blue profile is, after all, a phenomenological signature, and establishing a direct link between such profiles and infall motions remains challenging. Despite advancements in observational capabilities, the interpretation of blue profiles is impeded by limited knowledge of the physical conditions prevailing within these regions, such as the distribution of temperature and density. Additionally, the intricate nature of infall motions within these clumps, coupled with potential influences from feedback mechanisms, contributes to a notable false positive rate when inferring infall motion from blue profiles. To disentangle the implications of blue profiles, high-resolution investigations are imperative, encompassing thorough analyses of gas kinematics (refer to Section 4.4 in Xu et al., 2023).
On another front, systematic examinations of blue profiles also grapple with a significant false negative rate. For example, the low detection rate of blue profiles in BGPS clumps does not necessarily imply a
Figure 8: The grids (2\(\times\)2 pix\({}^{2}\)) of HCN (4-3) line profiles overlaid on the continuum maps (either ATLASGAL or SCUBA-2) for BG012.889, BG081.721, BG133.748, and BG133.949, respectively. The beam size of the HARP receiver at 350 GHz is shown at the bottom left. The scale bar of 0.2 pc is shown at the bottom right.
low detection rate of infall motion, underscoring the importance of using suitable tracers for identifying infall motion. A careful balance must be maintained between the detectability of blue profiles and the ability of a tracer to probe the desired depth. Low-\(J\) transitions, for instance, tend to produce blue profiles more readily due to sufficient opacity, albeit primarily tracing the gas envelope. On the other hand, high-\(J\) transitions effectively capture inner gas motion but may be too optically thin to generate blue profiles in clumps with low column densities.
### Mapping Infall Motions in Massive Star-forming Clumps
#### 4.2.1 Mapping Clump-scale Global Collapse
Figure 9: The averaged HCN (4-3) lines are shown as black solid lines, while the best-fit Hill5 models are shown as red solid lines, with the five fitted parameters listed in the upper left corner of each panel. The velocity range used for the Hill5 model fitting is highlighted in orange at the bottom. The red band indicates the systematic velocity \(v_{\rm LSR}\). The blue band indicates the velocity span of the infall motion, that is, from \(v_{\rm LSR}-v_{\rm infall}\) to \(v_{\rm LSR}+v_{\rm infall}\).
JCMT mapping observations with an angular resolution of \(14\arcsec\) make it possible to spatially resolve infall motions. Among the observed clumps with HCN (4-3) blue profiles, five have the highest predicted optical depths (\(\tau_{\rm pred}\)), namely BG081.721, BG133.748, BG030.719, BG012.889, and BG133.949. It is worth noting that these five clumps also exhibit the highest SNRs, as illustrated in Figure 3, supporting the robustness of the RADEX line simulation. Furthermore, they show the most extended emission patterns, as depicted in Figure A1, further emphasizing their significance for the study of infall motion. However, BG030.719 presents a unique case in which two separate HCN cores with opposite profiles are observed, leaving a limited number of pixels available for mapping the infall motion with sufficient sampling. Therefore, excluding BG030.719, the remaining four clumps serve as a subsample that can be effectively utilized for infall motion mapping analyses.
As shown for all four clumps in Figure 8, the HCN (4-3) lines show a strong spatial correlation with the submillimeter continuum (dust) emission, and blue-profile spectra appear over most of each clump. Such line profiles are expected for an optically thick tracer of idealized collapsing clouds in which the excitation temperature rises towards the center. What is important to note here is the extent (over at least 9 independent beams) over which this spectral signature is observed, and the absence of any other line asymmetries (cf. Section 3.5), strongly suggesting that all four clumps are undergoing global collapse (see the typical example of SDC335 in Peretto et al., 2013; Xu et al., 2023). A radiative transfer model combined with temperature and density profiles derived from the far-infrared data can be used to fit the map of line profiles, inferring the infall velocity and mass infall rate (Xie et al. in preparation).
#### 4.2.2 Infall Parameters Fitted by Hill5 Model
According to Section 3.5, a sample of 11 HCN cores is undergoing infall motion. To estimate the infall velocity, we use the "Hill5" model first introduced by De Vries & Myers (2005). The "Hill5" model assumes that the excitation temperature in the front of the cloud increases inward as a linear function of optical depth. Compared with the traditional "two-layer" model, in which the excitation temperature is constant, the "Hill5" model is better suited to real physical scenarios in which young stellar objects (YSOs) heat the core from the inside out. Besides, De Vries & Myers (2005) demonstrate that two-peak profiles are best matched by the "Hill5" model, while "Hill5" and "two-layer" perform equally well for red-shoulder blue profiles without double peaks (red-shoulder hereafter). The 11 infall candidates identified in Section 3.5 are either double-peak or red-shoulder blue profiles, so the "Hill5" model is the better choice.
The model has five free parameters to fit: (1) the peak excitation temperature \(T_{\rm peak}\), (2) the velocity dispersion of the molecular line \(\sigma\), (3) the optical depth of the core \(\tau_{\rm core}\), (4) the systematic velocity \(v_{\rm LSR}\), and (5) the infall velocity of the gas in the core \(v_{\rm infall}\). Formula derivations of the model are presented in detail in Appendix D.
To determine the global accretion rate towards these cores, we fit the average spectra of the HCN (4-3) emission across the cores. The signal-to-noise ratios of the 11 spectra have an average of \(\sim 30\), satisfying the criterion for the "Hill5" model fitting. Although the two spectra of BG028.565-00.236C1 and BG039.267-00.589C1 have relatively lower SNR \(\gtrsim 6\), the high spectral resolution of \(0.2\,{\rm km\,s}^{-1}\) assures enough effective data points for the model fitting. Most cores have extended velocity wings that are assumed to be induced by molecular outflows. To reduce contamination from the wings, we cut out the velocity channels that are non-Gaussian in the Gaussian fitting process (see Section 3.2). The preserved channels, marked as orange bands in Figure 9, span 30-60 channels so as to cover the blue-profile features. The uncertainties in the fitting are given by the Python package lmfit, which explicitly explores the parameter space and determines confidence levels. As parameter ranges for the fit we allow \(\tau_{\rm core}\) from 0.1 to 30, a fitted centroid between \(v_{\rm LSR}-5\,{\rm km\,s}^{-1}\) and \(v_{\rm LSR}+5\,{\rm km\,s}^{-1}\), \(v_{\rm infall}\) between 0.1 and \(4\,{\rm km\,s}^{-1}\), \(\sigma\) between 0 and \(\sigma_{\rm HCN}\) (from the Gaussian fitting), and \(T_{\rm peak}\) between 2.73 and 30 K.
The fitted spectra are shown in Figure 9, with the five parameters listed at the top left of each panel. The blue band indicates the velocity range of the infall motion, that is, from \(v_{\rm LSR}-v_{\rm infall}\) to \(v_{\rm LSR}+v_{\rm infall}\). The fitted parameters, as well as the velocity ranges used for fitting, are listed in columns (2)-(7) of Table 5.
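A minimal sketch of this fit is given below, implementing \(J(T)\) and the line profile of Eqs. D6, D9, and D11 from Appendix D and wrapping it in an lmfit model. The HCN (4-3) rest frequency (354.505 GHz) is used for \(h\nu/k_{\rm B}\); the initial values and some of the bounds are illustrative assumptions, and the real spectra and channel masks follow the procedure described above.

```python
import numpy as np
import lmfit

T0 = 4.7992e-2 * 354.505   # h*nu/k_B in K for HCN (4-3) at 354.505 GHz
T_BG = 2.73                # background temperature in K

def planck_temp(t):
    """Planck temperature J(T) of Eq. D6."""
    return T0 / np.expm1(T0 / t)

def hill5(v, tau_core, v_lsr, v_in, sigma, t_peak):
    """Hill5 line profile, Eqs. D9 and D11 (De Vries & Myers 2005)."""
    tau_f = tau_core * np.exp(-(v - v_lsr - v_in) ** 2 / (2 * sigma ** 2))
    tau_r = tau_core * np.exp(-(v - v_lsr + v_in) ** 2 / (2 * sigma ** 2))
    # (1 - exp(-tau))/tau -> 1 as tau -> 0; guard the division in the wings.
    f = np.where(tau_f > 1e-8, -np.expm1(-tau_f) / np.maximum(tau_f, 1e-8), 1.0)
    r = np.where(tau_r > 1e-8, -np.expm1(-tau_r) / np.maximum(tau_r, 1e-8), 1.0)
    return (planck_temp(t_peak) - planck_temp(T_BG)) * (f - r * np.exp(-tau_f))

model = lmfit.Model(hill5)                         # independent variable: v
params = model.make_params(tau_core=2.0, v_lsr=42.0, v_in=1.0,
                           sigma=1.5, t_peak=6.0)  # illustrative starting values
params["tau_core"].set(min=0.1, max=30.0)
params["v_in"].set(min=0.1, max=4.0)
params["t_peak"].set(min=2.73, max=30.0)
# result = model.fit(t_b, params, v=v_axis)        # t_b, v_axis: observed spectrum
# print(result.fit_report())
```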
#### 4.2.3 Infall Velocity vs. Free-fall Velocity
Since the free-fall velocity sets the typical timescale of the gravitational collapse of a star-forming clump, comparing the observed infall velocity with the free-fall velocity helps us understand how fast star formation proceeds in these massive clumps. The free-fall velocity \(v_{\rm ff}\) is calculated by:
\[v_{\rm ff}=\sqrt{\frac{2GM_{\rm enc}}{R}}, \tag{4}\]
where \(M_{\rm enc}\) is the mass enclosed within radius of \(R\). Since the HCN cores are mostly even smaller than the
clumps, we need to scale \(v_{\rm ff}\) down to the HCN core scale. Substituting Eq. C3 into Eq. 4, the free-fall velocity should be constant across a self-gravitating clump. Therefore, we can directly compare the infall velocity and the free-fall velocity of the HCN cores. As shown in column (5) of Table 5, the infall velocity of the 11 clumps has a range of 0.2-1.6 km s\({}^{-1}\), with mean and median values of 1.0 and 1.1 km s\({}^{-1}\). Adopting the clump radius and mass in columns (8) and (11) of Table 1 in Eq. 4, the free-fall velocity has a range of 2.0-6.8 km s\({}^{-1}\), with mean and median values of 3.6 and 3.2 km s\({}^{-1}\). Therefore, the infall velocity fraction \(\mathcal{F}_{\rm infall}\), defined as the ratio of the infall velocity to the free-fall velocity, ranges from 5% to 74%, with both mean and median values of 32%. The minimum value is consistent with what has been found in Wyrowski et al. (2016), but the maximum and mean/median values are systematically larger. However, the large fractions should be due to the different distances in our sample, because clumps at smaller distances tend to have lower masses (\(M_{\rm clump}\propto D^{2}\), where \(D\) is the distance). If we exclude the three nearest clumps BG081.721+00.57, BG133.748+01.19, and BG133.949+01.06, then the median fraction is \(\sim 20\%\).
Since the timescale is directly related to the velocity at a given radius, the ratio of the infall timescale (\(\tau_{\rm infall}\propto 1/v_{\rm infall}\)) to the free-fall timescale (\(\tau_{\rm ff}\propto 1/v_{\rm ff}\)) is inversely proportional to \(\mathcal{F}_{\rm infall}\). This means that the dense region of the clump, as indicated by the HCN cores, will undergo collapse within a few to several tens of free-fall timescales.
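As a concrete illustration of Eq. 4 and the infall velocity fraction \(\mathcal{F}_{\rm infall}\), a short sketch with astropy units follows; the clump mass and radius used here are illustrative placeholders rather than entries from Table 1.

```python
import numpy as np
import astropy.units as u
from astropy.constants import G

def free_fall_velocity(m_enc, radius):
    """Free-fall velocity v_ff = sqrt(2 G M_enc / R) of Eq. 4."""
    return np.sqrt(2 * G * m_enc / radius).to(u.km / u.s)

# Illustrative placeholder values, not entries from Table 1.
v_ff = free_fall_velocity(1.0e3 * u.Msun, 0.5 * u.pc)   # ~4.1 km/s
v_in = 1.0 * u.km / u.s                                  # typical fitted v_infall
f_infall = (v_in / v_ff).decompose()                     # infall velocity fraction
print(v_ff, f_infall)                                    # ~0.24, within 5%-74%
```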
#### 4.2.4 Mass Infall Rate
Assuming a simplified spherical model, the mass infall rate is calculated by (Lopez-Sepulcre et al., 2010),
\[\begin{split}\dot{M}&=4\pi R^{2}\rho v_{\rm infall}=4\pi m_{\rm p}\mu_{\rm H_{2}}n_{\rm H_{2}}R^{2}v_{\rm infall}\\ &=8.9\times 10^{-4}\left(\frac{\mu_{\rm H_{2}}}{2.809}\right)\left(\frac{n_{\rm H_{2}}}{10^{5}\,{\rm cm}^{-3}}\right)\left(\frac{R_{\rm deconv}}{0.1\,{\rm pc}}\right)^{2}\left(\frac{v_{\rm infall}}{1\,{\rm km\,s}^{-1}}\right)M_{\odot}\,{\rm yr}^{-1},\end{split} \tag{5}\]
where \(\mu_{\rm H_{2}}\) is the molecular weight per hydrogen molecule (\(\mu_{\rm H_{2}}=2.809\); Evans et al., 2022), \(n_{\rm H_{2}}\) and \(R_{\rm deconv}\) are the volume density and physical radius of the defined HCN core, and \(v_{\rm infall}\) is the infall velocity fitted with the "Hill5" model. The \(n_{\rm H_{2}}\) is estimated as \(N_{\rm H_{2}}/(2R_{\rm deconv})\). We note that the HCN core BG039.267-00.589C1 has not been resolved, so we use the physical scale of the beam size as an upper limit, and therefore the derived mass infall rate is an upper limit as well. The calculated mass infall rates are listed in column (8) of Table 5.
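The numerical prefactor of Eq. 5 can be checked directly with astropy units; the minimal sketch below evaluates \(\dot{M}\) at the reference values of the equation and should return approximately \(8.9\times 10^{-4}\,M_{\odot}\,{\rm yr}^{-1}\).

```python
import numpy as np
import astropy.units as u
from astropy.constants import m_p

def mass_infall_rate(n_h2, r_deconv, v_infall, mu_h2=2.809):
    """Spherical-model mass infall rate of Eq. 5 (Lopez-Sepulcre et al. 2010)."""
    mdot = 4 * np.pi * mu_h2 * m_p * n_h2 * r_deconv ** 2 * v_infall
    return mdot.to(u.Msun / u.yr)

# Evaluating at the reference values reproduces the prefactor of Eq. 5:
print(mass_infall_rate(1e5 * u.cm ** -3, 0.1 * u.pc, 1.0 * u.km / u.s))
# -> ~8.9e-4 solMass / yr
```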
The mass infall rate exhibits a wide range, from 0.15 to 32.1 \(\times 10^{-3}\)\(M_{\odot}\,{\rm yr}^{-1}\), which aligns with typical values observed in high-mass clumps (Yu et al., 2022; He et al., 2016). The mean and median values of the mass infall rate are \(7.6\times 10^{-3}\) and \(4.5\times 10^{-3}\)\(M_{\odot}\,{\rm yr}^{-1}\), respectively, in good agreement with the values derived from HCO\({}^{+}\) (1-0) lines in a sample of 11 IRDCs (Xie et al., 2021) and HCO\({}^{+}\)/HNC (1-0) lines in a sample of 33 IRDCs (Pillai et al., 2023). It should be noted that the mass infall rate obtained from HCN (4-3) lines primarily traces the inner regions of massive clumps, while the mass infall rate derived from HCO\({}^{+}\) (1-0) lines predominantly represents the outer parts or envelopes. Nevertheless, the remarkable consistency in mass infall rates between these two tracers suggests a continuous accretion process from the clump envelope to the inner region. This finding is supported by previous studies indicating minimal variations in mass infall rates during the evolution of high-mass clumps (He et al., 2016). Utilizing multi-\(J\) comparisons allows us to establish a connection between mass infall rates at various scales. Therefore, it is crucial to conduct follow-up high-resolution observations (e.g., by ALMA, NOEMA, or SMA) to precisely quantify the amount of mass ultimately transferred to the protostars. For instance, Xu et al. (2023) collected three-scale observations and revealed a consistent accretion process from clump-scale global collapse to core-scale gas feeding in the case of SDC335. Furthermore, high-resolution observations can improve our understanding of the concept of global collapse. Although a spherical model featuring a collapsing shell can adequately explain the blue profiles, most observations indicate that the inflows manifest as gas streams or elongated filamentary structures (Peretto et al., 2013; Kirk et al., 2013; Lu et al., 2018; Liu et al., 2016; Xu et al., 2023; Yang et al., 2023). The ongoing SMA observations of six of our sources are promising for deepening our understanding of the inner dense gas distribution and kinematics as well.
## 5 Conclusions
Leveraging the efficient-mapping capability of the JCMT HARP instrument, we perform an HCN (4-3) mapping survey of 38 representative massive star-forming clumps in the Bolocam Galactic Plane Survey (BGPS), guided by the HCO\({}^{+}\) (3-2) "blue asymmetric line profile" (blue profile). The high-\(J\) transition, with a critical density of \(>10^{7}\) cm\({}^{-3}\), combined with previous low-\(J\) transition data, the mapping observational mode, and the wide range of physical properties in such a large sample, helps deepen our understanding of blue profiles and their connection to gas infall motion in massive star-forming clumps. Our main findings are summarized as follows.
1. We integrate the line intensity of the HCN (4-3) lines and produce 38 HCN (4-3) moment 0 (M0) maps, of which 32 have detections and six have none. 30 M0 maps show isolated emission regions, while two show double emission regions. In total, 34 HCN emission cores (HCN cores) are identified by the SExtractor algorithm. HCN (4-3) spectra extracted from the HCN cores have velocities consistent with those of the N\({}_{2}\)H\({}^{+}\) (3-2) lines, justifying the usage of N\({}_{2}\)H\({}^{+}\) (3-2) as the systematic velocity tracer.
2. The averaged HCN (4-3) lines show various line profiles (14 blue, 4 red, and 22 non-asymmetric), rather than retaining the blue profile of the lower-\(J\) transition HCO\({}^{+}\) (3-2). Using the HCN (4-3) maps, we find intrinsic variations of the line profile within three HCN cores, suggesting potential rotation. The remaining 11 HCN cores serve as promising candidates for infall motion in massive star-forming regions.
3. We find an increasing fraction of blue profiles with both the H\({}_{2}\) column density and the opacity of the HCN (4-3) lines calculated with the non-LTE radiative transfer code RADEX, suggesting that insufficient opacity is the main reason for the low profile retention rate of 36.8% (14 blue profiles out of 38 massive clumps). However, even with sufficient HCN (4-3) opacity, there are still detections of red or non-asymmetric profiles, which suggests gas undergoing different motions in different density layers, as traced by different transitions.
4. A six-source subsample has three transitions, HCO\({}^{+}\) (1-0), HCO\({}^{+}\) (3-2), and HCN (4-3), with critical densities ranging from \(4.5\times 10^{4}\) cm\({}^{-3}\) to \(2.3\times 10^{7}\) cm\({}^{-3}\). Although limited by the sample size, single-peaked line profiles systematically have low opacity (\(\tau\ll 1\)), while blue profiles have sufficiently high opacity (\(\tau\gtrsim 1\)). Additionally, we find that two sources, namely BG009.212-00.202 and BG012.889+00.490, which exhibit bipolar outflows at relatively small inclination angles, display red profiles in the lowest-\(J\) transition of HCO\({}^{+}\) (1-0). These profiles can be attributed to expanding gas envelopes along the line of sight.
5. Comparison between the two line surveys guided by ATLASGAL (He et al., 2015, 2016) and BGPS (Schlingman et al., 2011; Shirley et al., 2013) highlights the importance of an appropriate tracer, high spectral resolution, and a column density threshold when searching for blue profiles in a large sample. We also caution that the blue profile is, after all, a phenomenological signature, and its connection to infall remains to be calibrated by multi-\(J\) transition line surveys of large samples.
6. If all 11 blue profiles are produced by infall motions, the "Hill5" model fits give infall velocities of the HCN cores ranging from 0.2 to 1.6 km s\({}^{-1}\), with mean and median values of 1.0 and 1.1 km s\({}^{-1}\). The infall velocities amount to 5% to 74% of the free-fall velocity, indicating that the HCN cores will collapse within a few to several tens of free-fall timescales.
7. Assuming a simplified spherical model, the mass infall rate can be calculated; it ranges from 0.15 to 32.1\(\times 10^{-3}\) \(M_{\odot}\) yr\({}^{-1}\), with mean and median values of \(7.6\times 10^{-3}\) and \(4.5\times 10^{-3}\) \(M_{\odot}\) yr\({}^{-1}\), consistent with what has been found in the low-\(J\) transition HCO\({}^{+}\) (1-0). The consistency of the mass infall rate among different transitions (i.e., different density layers) suggests a steady accretion process from the clump gas envelope to the inner region, as proposed by Xu et al. (2023).
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline HCN Core\({}^{\mbox{\small\bf a}}\) & Flag & Xoffset & Yoffset & \(\theta_{\rm maj}\) & \(\theta_{\rm min}\) & PA & \(F_{\rm peak}\) & \(R_{\rm core}\) \\ & & (arcsec) & (arcsec) & (arcsec) & (arcsec) & (deg) & (K\(\cdot\)km s\({}^{-1}\)) & (pc) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline BG008.458-00.224C1 & 1 & 14.1 & -16.5 & 26.15 & 16.78 & 141.9 & 3.7 & 0.13 \\ BG009.212-00.202C1 & 1 & 4.7 & -12.4 & 30.95 & 22.24 & 12.4 & 4.6 & 0.2 \\ BG010.214-00.324C1 & 1 & 2.8 & -16.2 & 34.45 & 26.0 & 92.4 & 7.4 & 0.26 \\ BG010.681-00.028C1 & 1 & 1.3 & -15.9 & 21.58 & 15.35 & 61.1 & 6.1 & 0.08 \\ BG011.083-00.536C1 & 1 & -1.7 & -9.5 & 29.42 & 22.63 & 82.5 & 6.2 & 0.13 \\ BG012.889+00.490C1 & 1 & 5.1 & -1.7 & 33.61 & 25.9 & 90.4 & 27.6 & 0.13 \\ BG013.882-00.143C1 & 1 & 5.4 & -11.8 & 16.31 & 11.05 & 143.7 & 4.2 & – \\ BG014.606+00.012C1 & 1 & -1.9 & -13.2 & 32.87 & 18.51 & 79.6 & 11.9 & 0.11 \\ BG015.021-00.620C1 & 1 & 8.8 & -11.9 & 36.58 & 31.46 & 13.4 & 5.7 & 0.12 \\ BG016.894+00.486C1 & 1 & 12.4 & -23.1 & 22.74 & 11.14 & 147.1 & 1.5 & – \\ BG023.875+00.534C1 & 1 & 11.3 & -4.9 & 25.3 & 20.91 & 14.3 & 15.9 & 0.2 \\ BG023.968-00.110C1 & 1 & -0.6 & -17.7 & 26.79 & 21.38 & 136.3 & 11.3 & 0.19 \\ BG024.010+00.488C1 & 1 & 4.3 & 0.1 & 15.53 & 11.59 & 107.7 & 4.9 & – \\ BG024.329+00.142C1 & 1 & 5.0 & -2.4 & 23.92 & 19.35 & 26.1 & 32.5 & 0.2 \\ BG024.414+00.102C1 & 2 & 14.2 & -5.2 & 23.88 & 12.67 & 117.9 & 3.9 & – \\ BG024.414+00.102C2 & 2 & -12.2 & -8.6 & 17.05 & 12.9 & 127.3 & 3.3 & – \\ BG025.400-00.141C1 & 1 & 8.8 & -12.9 & 34.45 & 26.65 & 143.7 & 18.3 & 0.32 \\ BG027.317+00.175C1 & 1 & 2.9 & -17.6 & 16.92 & 12.37 & 12.1 & 5.0 & – \\ BG028.341+00.140C1 & 1 & -1.2 & -11.0 & 20.85 & 12.49 & 42.2 & 3.2 & – \\ BG028.565-00.236C1 & 1 & 2.0 & -15.0 & 34.14 & 30.41 & 142.2 & 3.5 & 0.28 \\ BG029.397-00.095C1 & 1 & 1.2 & -9.0 & 30.07 & 16.58 & 59.7 & 8.6 & 0.26 \\ BG030.719-00.081C1 & 2 & 14.7 & -19.9 & 33.09 & 23.37 & 9.2 & 16.6 & 0.26 \\ BG030.719-00.081C2 & 2 & -15.7 & -20.1 & 28.33 & 19.9 & 24.6 & 18.1 & 0.2 \\ BG030.772-00.801C1 & 1 & 13.5 & -17.2 & 28.92 & 24.59 & 112.3 & 10.4 & 0.19 \\ BG033.740-00.017C1 & 1 & 21.1 & -12.2 & 37.5 & 15.79 & 139.3 & 8.4 & 0.2 \\ BG034.259+00.222C1 & 1 & 17.3 & 25.0 & 12.09 & 8.87 & 150.2 & 2.0 & – \\ BG034.712-00.596C1 & 1 & 11.1 & -4.9 & 25.27 & 21.14 & 15.1 & 15.9 & 0.1 \\ BG036.840-00.022C1 & 1 & 15.2 & -15.3 & 13.04 & 11.61 & 54.2 & 4.8 & – \\ BG039.267-00.589C1 & 1 & 3.5 & -14.4 & 17.35 & 13.12 & 98.1 & 4.7 & – \\ BG044.661+00.351C1 & 1 & 11.1 & -16.9 & 13.34 & 10.55 & 51.9 & 3.8 & – \\ BG049.210-00.342C1 & 1 & 6.8 & -8.1 & 41.93 & 27.75 & 48.2 & 7.1 & 0.35 \\ BG081.721+00.572C1 & 1 & 6.3 & -7.8 & 34.97 & 32.47 & 136.6 & 80.8 & 0.1 \\ BG133.748+01.197C1 & 1 & 11.8 & -9.4 & 28.04 & 18.09 & 128.6 & 17.0 & 0.08 \\ \hline \end{tabular}
\end{table}
Table 2: Properties of HCN Cores
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline HCN Corea & Flag & Xoffset & Yoffset & \(\theta_{\rm maj}\) & \(\theta_{\rm min}\) & PA & \(F_{\rm peak}\) & \(R_{\rm core}\) \\ & & (arcsec) & (arcsec) & (arcsec) & (arcsec) & (deg) & (K\(\cdot\)km s\({}^{-1}\)) & (pc) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline BG133.949+01.063C1 & 1 & 9.7 & -7.9 & 33.44 & 29.27 & 62.7 & 62.8 & 0.13 \\ \hline \end{tabular} Note. – Core name and flag are listed in (1)–(2). Offsets along the x and y axes in the equatorial coordinate are listed in (3)–(4). Fitted parameters including FWHM major axis, minor axis, position angle, and peak flux are listed in (5)–(8). The core size deconvolved with the beam is listed in (9).
\end{table}
Table 2: _(continued)_
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline HCN Core & \(T_{\rm b,HCN}^{\rm pk}\) & \(V_{\rm LSR,HCN}\) & d\(V_{\rm HCN}\) & \(V_{\rm peak,HCN}\) & \(V_{\rm sys}\) & d\(V_{\rm thin}\)a & \(\delta V\) & Flagb \\ & (K) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline BG008.458-00.224C1 & 0.29(0.01) & 37.84(0.09) & 5.55(0.2) & 37.88 & 38.0 & 5.1 & -0.024 & S \\ BG009.212-00.202C1 & 0.23(0.01) & 42.10(0.14) & 6.83(0.32) & 41.14 & 41.9 & 5.8 & -0.131 & BP \\ BG010.214-00.324C1 & 0.25(0.01) & 12.69(0.08) & 6.61(0.18) & 13.39 & 11.7 & 5.3 & 0.319 & RP \\ BG010.416-00.030C1 & – & – & – & – & 67.8 & 4.2 & – & N \\ BG010.681-00.028C1 & 0.27(0.01) & 51.41(0.1) & 5.83(0.24) & 50.95 & 50.8 & 3.8 & 0.04 & S \\ BG011.083-00.536C1 & 0.09(0.0) & 31.03(0.35) & 13.32(0.83) & 30.04 & 29.8 & 4.4 & 0.054 & S \\ BG012.889+00.490C1 & 0.46(0.0) & 33.04(0.03) & 7.12(0.07) & 32.43 & 33.4 & 3.0 & -0.323 & BP \\ BG013.816+00.003C1 & – & – & – & – & 47.6 & 2.3 & – & N \\ BG013.882-00.143C1 & 0.20(0.01) & 17.29(0.16) & 9.65(0.38) & 17.78 & 18.1 & 4.5 & -0.071 & S \\ BG014.606+00.012C1 & 0.33(0.01) & 25.45(0.12) & 10.45(0.29) & 23.93 & 26.8 & 5.0 & -0.574 & BP \\ BG014.708-00.224C1 & – & – & – & – & 37.4 & 2.6 & – & N \\ BG015.021-00.620C1 & 0.42(0.01) & 19.02(0.06) & 5.57(0.14) & 19.03 & 19.9 & 4.2 & -0.207 & S \\ BG015.123-00.558C1 & 0.04(0.01) & 20.43(0.38) & 5.31(0.9) & 18.71 & 18.7 & 2.2 & 0.004 & S \\ BG016.894+00.486C1 & 0.08(0.01) & 24.14(0.34) & 6.41(0.81) & 24.4 & 23.9 & 2.7 & 0.185 & S \\ BG023.875+00.534C1 & 0.46(0.01) & 96.98(0.05) & 6.36(0.11) & 97.89 & 94.7 & 4.5 & 0.709 & RP \\ BG023.968-00.110C1 & 0.12(0.0) & 71.94(0.19) & 12.98(0.46) & 71.0 & 71.9 & 6.4 & -0.141 & BP \\ BG024.010+00.488C1 & 0.14(0.01) & 95.34(0.37) & 15.22(0.86) & 96.41 & 94.1 & 3.9 & 0.592 & RP \\ BG024.329+00.142C1 & 0.23(0.01) & 114.31(0.11) & 10.73(0.27) & 114.69 & 114.8 & 4.4 & -0.025 & S \\ \hline \end{tabular}
\end{table}
Table 3: Parameters of HCN Emission Lines
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline HCN Core & \(T_{\rm b,HCN}^{\rm pk}\) & \(V_{\rm LSR,HCN}\) & d\(V_{\rm HCN}\) & \(V_{\rm peak,HCN}\) & \(V_{\rm sys}\) & d\(V_{\rm thin}\)1 & \(\delta V\) & Flag2 \\ & (K) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & (km s\({}^{-1}\)) & \\ \hline BG024.414+00.102C1 & 0.15(0.01) & 113.48(0.22) & 8.48(0.53) & 113.29 & 113.2 & 4.5 & 0.02 & S \\ BG024.414+00.102C2 & 0.14(0.01) & 112.63(0.3) & 10.39(0.71) & 112.49 & 113.2 & 4.5 & -0.158 & S \\ BG025.400-00.141C1 & 0.66(0.01) & 94.87(0.06) & 7.01(0.15) & 93.67 & 95.6 & 2.8 & -0.689 & BP \\ BG027.317+00.175C1 & 0.15(0.01) & 33.13(0.24) & 11.68(0.58) & 31.62 & 33.5 & 5.6 & -0.336 & S \\ BG028.341+00.140C1 & 0.07(0.01) & 81.40(0.52) & 14.31(1.22) & 83.76 & 80.1 & 3.4 & 1.076 & RP \\ BG028.565-00.236C1 & 0.09(0.0) & 86.28(0.29) & 14.48(0.69) & 84.85 & 86.5 & 4.5 & -0.367 & BP \\ BG029.397-00.095C1 & 0.11(0.0) & 105.85(0.22) & 11.58(0.52) & 102.91 & 105.8 & 4.8 & -0.602 & BP \\ BG030.719-00.081C1 & 0.59(0.01) & 92.82(0.05) & 9.45(0.13) & 90.75 & 93.2 & 7.6 & -0.322 & BP \\ BG030.719-00.081C2 & 0.61(0.01) & 92.14(0.05) & 9.26(0.11) & 90.55 & 93.2 & 7.6 & -0.349 & BP \\ BG030.772-00.801C1 & 0.13(0.0) & 76.34(0.18) & 12.65(0.42) & 76.94 & 79.2 & 4.4 & -0.514 & S \\ BG033.740-00.017C1 & 0.20(0.01) & 104.51(0.11) & 8.48(0.25) & 103.57 & 105.5 & 4.3 & -0.449 & BP \\ BG034.259+00.222C1 & – & – & – & – & 57.7 & 4.3 & – & N \\ BG034.591+00.244C1 & – & – & – & – & -23.9 & 2.9 & – & N \\ BG034.712-00.596C1 & 0.46(0.01) & 43.61(0.05) & 6.36(0.11) & 44.53 & 44.6 & 3.0 & -0.023 & S \\ BG036.840-00.022C1 & 0.34(0.01) & 57.59(0.08) & 7.23(0.2) & 58.35 & 58.3 & 4.5 & 0.011 & S \\ BG039.267-00.589C1 & 0.18(0.01) & 61.08(0.16) & 7.86(0.38) & 59.79 & 62.9 & 4.5 & -0.691 & BP \\ BG043.121+00.033C1 & – & – & – & – & 7.6 & 3.9 & – & N \\ BG044.661+00.351C1 & 0.22(0.01) & 18.46(0.17) & 7.89(0.4) & 19.82 & 19.1 & 3.7 & 0.195 & S \\ BG049.210-00.342C1 & 0.51(0.01) & 64.43(0.05) & 5.22(0.11) & 65.11 & 66.6 & 3.3 & -0.452 & S \\ BG081.721+00.572C1 & 2.45(0.03) & -6.77(0.04) & 7.86(0.1) & -7.75 & -4.5 & 5.1 & -0.637 & BP \\ BG133.748+01.197C1 & 0.49(0.01) & -41.45(0.06) & 5.87(0.15) & -41.78 & -39.0 & 4.0 & -0.695 & BP \\ BG133.949+01.063C1 & 1.70(0.01) & -50.46(0.03) & 6.77(0.06) & -51.28 & -48.4 & 4.9 & -0.588 & BP \\ \hline \end{tabular} Note. – Core name is listed in (1). Fitted HCN spectral parameters including peak brightness temperature, velocity at local standard of rest, FWHM line width, and velocity at peak value are listed in (2)–(5) respectively. Systematic velocity and FWHM line width derived from the optically thin lines (Shirley et al., 2013) are in (6)–(7). The asymmetry parameter calculated by Eq. 2 is listed in (8). HCN line profile identification is listed in (9). “BP” = blue profile, “RP” = red profile, “S” = single-peaked profile, and “N” = non-detection.
\end{table}
Table 3: _(continued)_
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{2}{c}{Multi-\(J\) Transition Lines} & \multicolumn{2}{c}{\(\tau_{\rm HCN(4-3)}\)} \\ \multicolumn{1}{c}{} & \multicolumn{2}{c}{Critical density at 20 K (cm\({}^{-3}\))} & \multicolumn{2}{c}{at collision partner density of} \\ \cline{2-7} \multicolumn{1}{c}{Source Names} & HCO\({}^{+}\) (1-0) & HCO\({}^{+}\) (3-2) & HCN (4-3) & \multicolumn{2}{c}{\(n_{\rm H_{2}}\) (cm\({}^{-3}\))} \\ \cline{2-7} & \(4.5\times 10^{4}\) & \(1.4\times 10^{6}\) & \(2.3\times 10^{7}\) & \(10^{5}\) & \(10^{5.5}\) & \(10^{6}\) \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) \\ \hline BG008.458-00.222 & BP & BP & S & 7.8(-3) & 2.5(-2) & 7.7(-2) \\ BG008.670-00.356 & BP & BP & – & – & – & – \\ BG009.212-00.202 & RP & BP & BP & 3.9(-1) & 1.4(0) & 3.2(0) \\ BG011.083-00.536 & BP & BP & S & 6.0(-3) & 1.8(-2) & 5.6(-2) \\ BG012.889+00.490 & RP & BP & BP & 5.6(0) & 1.1(1) & 1.3(1) \\ BG014.633-00.574 & BP & BP & – & – & – & – \\ \hline \end{tabular} Note. – Source names are inherited from Column (1) of Table 1. Profiles in multi-\(J\) transition lines are shown in columns (2)–(4). The critical density at 20 K is listed below the transition lines (Shirley, 2015). “BP” = blue profile, “RP” = red profile, and “S” = single-peaked profile. Opacity of HCN (4-3) lines at three different volume densities of the collision partner H\({}_{2}\) is shown in columns (5)–(7). The value of opacity is shown in the form of “a(b)”, denoting \(a\times 10^{b}\).
\end{table}
Table 4: Line Profiles at Multi-\(J\) Transition Lines
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{ HCN Core} & Range & \multicolumn{4}{c}{Hill5 Fitting Results} & \multicolumn{2}{c}{\(\dot{M}\)} \\ \multicolumn{1}{c}{} & (km s\({}^{-1}\)) & \(\tau_{\rm core}\) & \(v_{\rm LSR}\) (km s\({}^{-1}\)) & \(v_{\rm infall}\) (km s\({}^{-1}\)) & \(\sigma\) (km s\({}^{-1}\)) & \(T_{\rm peak}\) (K) & (\(10^{-3}\,M_{\odot}\,\)yr\({}^{-1}\)) \\ \multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\ \hline BG009.212-00.202C1 & [40,45] & 2.52(0.21) & 42.39(0.05) & 0.25(0.06) & 1.18(0.05) & 5.07(0.04) & 0.58(0.15) \\ BG012.889+00.490C1 & [29,37] & 1.55(0.12) & 33.46(0.07) & 1.20(0.15) & 1.78(0.04) & 5.95(0.03) & 4.47(0.72) \\ BG014.606+00.012C1 & [22,30] & 2.61(0.22) & 25.91(0.07) & 0.19(0.09) & 1.82(0.07) & 5.64(0.05) & 0.15(0.07) \\ BG025.400-00.141C1 & [92,98] & 1.52(0.16) & 94.98(0.06) & 0.61(0.13) & 1.51(0.06) & 6.92(0.04) & 1.41(0.32) \\ BG028.565-00.236C1 & [82,92] & 2.63(0.48) & 87.31(0.26) & 1.26(0.35) & 1.98(0.17) & 4.00(0.05) & 32.1(9.4) \\ BG030.719-00.081C2 & [88,98] & 1.83(0.10) & 92.81(0.05) & 1.06(0.10) & 2.14(0.04) & 6.59(0.02) & 7.91(1.10) \\ BG033.740-00.017C1 & [100,112] & 1.73(1.35) & 105.58(0.12) & 1.59(1.75) & 1.75(0.63) & 4.70(0.20) & 4.58(5.04) \\ BG039.267-00.589C1 & [58,66] & 1.40(0.43) & 61.62(0.14) & 1.00(0.45) & 1.74(0.16) & 4.66(0.08) & \(<\)1.60(0.74) \\ BG081.721+00.572C1 & [-10,-2] & 2.45(0.09) & -6.04(0.04) & 1.19(0.07) & 1.57(0.03) & 11.78(0.07) & 21.5(2.46) \\ BG133.748+01.197C1 & [-44,-38] & 1.09(0.30) & -41.24(0.08) & 0.74(0.32) & 1.39(0.10) & 6.35(0.22) & 4.26(1.89) \\ BG133.949+01.063C1 & [-54,-46] & 2.08(0.42) & -49.63(0.06) & 1.46(0.46) & 1.45(0.19) & 9.55(0.10) & 5.48(1.79) \\ \hline \end{tabular} Note. – Core name is listed in (1). Velocity range used for model fitting is listed in (2). Hill5 fitting results including optical depth, velocity at local standard of rest, infall velocity, velocity dispersion, and peak excitation temperature are listed in (3)–(7). The mass infall rate is listed in (8).
\end{table}
Table 5: Hill5 Fitting Result of Blue-profile HCN (4-3) lines
## Acknowledgment
We thank the anonymous referee for the constructive comments.
FWX and KW acknowledge support from the National Science Foundation of China (11721303, 11973013, 12033005), the China Manned Space Project (CMS-CSST-2021-A09), National Key R&D Program of China (2022YFA1603102), and the High-Performance Computing Platform of Peking University. YXH acknowledges support from the Chinese Academy of Sciences (CAS) "Light of West China" Program (2020-XBQNXZ-017).
This research used the facilities of the Canadian Astronomy Data Centre operated by the National Research Council of Canada with the support of the Canadian Space Agency. The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; the National Astronomical Research Institute of Thailand; Center for Astronomical Mega-Science (as well as the National Key R&D Program of China with No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities and organizations in the United Kingdom and Canada. The data used in this paper are from projects M16AP067, M19BP033, and M22AP051. We thank Junhao Liu for great help on the JCMT observations.
_Software._ This research uses Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018, 2022). This research makes use of Montage, funded by the National Science Foundation under Grant Number ACI-1440620, and previously funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. This research has used the SIMBAD database, operated at CDS, Strasbourg, France (Wenger et al., 2000). This research has used the program SExtractor, which builds a catalog of objects from an astronomical image (Bertin & Arnouts, 1996). This research has used Python based package PySpecKit to fit spectral lines (Ginsburg & Mirocha, 2011; Ginsburg et al., 2022). This research has used RADEX, a computer program for fast non-LTE analysis of interstellar line spectra (van der Tak et al., 2007).
## Appendix A The definition of HCN cores by SExtractor
The automatic source extraction program SExtractor (Bertin & Arnouts, 1996) is used to extract sources from the observed fields. For each field, we first generate an RMS noise map pixel-wise from the line-free channels. The moment 0 (M0) maps, together with the corresponding RMS maps, serve as the two inputs for SExtractor. Before the program runs, nthresh = 3 is set to cut out the low-SNR (nthresh\(\times\)local RMS) pixels. The deblending parameters (number of deblending thresholds deblend_nthresh = 512, deblending contrast deblend_cont = 10\({}^{-5}\)) and the parameter controlling the minimum number of pixels in a core (min_npix = 10, that is, the number of pixels in a JCMT beam) are set for the source extraction procedure. Note that some of the fields, for example BG013.882-00.143, BG014.708-00.224, and BG015.021-00.620, have outlier tiles with abrupt background emission and RMS. However, such artifacts have no effect on the source extraction, mostly thanks to the good background reduction and the local RMS map input to SExtractor. When the program finishes, we discard sources at the edges of the fields, whose emission is not fully observed.
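As a rough Python analogue of this procedure (a sketch, not the actual pipeline), the sep package, a Python port of the SExtractor core algorithms, exposes equivalent parameters; the map below is synthetic stand-in data, and SExtractor and sep differ in some defaults such as filtering and cleaning.

```python
import numpy as np
import sep

# Synthetic stand-ins for a moment-0 map and its pixel-wise RMS map.
rng = np.random.default_rng(0)
ny, nx = 120, 120
yy, xx = np.mgrid[:ny, :nx]
m0 = 5.0 * np.exp(-((xx - 60.0) ** 2 + (yy - 60.0) ** 2) / (2 * 6.0 ** 2))
m0 = (m0 + rng.normal(0.0, 0.3, (ny, nx))).astype(np.float64)
rms = np.full((ny, nx), 0.3)

sources = sep.extract(
    m0,
    thresh=3.0,            # nthresh = 3, in units of the local RMS
    err=rms,               # local RMS map, as in the SExtractor run
    minarea=10,            # min_npix = 10 (~pixels in one JCMT beam)
    deblend_nthresh=512,   # deblend_nthresh = 512
    deblend_cont=1e-5,     # deblend_cont = 1e-5
)
# Ellipse parameters analogous to Table 2; edge sources would be discarded.
print(len(sources), sources["a"], sources["b"], sources["theta"])
```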
As a result, 32 fields have source detections while six have none. Among the 32 fields, two have detections of two sources each. Since most sources are elliptical in shape and centrally peaked, we refer to them as HCN cores hereafter.
All basic fitted parameters of the HCN cores, including the offsets (along the x and y axes) from the field center, the FWHM along the major and minor axes (\(\theta_{\rm maj}\) and \(\theta_{\rm min}\)), the position angle (PA), and the peak flux (\(F_{\rm peak}\)), are listed in columns (3)-(8) of Table 2.
Shown in Figure A1, the HCN cores are marked with green ellipses whose major and minor axes are the FWHM of the source profile along the directions of maximum and minimum dispersion, respectively. Green texts "C1" and "C2" indicate the core ID(s) in each field. For fields without detection of HCN cores, green dashed circles with a diameter of 5 pixels (\(\sim 30\arcsec\), i.e., the beamsize of the SMT at 270 GHz; Shirley et al., 2013) are used to outline the regions for the extraction of spectra in Section 3.2. The non-detection fields have no fitted parameters, which is indicated by "-" in columns (3)-(8) of Table 2.
Figure A1: The moment 0 maps of HCN (4-3) in the 38 fields. The names are labeled at the top left of each panel. The green solid ellipses mark the HCN cores identified by SExtractor, while the green dashed circles outline the regions where the spectra are extracted in the non-detection fields. The JCMT beam and its physical scale in each map are shown at the bottom left.
Figure A1: Continued.
## Appendix B Core rotation identified by gradient of HCN (4-3)
We present the spectral line maps of the 14 HCN cores exhibiting blue profiles in Figure B1. As deliberated in Redman et al. (2004), rotation would manifest as blue-red asymmetries, leading to brighter blue peaks on one side of the core's rotation axis and brighter red peaks on the opposing side. Consequently, a pure rotational motion would yield a consistent velocity gradient perpendicular to the rotation axis. We compute the gradient of the HCN (4-3) peak velocity within the core with the following formula,
\[\Delta x_{\rm ij}=\left(\frac{\partial V_{\rm peak,HCN}(x,y)}{\partial x}\right)_{\rm ij},\qquad\Delta y_{\rm ij}=\left(\frac{\partial V_{\rm peak,HCN}(x,y)}{\partial y}\right)_{\rm ij},\] (B1)
where (i,j) indicates the pixel location. The position angle of the local gradient can then be calculated as,
\[\theta_{\rm ij}=\arctan\left(\frac{\Delta y_{\rm ij}}{\Delta x_{\rm ij}} \right).\] (B2)
For an ideal core rotation model, the rotation axis should be perpendicular to the velocity gradient \(\theta_{\rm ij}\). We identify three candidates for core rotation, BG023.968-00.110C1, BG029.397-00.095C1, and BG030.719-00.081C1, whose rotation axes have position angles of 79\({}^{\circ}\), 113\(\fdg\)8, and 101\(\fdg\)7, respectively. The rotation axes are labeled with dashed lines in the panels of the three cores in Figure B1.
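A minimal numpy sketch of Eqs. B1 and B2 follows; it assumes the \(V_{\rm peak,HCN}\) map is stored as a regular 2D array in pixel units, with NaN outside the core (NaNs propagate through the gradient).

```python
import numpy as np

def velocity_gradient_pa(v_peak):
    """Pixel-wise gradient of a 2D peak-velocity map and its position angle.

    Implements Eqs. B1 and B2; v_peak is V_peak,HCN on a regular grid,
    with NaN outside the HCN core.
    """
    dvdy, dvdx = np.gradient(v_peak)        # numpy returns (d/daxis0, d/daxis1)
    theta = np.degrees(np.arctan2(dvdy, dvdx))
    return dvdx, dvdy, theta

# For pure rotation, theta should be roughly uniform over the core and
# perpendicular to the rotation axis.
```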
Figure B1: For the 11 infall candidates, the line grids are overlaid on M0 color maps. Green ellipses mark the footprints of the HCN cores. The lines shown are averaged over \(2\times 2\) pix\({}^{2}\) boxes and smoothed to 0.4 km s\({}^{-1}\). For the three core rotation candidates, dashed lines show the rotation axes, which are overlaid on moment 1 color maps. The JCMT beam and the scale bar of 0.2 pc are shown on the bottom left and right, respectively.
Figure B1. Continued.
## Appendix C Estimation of HCN Core Volume Density
A statistical study of 27 IRDCs by Peretto et al. (2023) found that self-gravitating massive star-forming clumps have dynamically decoupled from their surrounding molecular clouds below the parsec scale, exhibiting a steeper density profile \(\rho\propto r^{-2}\). Adopting this density profile, we can derive the enclosed mass within a given radius \(r\):
\[M_{\rm enc}(<r)=\int_{0}^{r}4\pi r^{2}\rho{\rm d}r\propto r,\] (C3)
in agreement with the IR-quiet protostellar MDCs found in Cygnus X and their hosted high-mass Class 0-like protostars (Motte et al., 2007; Bontemps et al., 2010; Motte et al., 2018). Therefore, the mean volume density of the enclosed mass inside radius \(r\) reads,
\[n_{\rm H_{2}}(<r)=\frac{M_{\rm enc}(<r)}{(4/3)\pi\mu m_{\rm H}r^{3}}\propto\frac {1}{r^{2}},\] (C4)
where \(\mu=2.81\) is the molecular weight per hydrogen molecule (Evans et al., 2022) and \(m_{\rm H}\) is the mass of a hydrogen atom. Using Eq. C4, we can scale the volume density from the clump (\(R_{\rm cl}\)) down to the HCN cores (\(R_{\rm core}\)) as follows,
\[n_{\rm H_{2}}(<R_{\rm core})=\frac{R_{\rm cl}^{2}}{R_{\rm core}^{2}}n_{\rm H _{2}}(<R_{\rm cl}),\] (C5)
where \(n_{\rm H_{2}}(<R_{\rm cl})\) can be directly derived from the clump mass \(M_{\rm cl}\) and radius \(R_{\rm cl}\) in columns (11) and (8) of Table 1, respectively.
Figure C1 shows the distribution of the HCN core volume density. The majority (\(\sim 90\%\)) of the HCN cores have a volume density within the range of \(10^{5}\) to \(10^{6}\,{\rm cm^{-3}}\), which sets the input of the RADEX mock grids in Sections 4.1.2 and 4.1.3.
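A compact sketch of the scaling in Eqs. C4-C5 using astropy units follows; the clump mass, clump radius, and core radius below are illustrative placeholder values, not entries from Tables 1 and 2.

```python
import numpy as np
import astropy.units as u
from astropy.constants import m_p

def core_density(m_cl, r_cl, r_core, mu=2.81):
    """Scale the mean clump density down to the HCN core (Eqs. C4-C5).

    Assumes the rho ~ r^-2 profile of Peretto et al. (2023), for which
    n(<r) ~ r^-2.
    """
    n_cl = (m_cl / (4.0 / 3.0 * np.pi * mu * m_p * r_cl ** 3)).to(u.cm ** -3)
    return (r_cl / r_core) ** 2 * n_cl

# Illustrative placeholder values, not entries from Tables 1 and 2.
print(core_density(1.0e3 * u.Msun, 0.5 * u.pc, 0.15 * u.pc))  # ~3e5 cm^-3
```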
## Appendix D The "Hill5" Fitting Model
The "Hill5" fitting model assumes a core with a peak excitation temperature \(T_{\rm peak}\) at the center and an excitation temperature of \(T_{0}=T_{bg}=2.73\,\)K at the proximal and distal of the core. The Planck temperature \(J(T)\) is defined as,
\[J(T)=\frac{h\nu}{k_{\rm B}}\frac{1}{\exp(h\nu/k_{\rm B}T)-1}.\] (D6)
We assume that \(J(T)\) drops linearly from \(J(T_{\rm peak})\) at the center to \(J(T_{\rm bg})\) at the edges of the core, forming a "hill" in the \(J(T)\) profile.
In units of brightness temperature, the equation of radiative transfer reads,
\[T_{B}=T_{i}e^{-\tau_{0}}+\int_{0}^{\tau_{0}}J(T)e^{-\tau}\mathrm{d}\tau,\] (D7)
where \(T_{i}\equiv(c^{2}/2\nu^{2}k_{\rm B})I_{\nu,i}\) is the incident specific intensity of radiation in units of brightness temperature and \(\tau_{0}\) is the optical depth. Assuming a simple linear function \(J(\tau)=J_{1}+[(J_{2}-J_{1})/\tau_{0}]\,\tau\), we can integrate the equation of radiative transfer to obtain,
\[T_{B}=T_{i}e^{-\tau_{0}}+(J_{2}-J_{1})\frac{1-e^{-\tau_{0}}}{\tau_{0}}+J_{1}-J _{2}e^{-\tau_{0}}.\] (D8)
In order to solve the equation of radiative transfer, we separate the core into two parts along the line of sight: (1) the front part of the core, in which the excitation temperature rises along the line of sight, with optical depth \(\tau_{f}\); and (2) the rear part of the core, in which the excitation temperature falls along the line of sight, with optical depth \(\tau_{r}\). If both parts infall with a velocity of \(v_{\rm infall}\) relative to the systematic velocity \(v_{\rm LSR}\), then the optical depths \(\tau_{f}\) and \(\tau_{r}\) as functions of the line-of-sight velocity are written as,
\[\begin{split}\tau_{f}(v)&=\tau_{\rm core}\exp\left[ -(v-v_{\rm LSR}-v_{\rm infall})^{2}/2\sigma^{2}\right],\\ \tau_{r}(v)&=\tau_{\rm core}\exp\left[-(v-v_{\rm LSR }+v_{\rm infall})^{2}/2\sigma^{2}\right],\end{split}\] (D9)
where \(\tau_{\rm core}\) is the optical depth of the core and \(\sigma\) is the velocity dispersion of each part of the core.
Substituting the second row of Eq. D9 into Eq. D8, the outgoing brightness temperature from the rear part reads,
\[T_{B,r}=J(T_{\rm bg})e^{-\tau_{r}(v)}+[J(T_{\rm bg})-J(T_{\rm peak})]\,\frac{ 1-e^{-\tau_{r}(v)}}{\tau_{r}(v)}+J(T_{\rm peak})-J(T_{\rm bg})e^{-\tau_{r}(v)},\] (D10)
which then serves as the incident brightness temperature for the front part. The outgoing brightness temperature from the front part then reads,
\[\Delta T_{B,f}=[J(T_{\rm peak})-J(T_{\rm bg})]\times\left[\frac{1-e^{-\tau_{f} (v)}}{\tau_{f}(v)}-\frac{(1-e^{-\tau_{r}(v)})}{\tau_{r}(v)}e^{-\tau_{f}(v)}\right]\] (D11)
where the emission of the reference position has been eliminated. In total, the "hill" model contains five free parameters, \(\tau_{\rm core}\), \(\sigma\), \(T_{\rm peak}\), \(v_{\rm LSR}\), and \(v_{\rm infall}\), which is why it is called the "Hill5" model.
2309.15598 | On the uniqueness of solutions to the isotropic $L_{p}$ dual Minkowski
problem | We prove that the unit sphere is the only smooth, strictly convex solution to
the isotropic $L_p$ dual Minkowski problem
\begin{align*}
h^{p-1} |D h|^{n+1-q}\mathcal{K}=1,
\end{align*} provided $(p,q)\in (-n-1,-1]\times [n,n+1)$. | Yingxiang Hu, Mohammad N. Ivaki | 2023-09-27T12:00:09Z | http://arxiv.org/abs/2309.15598v1 | # On the uniqueness of solutions to the isotropic \(L_{p}\) dual Minkowski problem
###### Abstract.
We prove that the unit sphere is the only smooth, strictly convex solution to the isotropic \(L_{p}\) dual Minkowski problem
\[h^{p-1}|Dh|^{n+1-q}\mathcal{K}=1,\]
provided \((p,q)\in(-n-1,-1]\times[n,n+1)\).
## 1. Introduction
An important question in convex geometry is the uniqueness or the non-uniqueness of the origin-centred spheres as solutions to the isotropic \(L_{p}\) dual Minkowski problem
\[h^{p-1}|Dh|^{n+1-q}\mathcal{K}=c,\quad c\in(0,\infty). \tag{1.1}\]
The \(L_{p}\) dual Minkowski problem was first introduced by Lutwak, Yang and Zhang [17], acting as a bridge connecting the \(L_{p}\)-Minkowski problem to the dual Minkowski problem. The former, the \(L_{p}\)-Minkowski problem, was introduced by Lutwak in his influential paper [14] three decades ago, and has since been extensively investigated; e.g., [1, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. The latter, the dual Minkowski problem, was proposed recently by Huang et al. in [11], and further studied in [26, 27, 28, 29, 30, 31, 32]. There has been significant progress on the \(L_{p}\) dual Minkowski problem since [17], such as [26, 27, 28, 29, 30, 31, 32]; however, a complete answer to the uniqueness and non-uniqueness question, as stated above, has remained elusive in the most interesting case: without the _origin-symmetry_ assumption.
Here are the known uniqueness and non-uniqueness results for the isotropic \(L_{p}\) dual Minkowski problem:
* [1], uniqueness of solutions for \(-(n+1)\leq p<1\) and \(q=n+1\) (see also [1, 2, 3, 3]);
* [26], uniqueness of solutions for \(p>q\);
* [1], uniqueness of origin-symmetric solutions for \[-(n+1)\leq p<q\leq\min\{n+1,n+1+p\};\]
* [1], uniqueness of solutions for \(1<p<q\leq n+1\), or \(-(n+1)\leq p<q<-1\), or the uniqueness of solutions up to rescaling for \(p=q\);
* [2], complete classification for \(n=1\);
* [1], non-uniqueness of solutions under any of the following assumptions: (i) \(q-2(n+1)>p\geq 0\); (ii) \(q>0\) and \(-q^{*}<p<\min\{0,q-2n-2\}\), where \[q^{*}:=\begin{cases}\dfrac{q}{q-n},&\text{if }q\geq n+1\\ \dfrac{nq}{q-1},&\text{if }1<q<n+1\\ +\infty,&\text{if }0<q\leq 1;\end{cases}\] (iii) \(p+2(n+1)<q\leq 0\).
In the recent work [2], employing the local Brunn-Minkowski inequality, the following uniqueness result was proved.
**Theorem**.: _Let \(n\geq 2\) and assume \(-(n+1)\leq p\) and \(q\leq n+1\), with at least one being strict. Suppose \(\mathcal{M}^{n}\) is a smooth, strictly convex, origin-centred hypersurface such that \(h^{p-1}|Dh|^{n+1-q}\mathcal{K}=c\) with \(c>0\). Then \(\mathcal{M}^{n}\) is an origin-centred sphere._
Here, we also employ the local Brunn-Minkowski inequality as our main tool to establish the following uniqueness result.
**Theorem 1.1**.: _Let \(n\geq 2\). Suppose \(\mathcal{M}^{n}\) is a smooth, strictly convex hypersurface with \(h>0\), such that \(h^{p-1}|Dh|^{n+1-q}\mathcal{K}=1\). Suppose either_
1. \(-(n+1)<p\leq-1\) _and_ \(n\leq q\leq n+1\)_,_
2. _or_ \(-(n+1)\leq p\leq-n\) _and_ \(1\leq q<n+1\)_._
_Then \(\mathcal{M}^{n}\) is the unit sphere._
## 2. Background
### Convex geometry
Let \((\mathbb{R}^{n+1},\delta:=\langle\,,\rangle,D)\) denote the Euclidean space with its standard inner product and flat connection, and let \((\mathbb{S}^{n},\bar{g},\bar{\nabla})\) denote the unit sphere equipped with its standard round metric and Levi-Civita connection.
Suppose \(K\) is a smooth, strictly convex body in \(\mathbb{R}^{n+1}\) with the origin in its interior. Write \(\mathcal{M}=\mathcal{M}^{n}=\partial K\) for the boundary of \(K\). The Gauss map of \(\mathcal{M}\), denoted by \(\nu\), takes the point \(p\in\mathcal{M}\) to its unique
unit outward normal \(x=\nu(p)\in\mathbb{S}^{n}\). The support function of \(K\) is defined by
\[h(x):=\max\{\langle x,y\rangle:\ y\in K\},\quad x\in\mathbb{S}^{n}.\]
The inverse Gauss map \(X=\nu^{-1}:\mathbb{S}^{n}\to\mathcal{M}\) is given by
\[X(x)=Dh(x)=\bar{\nabla}h(x)+h(x)x,\quad x\in\mathbb{S}^{n}.\]
The support function can also be expressed as
\[h(x)=\langle X(x),x\rangle=\langle\nu^{-1}(x),x\rangle,\quad x\in\mathbb{S}^{ n}.\]
The radial function of \(K\) is defined by
\[r(x):=|X(x)|=(|\bar{\nabla}h(x)|^{2}+h^{2}(x))^{\frac{1}{2}}.\]
Moreover, the Gauss curvature of \(\mathcal{M}\) is defined by
\[\frac{1}{\mathcal{K}(x)}:=\left.\frac{\det(\bar{\nabla}^{2}h+\bar{g}h)}{\det( \bar{g})}\right|_{x},\quad x\in\mathbb{S}^{n}.\]
Note that the matrix \(A[h]:=\bar{\nabla}^{2}h+h\bar{g}=D^{2}h|_{T\mathbb{S}^{n}}\) is positive-definite. The eigenvalues of the matrix \(A[h]\) with respect to the metric \(\bar{g}\), denoted by \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\), are the principal radii of curvature at the point \(X(x)\in\mathcal{M}\). Then \(\sigma_{n}=\mathcal{K}^{-1}=\prod_{i}\lambda_{i}\). The curvature equation (1.1) can be reformulated as the following Monge-Ampère equation:
\[h^{1-p}|Dh|^{q-n-1}\det(\bar{\nabla}^{2}h+\bar{g}h)=c.\]
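For orientation, consider the simplest case (our illustration, not part of the original argument): the origin-centred sphere of radius \(R\). There \(h\equiv R\), \(\bar{\nabla}h=0\), so \(|Dh|=r=R\) and \(\mathcal{K}=R^{-n}\), and hence
\[h^{p-1}|Dh|^{n+1-q}\mathcal{K}=R^{p-1}\cdot R^{n+1-q}\cdot R^{-n}=R^{p-q}.\]
Thus, for \(p\neq q\), exactly one centred sphere solves the isotropic equation for a given \(c>0\), while for \(p=q\) every centred sphere does; this is consistent with the uniqueness up to rescaling for \(p=q\) recorded in the list above.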
The polar body of \(K\) is defined by
\[K^{*}:=\{y\in\mathbb{R}^{n+1}:\langle x,y\rangle\leq 1\ \forall x\in K\}.\]
It is well known that \(K^{*}\) is also a smooth, strictly convex body in \(\mathbb{R}^{n+1}\) with the origin in its interior. Moreover, the following identity holds:
\[\frac{h^{n+2}(x)(h^{*}(x^{*}))^{n+2}}{\mathcal{K}(x)\mathcal{K}^{*}(x^{*})}=1. \tag{2.1}\]
Here \(h^{*}\) and \(\mathcal{K}^{*}\) denote respectively the support function and Gauss curvature of \(K^{*}\), and \(x^{*}:=X(x)/|X(x)|\).
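As a quick sanity check of (2.1) (ours): for the centred sphere of radius \(R\) we have \(h\equiv R\), \(h^{*}\equiv 1/R\), \(\mathcal{K}=R^{-n}\), and \(\mathcal{K}^{*}=R^{n}\), so
\[\frac{h^{n+2}(h^{*})^{n+2}}{\mathcal{K}\mathcal{K}^{*}}=\frac{R^{n+2}\cdot R^{-(n+2)}}{R^{-n}\cdot R^{n}}=1.\]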
Finally, let us introduce the measure \(dV:=h\sigma_{n}d\mu\), where \(\mu\) is the spherical Lebesgue measure of the unit sphere \(\mathbb{S}^{n}\). Then the measure \(\sigma_{n}d\mu\) is the surface-area measure of \(K\), and \(dV=h\sigma_{n}d\mu\) is a constant multiple of the cone-volume measure of \(K\). We refer to [13] for additional background.
### Centro-affine geometry
In this section, we recall some basics from centro-affine geometry. For the related concepts, we refer the reader to [10, 11] and, in particular, to the excellent paper by Milman [13].
Let \(X:\mathbb{S}^{n}\to\mathcal{M}\) be a smooth embedding of \(\mathcal{M}\) (which we take to be \(Dh\), as in the previous section), and consider the transversal normal field \(\xi(x):=X(x)\) (the centro-affine normal). The transversal vector \(\xi\) induces the volume form \(V\) (as in the previous section), a connection \(\nabla\), as well as a metric \(g^{\xi}\) on \(\mathbb{S}^{n}\), as follows:
\[V(e_{1},\dots,e_{n})=\det(dX(e_{1}),\dots,dX(e_{n}),\xi),\quad e_{i}\in T \mathbb{S}^{n},\]
\[D_{u}dX(v)=dX(\nabla_{u}v)-g^{\xi}(u,v)\xi,\quad u,v\in T\mathbb{S}^{n}. \tag{2.2}\]
Note that \(g^{\xi}\) is symmetric and positive-definite. Moreover, while \(\nabla\) is not the Levi-Civita connection of \(g^{\xi}\), it is torsion-free and
\[\nabla V\equiv 0. \tag{2.3}\]
The conormal field \(\xi^{*}:\mathbb{S}^{n}\to(\mathbb{R}^{n+1})^{*}\sim\mathbb{R}^{n+1}\) is the unique smooth map into the dual space of \(\mathbb{R}^{n+1}\) such that \(\langle\xi^{*},dX\rangle=0\) and \(\langle\xi,\xi^{*}\rangle=1\). Moreover, \(\xi^{*}\) is an immersion transversal to its image, and it induces a bilinear form and a torsion-free connection on \(\mathbb{S}^{n}\):
\[D_{u}d\xi^{*}(v)=d\xi^{*}(\nabla_{u}^{*}v)-g^{\xi^{*}}(u,v)\xi^{*},\quad u,v\in T \mathbb{S}^{n}.\]
We furnish all geometric quantities associated with \(\xi^{*}\) with \(*\).
It is known that \(g^{\xi}=g^{\xi^{*}}\) and that the two connections \(\nabla^{*}\) and \(\nabla\) are conjugate with respect to \(g^{\xi}\):
\[ug^{\xi}(v_{1},v_{2})=g^{\xi}(\nabla_{u}v_{1},v_{2})+g^{\xi}(v_{1},\nabla_{u} ^{*}v_{2})\quad u,v_{1},v_{2}\in T\mathbb{S}^{n}.\]
Moreover, by [13, Proposition 4.2] (or taking the inner product of (2.2) with \(\nu\)), we find
\[g^{\xi}=g^{\xi^{*}}=\frac{A[h]}{h}:=g.\]
For a smooth function \(f:\mathbb{S}^{n}\to\mathbb{R}\), the Hessian and Laplacian with respect to \((\nabla,g)\) are defined as
\[\operatorname{Hess}f(u,v)=\nabla df(u,v)=v(uf)-df(\nabla_{v}u)\]
and \(\Delta f=\operatorname{div}_{g}(\nabla f)=\sum_{i}g(\nabla_{e_{i}}\nabla f,e_ {i})\), where \(\{e_{i}\}_{i=1}^{n}\) is a local \(g\)-orthonormal frame of \(T\mathbb{S}^{n}\).
We write \(\operatorname{Hess}^{*}\) and \(\Delta^{*}\) respectively for the Hessian and Laplacian with respect to \((\nabla^{*},g)\). Since \(\nabla,\nabla^{*}\) are conjugate, we have
\[v(uf)=vg(\nabla f,u)=g(\nabla_{v}\nabla f,u)+df(\nabla_{v}^{*}u).\]
Therefore, we obtain
\[\Delta f=\operatorname{tr}_{g}\operatorname{Hess}^{*}f,\quad\Delta^{*}f= \operatorname{tr}_{g}\operatorname{Hess}f.\]
By [16, Proposition 4.2], we have
\[\operatorname{Hess}^{*}f+gf=\frac{1}{h}\left(\bar{\nabla}^{2}(hf)+ \bar{g}hf\right)=\frac{A[hf]}{h}.\]
Let us define
\[Q(u,v)=\nabla_{v}^{*}u-\nabla_{v}u\quad\forall u,v\in T\mathbb{S}^{n}.\]
Then by [14, (6.2)],
\[\operatorname{tr}_{g}Q=-\nabla\log\left(\frac{h^{n+2}}{\mathcal{K}}\right).\]
In particular, we have
\[(\Delta-\Delta^{*})f=-\sum_{i}Q(e_{i},e_{i})f=d\log\frac{h^{n+2}}{ \mathcal{K}}(\nabla f). \tag{2.4}\]
We conclude this section by recalling the local Brunn-Minkowski inequality, reformulated in the language of centro-affine geometry (cf. [16]): Let \(f\in C^{1}(\mathbb{S}^{n})\). Then
\[n\int f^{2}dV\leq\int|\nabla f|_{g}^{2}dV+n\frac{(\int fdV)^{2}}{ \int dV}. \tag{2.5}\]
The equality holds if and only if for some \(w\in\mathbb{R}^{n+1}\)
\[f(x)=\langle\frac{x}{h(x)},w\rangle\quad\forall x\in\mathbb{S}^{n}.\]
Moreover, by [16, (5.9)] we also have
\[n\int|\nabla f|_{g}^{2}dV\leq\int(\Delta f)^{2}dV\quad\forall f \in C^{2}(\mathbb{S}^{n}).\]
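To see what these inequalities say in the simplest setting (our remark): for the unit ball, \(h\equiv 1\), so \(g=\bar{g}\), \(dV=d\mu\), and \(\nabla=\nabla^{*}\) is the Levi-Civita connection, so both \(\Delta\) and \(\Delta^{*}\) reduce to the round Laplacian \(\bar{\Delta}\). Then (2.5) becomes the sharp Poincaré inequality
\[n\int f^{2}d\mu\leq\int|\bar{\nabla}f|^{2}d\mu+n\frac{(\int fd\mu)^{2}}{\int d\mu},\]
expressing \(\lambda_{1}(\mathbb{S}^{n})=n\), with equality exactly for the linear functions \(f(x)=\langle x,w\rangle\), in agreement with the equality case above. Likewise, the last inequality reduces to \(n\int|\bar{\nabla}f|^{2}d\mu\leq\int(\bar{\Delta}f)^{2}d\mu\), which follows by expanding \(f\) in spherical harmonics.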
## 3. Uniqueness
The following identity is at the heart of our approach to employing the local Brunn-Minkowski inequality.
**Theorem 3.1**.: _There holds_
\[\Delta X+nX=h\bar{\nabla}\log\frac{h^{n+2}}{\mathcal{K}}. \tag{3.1}\]
_In particular,_
\[n\int XdV=\int h\bar{\nabla}\log\frac{h^{n+2}}{\mathcal{K}}dV.\]
Proof.: Let \(w\in\mathbb{R}^{n+1}\) be a fixed vector. By the centro-affine Gauss equation for \(\xi=X\) (cf. [16, Section 3.8]), we have
\[\Delta^{*}\langle X,w\rangle+n\langle X,w\rangle=0.\]
Now let \(\{v_{i}\}_{i=1}^{n}\) be a local orthonormal frame of \(T\mathbb{S}^{n}\) that diagonalizes \(A[h]\) at \(x_{0}\) and \(A[h]|_{x_{0}}(v_{i},v_{j})=\delta_{ij}\lambda_{i}\). Define \(e_{i}=\sqrt{\frac{h}{\lambda_{i}}}v_{i}\), \(i=1,\ldots,n\). Then we have \(g|_{x_{0}}(e_{i},e_{j})=\delta_{ij}\). Hence, by (2.4), at \(x_{0}\) we have
\[\Delta\langle X,w\rangle+n\langle X,w\rangle =(\Delta-\Delta^{*})\langle X,w\rangle\] \[=g(\nabla\log\frac{h^{n+2}}{\mathcal{K}},\nabla\langle X,w\rangle)\] \[=g(\nabla\log\frac{h^{n+2}}{\mathcal{K}},\lambda_{i}\langle e_{i },w\rangle e_{i})\] \[=\lambda_{i}\langle e_{i},w\rangle d\log\frac{h^{n+2}}{\mathcal{ K}}(e_{i})\] \[=\langle\bar{\nabla}\log\frac{h^{n+2}}{\mathcal{K}},hw\rangle.\]
The second identity follows from integrating (3.1) against \(dV\).
**Lemma 3.2**.: _Let \(0<f\in C^{2}(\mathbb{S}^{n})\). Then_
\[\int f^{2}\left(\langle\bar{\nabla}\log\frac{h^{n+2}}{\mathcal{K}},hX\rangle- |X|^{2}|\nabla\log f|_{g}^{2}\right)dV\leq n\frac{|\int fXdV|^{2}}{\int dV}.\]
Proof.: Let \(\{E_{k}\}_{k=1}^{n+1}\) be an orthonormal basis of \(\mathbb{R}^{n+1}\). We define
\[f_{k}=f\langle X,E_{k}\rangle\quad k=1,\ldots,n+1.\]
In view of Theorem 3.1, we have
\[\Delta f_{k}+nf_{k}= f\langle\bar{\nabla}\log\frac{h^{n+2}}{\mathcal{K}},hE_{k} \rangle+\langle X,E_{k}\rangle\Delta f+2g(\nabla f,\nabla\langle X,E_{k}\rangle).\]
Therefore,
\[\sum_{k}f_{k}(\Delta f_{k}+nf_{k})= f^{2}\langle\bar{\nabla}\log\frac{h^{n+2}}{\mathcal{K}},hX \rangle+f|X|^{2}\Delta f\] \[+fg(\nabla f,\nabla|X|^{2}). \tag{3.2}\]
Moreover, by integration by parts (cf. (2.3)), there holds
\[\int|X|^{2}f\Delta f+fg(\nabla f,\nabla|X|^{2})dV=-\int|X|^{2}|\nabla f|_{g}^{ 2}dV. \tag{3.3}\]
By the local Brunn-Minkowski inequality (see (2.5)), we have
\[\sum_{k}\int f_{k}(\Delta f_{k}+nf_{k})dV\leq n\sum_{k}\frac{\langle\int fXdV, E_{k}\rangle^{2}}{\int dV}.\]
Thus the claim follows from (3.2) and (3.3).
**Lemma 3.3**.: _Suppose \(\varphi:(0,\infty)\to(0,\infty)\) is \(C^{1}\)-smooth and \(f=\varphi(r)\). Then we have_
\[\int f^{2}\langle\bar{\nabla}\log\frac{h^{n+2}}{\mathcal{K}}-(r(\log\varphi)^{ \prime})^{2}\bar{\nabla}\log r,hX\rangle dV\leq n\frac{|\int fXdV|^{2}}{\int dV}.\]
Proof.: Let \(\{v_{i}\}_{i=1}^{n}\) and \(\{e_{i}\}_{i=1}^{n}\) be as in the proof of Theorem 3.1. We calculate
\[e_{i}(\log f)=(\log\varphi)^{\prime}e_{i}r=\frac{(\log\varphi)^{\prime}}{r} \lambda_{i}\langle e_{i},X\rangle=\frac{(\log\varphi)^{\prime}}{r}\sqrt{h \lambda_{i}}\langle v_{i},X\rangle,\]
and
\[r^{2}|\nabla\log f|_{g}^{2}=((\log\varphi)^{\prime})^{2}h\lambda_{i}(v_{i}h)^ {2}=(r(\log\varphi)^{\prime})^{2}\langle\bar{\nabla}\log r,hX\rangle.\]
Now the inequality follows from Lemma 3.2.
Proof of Theorem 1.1.: Let \(\alpha=q-n-1\). Due to Lemma 3.3 with \(\varphi(r)=r^{q-n-1}\), and our assumption \(h^{n+2}\mathcal{K}^{-1}=h^{n+1+p}r^{n+1-q}\), we obtain
\[(n+1+p)\int r^{2\alpha}|\bar{\nabla}h|^{2}dV\] \[\leq \alpha(\alpha+1)\int r^{2\alpha}\langle\bar{\nabla}\log r,h\bar{ \nabla}h\rangle dV+n\frac{|\int r^{\alpha}XdV|^{2}}{\int dV}.\]
Assuming \(\alpha^{2}+\alpha\leq 0\) (i.e. \(n\leq q\leq n+1\)) we obtain
\[(n+1+p)\int r^{2\alpha}|\bar{\nabla}h|^{2}dV\leq n\frac{|\int r^{\alpha}XdV|^ {2}}{\int dV}.\]
Moreover, by using \(\bar{\Delta}x+nx=0\),
\[\int r^{\alpha}XdV=\int Xh^{p}d\mu=\frac{n+1+p}{n}\int r^{\alpha}\bar{\nabla} hdV.\]
Hence, due to \(n+1+p>0\),
\[\int r^{2\alpha}|\bar{\nabla}h|^{2}dV\leq\frac{n+1+p}{n}\frac{|\int r^{\alpha}\bar{\nabla}h\,dV|^{2}}{\int dV}.\]
We may rewrite this inequality as
\[\int\left|r^{\alpha}\bar{\nabla}h-\frac{\int r^{\alpha}\bar{\nabla}hdV}{\int dV }\right|^{2}dV\leq\frac{p+1}{n}\frac{|\int r^{\alpha}\bar{\nabla}hdV|^{2}}{ \int dV}.\]
Provided \(-(n+1)<p\leq-1\), we have \(p+1\leq 0\), so the left-hand side must vanish, and \(r^{\alpha}\bar{\nabla}h\) equals the constant vector \(w:=\int r^{\alpha}\bar{\nabla}h\,dV/\int dV\). Since \(\bar{\nabla}h(x)\) is tangent to \(\mathbb{S}^{n}\) at \(x\), if \(w\neq 0\) then evaluating at \(x=w/|w|\) gives \(\langle w,x\rangle=0\), a contradiction; hence \(w=0\) and \(h\) is constant. As \(h^{p-1}|Dh|^{n+1-q}\mathcal{K}=h^{p-q}=1\) for constant \(h\), and \(p<q\), we conclude \(h\equiv 1\). Thus the theorem holds, provided \(-(n+1)<p\leq-1\) and \(n\leq q\leq n+1\).
In view of (2.1), the polar body \(K^{*}\) satisfies the following isotropic \(L_{-q}\) dual Minkowski problem:
\[(h^{*})^{-1-q}|Dh^{*}|^{n+1+p}\mathcal{K}^{*}=1.\]
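For the reader's convenience (this verification is ours), the last equation follows from the standard polar relations \(h^{*}(x^{*})=1/r(x)\) and \(|Dh^{*}(x^{*})|=1/h(x)\) together with (2.1): since \(|Dh|=r\) and \(\mathcal{K}^{*}=h^{n+2}r^{-(n+2)}/\mathcal{K}\) by (2.1),
\[(h^{*})^{-1-q}|Dh^{*}|^{n+1+p}\mathcal{K}^{*}=r^{1+q}\,h^{-(n+1+p)}\cdot\frac{h^{n+2}}{r^{n+2}\mathcal{K}}=\bigl(h^{p-1}|Dh|^{n+1-q}\mathcal{K}\bigr)^{-1}=1.\]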
Hence, applying the case already proved to \(K^{*}\) (with \((p,q)\) replaced by \((-q,-p)\)), the uniqueness result also holds when \(n\leq-p\leq n+1\) and \(-(n+1)<-q\leq-1\); that is, when \(-(n+1)\leq p\leq-n\) and \(1\leq q<n+1\), which is exactly case (2). This completes the proof.
## Acknowledgment
The first author's work was supported by the National Key Research and Development Program of China 2021YFA1001800 and the National Natural Science Foundation of China 12101027. Both authors were supported by the Austrian Science Fund (FWF): Project P36545.
|
2305.00532 | Even pairs in Berge graphs with no balanced skew-partitions | Let $G$ be a Berge graph that has no odd prism and no antihole of length at
least six as an induced subgraph. We show that every such graph $G$ with no
balanced skew-partition is either complete or has an even pair. | Tara Abrishami, Maria Chudnovsky, Yaqian Tang | 2023-04-30T17:11:55Z | http://arxiv.org/abs/2305.00532v3 | # Even pairs in Berge graphs with no balanced skew-partitions
###### Abstract.
Let \(G\) be a Berge graph that has no odd prism and no antihole of length at least six as an induced subgraph. We show that every such graph \(G\) with no balanced skew-partition is either complete or has an even pair.
\({}^{*}\)Princeton University, Princeton, NJ, USA
\({}^{\dagger}\)Supported by NSF-EPSRC Grant DMS-2120644.
\({}^{\clubsuit}\)Supported by NSF-EPSRC Grant DMS-2120644 and by AFOSR grant FA9550-22-1-0083.
## 1. Introduction
All graphs in this paper are finite and simple. Let \(\chi(G)\) and \(\omega(G)\) denote the chromatic number and the clique number of a graph \(G\), respectively. A graph \(G\) is _perfect_ if every induced subgraph \(H\) of \(G\) satisfies \(\chi(H)=\omega(H)\). The _complement_ of a graph \(G\), denoted by \(\overline{G}\), has the same vertex set as \(G\), and two distinct vertices in \(\overline{G}\) are adjacent if and only if they are not adjacent in \(G\). A _hole_ in a graph \(G\) is an induced subgraph isomorphic to a cycle on at least five vertices, and an _antihole_ is an induced subgraph whose complement is a hole in \(\overline{G}\). The _length_ of a hole (antihole) is equal to the number of its vertices. A graph is _Berge_ if it contains no odd hole and no odd antihole as an induced subgraph. In the 1960s, Berge [1] conjectured that a graph is perfect if and only if it is _Berge_. The study of perfect graphs became a major area of research in structural graph theory after Berge's conjecture. In 2002, Chudnovsky, Robertson, Seymour, and Thomas [4] proved the conjecture, which then became known as the _Strong Perfect Graph Theorem (SPGT)_.
An _even pair_ in a graph \(G\) is a pair \(\{u,v\}\) of nonadjacent vertices such that every induced path from \(u\) to \(v\) in \(G\) has an even number of edges. Before the SPGT was proved, many results focused on properties of _minimal imperfect graphs_: imperfect graphs \(G\) such that every proper induced subgraph of \(G\) is perfect. In particular, Meyniel [14] proved that minimal imperfect graphs do not have an even pair. Also, the proof of the SPGT was simplified by Chudnovsky and Seymour in 2007 using even pairs [6].
A graph \(G\) is _complete_ if every pair of vertices in \(G\) is adjacent. For a vertex \(v\in V(G)\), we denote the set of vertices adjacent to \(v\) by \(N_{G}(v)=N(v)\). We say a graph \(G^{\prime}\) is obtained by _contracting an even pair_\(\{u,v\}\) in \(G\) if:
* \(V(G^{\prime})=(V(G)\setminus\{u,v\})\cup\{w\}\);
* \(G^{\prime}\setminus\{w\}=G\setminus\{u,v\}\); and
* \(N_{G^{\prime}}(w)=N_{G}(u)\cup N_{G}(v)\).
We denote the graph obtained by contracting the even pair \(\{u,v\}\) by \(G/\{u,v\}\). A _sequence of contractions_ for a graph \(G\) is a sequence of graphs \(G_{0},\cdots,G_{k}\) such that \(G_{0}=G\), \(G_{k}\) has no even pair, and for all \(0\leq i\leq k-1\), there exists an even pair \(\{u,v\}\) in \(G_{i}\) such that \(G_{i+1}=G_{i}/\{u,v\}\). A graph is _even-contractile_ if it has a sequence of contractions with \(G_{k}\) being a complete graph. Fonlupt and Uhry [9] observed that if \(G\) is Berge with an even pair \(\{u,v\}\), then \(G/\{u,v\}\) is also Berge and \(\omega(G/\{u,v\})=\omega(G)\). In particular, given a \(\chi(G/\{u,v\})\)-coloring of \(G/\{u,v\}\), one can obtain a \(\chi(G)\)-coloring of \(G\) by preserving the same colors for vertices in \(G\setminus\{u,v\}\) and assigning the color of \(w\) to both \(u\) and \(v\).
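Although the argument above is purely graph-theoretic, the contraction it describes is easy to make concrete. The following is a minimal sketch (ours, not from the paper): it represents a graph as a dictionary of adjacency sets, contracts an even pair, and lifts a coloring; the function names and the marker vertex \(w\) are illustrative only.

```python
def contract_even_pair(G, u, v, w="w"):
    """Return G/{u,v}: replace u, v by a new vertex w with N(w) = N(u) | N(v)."""
    H = {x: set(nbrs) - {u, v} for x, nbrs in G.items() if x not in (u, v)}
    H[w] = (G[u] | G[v]) - {u, v}
    for x in H[w]:
        H[x].add(w)
    return H

def lift_coloring(coloring, u, v, w="w"):
    """Given a proper coloring of G/{u,v}, color both u and v with w's color."""
    lifted = {x: c for x, c in coloring.items() if x != w}
    lifted[u] = lifted[v] = coloring[w]
    return lifted

# Example: in the 4-cycle a-b-c-d-a, {a, c} is an even pair; contracting it
# yields the path b-w-d, whose 2-coloring lifts to a 2-coloring of the cycle.
G = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
H = contract_even_pair(G, "a", "c")
coloring = lift_coloring({"w": 0, "b": 1, "d": 1}, "a", "c")
assert all(coloring[x] != coloring[y] for x in G for y in G[x])
```

The lifted coloring is proper because every neighbor of \(u\) or \(v\) is a neighbor of \(w\), and hence already receives a color different from that of \(w\).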
Let \(G\) be a graph with no induced subgraph isomorphic to an antihole of length at least six or an odd prism. A _balanced skew-partition_ is a type of decomposition that appears in the proof of the SPGT. In 2003, Chudnovsky [2] proved a structural decomposition theorem for _trigraphs_, which are a generalization of graphs with possible "undecided" edges called _switchable pairs_. In particular, the theorem implies that a Berge graph either belongs to some "basic" class, or has a balanced skew-partition, a _\(2\)-join_, or a \(2\)-join in the complement. Our result is based on this decomposition theorem, and the notion of a trigraph is very helpful in the proof.
The remainder of the paper is organized as follows. In Section 2, we introduce the definitions related to trigraphs and present relevant theorems that have been proved. We also define _basic_ trigraphs and decompositions, namely the balanced skew-partition, the \(2\)-join, and the complement of a \(2\)-join. In Section 3, we define a class \(\mathcal{F}\) of Berge trigraphs and a subclass called _favorable trigraphs_ that interact well with the \(2\)-join decomposition. In particular, we will show that almost all trigraphs in \(\mathcal{F}\) are favorable when antiholes of length six and balanced skew-partitions are forbidden. In Section 4, we show that basic trigraphs have even pairs, and that favorable basic trigraphs have even pairs in a certain desirable location. In Section 5, we apply the technique of _blocks of decomposition_ introduced in [7] to handle \(2\)-joins and their complements. This technique allows us to decompose any trigraph in \(\mathcal{F}\) with no balanced skew-partition into basic trigraphs while keeping track of even pairs. Finally, we prove a generalization of our main theorem 1.2 for trigraphs.
## 2. Trigraphs
In this paper, we mainly adopt the notation regarding trigraphs from the work by Chudnovsky, Trotignon, Trunck, and Vuskovic [7]. For the sake of clarity, we restate relevant definitions and introduce new definitions that will appear in the paper.
For a set \(X\), we denote by \(\binom{X}{2}\) the set of all subsets of \(X\) of size \(2\). For brevity, an element \(\{u,v\}\) of \(\binom{X}{2}\) is also denoted by \(uv\), or equivalently, \(vu\). A _trigraph_\(T\) consists of a finite vertex set \(V(T)\), called the _vertex set_ of \(T\), and a map \(\theta:\binom{V(T)}{2}\to\{-1,0,1\}\), called the _adjacency function_ of \(T\). Two distinct vertices of \(T\) are _strongly adjacent_ if \(\theta(uv)=1\), _strongly antiadjacent_ if \(\theta(uv)=-1\), and _semiadjacent_ if \(\theta(uv)=0\). We say that \(u\) and \(v\) are _adjacent_ if \(\theta(uv)\in\{0,1\}\) and _antiadjacent_ if \(\theta(uv)\in\{0,-1\}\). If \(u\) and \(v\) are adjacent (antiadjacent), we also say that \(u\) is _adjacent_ (_antiadjacent_) to \(v\), or \(u\) is a _neighbor_ (_antineighbor_) of \(v\). Similarly, if \(u\) and \(v\) are strongly adjacent (strongly antiadjacent), we say \(u\) is a _strong neighbor_ (_strong antineighbor_) of \(v\). For \(v\in V(T)\), let \(N(v)\) denote the set of all vertices in \(V(T)\setminus\{v\}\) that are adjacent to \(v\), and let \(N[v]\) denote \(N(v)\cup\{v\}\). An _edge_ (_antiedge_) is a pair of adjacent (antiadjacent) vertices. A _switchable pair_ is a pair of semiadjacent vertices, and a _strong edge_ (_antiedge_) is a pair of strongly adjacent (strongly antiadjacent) vertices. An edge \(uv\) (antiedge, strong edge, strong antiedge, switchable pair) is _between_ two sets \(A\subseteq V(T)\) and \(B\subseteq V(T)\) if \(u\in A\) and \(v\in B\), or if \(u\in B\) and \(v\in A\).
Let \(T\) be a trigraph. The _complement_ of \(T\), denoted by \(\overline{T}\), is a trigraph with \(V(\overline{T})=V(T)\) and the adjacency function \(\overline{\theta}=-\theta\). Let \(A\subset V(T)\) and \(b\in V(T)\setminus A\). We say that \(b\) is _strongly complete_ (_strongly anticomplete_) to \(A\) if \(b\) is strongly adjacent (strongly antiadjacent) to every vertex of \(A\); \(b\) is _complete_ (_anticomplete_) to \(A\) if \(b\) is adjacent (antiadjacent) to every vertex of \(A\). For two disjoint subsets \(A\subset V(T)\) and \(B\subset V(T)\), \(B\) is _strongly complete_ (_strongly anticomplete_, _complete_, _anticomplete_) to \(A\) if every vertex of \(B\) is strongly complete (strongly anticomplete, complete, anticomplete) to \(A\).
A _clique_ of \(T\) is set of pairwise adjacent vertices of \(T\), and a _strong clique_ is a set of pairwise strongly adjacent vertices of \(T\). A trigraph \(T\) is _complete_ if \(V(T)\) is a clique. A _stable set_ of \(T\) is a set of pairwise antiadjacent vertices of \(T\). For \(X\subseteq V(T)\), the trigraph _induced by \(T\) on \(X\)_, denoted by \(T|X\), has vertex set \(X\) and adjacency function \(\theta|_{X}\), the restriction of \(\theta\) to \(\binom{X}{2}\). We denote by \(T\setminus X\) the trigraph \(T|(V(T)\setminus X)\). Isomorphism between trigraphs is defined in the natural way. For two trigraphs \(T\) and \(H\), \(H\) is an _induced subtrigraph_ of \(T\) (or _T contains H as an induced subtrigraph_) if \(H\) is isomorphic to \(T|X\) for some \(X\subseteq V(T)\). Since this paper mainly considers the induced subtrigraph containment relation, we say that \(T\)_contains_\(H\) if \(T\) contains \(H\) as an induced subtrigraph.
Let \(\eta(T)\) denote the set of all strong edges of \(T\), \(\nu(T)\) the set of all strong antiedges of \(T\), and \(\sigma(T)\) the set of all switchable pairs of \(T\). If \(\sigma(T)\) is empty, \(T\) is a _graph_. A _semirealization_ of \(T\) is a trigraph \(T^{\prime}\) with vertex set \(V(T)\) that satisfies \(\eta(T)\subseteq\eta(T^{\prime})\) and \(\nu(T)\subseteq\nu(T^{\prime})\). A _realization_ of \(T\) is any graph that is a semirealization of \(T\). For \(S\subseteq\sigma(T)\), we denote by \(G_{S}^{T}\) the realization of \(T\) with edge set \(\eta(T)\cup S\). The realization \(G_{\sigma(T)}^{T}\) is called the _full realization_ of \(T\).
Let \(T\) be a trigraph. For \(X\subseteq V(T)\), we say that \(X\) and \(T|X\) are _connected_ (_anticonnected_) if the graph \(G_{\sigma(T|X)}^{T|X}\) (\(\overline{G_{\emptyset}^{T|X}}\)) is connected. A _connected component_ (or simply _component_) of \(X\) is maximal connected subset of \(X\), and an _anticonnected component_ (or simply _anticomponent_) of \(X\) is a maximal anticonnected subset of \(X\).
A _path_\(P\) of \(T\) is a sequence of distinct vertices \(p_{1},\cdots,p_{k}\) such that either \(k=1\), or for \(i,j\in\{1,\cdots,k\}\), \(p_{i}\) is adjacent to \(p_{j}\) if \(|i-j|=1\) and \(p_{i}\) is antiadjacent to \(p_{j}\) if \(|i-j|>1\). We say that \(P\) is a path _from_\(p_{1}\)_to_\(p_{k}\), and the _endpoints_ of \(P\) are \(p_{1}\) and \(p_{k}\). Under these conditions, let \(V(P)=\{p_{1},\cdots,p_{k}\}\), the _interior_ of \(P\), denoted by \(P^{*}\), is the induced subtrigraph of \(P\) with \(V(P^{*})=V(P)\setminus\{p_{1},p_{k}\}\), and the _length_ of \(P\) is \(k-1\). We say \(P\) is _even_ (_odd_) if it has even (odd) length. Two paths \(P_{1}\) and \(P_{2}\) are _disjoint_ if \(V(P_{1})\cap V(P_{2})=\emptyset\), and they are _internally disjoint_ if \(V(P_{1}^{*})\cap V(P_{2}^{*})=\emptyset\); \(P_{1}\) is a _subpath_ of \(P_{2}\) if \(P_{1}\) is a connected induced subtrigraph of \(P_{2}\). Sometimes we denote \(P\) by \(p_{1}\)-\(\cdots\)-\(p_{k}\). Notice that, as a graph is also a trigraph, our definition of a path of a graph here is equivalent to a _chordless path_ of a graph in some literature.
A _cycle_ in a trigraph \(T\) is an induced subtrigraph \(H\) of \(T\) with vertices \(h_{1},\cdots,h_{k}\) such that \(k\geq 3\), and for \(i,j\in\{1,\cdots,k\}\), \(h_{i}\) is adjacent to \(h_{j}\) if \(|i-j|=1\) or \(|i-j|=k-1\); a _hole_ is a cycle that further satisfies that \(h_{i}\) is antiadjacent to \(h_{j}\) if \(1<|i-j|<k-1\). The _length_ of a hole (cycle) is the number of vertices in it. Sometimes we denote \(H\) by \(h_{1}\)-\(\cdots\)-\(h_{k}\)-\(h_{1}\). An _antipath_ (_antihole_) in \(T\) is an induced subtrigraph of \(T\) whose complement is a path (hole) in \(\overline{T}\).
A _prism_ in a trigraph \(T\) is an induced subtrigraph \(H\) such that the full realization of \(H\) is a prism. A trigraph \(T\) is _Berge_ if it contains no odd hole and no odd antihole. By this definition, \(T\) is Berge if and only if \(\overline{T}\) is Berge. Also, \(T\) is Berge if and only if every realization (semirealization) of \(T\) is Berge. An _even pair_ in \(T\) is a strongly nonadjacent pair \(uv\in\binom{V(T)}{2}\) such that every path from \(u\) to \(v\) in \(T\) is even.
### Basic Trigraphs
Here, we define the classes of basic trigraphs. A trigraph \(T\) is _bipartite_ if its vertex set can be partitioned into two strongly stable sets, called a _bipartition_. A trigraph \(T\) is a _line trigraph_ if its full realization is the line graph of a bipartite graph and every clique of size at least \(3\) in \(T\) is a strong clique. A trigraph is a _doubled graph_ if it has a _good partition_. A good partition is a partition \((X,Y)\) of \(V(T)\) satisfying the following:
* Every component of \(T|X\) has at most two vertices, and every anticomponent of \(T|Y\) has at most two vertices.
* No switchable pair of \(T\) is between \(X\) and \(Y\).
* For every component \(C_{x}\) of \(T|X\) and every anticomponent \(C_{y}\) of \(T|Y\), every vertex \(v\) of \(C_{x}\cup C_{y}\) is incident with at most one strong edge and at most one strong antiedge between \(C_{x}\) and \(C_{y}\).
A trigraph is _basic_ if it is either a bipartite trigraph, the complement of a bipartite trigraph, a line trigraph, the complement of a line trigraph, or a doubled trigraph. The following is Theorem 2.3 from [7]:
**Theorem 2.1** ([7]).: _Basic trigraphs are Berge, and are closed under taking induced subtrigraphs, semirealizations, realizations, and complementation._
### Decompositions
We now describe the decompositions for trigraphs. First, a \(2\)_-join_ in a trigraph \(T\) is a partition \((X_{1},X_{2})\) of \(V(T)\) such that there exist disjoint sets \(A_{1},B_{1},C_{1},A_{2},B_{2},C_{2}\subseteq V(T)\) satisfying:
* \(X_{1}=A_{1}\cup B_{1}\cup C_{1}\) and \(X_{2}=A_{2}\cup B_{2}\cup C_{2}\);
* \(A_{1},A_{2},B_{1}\) and \(B_{2}\) are non-empty;
* no switchable pair is between \(X_{1}\) and \(X_{2}\);
* every vertex of \(A_{1}\) is strongly adjacent to every vertex of \(A_{2}\), and every vertex of \(B_{1}\) is strongly adjacent to every vertex of \(B_{2}\);
* there are no other strong edges between \(X_{1}\) and \(X_{2}\);
* for \(i=1,2\), \(|X_{i}|\geq 3\); and
* for \(i=1,2\), if \(|A_{i}|=|B_{i}|=1\), then the full realization of \(T|X_{i}\) is not a path of length two containing the members of \(A_{i}\) and \(B_{i}\).
Under these conditions, we say that \((A_{1},B_{1},C_{1},A_{2},B_{2},C_{2})\) is a _split_ of \((X_{1},X_{2})\). A \(2\)-join is _proper_ if for \(i=1,2\), every component of \(T|X_{i}\) meets both \(A_{i}\) and \(B_{i}\). A _complement \(2\)-join_ of a trigraph \(T\) is a \(2\)-join of \(\overline{T}\). We need the following fact about \(2\)-joins (Theorem 2.4 of [7]):
**Theorem 2.2** ([7]).: _Let \(T\) be a Berge trigraph and \((A_{1},B_{1},C_{1},A_{2},B_{2},C_{2})\) a split of a proper \(2\)-join of \(T\). Then all paths with one end in \(A_{i}\), one end in \(B_{i}\) and interior in \(C_{i}\), for \(i=1,2\), have lengths of the same parity._
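As a concrete illustration (ours, not from [7]): let \(T\) be the graph \(v_{1}\)-\(v_{2}\)-\(\cdots\)-\(v_{8}\)-\(v_{1}\) (a hole of length eight), and take \(X_{1}=\{v_{1},v_{2},v_{3},v_{4}\}\) and \(X_{2}=\{v_{5},v_{6},v_{7},v_{8}\}\), with \(A_{1}=\{v_{1}\}\), \(B_{1}=\{v_{4}\}\), \(C_{1}=\{v_{2},v_{3}\}\), \(A_{2}=\{v_{8}\}\), \(B_{2}=\{v_{5}\}\), \(C_{2}=\{v_{6},v_{7}\}\). All the conditions above are satisfied, and each \(T|X_{i}\) is a single path meeting both \(A_{i}\) and \(B_{i}\), so \((X_{1},X_{2})\) is a proper \(2\)-join; both \(A_{i}\)-\(B_{i}\) paths have length three, in agreement with 2.2.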
Next, a partition \((A,B)\) of \(V(T)\) is a _skew-partition_ if \(A\) is not connected and \(B\) is not anticonnected. A skew-partition \((A,B)\) is _balanced_ if there is no odd path of length greater than one with ends in \(B\) and interior in \(A\), and there is no odd antipath of length greater than one with ends
in \(A\) and interior in \(B\). Given a balanced skew-partition \((A,B)\), the \(4\)-tuple \((A_{1},A_{2},B_{1},B_{2})\) is a _split of \((A,B)\)_ if \(A_{1},A_{2},B_{1}\), and \(B_{2}\) are disjoint non-empty sets, \(A_{1}\cup A_{2}=A\), \(B_{1}\cup B_{2}=B\), \(A_{1}\) is strongly anticomplete to \(A_{2}\), and \(B_{1}\) is strongly complete to \(B_{2}\). Note that there exists at least one split for every balanced skew-partition.
When \((A,B)\) is a skew-partition of a trigraph \(T\), we say that \(B\) is a _star cutset_ of \(T\) if at least one anticomponent of \(B\) has size one. The following is Theorem 5.9 from [2].
**Theorem 2.3** ([2]).: _If a Berge trigraph admits a star cutset, then it admits a balanced skew-partition._
We will often use the following corollary:
**Theorem 2.4** ([2]).: _If \(T\) is a Berge trigraph with no balanced skew-partition, then \(T\) does not admit a star cutset._
## 3. Decomposing Trigraphs
### Decomposing Trigraphs from \(\mathcal{F}\)
In order to handle \(2\)-join partitions and their complements in Section 5, we define a class of trigraphs that will be useful.
Let \(T\) be a trigraph. Denote by \(\Sigma(T)\) the graph with vertex set \(V(T)\) and edge set \(\sigma(T)\) (the switchable pairs of \(T\)). The connected components of \(\Sigma(T)\) are called the _switchable components_ of \(T\). Let \(\mathcal{F}\) be the class of Berge trigraphs \(T\) such that the following hold:
1. \(T\) has at most one switchable component, and the switchable component \(D\) of \(T\) has at most two edges.
2. If \(D\) contains exactly one edge \(xy\), then \(N(x)\cap N(y)=\emptyset\) in the trigraph \(T\). In this case, we say it is a _small_ switchable component.
3. Next, assume that \(D\) has two edges. Let \(v\in V(T)\) be the vertex of degree two in \(\Sigma(T)\), denote its neighbors by \(x\) and \(y\). Then \(v\) is strongly anticomplete to \(V(T)\setminus\{v,x,y\}\) in \(T\), \(x\) is strongly antiadjacent to \(y\) in \(T\), and \(N(x)\cap N(y)=\{v\}\) in \(T\). In this case, we say that the switchable component is _light_.
Our class \(\mathcal{F}\) of trigraphs is a subclass of the class of the same name studied in [7], and we make use of several of their results.
**Theorem 3.1** ([7]).: _Let \(T\) be a trigraph from \(\mathcal{F}\) with no balanced skew-partition, and let \((A_{1},B_{1},C_{1},A_{2},B_{2},C_{2})\) be a split of a \(2\)-join \((X_{1},X_{2})\) in \(T\). Then the following hold:_
1. \((X_{1},X_{2})\) _is a proper_ \(2\)_-join;_
2. _if_ \(C_{i}=\emptyset\)_, then_ \(|A_{i}|\geq 2\) _and_ \(|B_{i}|\geq 2\)_,_ \(i=1,2\)_;_
3. \(|X_{i}|\geq 4\)_,_ \(i=1,2\)_._
**Theorem 3.2** ([7]).: _Every trigraph in \(\mathcal{F}\) is either basic, or admits a proper \(2\)-join, or admits a proper \(2\)-join in the complement._
### Favorable Trigraphs
Let \(T\) be a trigraph in \(\mathcal{F}\). We say a pair \(uv\) of vertices of \(T\) is _disjoint from its switchable component_ if \(D\) is the switchable component of \(T\) and \(V(D)\cap\{u,v\}\) is empty. In particular, if the switchable component \(D\) of \(T\) is empty, every pair of vertices is disjoint from its switchable component. A trigraph \(T\in\mathcal{F}\) is _favorable_ if it satisfies the following conditions:
1. \(|V(T)|\geq 5\);
2. \(T\) has at least one pair of strongly nonadjacent vertices \(uv\) disjoint from \(D\); and
3. if \(D\) is small and \(V(D)=\{x,y\}\), then at least one of \(T\setminus(D\cup N(x))\) or \(T\setminus(D\cup N(y))\) is not a clique.
A trigraph is _unfavorable_ if it is not favorable. By this definition, if \(T\) is complete, then \(T\) is unfavorable; if \(T\) is a graph with at least five vertices and is not complete, then \(T\) is favorable, as it has an empty switchable component. Notice that conditions (2) and (3) of being a favorable trigraph are also necessary conditions for a trigraph to have an even pair disjoint from its switchable component.
Next, we will show that, with a few exceptions, a trigraph \(T\) in \(\mathcal{F}\) with no balanced skew-partition and no antihole is favorable. Further, we prove in Section 4 that a basic favorable trigraph has an even pair disjoint from its switchable component. Both results are essential for handling \(2\)-joins in Section 5.
**Theorem 3.3**.: _Let \(T\) be a trigraph in \(\mathcal{F}\) with no balanced skew-partition and no antihole of length six. If \(T\) is unfavorable, then either \(T\) is complete or \(|V(T)|<5\)._
**Proof.** We may assume that \(|V(T)|\geq 5\). Let \(D\) be the switchable component of \(T\), and let \(T^{\prime}=T\setminus V(D)\) be the induced subtrigraph of \(T\). If \(D\) is small, we denote the pair by \(x\) and \(y\); if \(D\) is light, we denote the vertex of degree two in \(\Sigma(T)\) by \(v\) and its neighbors by \(x\) and \(y\). Therefore, we can partition \(V(T^{\prime})\) into three sets: \(T_{1}=T^{\prime}\setminus(N(x)\cup N(y))\), \(T_{2}=T^{\prime}|N(x)\), and \(T_{3}=T^{\prime}|N(y)\).
First, suppose that \(D\) is a light switchable component. Since \(T\) is unfavorable, it follows that \(V(T)\setminus V(D)\) is a clique. If both \(T_{2}\) and \(T_{3}\) are nonempty, then \(x\)-\(v\)-\(y\)-\(a\)-\(b\)-\(x\) with \(a\in T_{3}\) and \(b\in T_{2}\) is a hole of length five, contradicting that \(T\) is Berge, so we may assume up to symmetry that \(T_{3}=\emptyset\). Since \(|V(T)|\geq 5\), it follows that \(T_{1}\cup T_{2}\) contains two distinct vertices \(s\) and \(t\). Since \(T\) is connected, we may assume \(t\in T_{2}\). Now, \(V(T)\setminus\{v,y,s\}\) is a star cutset, contradicting 2.4.
Therefore, \(D\) is a small switchable component. Now, if \(T_{1}\neq\emptyset\), then \(T_{2}\) is strongly complete to \(T_{3}\), since \(T\) contains no hole of length five. In this case, \(T_{2}\cup T_{3}\) is a star cutset, contradicting 2.4. Thus, \(T_{1}=\emptyset\). By the definition of unfavorable, since \(T\) is not complete, it follows that both \(T_{2}\) and \(T_{3}\) are cliques. As \(T\) has at least five vertices, at least one of \(T_{2}\) or \(T_{3}\) contains at least two vertices. Without loss of generality, suppose \(T_{2}\) contains two distinct vertices. Let \(s\) be the vertex in \(T_{2}\) such that \(|N(s)\cap T_{3}|\) is maximum, and let \(t\) be a vertex in \(T_{2}\) distinct from \(s\). By 2.4, we may assume \((N(s)\cup\{s\})\setminus\{t\}\) is not a star cutset. It follows that there exists a vertex \(p\in T_{3}\setminus N(s)\) adjacent to \(t\). By the maximality of \(|N(s)\cap T_{3}|\), there exists \(q\in N(s)\cap T_{3}\) such that \(q\) is not adjacent to \(t\). Now, \(T|\{x,y,s,t,p,q\}\) is an antihole of length six, a contradiction. This completes the proof.
## 4. Even Pairs in Basic Trigraphs
The goal of this section is to prove the following theorem by analyzing each class of basic trigraph:
**Theorem 4.1**.: _Let \(T\) be a basic trigraph in \(\mathcal{F}\) with no odd prism and no antihole. Then the following statements hold:_
1. \(T\) _is either complete or has an even pair._
2. _If_ \(T\) _is favorable, then_ \(T\) _has an even pair disjoint from its switchable component._
### Bipartite Trigraph
Let \(T\) be a bipartite trigraph with bipartition \((A,B)\), where \(A\) and \(B\) are strongly stable sets. We have the following observation.
**Theorem 4.2**.: _Let \(T\) be a bipartite trigraph in \(\mathcal{F}\). Then the following statements hold:_
1. \(T\) _is either complete or has an even pair._
2. _If_ \(T\) _is favorable, then_ \(T\) _has an even pair disjoint from its switchable component._
**Proof.** By the definition of a bipartite trigraph, \(T\) is either complete or has an even pair, so the first statement holds: indeed, if \(T\) is not complete, then either some side of the bipartition contains two vertices, which form an even pair because every path alternates between \(A\) and \(B\), or \(|V(T)|\leq 2\) and the two vertices form an even pair. For the second statement, suppose that \(T\) is favorable and has a nonempty switchable component \(D\). If either \(A^{\prime}=A\setminus V(D)\) or \(B^{\prime}=B\setminus V(D)\) contains at least two vertices, then any two vertices \(a_{1},a_{2}\in A^{\prime}\) (or \(b_{1},b_{2}\in B^{\prime}\)) form an even pair disjoint from the switchable component, so we may assume that \(|A^{\prime}|=|B^{\prime}|=1\). Since \(T\) is favorable, it follows that \(|V(T)|\geq 5\). Thus \(|V(D)|\geq 3\), and so \(T\) has a light switchable component, and \(|V(T)|=5\). Assume up to symmetry that \(A=\{v,a\}\) and \(B=\{x,y,b\}\), where \(v\) is the vertex of degree two in \(D\), and \(x\) and \(y\) are the neighbors of \(v\) in \(D\). Since \(T\) is favorable, it follows that \(ab\) is a strong antiedge; moreover, \(b\) is strongly antiadjacent to \(x\) and \(y\) (as \(B\) is strongly stable) and to \(v\) (as \(v\) is strongly anticomplete to \(V(T)\setminus\{v,x,y\}\)), so \(b\) is strongly anticomplete to \(V(T)\setminus\{b\}\). Now, \(a\) and \(b\) are in distinct connected components of \(T\), so \(ab\) is an even pair of \(T\) disjoint from the switchable component.
### Line Trigraph
Let \(T\) be a line trigraph, and let \(H\) be the bipartite graph such that its line graph, denoted by \(L(H)\), is the full realization of \(T\). Let \((A,B)\) be a bipartition of \(H\). A pair \((a_{1}b_{1},a_{2}b_{2})\) of disjoint edges in \(H\) with \(a_{1},a_{2}\in A\) and \(b_{1},b_{2}\in B\) is a _good pair_ of \(H\) if both of the followings are satisfied:
* Every path \(P_{1}\) with endpoints \(a_{1}\) and \(a_{2}\) satisfies \(V(P_{1})\cap\{b_{1},b_{2}\}\neq\emptyset\); and
* every path \(P_{2}\) with endpoints \(b_{1}\) and \(b_{2}\) satisfies \(V(P_{2})\cap\{a_{1},a_{2}\}\neq\emptyset\).
We prove that a good pair in \(H\) corresponds to an even pair in \(T\). This is analogous to a result by Hougardy in [11].
**Proposition 4.3**.: _Let \(H\) be a bipartite graph, let \((a_{1}b_{1},a_{2}b_{2})\) be a good pair of \(H\), and let \(u\) and \(v\) be the vertices in \(L(H)\) that represent \(a_{1}b_{1}\) and \(a_{2}b_{2}\), respectively. Let \(T\) be a trigraph such that \(L(H)\) is the full realization of \(T\). Then, \(uv\) is an even pair in \(T\)._
**Proof.** First, note that \(uv\) is a strong antiedge in \(T\), as \(a_{1}b_{1}\) and \(a_{2}b_{2}\) are disjoint in \(H\). Suppose that there is an odd path \(P\) from \(u\) to \(v\) in \(T\). Then, \(P\) corresponds to an inclusion-wise minimal path \(Q\) in \(H\) with one end in \(\{a_{1},b_{1}\}\) and one end in \(\{a_{2},b_{2}\}\) of even length. Therefore, up to symmetry, we may assume that \(Q\) has endpoints \(a_{1}\) and \(a_{2}\). As \(Q\) is minimal and even, \(V(Q)\cap\{b_{1},b_{2}\}=\emptyset\). However, this contradicts that \((a_{1}b_{1},a_{2}b_{2})\) is a good pair.
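For example (ours): if \(H\) is the six-cycle \(a_{1}\)-\(b_{1}\)-\(a_{2}\)-\(b_{2}\)-\(a_{3}\)-\(b_{3}\)-\(a_{1}\), then \((a_{1}b_{1},a_{2}b_{2})\) is a good pair: the two paths from \(a_{1}\) to \(a_{2}\) pass through \(b_{1}\) and \(b_{2}\) respectively, and the two paths from \(b_{1}\) to \(b_{2}\) pass through \(a_{2}\) and \(a_{1}\). Accordingly, in \(L(H)\) (again a six-cycle), the vertices representing \(a_{1}b_{1}\) and \(a_{2}b_{2}\) are joined only by paths of lengths two and four, so they form an even pair, as 4.3 predicts.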
Next, we show that forbidding odd prisms guarantees even pairs in line trigraphs. Let \(H\) be a bipartite graph. (Note that the following theorems consider all subgraphs of \(H\), which are not necessarily induced subgraphs.) A path \(Q\) of \(H\) is a _chord path_ if its endpoints are contained in the vertex set of a cycle \(C\) in \(H\) and \(V(Q^{*})\cap V(C)=\emptyset\). An _even theta_ is a graph composed of three internally disjoint even paths with the same endpoints. A _path along the cycle_ \(C\) is an induced subgraph of \(C\) that is a path in \(H|C\). A graph is _series-parallel_ if and only if it has no \(K_{4}\)-minor.
**Proposition 4.4**.: _Let \(T\) be a line trigraph with no odd prism, and let \(H\) be a bipartite graph such that \(L(H)\) is the full realization of \(T\). Then, \(H\) has no subgraph isomorphic to an even theta, and \(H\) is series-parallel._
**Proof.** Since the line graph of an even theta is an odd prism, it follows that \(H\) contains no even theta as a subgraph. As \(H\) is bipartite, all cycles in \(H\) have even length. It follows that a chord path \(P\) of a cycle \(C\) in \(H\) must have odd length, and the endpoints of \(P\) in \(C\) divide \(C\) into two odd paths along the cycle. Suppose to the contrary that \(H\) is not series-parallel, and thus has a \(K_{4}\)-minor. Since \(K_{4}\) has maximum degree three, it follows that \(H\) has a subgraph \(J\) isomorphic to a \(K_{4}\)-subdivision. Let \(a,b,c,d\) be the vertices of degree three of \(J\), and let \(P_{1}\), \(P_{2}\), \(P_{3}\), \(P_{4}\), \(P_{5}\), \(P_{6}\) denote the paths with endpoints \((a,b)\), \((b,c)\), \((c,d)\), \((d,a)\), \((b,d)\), and \((a,c)\), respectively, in \(J\). Notice that each \(P_{i}\) is a chord path, so they are all odd. Now, \(P_{1}\cup P_{4}\cup P_{5}\) is an odd cycle, contradicting that \(H\) is bipartite.
Finally, we prove the main result of this subsection.
**Theorem 4.5**.: _Let \(T\) be a line trigraph in \(\mathcal{F}\) with no odd prism. The following statements hold:_
1. \(T\) _is either complete or has an even pair._
2. _If_ \(T\) _is favorable, then_ \(T\) _has an even pair disjoint from its switchable component._
**Proof.** Let \(H\) be a bipartite graph such that \(L(H)\) is the full realization of \(T\). We may assume that \(H\) is connected and that \(T\) is not complete. If \(T\) has a nonempty switchable component \(D\), let \(J\) be the subgraph of \(H\) such that \(T|V(L(J))=D\). In particular, \(J\) is a path \(p_{1}\)-\(\cdots\)-\(p_{k}\) of length either two or three. Thus, following the notation for a path, we call \(p_{1}\) and \(p_{k}\) the endpoints of
\(J\), and denote \(V(J)\setminus\{p_{1},p_{k}\}\) by \(V(J^{*})\). Also, note that any vertex \(v\in V(J^{*})\) has degree at most two in \(H\): otherwise, the set of edges of \(H\) incident to \(v\) induces a clique \(K\) of size at least three in \(L(H)\), and \(T|V(K)\) contains a switchable pair, which contradicts the definition of a line trigraph.
By 4.3, to prove the first statement, it suffices to find a good pair in \(H\). Also, to prove the second statement, it suffices to find a good pair in \(H\setminus V(J^{*})\). Thus, in the following discussion, the proof is completed when the corresponding good pair is found.
**Case 1: \(H\) is a tree.** Since \(T\) is not complete, it follows that \(H\) is not a star, so \(H\) has a path \(a_{1}\)-\(b_{1}\)-\(a_{2}\)-\(b_{2}\) of length three. Now, \((a_{1}b_{1},a_{2}b_{2})\) is a good pair. This proves the first statement for this case.
Next, suppose that \(T\) is favorable and has a nonempty switchable component \(D\). Let \(x\) and \(y\) be the endpoints of \(J\), and let \(H_{x}\) and \(H_{y}\) be the components of \(H\setminus V(J^{*})\) containing \(x\) and \(y\), respectively. It suffices to show that \(H_{x}\cup H_{y}\) contains a good pair. If either \(H_{x}\) or \(H_{y}\) contains a path \(a_{i}\)-\(b_{i}\)-\(a_{j}\)-\(b_{j}\) of length three, then \((a_{i}b_{i},a_{j}b_{j})\) is a good pair. Thus, we may assume both \(H_{x}\) and \(H_{y}\) are either empty or isomorphic to a star. If \(D\) is small, then \(T\) contradicts the third condition of being favorable. So we may assume \(D\) is light. By the second condition of being favorable, both \(H_{x}\) and \(H_{y}\) are nonempty. In particular, \(H_{x}\) contains an edge \(xx^{\prime}\), and \(H_{y}\) contains an edge \(yy^{\prime}\). Now, \((xx^{\prime},yy^{\prime})\) is a good pair. This completes the proof of the second statement for this case.
**Case 2: \(H\) has a cycle of length at least six.** Let \(C=a_{1}\)-\(b_{1}\)-\(\cdots\)-\(a_{k}\)-\(b_{k}\)-\(a_{1}\), where \(k\geq 3\), be a cycle (not necessarily induced) of maximum length in \(H\). If \(C\) has no chord path, then every pair of disjoint edges \((a_{i}b_{i},a_{j}b_{j})\) is a good pair. In particular, as \(|E(J)|\leq 3\), there is a good pair in \(C\setminus V(J^{*})\). Thus, we may assume that \(C\) has a chord path \(P\). By 4.4, \(P\) is odd and has ends \(a_{i}\) and \(b_{j}\) for some \(1\leq i,j\leq k\). Let \(Q_{1}\) and \(Q_{2}\) be the two internally disjoint paths along the cycle \(C\) with endpoints \(a_{i}\) and \(b_{j}\). We may assume by symmetry that \(E(J)\cap E(Q_{1})=\emptyset\), as any \(v\in V(J^{*})\) has degree two. Now, to prove both statements for this case, it suffices to show that \(Q_{1}\) contains a good pair.
Let \(S_{1}\) be a minimal subpath of \(Q_{1}\) such that the endpoints of \(S_{1}\) are joined by a chord path of \(C\), and let this chord path be \(P^{\prime}\). Thus, \(S_{1}\) has odd length. If \(S_{1}\) has length one, then \(P^{\prime}\cup(C\setminus S_{1})\) is a longer cycle, a contradiction. So \(S_{1}=a_{t}\)-\(b_{t}\)-\(\cdots\)-\(a_{s}\)-\(b_{s}\) has length at least three. Further, if there is a chord path \(P^{\prime\prime}\) of \(C\) with exactly one endpoint in \(V(S_{1}^{*})\), then \(C\cup P^{\prime}\cup P^{\prime\prime}\) forms a \(K_{4}\) minor, contradicting 4.4. Therefore, there is no path in \(H\setminus\{a_{s},a_{t}\}\) with endpoints \(b_{s}\) and \(b_{t}\), and there is no path in \(H\setminus\{b_{s},b_{t}\}\) with endpoints \(a_{s}\) and \(a_{t}\). So \((a_{t}b_{t},a_{s}b_{s})\) is a good pair of \(H\) contained in \(Q_{1}\). This completes the proof.
**Case 3: All the cycles in \(H\) have length four.** Let \(C=a_{1}\)-\(b_{1}\)-\(a_{2}\)-\(b_{2}\)-\(a_{1}\) be a cycle of length four in \(H\). By 4.4, there is no chord path of \(C\) with endpoints \(a_{1}\) and \(a_{2}\) (or \(b_{1}\) and \(b_{2}\)). Also, if there is a chord path with endpoints \(a_{i}\) and \(b_{j}\) with \(i,j\in\{1,2\}\), then \(H\) contains a cycle of length greater than four, a contradiction. Thus, there is no path in \(H\setminus\{a_{1},a_{2}\}\) with endpoints \(b_{1}\) and \(b_{2}\), and there is no path in \(H\setminus\{b_{1},b_{2}\}\) with endpoints \(a_{1}\) and \(a_{2}\). So \((a_{1}b_{1},a_{2}b_{2})\) is a good pair of \(H\). In particular, this proves that in this case every cycle \(C\) in \(H\) contains a good pair, and thus the first statement follows.
Now, suppose \(T\) is favorable with nonempty switchable component \(D\). We may assume that \(E(J)\cap E(C)\neq\emptyset\) and that \(H\setminus V(J^{*})\) contains no cycle. As any vertex \(v\in V(J^{*})\) has degree two, we have \(E(J)\subseteq E(C)\) and \(V(J)\subseteq V(C)\). Thus, we may assume that \(H\setminus V(C)\) is a forest. If \(D=\{x,y,v\}\) (where \(v\) has degree two in the switchable component) is a light switchable component, then the endpoints of \(J\) are joined by an edge of \(C\), which is a common neighbor of \(x\) and \(y\) in \(T\) distinct from \(v\); this contradicts the fact that \(T\in\mathcal{F}\). Therefore, we may assume that \(D\) is small and \(J=a_{1}\)-\(b_{1}\)-\(a_{2}\). As \(T\) is favorable, \(T\setminus V(D)\) is not a clique, which means that there is an edge \(a_{t}b_{t}\) in \(H\setminus V(J^{*})\) such that \(b_{t}\neq b_{2}\). Also, since \(b_{1}\in V(J^{*})\) has degree two in \(H\), we have \(b_{t}\neq b_{1}\). If \(a_{t}=a_{1}\), then \((a_{t}b_{t},a_{2}b_{2})\) is a good pair in \(H\setminus V(J^{*})\). Thus, by symmetry, we may assume that \(\{a_{t},b_{t}\}\cap V(C)=\emptyset\), and that every edge between \(C\) and \(H\setminus V(C)\) has \(b_{2}\) as an endpoint. In this case, as \(C\) has no chord path and \(H\setminus V(C)\) is a forest, \((a_{t}b_{t},a_{2}b_{2})\) is a good pair in \(H\setminus V(J^{*})\). This completes the proof.
### Complement of a Bipartite Trigraph and Complement of a Line Trigraph
A _diamond_ in a trigraph \(T\) is an induced subtrigraph \(H\) such that the full realization of \(H\) is \(K_{4}\) minus an edge. A _claw_ in a trigraph \(T\) is an induced subtrigraph \(H\) such that the full realization of \(H\) is the complete bipartite graph \(K_{1,3}\). We will need the following characterization of line trigraph, which is a generalization of the main theorem of [10].
**Proposition 4.6**.: _Let \(T\) be a line trigraph. Then, \(T\) has no induced subtrigraph isomorphic to a diamond or a claw._
**Proof.** Suppose that \(T\) contains a diamond or a claw. Then the full realization of \(T\) contains a diamond or a claw as an induced subgraph. By definition, the full realization of \(T\) is the line graph of a bipartite graph. This contradicts the main theorem of [10], which states that a line graph of a bipartite graph is (claw, diamond)-free.
Basic trigraphs which are the complement of a bipartite trigraph and the complement of a line trigraph share the following key property.
**Proposition 4.7**.: _Let \(T\) be the complement of a bipartite trigraph or the complement of a line trigraph. Then, a path \(P\) of odd length in \(T\) has length at most three._
**Proof.** First, if \(T\) is the complement of a bipartite trigraph, then every set \(X\subseteq V(T)\) with \(|X|\geq 3\) contains an edge; since a path of length at least four contains a stable set of size three, every path in \(T\) has length at most three, and the result follows. Now, we may suppose that \(T\) is the complement of a line trigraph, and that \(P\) is a path of \(T\) of length at least five. In this case, \(\overline{T|V(P)}\) contains a diamond, which contradicts 4.6. This completes the proof.
**Proposition 4.8**.: _Let \(T\) be a trigraph in \(\mathcal{F}\) such that \(T\) is either the complement of a bipartite trigraph or the complement of a line trigraph. If \(T\) is favorable, then either \(T\) is a graph, or \(T\) has an even pair disjoint from the switchable component._
**Proof.** Recall that by the definitions, every clique of size at least three is a strong clique in line trigraphs and bipartite trigraphs. If \(T\) has a light switchable component \(D=\{x,y,v\}\), then, \(\overline{T|D}\) is a clique of size three with two switchable pairs, contradicting that \(T\) is the complement of a bipartite trigraph or the complement of a line trigraph. Thus, we may assume that \(T\) has a small switchable component \(D=\{x,y\}\). Suppose that \(T\) is the complement of a bipartite trigraph with bipartition \((A,B)\). By definition, \(T|A\) and \(T|B\) are strong cliques, so we may assume \(x\in A\) and \(y\in B\) up to symmetry. In this case, \(T\setminus(D\cup N(x))\subseteq B\) and \(T\setminus(D\cup N(y))\subseteq A\) are both cliques, contradicting that \(T\) is favorable.
Now, we may assume that \(T\) is the complement of a line trigraph with a small switchable component. If there is a vertex \(t\) contained in \(T\setminus(N(x)\cup N(y))\), then \(\overline{T|\{x,y,t\}}\) is a clique of size three but not a strong clique, contradicting the definition of a line trigraph. So \(T\setminus(N(x)\cup N(y))=\emptyset\). If there are two vertices \(s,t\in N(x)\) (or \(s,t\in N(y)\)) such that \(st\) is an edge in \(T\), then \(\overline{T|\{x,s,t,y\}}\) is a claw, contradicting 4.6. Therefore, \(T|N(x)\) and \(T|N(y)\) are stable sets. Since \(T\) is favorable, we may assume up to symmetry that \(|N(x)|\geq 2\). Let \(s\) and \(t\) be two vertices in \(N(x)\); we claim that \(\{s,t\}\) is an even pair. Suppose not; then there is an odd path \(s\)-\(v_{1}\)-\(v_{2}\)-\(t\) of length three by 4.7. Since \(T|N(x)\) is a stable set, \(v_{1},v_{2}\notin N(x)\); moreover, \(x\) is adjacent to both \(s\) and \(t\), while \(y\) is strongly antiadjacent to both, so \(\{v_{1},v_{2}\}\subseteq T\setminus(N(x)\cup\{x,y\})=N(y)\). So \(v_{1}v_{2}\) is an edge in \(T|N(y)\), contradicting that \(T|N(y)\) is a stable set. This completes the proof.
The proof of the following theorem is inspired by the main idea of [13].
**Theorem 4.9**.: _Let \(T\) be a trigraph in \(\mathcal{F}\) with no antihole such that \(T\) is either the complement of a bipartite trigraph or the complement of a line trigraph. The following statements hold:_
1. \(T\) _is either complete or has an even pair._
2. _If_ \(T\) _is favorable, then_ \(T\) _has an even pair disjoint from its switchable component._
**Proof.** By 4.8, we may assume that \(T\) is a graph, and it suffices to prove the first statement. Let \(T\) be a vertex-minimal counterexample. We may assume that \(T\) is not complete, and that \(T\) is connected, since otherwise any two vertices in distinct components form an even pair. Let \(M\) be a maximal anticonnected set in \(T\) such that there are at least two nonadjacent vertices in \(V(T)\setminus M\) that are complete to \(M\). Notice that such a set exists: since \(T\) is connected and not complete, \(T\) contains a path \(u\)-\(w\)-\(v\) of length two, and \(\{w\}\) is a candidate. Let \(C(M)\) be the set of all vertices that are complete to \(M\). By 2.1, each class of basic graphs is closed under taking induced subgraphs. Thus, since \(T\) is minimal and \(C(M)\) is not complete by construction, it follows that \(C(M)\) has an even pair \(\{a,b\}\).
Suppose to the contrary that \(\{a,b\}\) is not an even pair in \(T\). Thus, by 4.7, there is a path \(P=a\)-\(c\)-\(d\)-\(b\) of length three in \(T\). Since \(\{a,b\}\) is complete to \(M\), it follows that \(V(P)\cap M=\emptyset\). First, suppose \(\{c,d\}\subseteq V(T)\setminus(M\cup C(M))\). Since neither \(c\) nor \(d\) is in \(C(M)\), each of them has at least one antineighbor in \(M\). So there exists an antipath \(Q\) with ends \(c\) and \(d\) and \(V(Q^{*})\subseteq M\). Then, \(c\)-\(Q\)-\(d\)-\(a\)-\(b\)-\(c\) is an antihole of length at least five in \(T\), a contradiction. Thus, we may assume up to symmetry that \(c\in C(M)\). Since \(\{a,b\}\) is an even pair in \(C(M)\), it follows that \(V(P)\not\subseteq C(M)\), and so \(d\in V(T)\setminus(M\cup C(M))\). Now, \(M\cup\{d\}\) is also an anticonnected set (as \(d\) has an antineighbor in \(M\)) such that there are at least two nonadjacent vertices in \(V(T)\setminus(M\cup\{d\})\), namely \(b\) and \(c\), that are complete to \(M\cup\{d\}\), contradicting the maximality of \(M\). Therefore, \(\{a,b\}\) is an even pair in \(T\). This completes the proof.
### Doubled Graph
We first state a proposition regarding even pairs in doubled graphs.
**Proposition 4.10**.: _Let \(T\) be a doubled graph with good partition \((X,Y)\). Then the following two statements hold:_
1. _Let_ \(C_{y}=\{y\}\) _be an anticomponent of size one in_ \(Y\)_. If_ \(y\) _has an antineighbor_ \(x\in X\)_, then_ \(xy\) _is an even pair. In particular, if_ \(X\) _has an edge_ \(x_{1}x_{2}\)_, then_ \(T\) _has an even pair._
2. _If_ \(T|Y\) _has a strong antiedge_ \(y_{1}y_{2}\) _and_ \(T|X\) _has no edge, then_ \(y_{1}y_{2}\) _is an even pair._
**Proof.**
1. In this case, \(y\) is complete to \(N(x)\): indeed, \(y\) is strongly complete to \(Y\setminus\{y\}\), and if \(x^{\prime}\in X\) is a neighbor of \(x\), then by the third property of a good partition \(y\) cannot be incident with two strong antiedges to the component \(\{x,x^{\prime}\}\), so \(y\) is adjacent to \(x^{\prime}\). Hence all paths between \(x\) and \(y\) have length \(2\). If \(X\) contains an edge \(x_{1}x_{2}\), then either \(x_{1}\) or \(x_{2}\) is an antineighbor of \(y\), so one of \(x_{1}y\) or \(x_{2}y\) is an even pair.
2. In this case, \(N(y_{1})\cap X\) and \(N(y_{2})\cap X\) are two disjoint stable sets that partition \(X\). So there is no path from \(y_{1}\) to \(y_{2}\) whose interior is contained in \(X\). Since \(\{y_{1},y_{2}\}\) is complete to the other vertices of \(Y\), all paths from \(y_{1}\) to \(y_{2}\) have length \(2\).
**Theorem 4.11**.: _Let \(T\) be a doubled graph in \(\mathcal{F}\) with no antihole of length six. The following statements hold:_
1. \(T\) _is either complete, or has an even pair._
2. _If_ \(T\) _is favorable, then_ \(T\) _has an even pair disjoint from its switchable component._
**Proof.** We may assume that \(T\) is connected and not complete. Let \((X,Y)\) be a good partition of \(T\). Note that by the definition of a doubled graph, every switchable component of \(T\) is either an edge of \(T|X\) or an edge of \(T|Y\). If \(T\) has a switchable component \(D\), it must be small: otherwise, \(T|D\) is connected and anticonnected with three vertices, so it is either a component of \(T|X\) or an anticomponent of \(T|Y\) of size three, contradicting that \(T\) is a doubled graph.
**Case 1: \(T|X\) has a component \(C_{1}=\{x_{1},x_{2}\}\) of size two.** If \(T|Y\) is empty, then \(T=C_{1}\) and \(T\) is a clique, so we may assume that \(T|Y\) is nonempty. If \(T|Y\) has two distinct anticomponents \(C_{2},C_{3}\) of size two, \(C_{1}\cup C_{2}\cup C_{3}\) is an antihole of length six, a contradiction. Therefore, \(T|Y\) has at most one anticomponent of size two.
First, suppose \(T|Y\) has an anticomponent \(C_{4}=\{v\}\) of size one. By symmetry, we may assume that \(v\) is strongly adjacent to \(x_{1}\) and strongly antiadjacent to \(x_{2}\). By (1) of 4.10, it follows that \(\{v,x_{2}\}\) is an even pair of \(T\), and this proves the first statement for this subcase. Next, assume that \(T\) has a small switchable component \(D\). If \(\{x_{1},x_{2}\}\) is not the switchable component of \(T\)
, then \(\{v,x_{2}\}\) is an even pair disjoint from its switchable component, so assume that \(\{x_{1},x_{2}\}\) is the switchable component. Since \(T\) is favorable, there exists a vertex \(x_{3}\in X\setminus\{x_{1},x_{2}\}\) such that \(x_{3}\) has a non-neighbor \(y_{1}\in Y\). If \(\{y_{1}\}\) is an anticomponent of size one in \(T|Y\), then \(\{x_{3},y_{1}\}\) is an even pair disjoint from the switchable component by (1) of 4.10. So we may assume \(y_{1}\) is in an anticomponent \(C_{5}=\{y_{1},y_{2}\}\) of size two in \(T|Y\). Since \(T|Y\) has at most one anticomponent of size two, we may assume that \(X\setminus\{x_{1},x_{2}\}\) is complete to \(Y\setminus\{y_{1},y_{2}\}\). Thus, \(X\setminus\{x_{1},x_{2}\}\) is a nonempty stable set. Now, \(\{x_{3},y_{1}\}\) is an even pair disjoint from the switchable component \(\{x_{1},x_{2}\}\): \(y_{1}\) is complete to \(N(x_{3})\setminus\{y_{2}\}\), so a path from \(x_{3}\) to \(y_{1}\) either has length two, or is exactly \(x_{3}\)-\(y_{2}\)-\(x_{2}\)-\(x_{1}\)-\(y_{1}\), which has length four. This proves the second statement for this subcase.
Therefore, we may assume that \(T|Y\) contains an antiedge \(y_{1}y_{2}\) and \(Y=\{y_{1},y_{2}\}\). By symmetry, we may assume that \(x_{i}\) is strongly adjacent to \(y_{i}\) for \(i=1,2\). Notice that all paths from \(y_{1}\) to \(y_{2}\) have length three. Every path \(P\) from \(y_{1}\) to \(x_{2}\) goes through either \(x_{1}\) or \(y_{2}\). If \(x_{1}\in P\), then \(P=y_{1}\)-\(x_{1}\)-\(x_{2}\) has length two; if \(y_{2}\in P\), then \(P=y_{1}\)-\(\cdots\)-\(y_{2}\)-\(x_{2}\) has length four. Therefore, \(y_{1}x_{2}\) is an even pair, and \(x_{1}y_{2}\) is also an even pair by symmetry. Thus, this proves the first statement for this subcase. Next, assume that \(T\) is favorable. We may also suppose that either \(\{x_{1},x_{2}\}\) or \(\{y_{1},y_{2}\}\) is the switchable component. As \(|V(T)|\geq 5\), there exists a vertex \(x_{3}\in X\setminus\{x_{1},x_{2}\}\). By symmetry, we may assume \(x_{3}\) is strongly adjacent to \(y_{1}\) and strongly antiadjacent to \(y_{2}\). Suppose \(x_{3}\) has a neighbor \(x_{4}\in X\setminus\{x_{1},x_{2}\}\). Then, \(x_{3}y_{2}\) and \(x_{4}y_{1}\) are even pairs by the same argument above. Also, \(\{x_{1},x_{3}\}\) and \(\{x_{2},x_{4}\}\) are even pairs. So at least one of them is disjoint from the switchable component, which is either \(\{x_{1},x_{2}\}\) or \(\{y_{1},y_{2}\}\). Thus, we may suppose \(x_{3}\) has no neighbor in \(X\). Then, both \(\{x_{3},y_{2}\}\) and \(\{x_{1},x_{3}\}\) are even pairs: \(x_{1}\) is complete to \(N(x_{3})=\{y_{1}\}\), and every path from \(x_{3}\) to \(y_{2}\) that contains \(y_{1}\) has length four. So at least one even pair is disjoint from the switchable component. This completes the proof of both statements for Case 1.
**Case 2: \(T|X\) has no edge.** It follows that the switchable component of \(T\), if any, is contained in \(T|Y\). By (2) of 4.10, we may assume that \(T|Y\) is a clique. Suppose first that there is no switchable pair in \(T\). Then \(Y\) is a strong clique, and by (1) of 4.10 we may assume that \(X\) is complete to \(Y\). Since \(T\) is not complete, there exist nonadjacent vertices \(x_{1},x_{2}\in X\). Now, \(\{x_{1},x_{2}\}\) is an even pair of \(T\). Next, assume that there exists a switchable pair \(y_{1}y_{2}\) in \(T\). As \(T\in\mathcal{F}\), \(N(y_{1})\cap N(y_{2})=\emptyset\), and thus \(Y=\{y_{1},y_{2}\}\). Since \(T\) is not complete and connected, up to symmetry there exists a vertex \(x_{1}\in X\cap N(y_{1})\). Now, \(\{x_{1},y_{2}\}\) is an even pair. This proves the first statement for Case 2.
Suppose \(T\) is favorable with switchable pair \(y_{1}y_{2}\). Then, there exist \(x_{3},x_{4}\in X\) such that either \(\{x_{3},x_{4}\}\subseteq N(y_{1})\) or \(\{x_{3},x_{4}\}\subseteq N(y_{2})\). In either case, \(\{x_{3},x_{4}\}\) is an even pair because \(N(x_{3})=N(x_{4})\). This completes the proof.
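In ordinary graphs, the parity checks carried out throughout these proofs amount to verifying that every chordless path between two vertices has even length. The following brute-force Python sketch (our illustration, not part of the paper, and restricted to graphs, so it ignores the switchable pairs of trigraphs) can be used to confirm small examples of this kind.

```python
import networkx as nx

def is_even_pair(G, u, v):
    """Check that every chordless (induced) path between u and v has an
    even number of edges. Exponential time; intended for small graphs."""
    stack = [[u]]
    while stack:
        path = stack.pop()
        last = path[-1]
        if last == v:
            if (len(path) - 1) % 2 == 1:
                return False  # found an odd-length chordless path
            continue
        for w in G.neighbors(last):
            if w in path:
                continue
            # keep the path induced: w may touch no path vertex
            # other than its predecessor `last`
            if any(G.has_edge(w, p) for p in path[:-1]):
                continue
            stack.append(path + [w])
    return True

# example: in the 4-cycle a-b-c-d-a, the diagonal pairs are even pairs
C4 = nx.cycle_graph(["a", "b", "c", "d"])
print(is_even_pair(C4, "a", "c"), is_even_pair(C4, "a", "b"))  # True False
```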
### Proof of Theorem 4.1
By 4.1, 4.5, 4.9, and 4.11, we have checked that the statements hold for each class of basic trigraphs. So 4.1 follows.
## 5. Even Pairs in Non-Basic Trigraphs
### Block of Decomposition
To handle 2-join partitions and their complements, we need the following definitions and a theorem regarding trigraphs with no balanced skew-partition from [7]. A set \(X\subseteq V(T)\) is a _fragment_ of a trigraph \(T\) if \((X,V(T)\setminus X)\) is a proper 2-join of \(T\). A proper 2-join is _even_ or _odd_ according to the parity of the paths described in 2.2. The _block of decomposition_\(T_{X}\) with respect to a fragment \(X\) is defined as follows. Let \(X_{1}=X\), \(X_{2}=V(T)\setminus X\), and \((A_{1},B_{1},C_{1},A_{2},B_{2},C_{2})\) be a split of the proper 2-join \((X_{1},X_{2})\).
* **Case 1: \((X_{1},X_{2})\) is odd**. We build the block of decomposition \(T_{X}=T_{X_{1}}\) as follows: starting with \(T|X_{1}\), we add two vertices \(a\) and \(b\), called _marker vertices_, such that \(a\) is strongly complete to \(A_{1}\), \(b\) is strongly complete to \(B_{1}\), \(ab\) is a switchable pair, and there are no other edges between \(\{a,b\}\) and \(X_{1}\). Note that \(\{a,b\}\) is a small switchable component of \(T_{X}\), and we call it the _marker component_ of \(T_{X}\).
* **Case 2: \((X_{1},X_{2})\) is even**. We build the block of decomposition \(T_{X}=T_{X_{1}}\) as follows: starting with \(T|X_{1}\), we add three vertices \(a,b,c\), called _marker vertices_, such that \(a\) is strongly complete to \(A_{1}\), \(b\) is strongly complete to \(B_{1}\), \(ac\) and \(bc\) are switchable pairs, and there are no other edges between \(\{a,b,c\}\) and \(X_{1}\). Note that \(\{a,b,c\}\) is a light switchable component of \(T_{X}\), and we call it the _marker component_ of \(T_{X}\).
In both cases, we say that \(a\) and \(b\) are the _ends_ of the marker component. Again, as our class \(\mathcal{F}\) is a subclass of the class of the same name studied in [7], we make use of the following result.
**Theorem 5.1** ([7]).: _If \(X\) is a fragment of a trigraph \(T\) in \(\mathcal{F}\) with no balanced skew-partition, then the block of decomposition \(T_{X}\) is Berge and has no balanced skew-partition._
**Theorem 5.2**.: _Let \(X\) be a fragment of a trigraph \(T\) in \(\mathcal{F}\) with no balanced skew-partition, no odd prism, and no antihole. Then the block of decomposition \(T_{X}\) is Berge, and \(T_{X}\) has no balanced skew-partition, no odd prism, and no antihole of length at least five._
**Proof.** By 5.1, it suffices to show \(T_{X}\) has no odd prism and no antihole of length at least six. Let \(M\) denote the marker component of \(T_{X}\). Suppose \(T_{X}\) has an odd prism \(Q\), and assume that \(Q\) is chosen among the odd prisms of \(T_{X}\) so that \(|V(Q)\cap V(M)|\) is minimum. If \(V(Q)\cap V(M)=\emptyset\), then \(Q\) is an odd prism in \(T\), a contradiction. Therefore, we may assume up to symmetry between \(a\) and \(b\) that \(a\in V(Q)\cap V(M)\). Suppose that \(N(a)\cap V(M)\not\subseteq V(Q)\). Let \(y\in A_{2}\) and let \(Q^{\prime}=(Q\setminus\{a\})\cup\{y\}\). If \(V(Q^{\prime})\subseteq V(T)\), then \(Q^{\prime}\) is an odd prism of \(T\), a contradiction. Therefore, \(M=\{a,c,b\}\), \(a\) is not adjacent to \(b\), and \(b\in V(Q^{\prime})\). Let \(z\in B_{2}\) and let \(Q^{\prime\prime}=(Q^{\prime}\setminus\{b\})\cup\{z\}\). Now, \(Q^{\prime\prime}\) is an odd prism of \(T\), a contradiction. Therefore, \(V(Q)\cap V(M)=V(M)\). Let \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\) be a split of \((X,V(T)\setminus X)\). Since \((X,V(T)\setminus X)\) is a proper 2-join, there is a path \(P\) of \(T\) with ends in \(A_{2}\) and \(B_{2}\) and interior in \(C_{2}\) such that \(P\) has the same parity as the path from \(a\) to \(b\) in \(M\). Now, \((Q\setminus M)\cup P\) is an odd prism of \(T\), a contradiction. Therefore, \(T_{X}\) does not contain an odd prism.
Next, suppose that \(T_{X}\) contains an antihole, and let \(H=v_{1}\)-\(\cdots\)-\(v_{k}\)-\(v_{1}\) be a shortest antihole in \(T_{X}\). Since \(T_{X}\) is Berge and an antihole of length six is an odd prism, we may assume that \(k\geq 7\). When \(T_{X}\) has the marker component \(\{a,b,c\}\), it follows that \(c\notin V(H)\) because \(c\) is strongly anticomplete to \(T_{X}|X\). If \(|V(H)\cap V(M)|=1\), we may assume by symmetry that \(a\in V(H)\cap V(M)\). Now, we may replace \(a\) by a vertex \(a^{\prime}\in A_{2}\), and \(T|((V(H)\setminus\{a\})\cup\{a^{\prime}\})\) is an antihole of the same length in \(T\), a contradiction. Therefore, we may assume that \(V(H)\cap V(M)=\{a,b\}\), and let \(a=v_{i}\) and \(b=v_{j}\) where \(i<j\). First, suppose \(ab\) is an antiedge of the antihole, that is, \(j-i=1\) or \(j-i=k-1\). Because \(\{a,b\}\) is strongly anticomplete to \(C_{1}\), it follows that no vertex of the antihole is contained in \(C_{1}\). Also, at most one vertex of \(H\) is in \(A_{1}\) and at most one vertex of \(H\) is in \(B_{1}\), since \(a\) is strongly anticomplete to \(B_{1}\) and \(b\) is strongly anticomplete to \(A_{1}\). Therefore, \(k\leq 4\), a contradiction. Hence, we may suppose that \(1<j-i<k-1\). However, as \(k\geq 7\), at least one of \(v_{i}\)-\(v_{i+1}\)-\(\cdots\)-\(v_{j}\)-\(v_{i}\) or \(v_{j}\)-\(v_{j+1}\)-\(\cdots\)-\(v_{i}\)-\(v_{j}\) is an antihole of length less than \(k\), contradicting that \(H\) is a shortest antihole in \(T_{X}\). Therefore, \(T_{X}\) does not contain an antihole. This completes the proof.
**Theorem 5.3**.: _Let \(X\) be a fragment of a trigraph \(T\) in \(\mathcal{F}\). If the block of decomposition \(T_{X}\) has an even pair \(uv\) disjoint from its marker component, then \(uv\) is also an even pair in \(T\)._
**Proof.** Let \((A_{1},B_{1},C_{1},A_{2},B_{2},C_{2})\) be a split of a proper 2-join \((X_{1},X_{2})\) with \(X=X_{1}\). Suppose that there is a path \(P\) from \(u\) to \(v\) in \(T\) of odd length. If \(P\subseteq T|X_{1}\), then \(P\) is a path of \(T_{X}\), a contradiction, so \(P\cap(T|X_{2})\) is not empty. First, suppose \(P\cap(T|X_{2})\) is a path \(Q\) of \(X_{2}\) from \(A_{2}\) to \(B_{2}\). Let \(a\) and \(b\) be the ends of the marker component of \(T_{X}\). By construction of the marker component, it follows that the path from \(a\) to \(b\) in the marker component of \(T_{X}\) has the same parity as \(Q\). Let \(Q^{\prime}\) be the path from \(a\) to \(b\) in the marker component of \(T_{X}\). Now, \(P^{\prime}=(P\setminus Q)\cup Q^{\prime}\) induces a path of \(T_{X}\) of the same parity as \(P\), a contradiction.
Therefore, if \(Q=P\cap(T|X_{2})\) is a path, we may assume up to symmetry that the endpoints of \(Q\) are contained in \(A_{2}\). However, as \(A_{1}\) is strongly complete to \(A_{2}\), any vertex of \(A_{1}\) on \(P\) is adjacent to both endpoints of \(Q\) and hence induces a cycle with \(Q\), contradicting that \(P\) is a path with endpoints in \(X_{1}\). Thus, no edge of \(P\) is contained in \(T|X_{2}\), and either \(P\cap(T|X_{2})=\{a_{2},b_{2}\}\), or \(P\cap(T|X_{2})=\{a_{2}\}\), or \(P\cap(T|X_{2})=\{b_{2}\}\), where \(a_{2}\in A_{2}\) and \(b_{2}\in B_{2}\). Let \(P^{\prime}\subseteq T_{X}\) be the path obtained by replacing \(a_{2}\) with the marker vertex \(a\) if \(P\cap A_{2}\neq\emptyset\), and replacing \(b_{2}\) with the marker vertex \(b\) if \(P\cap B_{2}\neq\emptyset\). Now, \(P^{\prime}\) is a path of \(T_{X}\) of the same parity as \(P\), a contradiction.
### Handling 2-joins and their complements
First, we show that it remains to consider trigraphs that admit proper 2-joins.
**Theorem 5.4**.: _Let \(T\) be a trigraph in \(\mathcal{F}\) with no balanced skew-partition and no antihole. Then, either \(T\) is basic, or \(T\) admits a proper 2-join._
**Proof.** By Lemma 3.2, we may assume that \(\overline{T}\) admits a proper 2-join \((X_{1},X_{2})\) with split \((A_{1},C_{1},B_{1},A_{2},C_{2},B_{2})\). By the definition of balanced skew-partition, a trigraph admits a balanced skew-partition if and only if its complement admits a balanced skew-partition. Thus, by Lemma 2.3, \(\overline{T}\) has no star cutset. Suppose that \(C_{1}\neq\emptyset\), and assume up to symmetry between \(A_{1}\) and \(B_{1}\) that there is a vertex \(c\in C_{1}\) adjacent to a vertex \(a\in A_{1}\). Let \(Q\) be a path in \(\overline{T}|X_{2}\) from a vertex in \(A_{2}\) to a vertex in \(B_{2}\). We claim that \(N[a]\setminus\{c\}\) is a star cutset separating the component containing \(c\) in \(\overline{T}\setminus(N[a]\setminus\{c\})\) from the rest of the trigraph: if not, there exists a path \(P\subseteq\overline{T}|X_{1}\setminus(N[a]\setminus\{c\})\) connecting \(c\) and a vertex \(b\in B_{1}\); however, \(T|(a\)-\(c\)-\(P^{*}\)-\(b\)-\(Q\)-\(a)\) is an antihole of length at least five, a contradiction. Therefore, \(C_{1}=\emptyset\), and \(C_{2}=\emptyset\) by symmetry. Now, \((A_{1},C_{1},B_{1},B_{2},C_{2},A_{2})\) is a split of a proper 2-join of \(T\). This completes the proof.
**Theorem 5.5**.: _Let \(T\) be a trigraph in \(\mathcal{F}\) with no balanced skew-partition, no odd prism, and no antihole. Then, \(T\) is either complete or has an even pair._
**Proof.** Let \(T\) be a vertex-minimal counterexample to the claim. If \(T\) is basic, the theorem follows from 4.1. Thus, we may assume \(T\) admits a proper 2-join. Let \((A_{1},B_{1},C_{1},A_{2},B_{2},C_{2})\) be a split of a proper 2-join of \(T\) with \(X_{1}=A_{1}\cup B_{1}\cup C_{1}\) and \(X_{2}=A_{2}\cup B_{2}\cup C_{2}\). If \(T\) has a switchable component \(D\), we may assume up to symmetry that \(V(D)\subseteq X_{2}\), as no switchable pair meets both \(X_{1}\) and \(X_{2}\). By our construction, \(T_{X_{1}}\) has at most one light or small switchable component. By 5.2, \(T_{X_{1}}\) is Berge and thus \(T_{X_{1}}\in\mathcal{F}\). It also follows from 5.2 that \(T_{X_{1}}\) admits no balanced skew-partition, no antihole, and no odd prism.
Next, we show that \(T_{X_{1}}\) is favorable. By 3.1, \(|X_{i}|\geq 4\) for \(i=1,2\), and the marker component of \(T_{X_{1}}\) has either two or three vertices (depending on the parity of the 2-join \((X_{1},X_{2})\)). In either case, we have \(|T|>|T_{X_{1}}|\geq 6\). By 3.3, it remains to show that \(T_{X_{1}}\) is not complete: if \(C_{1}\) is not empty, then a vertex \(y\in C_{1}\) is not adjacent to any vertex of the marker component of \(T_{X_{1}}\); if \(C_{1}\) is empty, then by 3.1, \(|A_{1}|\geq 2\) and \(T_{X_{1}}|A_{1}\) is strongly anticomplete to \(b\).
Therefore, since \(T\) is the vertex-minimal counterexample, \(T_{X_{1}}\) contains an even pair \(uv\) disjoint from its marker component. However, by 5.3, \(uv\) is an even pair in \(T\), a contradiction.
### Proof of the main theorem
Now, we are ready to prove 1.2. We restate it here for the sake of clarity.
**Theorem 5.6** (1.2).: _If \(G\) is a Berge graph with no odd prism and no antihole, and \(G\) does not admit a balanced skew-partition, then \(G\) is either complete or has an even pair._
**Proof.** First, \(G\in\mathcal{F}\) as \(G\) is Berge and has no switchable component. So the result follows from 5.5.
# Predicting Survival Time of Ball Bearings in the Presence of Censoring

Christian Marius Lillelund, Fernando Pannullo, Morten Opprud Jakobsen, Christian Fischer Pedersen (arXiv:2309.07188, 2023-09-13, http://arxiv.org/abs/2309.07188v1)
###### Abstract
Ball bearings find widespread use in various manufacturing and mechanical domains, and methods based on machine learning have been widely adopted in the field to monitor wear and spot defects before they lead to failures. Few studies, however, have addressed the problem of censored data, in which failure is not observed. In this paper, we propose a novel approach to predict the time to failure in ball bearings using survival analysis. First, we analyze bearing data in the frequency domain and annotate when a bearing fails by comparing the Kullback-Leibler divergence and the standard deviation between its break-in frequency bins and its break-out frequency bins. Second, we train several survival models to estimate the time to failure based on the annotated data and covariates extracted from the time domain, such as skewness, kurtosis and entropy. The models give a probabilistic prediction of risk over time and allow us to compare the survival function between groups of bearings. We demonstrate our approach on the XJTU and PRONOSTIA datasets. On XJTU, the best result is a 0.70 concordance-index and 0.21 integrated Brier score. On PRONOSTIA, the best is a 0.76 concordance-index and 0.19 integrated Brier score. Our work motivates further work on incorporating censored data in models for predictive maintenance.
Department of Electrical and Computer Engineering, Aarhus University
Finlandsgade 22, 8200 Aarhus N, Denmark
{cl,202102261,morten,cfp}@ece.au.dk
## Introduction
Ball bearings find extensive use in various rotary machines, but are susceptible to defects, such as contamination wear, poor lubrication, and improper mounting Kim, An, and Choi (2017). Monitoring wear in bearings plays an important role in predictive maintenance programs, and engineers have traditionally used frequency-based vibration analysis tools to assess the severity of emerging mechanical damages Randall (2021); Randall and Antoni (2011). Predictive maintenance should be done as soon as a bearing's operating characteristics start to deviate significantly from its normal operating state in order to avoid actual failure and eventual breakdown of the machinery. To this end, researchers have applied various machine learning (ML) algorithms, particularly neural networks, to train models that can predict the remaining useful life (RUL) of bearings Guo et al. (2017); Zheng et al. (2018); Al Masry et al. (2019); Wang et al. (2020, 2022); Xu et al. (2022). This strategy has shown initial success and high predictive performance, but support for censored observations is generally overlooked in the field and only a few studies have attempted to build models that support it Widodo and Yang (2011); Hochstein et al. (2013); Wang et al. (2022). Given the rarity of ball bearing failures, censored observations are common, and simply ignoring them can lead to a loss of efficiency and introduce estimation bias Stepanova and Thomas (2002). Survival analysis is a type of regression that can leverage censored data Gareth et al. (2021). Current works in bearing prognostics using survival analysis do not provide any evaluation of predictive accuracy or ranking performance Wang et al. (2022), and the evaluated methods, such as the nonparametric Kaplan-Meier (KM) estimator Kaplan and Meier (1958), are often mainly descriptive, too simplistic, or do not model the relationship between covariates and outcome Widodo and Yang (2011).
In this paper, we propose a novel approach for predictive maintenance of ball bearings using survival analysis (see Fig. 1). First, we process the raw bearing data in the frequency domain to identify at which point in time a bearing starts to deviate significantly from its normal frequency characteristics. This happens well in advance of actual bearing failure, at a point where the faulty component can be readily replaced by the maintenance staff. This is our event detection algorithm, which we use to annotate bearings in two datasets (XJTU and PRONOSTIA) by the time this deviation occurs. The algorithm is a simple distance metric, and using it for forecasting is computationally expensive and prone to error. Therefore, we use the annotated data to train several survival models that can predict the probability of failure as a progressive estimate over time, in contrast to traditional RUL regression, which offers only a point estimate of the failure time. This method also enables us to quantify survival probabilities between groups of bearings by their time-domain features.
We compare our work to the method proposed by Xu et al. (2022) to estimate the RUL of bearings in the PRONOSTIA dataset. However, since our methodology, the annotation algorithm we have used and the bearing data differ from Xu et al. (2022) and others, we perform an indirect comparison. Source code is available at: [https://github.com/thecml/ball-bearing-survival](https://github.com/thecml/ball-bearing-survival)
## Fundamentals
### Elements of Ball Bearings
Ball bearings carry rotating loads by separating the moving elements with two bearing races that ride on steel or ceramic balls. Ball bearings can support either axial or radial loads, or a combination.
The health of a bearing can be tracked by monitoring the presence of defects in the bearing components. Bearing defects excite the bearing components and the resulting vibrations can be measured. By following the development of energy in selected frequency bands, the ballpass frequency of the outer race (BPFO), the ballpass frequency of the inner race (BPFI), the fundamental train frequency (FTF), the ball spin frequency (BSF), and the shaft frequency (FS), each relating to one of the bearing's components, the bearing's health and remaining useful lifetime can be tracked [22, 1]. During the early stages of a bearing defect, the vibration signals are weak and amplitude modulated with eigenfrequencies related to the bearing components, necessitating the use of wideband piezo or MEMS accelerometers (1-20 kHz) to capture any vibrations.
### Elements of Survival Analysis
Survival analysis is a form of regression modeling that studies the time to an event, which can be partially observed (i.e., censored). Survival data contain the observed covariates, the time to event, and a label indicating whether the event occurred or was censored. We treat the survival time as discrete and limit the time horizon to a finite duration, denoted as \(T=\{T_{0},...,T_{max}\}\), with \(T_{max}\) representing the predefined maximum time horizon (e.g., 1 year). Within this framework, we consider bearing failure as the event of interest, and assume that exactly one such event will occur eventually for each observation (e.g., a bearing will eventually fail, but from only one cause). However, not all events of interest are always observed due to factors such as bearings being decommissioned before failure occurs or simply running problem-free after \(T_{max}\), resulting in right-censored data.
Survival analysis models the probability that the event occurs at a time \(T\) later than \(t\), denoted as the survival probability \(S(t)=\Pr(T>t)=1-\Pr(T\leq t)\). To estimate \(S(t)\), we use the so-called hazard function, \(h(t)=\lim_{\Delta t\to 0}\Pr(t<T\leq t+\Delta t|T>t)/\Delta t\), which corresponds to the failure rate at an instant after time \(t\), given survival past that time [1]. The relationship between the survival and hazard function is given by \(h(t)=f(t)/S(t)\), where \(f(t)\) is the probability density associated with \(T\), \(f(t):=\lim_{\Delta t\to 0}\Pr(t<T\leq t+\Delta t)/\Delta t\), which is the instantaneous rate of failure at time \(t\). In this regard, \(h(t)\) is the density of \(T\) conditional on \(T>t\), and the functions \(S(t)\), \(h(t)\), \(f(t)\) are equivalent ways of describing the distribution of \(T\), which formalizes the intuition that higher values of \(h(t)\) correspond to higher failure probabilities. To fit a regression model to survival times, Cox's Proportional Hazards (CoxPH) model is a popular choice [1]. The model assumes a conditional individual hazard function of the form \(h(t|\mathbf{x}_{i})=h_{0}(t)\exp(f(\mathbf{\theta},\mathbf{x}_{i}))\), where \(i\) denotes the \(i\)-th individual, \(\mathbf{x}_{i}\) is a vector of \(d\) covariates and \(f(\mathbf{\theta},\mathbf{x}_{i})\) is a linear function of the covariates.
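To make these quantities concrete, the sketch below (our addition; the authors' own implementation lives in the linked repository) computes the nonparametric Kaplan-Meier estimator mentioned in the introduction from right-censored durations. In discrete time the survival curve is the running product \(S(t)=\prod_{t_{i}\leq t}(1-d_{i}/n_{i})\), mirroring the relation between \(S(t)\) and the hazard.

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier estimate of S(t) from right-censored data:
    S(t) = prod_{t_i <= t} (1 - d_i / n_i), where d_i failures occur
    at time t_i among n_i units still at risk."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=bool)
    fail_times = np.unique(durations[events])                   # distinct t_i
    at_risk = np.array([(durations >= t).sum() for t in fail_times])   # n_i
    deaths = np.array([((durations == t) & events).sum() for t in fail_times])  # d_i
    return fail_times, np.cumprod(1.0 - deaths / at_risk)

# toy run-to-failure data: durations in hours; event=0 marks a bearing
# that was still running when observation stopped (censored)
t = [5, 8, 8, 12, 15, 21, 30]
e = [1, 1, 0, 1, 0, 1, 1]
for ti, si in zip(*kaplan_meier(t, e)):
    print(f"S({ti:.0f} h) = {si:.3f}")
```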
## Methods and Materials
### Datasets
We consider two bearing datasets in this study, XJTU [22] and PRONOSTIA [16]. For both datasets, a single bearing under test is subjected to radial load during rotation, and vibrations are captured. The bearing is instrumented with two piezoelectric accelerometers, mounted horizontally and vertically, and sampled at 25.6 kHz. The resulting data are saved as raw comma-separated values. In both experiments, the bearings are run to failure under very high load (C/P\(\leq\)4) to accelerate degradation. This significantly increases the risk of initiating bearing defects but also comes with the risk of local flash heating and subsequent uncontrolled damage of a bearing.
**XJTU**: The dataset is generated from 15 deep groove ball bearings. Three types of tests are conducted with different loads and speeds, and five bearings are used in each test. In our experiments, we use five bearings with the following characteristics: an operating condition (C/P) of 1.1, a radial force of 12.0 kN and a rotating speed of 2100 RPM.
**PRONOSTIA**: The dataset is generated from 6 deep groove ball bearings. Three types of tests are conducted with different loads and speeds, where two bearings are used in each test. In our experiments, we use two bearings with the following characteristics: a C/P of 0.8, a radial force of 5.0 kN and a rotating speed of 1500 RPM.
### Feature extraction
Features come from accelerometer sensors placed on the ball bearings at an angle of 90 degrees (referred to as the \(X\)-axis and \(Y\)-axis). Feature extraction starts by discretizing a sequence of raw samples, \(\mathbf{x}=(x_{i})_{i=1}^{N}\), into bins. We then apply the expressions in Tab. 1 to each of these bins and obtain a total of twelve time-domain features for the whole lifetime of a bearing.
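A minimal Python sketch of this step (our addition) evaluates the Tab. 1 expressions for a single frame. The histogram-based probabilities used for the entropy, and the identification of \(x_{p}\) with the peak \(\max(|\mathbf{x}|)\), are our assumptions, since the paper does not spell them out.

```python
import numpy as np
from scipy.stats import kurtosis, skew

def time_domain_features(x, hist_bins=50):
    """Compute the twelve Tab. 1 features for one signal frame x."""
    x = np.asarray(x, dtype=float)
    abs_mean = np.mean(np.abs(x))
    rms = np.sqrt(np.mean(x**2))
    peak = np.max(np.abs(x))
    counts, _ = np.histogram(x, bins=hist_bins)
    p = counts[counts > 0] / x.size          # empirical bin probabilities
    return {
        "abs_mean": abs_mean,
        "std": np.std(x),
        "skewness": skew(x),
        "kurtosis": kurtosis(x, fisher=False),   # Tab. 1 uses the non-excess form
        "entropy": -np.sum(p * np.log(p)),
        "rms": rms,
        "max": peak,
        "p2p": peak - np.min(np.abs(x)),         # as defined in Tab. 1
        "crest": peak / rms,
        "clearance": peak / np.mean(np.sqrt(np.abs(x))) ** 2,
        "shape": rms / abs_mean,
        "impulse": peak / abs_mean,
    }
```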
Figure 1: Outline of the proposed solution. Raw data is used for event detection (top) and feature extraction (bottom) before training a survival model. Illustrations partly from [22].
### Event detection
In this paper, we propose an event detection algorithm to label ball bearing data to be used in a survival model (see Fig. 2). Collecting the raw bearing data, performing the Hilbert transformation, and applying the fast Fourier transform (FFT) produces five main bins that make up a probability density function (PDF) for appropriate time windows. This allows us to observe different conditions of the bearing through time and enables us to identify the behavior and characteristics that describe a bearing in a good or a bad state.
After preprocessing, we apply the Kullback-Leibler (KL) divergence and standard deviation (SD) formulas in conjunction to detect changes in the PDF over time that can identify the event of interest (see Fig. 2). We consider two distributions: \(T_{0}\) is the reference distribution and \(T_{i}\) is one over a moving time window. In this context, the KL divergence, denoted \(D_{\mathrm{KL}}(P\|Q)\), is a statistical measure of how much a probability distribution \(P\) differs from a probability distribution \(Q\). This captures changes in entropy related to the distribution shape. We also compute the SD of \(T_{0}\) and \(T_{i}\), assuming they are Gaussian. By comparing the two PDFs by SD and KL divergence from the break-in phase to the end of the bearing's lifetime, the progression of the PDF discrepancies is obtained. The KL divergence is illustrated as a solid blue line in Fig. 3. The KL and SD progressions are similar, but not the same, because of changes to the underlying mechanics of the bearing throughout its lifetime. The KL progression line tends to reach a peak level (key point #1 in Fig. 3), i.e., high entropy and toward diversification, which we use as a threshold (red dotted line in Fig. 3, the highest point of the blue line plus 10%). Later, the threshold is used to establish the event. After the break-in phase, the line reaches a plateau (key point #2 in Fig. 3), i.e., low entropy and toward equalization, and then begins to rise again when the bearing starts to degrade, i.e., high entropy and toward diversification between the PDFs, and then failure occurs (key point #3 in Fig. 3). We establish a KL threshold and an SD threshold, and whichever of the two is crossed last indicates the event.
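A compact sketch of the KL branch of this procedure (our addition; the SD branch is analogous, and the paper takes the later of the two threshold crossings):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(P || Q) between two binned spectra treated as discrete PDFs."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def detect_event(window_pdfs, break_in_end, margin=0.10):
    """Return the first window index whose KL divergence from the
    reference PDF T_0 exceeds the break-in peak by `margin` (the +10%
    threshold in the text), or None if no crossing occurs."""
    kl = np.array([kl_divergence(window_pdfs[0], p) for p in window_pdfs])
    threshold = (1.0 + margin) * kl[:break_in_end].max()
    late = np.flatnonzero(kl[break_in_end:] > threshold)
    return None if late.size == 0 else break_in_end + int(late[0])
```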
### Data preprocessing
The XJTU and PRONOSTIA datasets contain only a few actual bearings, so the number of data points and events is low. To remedy this, we perform data augmentation to create synthesized versions of the two datasets. This process is done independently for each bearing. First, we up-sample (double) the data points by treating each bearing's \(X\)-axis and \(Y\)-axis as independent bearings. Second, we do Constrained Bootstrapping as in [21]. Third, we divide the timeseries into 20 slices of the same size covering the entire lifetime, and for each slice assign a time to event or censoring, and the covariates at that time. This essentially re-samples a single event several times, while preserving the correlation between covariates and event time. We introduce a censoring rate of 20% in both datasets by simulating that some bearings did not experience the event.
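The slicing step could look roughly as follows; this is our simplified reading of the augmentation, and the exact assignment of censoring times in the paper may differ.

```python
import numpy as np

def slice_bearing(features, event_idx, n_slices=20, censor_frac=0.2, seed=0):
    """Expand one annotated bearing into `n_slices` survival rows.
    Assumed reading of the text: covariates are sampled at the end of
    each slice; uncensored rows keep the annotated event time, while
    a random ~20% of rows are censored at the slice position itself."""
    rng = np.random.default_rng(seed)
    rows = []
    for sl in np.array_split(np.arange(event_idx + 1), n_slices):
        i = int(sl[-1])                          # covariates at slice end
        if rng.random() < censor_frac:
            rows.append((features[i], i, 0))     # censored: event not seen
        else:
            rows.append((features[i], event_idx, 1))
    return rows
```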
\begin{table}
\begin{tabular}{|l|c|} \hline Absolute mean & \(\bar{x}=\frac{1}{N}\sum_{i=1}^{N}|x_{i}|\) \\ \hline Standard deviation & \(\sigma=(\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\bar{x})^{2})^{1/2}\) \\ \hline Skewness & \(SK=\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\bar{x})^{3}/\sigma^{3}\) \\ \hline Kurtosis & \(K=\frac{1}{N}\sum_{i=1}^{N}(x_{i}-\bar{x})^{4}/\sigma^{4}\) \\ \hline Entropy & \(H=-\sum_{i=1}^{N}P(x_{i})\log P(x_{i})\) \\ \hline Root-mean-square & \(RMS=(\frac{1}{N}\sum_{i=1}^{N}x_{i}^{2})^{1/2}\) \\ \hline Max value & \(max=\max(|\mathbf{x}|)\) \\ \hline Peak-To-Peak & \(P2P=\max(|\mathbf{x}|)-\min(|\mathbf{x}|)\) \\ \hline Crest factor & \(CR=max/RMS\) \\ \hline Clearance factor & \(CL=x_{p}/(\frac{1}{N}\sum_{i=1}^{N}|x_{i}|^{1/2})^{2}\) \\ \hline Shape factor & \(SH=RMS/\bar{x}\) \\ \hline Impulse & \(IM=max/\bar{x}\) \\ \hline \end{tabular}
\end{table}
Table 1: Features expressed in relation to a signal frame \(\mathbf{x}=(x_{i})_{i=1}^{N}\).
Figure 2: Flowchart of proposed event detection algorithm. The entropy or deviation between the signal at time \(T_{0}\) and time \(T_{i}\) is continuously compared over the lifetime of the bearing.
### Survival models
The following models are evaluated:
**Cox Proportional Hazards (CoxPH):**(Cox, 1972) is the most commonly used regression model for survival data. It assumes a conditional individual hazard function. The risk score is estimated as a linear function of covariates and parameters, found by maximizing the partial log-likelihood.
**Random Survival Forest (RSF):**(Ishwaran et al., 2008) RSF is an ensemble of survival trees, where the data are recursively partitioned based on some splitting criterion, and similar data points based on the event of interest are put on the same node.
**CoxBoost:**(Hothorn et al., 2005) CoxBoost is an extension of traditional gradient boosting that supports survival data by minimizing the weighted empirical risk function as a least-squares problem.
**DeepSurv:**(Katzman et al., 2018) DeepSurv is a neural network that uses a Cox likelihood function to compute a relative risk score, which quantifies the likelihood of experiencing an event.
**Deep Survival Machines (DSM):**(Nagpal, Li, and Dubrawski, 2021) DSM is a neural network that estimates the conditional survival function as a mixture of primitive distributions, either Weibull or Log-Normal.
**Weibull AFT:**(Lee and Wang, 2013) The Accelerated Failure Time (AFT) model estimates the survival function using the Weibull distribution. It is a generalization of the exponential distribution, but does not assume a constant hazard rate, which allows for a broader application.
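As an illustration (not the authors' pipeline), the first and last of these models can be fitted with the open-source lifelines package; the toy data frame below is a hypothetical stand-in for the annotated feature table, with `duration` and `event` columns as produced by the event detection step. Tree-based and boosted counterparts such as RSF are available in scikit-survival.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

# hypothetical stand-in for the annotated covariate table
rng = np.random.default_rng(0)
df = pd.DataFrame({"rms": rng.gamma(2.0, 1.0, 200),
                   "kurtosis": rng.normal(3.0, 0.5, 200)})
df["duration"] = np.ceil(rng.exponential(60.0 / (1.0 + df["rms"]))) + 1.0
df["event"] = (rng.random(200) < 0.8).astype(int)   # ~20% censoring

cph = CoxPHFitter(penalizer=0.01)
cph.fit(df, duration_col="duration", event_col="event")
surv = cph.predict_survival_function(df.drop(columns=["duration", "event"]))

aft = WeibullAFTFitter(penalizer=0.01)
aft.fit(df, duration_col="duration", event_col="event")
median_ttf = aft.predict_median(df.drop(columns=["duration", "event"]))
```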
## Experiments and Results
### Setup
For our empirical analyses, we use the openly available XJTU and PRONOSTIA datasets, which differ in the number of bearings, operating conditions, and the number of samples. After preprocessing, we apply a \(z\)-score data normalization to all covariates and do a train-test split based on the number of bearings: for XJTU, we select three bearings for training and two for test, and for PRONOSTIA, we select one bearing for train and one for test. For all datasets and models, we tune the following hyperparameters over ten iterations using 5-fold cross-validation:
* CoxPH: Number of iterations and tolerance.
* RSF: Number of estimators, tree depth and split criterion.
* CoxBoost: Learning rate, tree depth, split criterion.
* DeepSurv: Iterations, learning rate and batch size.
* DSM: Iterations, learning rate and batch size.
* Weibull AFT: Penalizer coefficient.
Tuning is done solely on the training bearings. We use the hyperparameters leading to the highest average concordance index (Antolini's) on the validation folds to adjust the final models. Models are then evaluated on the test bearings.
### Results
Table 2 shows the predictive performance of the survival models. We report the training time, Harrell's concordance index (CI) (Harrell, Lee, and Mark, 1996), Antolini's time-dependent concordance index (CI\({}_{\text{td}}\)) (Antolini, Boracchi, and Biganzoli, 2005) and the integrated Brier score (IBS) (Graf et al., 1999). Concerning the results on the XJTU dataset (see Tab. 2(a)), we observe that RSF, CoxBoost and DeepSurv perform the best in terms of ranking (CI) and DSM in terms of predictive accuracy. A CI of 0.70 means that 70 out of 100 comparable pairs are ranked correctly. An IBS of 0.214 equals a 78.6% accuracy in predicting the survival function over all available times. Concerning the results on the PRONOSTIA dataset (see Tab. 2(b)), DSM, DeepSurv and WeibullAFT perform the best.

Figure 3: Top: Event detection based on KL divergence. The key point #1 is the break-in period, #2 is the plateau and #3 is the break-out period. The solid blue line is the KL divergence. The dotted gray line is the maximum entropy reached during the break-in period. The dotted red line is the threshold given by the maximum entropy plus 10%. Bottom left: FFT bins of the signal at break-in time. Bottom right: FFT bins of the signal at failure time.
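Harrell's CI quoted above counts correctly ordered comparable pairs; a self-contained sketch (our addition, not the paper's implementation) is:

```python
import numpy as np

def harrell_c(durations, events, risk_scores):
    """Harrell's concordance index: among comparable pairs (i failed
    strictly before j's observed time), count how often the model
    assigns i the higher risk; ties in risk receive half credit."""
    conc = ties = pairs = 0
    n = len(durations)
    for i in range(n):
        if not events[i]:
            continue                            # pairs anchor at observed failures
        for j in range(n):
            if durations[j] > durations[i]:     # j outlived i => comparable
                pairs += 1
                if risk_scores[i] > risk_scores[j]:
                    conc += 1
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (conc + 0.5 * ties) / pairs

# a CI of 0.70 means 70 of 100 comparable pairs are ordered correctly
print(harrell_c([5, 8, 12, 20], [1, 1, 0, 1], [0.9, 0.7, 0.5, 0.1]))  # 1.0
```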
To compare our work to the SOTA, Xu et al. (2022) report a mean 70% accuracy (95% CI: 0.58-0.87) on eleven PRONOSTIA bearings. In our work, DeepSurv (Katzman et al., 2018) performs the best in terms of predicting the survival function, yielding a 0.197 integrated Brier score on the test set. This translates to a predictive accuracy of 80.3%; however, the results are not directly comparable, as Xu et al. (2022) use simple least-squares regression as opposed to survival regression, employ a different training/test split, and only report the estimation error at the end of the bearing lifetime, instead of at all available times.
Figure 4 shows inference results on the two XJTU test bearings. We see that all evaluated models provide consistent estimates of the survival function (leftmost panel). The Kaplan-Meier estimate is used as a reference here and is computed on the full dataset (five bearings). In the central panel, we split the test data into two groups by the root mean square (RMS) covariate value, \(RMS\leq 2\) and \(RMS>2\), and compute the mean survival curve of the two separately using a Cox proportional hazards model. This shows a clear separation in predicted survival probability, already from \(t=10\), to convergence at \(t=120\). The shaded area represents a 95% confidence interval, which is wider for the bearings in the first group. The plot presents an interesting application for predictive maintenance, as it can be used as decision support for maintenance personnel. In the rightmost panel, following Austin (2012), we computed 500 simulated survival times using a Cox model for the two RMS groups, and plotted their respective hazard function and survival time; the two hazard functions are quantitatively distinct, and the expected survival time for \(RMS>2\) is lower than for \(RMS\leq 2\).
## Conclusion
We have proposed survival analysis as a method to predict the risk of failure in ball bearings. First, we identify the point in time when a bearing starts to diverge significantly from its original frequency characteristic, which marks our event of interest, and second, we train several survival models to estimate the risk of failure given a set of covariates sampled from the time-domain. We find the application of such methods interesting in manufacturing and engineering, as they inherently support censored data and provide a probabilistic prediction and confidence estimation instead of mere point estimates, thus we encourage further work within our framework.
## Acknowledgments
This work was supported by the PRECISE project under Grant Agreement No. AAL-2021-8-90-CP by the European AAL Association.
# Expulsion of counter Evershed flows from sunspot penumbrae

J. S. Castellanos Durán, A. Korpi-Lagg, S. K. Solanki (arXiv:2305.19705, 2023-05-31, http://arxiv.org/abs/2305.19705v1)
###### Abstract
In addition to the Evershed flow directed from the umbra towards the outer boundary of the sunspot, under special circumstances, a counter Evershed flow (CEF) in the opposite direction also occurs. We aim to characterize the proper motions and evolution of three CEFs observed by the Solar Optical Telescope onboard the Japanese Hinode spacecraft and the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. We use state-of-the-art inversions of the radiative transfer equation of polarized light applied to spectropolarimetric observations of the Fe i line pair around 630 nm. The three CEFs appeared within the penumbra. Two of the CEF structures, as part of their decay process, were found to move radially outwards through the penumbra parallel to the penumbral filaments with speeds, deduced from their proper motions, ranging between 65 and 117 m s\({}^{-1}\). In these two cases, a new spot appeared in the moat of the main sunspot after the CEFs reached the outer part of the penumbra. Meanwhile, the CEFs moved away from the umbra, and their magnetic field strengths decreased. The expulsion of these two CEFs seems to be related to the normal Evershed flow. The third CEF appeared to be dragged by the rotation of a satellite spot. Chromospheric brightenings were found to be associated with the CEFs, and those CEFs that reached the umbra-penumbra boundary showed enhanced chromospheric activity. The two CEFs, for which line-of-sight velocity maps were available during their formation phase, appear as intrusions into the penumbra. They may be associated with magnetic flux emergence.
Sunspot groups (1651); Solar photosphere (1518); Solar magnetic fields (1503); Solar active region velocity fields (1976); Solar chromosphere (1479); Solar flares (1496).
## 1 Introduction
Evershed flows are characteristic outflows observed in the penumbrae of sunspots (Evershed, 1909) with typically subsonic velocities of \(\sim\)1-3 km s\({}^{-1}\) in the body of the filament (e.g., Schlichenmaier and Schmidt, 2000; Strecker and Bello Gonzalez, 2022) and 5-10 km s\({}^{-1}\) at the endpoints (e.g. Tiwari et al., 2013). The characteristic filamentary structure of penumbrae observed in continuum images is the result of the interaction between buoyant convective cells rising from the solar interior and inclined magnetic field (see Solanki, 2003; Borrero and Ichimoto, 2011, for reviews). The _normal_ Evershed flows transport plasma radially1 outwards along the penumbral filaments (= intraspines; e.g., Lites et al., 1993; Jurcak et al., 2007; Borrero and Solanki, 2008). In the last decade, penumbral regions with the opposite direction of the flow at photospheric layers, but otherwise indistinguishable in the continuum images, were observed (Kleint, 2012; Kleint and Sainz Dalda, 2013; Louis et al., 2014; Siu-Tapia et al., 2017; Castellanos Duran et al., 2021). The new type of penumbral flow was named counter Evershed flow (CEF) to distinguish it from the distinct chromospheric inverse Evershed flow (e.g., St. John, 1911a,b; Choudhary and Beck, 2018; Beck and Choudhary, 2020). CEFs have also been observed in ideal magnetohydrodynamic simulations (MHD; Siu-Tapia et al., 2018).
Footnote 1: The term ‘radial’ is referring to the direction along the solar surface away from the center of the sunspot.
Louis et al. (2014) did one of the first specific analyses of a CEF. They reported a maximum line-of-sight velocity of 1.6 km s\({}^{-1}\), an area of 5.2 arcsec\({}^{2}\) (\(\sim\)2.6 Mm\({}^{2}\)), and a lifetime of 1 h for the single event they studied. These authors associated these flows with the evolution of the sunspot, which fragmented two days after the analyzed observations. Siu-Tapia et al. (2017) found that the global properties inside a CEF, such as temperature, magnetic field strength (\(B\)), and the line-of-sight velocity (\(v_{\rm LOS}\)) vary with height similarly to the properties in the parts of the penumbra displaying the normal Evershed flow. Nonetheless, at the umbra-penumbra boundary, magnetic fields with strengths of up to 8.2 kG and \(v_{\rm LOS}\gtrsim 15\) km s\({}^{-1}\) at optical depth unity (\(\tau=1\)) were reported (Siu-Tapia et al., 2019).
Recently, Castellanos Duran et al. (2021) reported that CEFs appear ubiquitously in all types of sunspots. These authors found almost 400 CEFs in their survey and documented different types of CEFs. In particular, they distinguished between those that appear in penumbrae bordering on regular umbrae, and those CEFs that are linked to light bridges.
When analyzing the different contributions in the momentum equation inside a simulated box from an MHD simulation, Siu-Tapia et al. (2018) confirmed that the normal Evershed flow is a result of the overturning of the hot material coming from the solar interior in the presence of an inclined magnetic field (Rempel et al., 2009; Rempel, 2011). The CEFs in the simulations are, according to Siu-Tapia et al. (2018), compatible with siphon flows, however. Penumbral siphon flows result from asymmetric heating inside the flux tube that produces the required difference in gas pressure to drive material along the arched magnetic tubes (Thomas and Montesinos, 1993; Montesinos and Thomas, 1997), although, in CEFs, the siphon flows point in the opposite direction to the normal Evershed flow.
Although the maintenance of CEFs during their steady phase, at least in the MHD simulations, can be explained by the siphon flow mechanism, it remains unclear what process leads to the formation of flows in the direction opposite to the Evershed flow. Possible candidates identified by observers are flux emergence (e.g., Louis et al., 2014, 2020), the adhesion of the penumbra of another spot after two spots merge (Siu-Tapia et al., 2017), as well as the association of granular and filamentary light bridges with CEFs (Castellanos Duran et al., 2021).
The evolution over time of CEFs is still barely known (cf. Louis et al., 2020). In contrast, the motion of another type of magnetic feature inside sunspot penumbrae has been the topic of numerous studies. The expulsion of so-called 'sea-serpent' magnetic field lines was observed mainly in the plage surrounding the sunspot, but also in the penumbra itself (Sainz Dalda and Bellot Rubio, 2008). These small, bipolar features have a filamentary structure; their lengths range between 2'' and 5'', with a mean width of 1.5''. They appeared in the mid-penumbra and are expelled radially outwards with velocities ranging from 0.3-0.7 km s\({}^{-1}\). Their lifetime ranges from 30 min up to 7 h. After the expulsion, these structures continue to travel in the moat up to 3-6'' away from the penumbral boundary into the surrounding plage region. The same authors suggested that these bipolar structures are moving U-loops driven by the Evershed flow and are the precursors of moving magnetic features (MMF; Harvey and Harvey, 1973; Zhang et al., 2003; Sainz Dalda and Martinez Pillet, 2005; Zhang et al., 2007). Also, the so-called Evershed clouds, prominent in proper motion studies, have been related to MMFs (Cabrera Solana et al., 2006).
The moat flow is a horizontal, radially outward oriented flow starting from the outer part of the penumbra and connecting the penumbral filaments with the quiet Sun (e.g., Sheeley, 1969; Vargas Dominguez et al., 2007, 2008; Strecker and Bello Gonzalez, 2018). The typical velocity of the moat outflow ranges between 0.8 and 1.4 km s\({}^{-1}\), and it vanishes abruptly at a distance from the outer penumbral boundary comparable to the width of the penumbra (Sobotka and Roudier, 2007; Lohner-Bottcher and Schlichenmaier, 2013).
In this work, we study the thermal and velocity conditions, magnetic field structure, and the temporal evolution of three CEFs observed in AR 10930 (solar cycle 23) and AR 11967 (solar cycle 24). Two of these CEFs are seen to be expelled radially outwards beyond the outer boundary into the moat of the main sunspot within the Active Region (AR). The host sunspots of these CEFs have been widely studied not only due to their peculiar flows, but also because they belong to ARs that harbored superstrong magnetic fields (Siu-Tapia et al., 2017; Okamoto and Sakurai, 2018; Siu-Tapia et al., 2019; Castellanos Duran et al., 2020), and AR 10930 hosted four large X-class flares. These solar flares are among the most studied and modeled X-class flares of solar cycle 23 (e.g., Wang et al., 2008; Schrijver et al., 2008; Gosain et al., 2009; Fan, 2011, 2016; Wang et al., 2022, and references therein).
In this study, we aim to characterize the temporal evolution of three CEFs. In particular, we analyze their appearance, evolution and expulsion, and describe the new magnetic configuration after their expulsion. In addition, we discuss the chromospheric response to the presence of CEFs.
This article is arranged as follows: Section 2 introduces the data and the applied inversion method to retrieve the physical conditions within the CEFs from spectropolarimetric data. Sections 3.1 and 3.2 describe the properties of the three studied CEFs. The appearance and expulsion of CEFs are presented in Sections 3.3 and 3.4. Section 3.5 illustrates the evolution of the magnetic regions that are left after the expulsion of CEFs. In Section 3.6, we describe the variation of \(B\) and \(v_{\rm LOS}\) within the CEFs. The chromospheric response to the presence of CEFs is presented in Section 3.7. In Section 4, we discuss our results and we conclude in Section 5.
## 2 Observations and Methods
### Data
We observed two sunspot groups from two different solar cycles. The sunspot group AR 10930 was followed for 8 days starting on 2006 December 8, and the sunspot group AR 11967 for 6 days starting from 2014 February 1. We analyzed spectropolarimetric observations taken by the Japanese Hinode mission launched in 2006 (Kosugi et al., 2007). The Spectro-Polarimeter (SP; Ichimoto et al., 2008) aboard Hinode measures the four Stokes parameters (\(I,Q,U,V\)) of the Fe i line pair around 6302 Å, with a spectral sampling of 21.5 mÅ. We analyzed 42 scans of AR 10930 and 32 of AR 11967 (hereafter SCANS-A(00-41) and SCANS-B(00-31), respectively). The spatial sampling along the slit and scan direction can be either 0\(\farcs\)16 (normal mode) or 0\(\farcs\)32 (fast mode) depending on the observing mode. Data were reduced using the nominal Hinode/SOT-SP pipeline sp_prep (Lites and Ichimoto, 2013). We also analyzed all the available photospheric \(G\)-band filtergrams and the chromospheric Ca ii H images taken by Hinode/SOT-BFI (Tsuneta et al., 2008), and the Stokes \(V\) maps from Hinode/SOT-NFI (Tsuneta et al., 2008) recorded in the intervals 2006 December 6 to 15 and 2014 February 1 to 6.
Figure 1: Temporal evolution of AR 10930 as observed by Hinode/SOT-SP. Time runs from top to bottom. Columns are the temperature, the magnetic field strength \(B\), \(v_{\rm LOS}\) and the inclination of the magnetic field in the line-of-sight. Contours show the inner and outer penumbra boundaries. The black and green arrows mark CEF-1 and CEF-2, respectively. White circles shown on the four bottom rows mark an intrusion into the umbra associated with CEF-2. See Figure 3 for a zoom-in of this intrusion. See also Animation 1 part of the online material.
We use the following nomenclature throughout the paper: Letters A and B are used to differentiate the Hinode/SOT-SP SCANS of AR 10930 and AR 11967, respectively. Notice that for AR 10930, the Hinode/SOT-SP scans covered the entire sunspot group; however, for AR 11967, many of the Hinode/SOT-SP scans focused only on the eastern group of sunspots. We restrict our analysis to the eastern group containing one of the CEFs, accounting for approximately \(\sim\)1/3 of the total sunspot area within AR 11967. The left columns in Figures 1 and 2 show a continuum image each of parts of AR 10930 and AR 11967 (see following sections for details). We use numbers 1 to 3 to mark the three CEFs analyzed in detail in this study.
In addition, we used data from the Solar Dynamic Observatory (SDO; Pesnell et al., 2012) taken by the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012; Schou et al., 2012). We analyzed the continuum intensity, Dopplergrams (\(v_{\rm LOS}\)), and magnetograms (\(B_{\rm LOS}\)) obtained at a spatial resolution of 1''. Two intervals were used, each with a different cadence and field-of-view (FOV). The first interval covered the entire passage of AR 11967 over the solar disk from 2014 January 28 at 20:00 UT to February 8 at 20:00 UT at a cadence of 12 minutes. The second interval started on 2014 February 1 at 04:00 UT and lasted until February 2 at 12:00 UT with the data taken at a cadence of 45 seconds. The FOV of the first dataset was cropped to cover the entire AR 11967, whilst the second FOV was cropped to cover the same region as observed by Hinode/SOT-SP, but extended to include the eastern moat of the main sunspot of AR 11967 (see Animation 2). Continuum maps were corrected for limb darkening following Castellanos Duran and Kleint (2020).
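We do not reproduce the exact recipe of Castellanos Duran and Kleint (2020) here; a generic version of such a correction normalizes the disk by a polynomial fit of intensity versus \(\mu\), along the following lines (a sketch under that assumption).

```python
import numpy as np

def correct_limb_darkening(intensity, mu, deg=5):
    """Normalize a full-disk continuum map by a polynomial fit of
    intensity vs. mu. Generic sketch only: a robust fit should first
    exclude active-region pixels before fitting."""
    on_disk = mu > 0.1                       # avoid the extreme limb
    coeffs = np.polyfit(mu[on_disk], intensity[on_disk], deg)
    corrected = np.full(intensity.shape, np.nan)
    corrected[on_disk] = intensity[on_disk] / np.polyval(coeffs, mu[on_disk])
    return corrected
```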
### Inversion scheme
To extract the physical information encoded in the Stokes profiles, we used the Stokes Profiles Inversion-O-Routines (SPINOR) inversion code (Frutiger et al., 2000). SPINOR builds on the STOkes PROfiles routines (STOPRO) that solve the radiative transfer equations for polarized light (Solanki, 1987). In the _traditional_ scheme, SPINOR (as well as other inversion codes commonly used in solar physics; e.g., the Stokes Inversion based on Response functions code (SIR; Ruiz Cobo and del Toro Iniesta, 1992), the He-Line Information eXtractor\({}^{+}\) code (HeLIx\({}^{+}\); Lagg et al., 2004, 2009), the HAnle and Zeeman Light code (HAZEL; Asensio Ramos et al., 2008), the Spectropolarimetric NLTE Analytically Powered Inversion code (SNAPI; Milic and van Noort, 2018)) inverts each pixel \((x,y)[I(\lambda),Q(\lambda),U(\lambda),V(\lambda)]\) within the FOV independently. However, these pixels are spatially coupled due to the action of the point spread function (PSF) of the telescope. Recently, the spatially-coupled concept has been extended in the STockholm inversion Code (STIC; de la Cruz Rodriguez et al., 2019) to account for simultaneous observations taken by different instruments with intrinsically different PSFs (de la Cruz Rodriguez, 2019).
For the data analyzed here, the pupil of Hinode/SOT with its central obscuration and the triangular spider produces a complex, radially non-symmetric PSF (Danilovic et al., 2008; cf. Figure 10 in van Noort, 2012). This complex PSF couples the information of neighboring pixels and needs to be taken into account when analyzing Hinode/SOT-SP observations. This was achieved when van Noort (2012) developed the spatially coupled scheme for inversions and implemented it in SPINOR (hereafter spatially coupled inversion), treating both the spectropolarimetric information and the inherent spatial degradation caused by the spatial PSF. This technique was later improved by applying it to finer, i.e., interpolated, spatial pixels, which yields better results (van Noort et al., 2013).
The spatially coupled inversions allowed us to obtain excellent fits to the observed Stokes profiles, while keeping a single depth-dependent atmospheric model when fitting different photospheric features (see e.g., van Noort et al., 2013; Tiwari et al., 2015; Castellanos Duran, 2022). The spatially coupled inversions of the Hinode/SOT-SP observations were carried out with a depth-stratified atmosphere with three node positions for the temperature, magnetic field strength, inclination and azimuth, and \(v_{\rm LOS}\), and a constant value for the microturbulence that accounts for the broadening of the spectral lines by unresolved turbulent motions. The spectral PSF is taken into account by convolving the synthetic spectra with the instrumental profile representing the spectral resolution of Hinode/SOT-SP (van Noort, 2012). The node positions were placed at \(\log\tau=(0,-0.8,-2.0)\) for AR 10930 following Siu-Tapia et al. (2017), and at \(\log\tau=(0,-0.8,-2.3)\) for AR 11967. Maps of the retrieved atmospheric conditions for these two sunspot groups are presented in Sections 3.1 and 3.2, as well as some examples of fits to the observed Stokes profiles.
When the spatial PSF of the optical system is known, the spatially coupled inversions can be used to estimate atmospheric conditions up to the telescope's diffraction limit. We upsampled the data by a factor of two before running the spatially coupled inversions to fit substructures that are below the spatial resolution of the telescope as recommended by van Noort et al. (2013). After final convergence of the spatially coupled inversion, we downsampled the retrieved atmospheric conditions and the best-fit profiles to the original sampling. Data upsampling and downsampling were performed in Fourier space.
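Fourier-space resampling of this kind amounts to zero-padding or cropping the centered transform. The sketch below (our illustration, not the SPINOR implementation) shows the idea for a 2-D map.

```python
import numpy as np

def fourier_resample(img, factor):
    """Resample a 2-D map by zero-padding (factor > 1) or cropping
    (factor < 1) its centered Fourier transform."""
    ny, nx = img.shape
    NY, NX = int(round(ny * factor)), int(round(nx * factor))
    F = np.fft.fftshift(np.fft.fft2(img))
    out = np.zeros((NY, NX), dtype=complex)
    sy, sx = min(ny, NY), min(nx, NX)
    oy, ox = (NY - sy) // 2, (NX - sx) // 2    # destination corner
    iy, ix = (ny - sy) // 2, (nx - sx) // 2    # source corner
    out[oy:oy + sy, ox:ox + sx] = F[iy:iy + sy, ix:ix + sx]
    # rescale so mean intensities are preserved after the inverse FFT
    return np.real(np.fft.ifft2(np.fft.ifftshift(out))) * (NY * NX) / (ny * nx)

demo = np.random.default_rng(1).normal(size=(64, 64))
up = fourier_resample(demo, 2.0)    # factor-of-two upsampling before inversion
down = fourier_resample(up, 0.5)    # back to the original grid afterwards
```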
Several Hinode/SOT-SP scans of all the CEFs analyzed in this work were taken at \(\mu\)-values larger than 0.8, allowing us to determine their polarity with reasonable accuracy without transforming the magnetic field into the local reference frame. Examples of observed Stokes profiles and their fits obtained with the spatially coupled inversions are shown in Figure A1. These profiles were chosen to show that even highly complex Stokes profiles are well modelled with our inversion scheme.
## 3 Results
### CEFs in AR 10930
The \(\delta\)-sunspot group AR 10930 contains two large colliding sunspots of opposite polarity, with the southern spot rotating rapidly counterclockwise. This active region hosted two CEFs, both in the penumbra of the main sunspot located in the north of AR 10930. The complexity and rotation of the sunspots within AR 10930 influence the evolution of the CEFs that it harbored (see below).

Figure 2: Same layout as Figure 1 for AR 11967. Black arrows indicate the location of CEF-3. See also Animation 2, which is part of the online material.

Figure 3: Zoom-in into the northern arrowhead-shaped region referred to as the tip of CEF-2 in the main text (marked by the arrows). Time runs from top to bottom. Columns are the temperature, the magnetic field strength \(B\), \(v_{\rm LOS}\) and the inclination of the magnetic field relative to the line-of-sight.
The first CEF (CEF-1) was observed on the north-west part of this sunspot and remained within the penumbra for 17 Hinode/SOT-SP scans recorded between 2006 December 8 at 06:11 UT (SCANS-A00, \(\mu=0.56\)) and 2006 December 10 at 21:00 UT (SCANS-A16, \(\mu=0.92\)). CEF-1 appeared as a red-shifted region within the center-side penumbra surrounded by the normal Evershed flow, which appeared blue-shifted when AR 10930 was located on the eastern hemisphere.
The second CEF (CEF-2) emerged on 2006 December 9 at 07:00 UT (SCANS-A08, \(\mu=0.76\)) and completely vanished on 2006 December 11 at 11:10 UT (SCANS-A20, \(\mu=0.98\)) before AR 10930 crossed the central meridional. CEF-2 appeared as an elongated, blue-shifted penumbral region enclosed by normal penumbra on the limb-side (i.e., the normal Evershed flow in that part of the penumbra was red-shifted). CEF-2 was located on the south-side of AR 10930. CEF-2 connected the main umbra of AR 10930 and a smaller umbra with opposite magnetic polarity. CEF-2 appeared like a normal Evershed flow, but oriented from the smaller umbra towards the bigger one, while on both sides of the CEF-2 the Evershed flow was dominated by the main umbra (which would be CEFs when viewed from the small umbra). This example shows the difficulty of distinguishing between the normal Evershed flow and a CEF in more complex ARs.
Figure 1 displays the temporal evolution of both, CEF-1 and CEF-2. Columns display from left to right the temperature, \(B\), \(v_{\rm LOS}\) and \(\gamma_{\rm LOS}\), all at the middle node.
The magnetic configurations of CEF-1 and CEF-2 were very different. CEF-1 had the same polarity as the main spot in AR 10930 close to the umbra-penumbra boundary and opposite polarity in the outer penumbra. CEF-2 had opposite polarity to the surrounding penumbrae. CEF-1 covered an area starting from the umbra-penumbra boundary to the quiet Sun. CEF-2 appeared as a thin elongated filamentary structure that grew until it formed a bridge between the main north positive umbra and the growing south negative umbra. To better display the temporal evolution of CEF-1 and CEF-2, we co-aligned the Hinode/SOT-SP scans with each other and present them as Animation 1 in the online material.
### CEF in AR 11967
Active region 11967 was one of the largest and most complex sunspot groups of solar cycle 24. We tracked AR 11967 for 11.1 days. During this period 19 CEFs were found at different parts of the sunspots belonging to this intricate active region. In this work, we focus only on one of these CEFs, which was co-observed by Hinode/SOT-SP. Hereafter we refer to this CEF as CEF-3 (Figure 2). CEF-3 was observed when AR 11967 was on the eastern hemisphere and it emerged as an intrusion in the penumbra with opposite polarity. CEF-3 was present in 9 out of 11 scans taken by Hinode/SOT-SP between 2014 February 1 at 10:42 UT (SCANS-B00, \(\mu=0.83\)) and 2014 February 2 at 10:20 UT (SCANS-B10, \(\mu=0.96\)). CEF-3 first appeared as two
Figure 4: Temporal evolution of the center of gravity of \(v_{\rm LOS}\) (\(R_{v_{\rm LOS}}\); black line), magnetic field strength (\(R_{B}\); dark gray line), and brightness (\(R_{L}\); light gray line), as well as the area (blue line; right axis) of CEF-1 (panel (a)) and CEF-3 (panel (b)). The vertical line on the left panel represents the time when CEF-1 is totally expelled from the penumbra of the main sunspot of AR 10930. After this time, panel (a) shows the location of the centers of gravity and area of the spot that formed at the location of where CEF-1 ended into the moat of AR 10930. The vertical lines on panel (b) mark the times when CEF-3 started to grow (\(t_{0}\), vertical solid line), when CEF-3 started to be expelled (\(t_{1}\), vertical gray dashed line), when the LOS magnetic field and velocity had their maximum (\(t_{2}\), vertical red dotted line), when the maximum area was reached (\(t_{3}\), vertical blue dash-dotted line), and when CEF-3 was totally expelled from the penumbra into the moat of AR 11967 (\(t_{4}\), vertical grey dashed line; see main text for details).
elongated penumbral filaments that grew and later merged (Figure 2, SCANS-B00 to B62). It had the opposite magnetic polarity compared to the surrounding penumbra and the umbra in its vicinity. CEF-3 expanded until it filled the entire length of the penumbra before it was expelled. Animation 2, showing the temporal evolution of CEF-3 as seen by SDO/HMI, is available as online material.
In AR 11967 there is another elongated blue-shifted region in the south-west of CEF-3 (see Figure 2, SCANS-B08 at \((40,20)\) Mm). This region is a widely studied bipolar light bridge (Okamoto and Sakurai, 2018; Castellanos Duran et al., 2020) that separates opposite polarity umbrae. Bipolar light bridges usually harbor bi-directional flows, which can be identified by velocities of alternating sign (Castellanos Duran, 2022). Consequently, the direction of flows inside these regions cannot be classified as either normal or counter Evershed flows.
### Appearance of the CEFs
Unfortunately there are no Hinode/SOT data during the appearance phase of CEF-1. For CEF-2 and CEF-3, we could follow their entire formation process. These two CEFs appeared as intrusions inside a fully formed penumbra without any merging with an external magnetic structure (see Figures 1 and 2), resembling the emergence of new magnetic flux at the solar surface. This appearance process of CEF-2 and CEF-3 is better seen in Animation 1 and Animation 2.
In addition, during the appearance phase of CEF-2, the northern edge of the penumbral filament that harbored CEF-2 showed a fairly distinctive behavior. As time progressed, it developed into an arrowhead-shaped intrusion of the penumbra towards the umbra. When the intrusion was fully formed, the umbra-penumbra boundary had shifted by \(\sim\)5 Mm towards the inner umbra. This region is encircled in Figure 1, centered at \((56,\,56)\) Mm. Figure 3 shows a zoom-in into this intrusion, revealing an enhanced \(B\) at its edges. The flow at the tip of the intrusion is directed opposite to CEF-2 but has the same direction as the normal Evershed flow at that location. Projection effects can be excluded as a reason for the opposite flow direction and polarity, as \(\mu\)\(\gtrsim\)0.8 and the tip of CEF-2 was located on the center side of the main sunspot of the group. The continuum images exhibit a continuous filamentary structure from the tip to the main body of CEF-2. The sign of the flow and of the magnetic field in this region is consistent with a downflow at this location, with the field being nearly vertical at the filament's head. The filament that harbored CEF-2 became more horizontal in the body and finally bent over to return into the Sun at the tail. At that location within the tip, strong fields were observed. When CEF-2 moved away from the umbra, the magnetic field returned to nominal penumbral values.
### Expulsion of the CEFs
After CEF-1 and CEF-3 grew to occupy almost the entire distance from the umbral boundary to the outer penumbral boundary, the entire region containing the CEFs started to move. The temporal evolution of these regions harboring CEFs shows a radially outward motion from the place they first appeared within the penumbra. They moved towards the outer boundary of the main sunspot of the group, parallel to the penumbral filaments. We can trace the location of the CEFs at all times, as the direction of \(v_{\text{LOS}}\) within them stayed opposite to the local normal Evershed flow of the surrounding penumbra. Hereafter, we refer to the outward motion of CEFs from the place they initially appear as their _expulsion_.
We used the available low-cadence Hinode/SOT-SP scans for CEF-1 and the SDO/HMI data for CEF-3 to estimate the apparent velocity of the expulsion of the CEFs through their proper motion. The limitation of SDO/HMI is its low spectral resolution; however, it provides continuous 45 s-cadence LOS velocity and magnetic field measurements, albeit at a single height (see Animation 2). For the two data sets, we masked the CEFs and calculated the location of the center of gravity \(R\) of a quantity \(F\) within the CEF as
\[R_{F}=\frac{\sum_{(i,j)\in A_{\text{CEF}}}F_{ij}\sqrt{(r_{0}-r_{ij})^{2}}}{\sum_{(i,j)\in A_{\text{CEF}}}F_{ij}}, \tag{1}\]
where \(A_{\text{CEF}}\) is the area covered by the CEF, \(i\), \(j\) identify pixels inside the CEF (identified using the \(v_{\text{LOS}}\) maps), and \(r_{0}\) is a reference point chosen at the closest umbra-penumbra boundary. By replacing the placeholder \(F\) by the parameters \(I_{\text{c}}\), \(B\) or \(v_{\text{LOS}}\), we obtained the centers of gravity of the brightness (\(R_{I_{\text{c}}}\)), of the magnetic field (\(R_{B}\)), and of the LOS velocity (\(R_{v_{\text{LOS}}}\)).
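For concreteness, a minimal NumPy sketch of Equation (1) is given below; the array layout, the boolean CEF mask, and the function name are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np

def center_of_gravity(F, cef_mask, r0):
    """F-weighted center of gravity of a CEF, following Eq. (1).

    F        : 2D map of the quantity (I_c, B or v_LOS)
    cef_mask : 2D boolean array marking the pixels of A_CEF
    r0       : (row, col) reference point at the umbra-penumbra boundary
    """
    rows, cols = np.nonzero(cef_mask)            # pixels (i, j) inside the CEF
    weights = F[rows, cols]
    dist = np.hypot(rows - r0[0], cols - r0[1])  # |r0 - r_ij| for each pixel
    return np.sum(weights * dist) / np.sum(weights)
```

Passing the \(I_{\text{c}}\), \(B\) and \(v_{\text{LOS}}\) maps in turn yields \(R_{I_{\text{c}}}\), \(R_{B}\) and \(R_{v_{\text{LOS}}}\).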
In Figure 4 we present the temporal evolution of \(R_{v_{\text{LOS}}}\) (black line), \(R_{B}\) (dark gray), and \(R_{I_{\text{c}}}\) (light gray). The blue line shows the temporal evolution of the area of the CEFs. Before CEF-1 and CEF-3 were expelled, \(R_{B}\) was located closer to the umbra, while \(R_{I_{\text{c}}}\) was located in the mid-penumbra. This displacement between the centers of gravity arises because the field strength increases towards the umbra, also inside the CEFs. When these CEFs started moving, the distance between the centers of gravity decreased until they coincided.
The horizontal velocity of expulsion for CEF-1 was on average \(65\,\mathrm{m}\,\mathrm{s}^{-1}\) (red line in Figure 4(a)). This horizontal velocity traces the proper motion of the entire CEF-1 on the surface of the Sun, and not the plasma flow velocities within the penumbral filaments harbored inside CEF-1. The vertical dashed line marks the time when the magnetic structure that forms CEF-1 leaves the penumbra and a new spot starts forming. The maximum area of CEF-1 inside the penumbra was \(24.7\,\mathrm{M}\mathrm{m}^{2}\). In addition, the decay of CEF-1 reveals that it is composed of individual strands harboring oppositely directed flows. While the center of gravity of CEF-1 moved smoothly radially outwards, an individual strand was observed moving with a speed ten times larger than the center-of-gravity velocity (see the middle row of Animation 1).
We marked the expulsion of CEF-3 at five different moments (see vertical lines in Figure 4(b)). The reference time is 2014 February 1 at 04:00 UT, which corresponds to the first frame of Animation 2 provided as online material. CEF-3 was sporadic (i.e., it appeared and disappeared) in the early
phase, which lasted for 2 hours until it reached an area of \(\sim\)3 Mm\({}^{2}\). Figure 4 starts at this time. At \(t_{0}=10\):05 UT the size of CEF-3 started to grow almost linearly in area at a rate of 130 km\({}^{2}\) s\({}^{-1}\). Approximately five hours later (\(t_{1}=14\):40 UT), CEF-3 started to be expelled with a horizontal velocity of 117 m s\({}^{-1}\). Its maximum magnetic flux density and maximum \(v_{\rm LOS}\) were reached at \(t_{2}=18\):50 UT, well before it reached its maximum area (13.1 Mm\({}^{2}\)) at \(t_{3}=20\):58 UT. This, again, is because the strongest fields and \(v_{\rm LOS}\) values are found at or close to the umbral boundary. The innermost part of CEF-3 reached the outer penumbral boundary at \(t_{4}=02\):30 UT on February 2. After \(t_{4}\), a new spot started forming in the moat of the original host sunspot. The flow inside the new spot, directed opposite to that of the adjacent penumbra, suggests that this spot formed from the same magnetic structure that previously formed CEF-3. The further evolution plotted in Figure 4 follows this spot.
CEF-2 also underwent dynamical changes and moved away from the region where it first appeared within AR 10930. However, a different mechanism seems to have been at work here. Recall that CEF-2 was located in between the north spot (main) and the south spot (satellite). On 2006 December 10, the satellite spot started to slowly rotate counterclockwise. The temporal evolution suggests that CEF-2 followed the counterclockwise rotation of the satellite spot, indicating that it was anchored in the satellite umbra and was stretched by the satellite spot's rotation until CEF-2 disappeared (see e.g., Figure 1). This stretching of CEF-2 can be seen in the bottom panels of Animation 1 provided as online material.
### What happens to the CEF magnetic structure after its expulsion?
During the expulsion of CEF-1, in the outer penumbra of the main spot of AR 10930, or just outside its boundary, a number of pores developed, which then coalesced to form a small umbra with a penumbra attached to it (Figure 5). Panel 5(i) shows CEF-1 when it was located inside the penumbra of the main spot. In panel 5(j) four small pore-like dark regions appear (black arrows). These regions seem to merge and form a complex structure, as shown in panels 5(k) and 5(l). In panel 5(m) the new feature has coalesced into an umbra that forms a penumbra on two sides, including the one facing the penumbra of the main spot. The flow inside the newly formed penumbra has the same direction as CEF-1 had when it was located inside the penumbra of the main spot. This flow pattern can be seen in the change from a redshifted patch when AR 10930 was on the eastern hemisphere (black arrows) to a blueshifted patch on the western hemisphere (green arrows). From the perspective of the small umbra, the flow running along the newly formed penumbra has the direction of the normal Evershed flow.
The newly formed region had the opposite magnetic polarity compared to the main spot. The polarity of the new spot could be unambiguously determined from the 19 Hinode/SOT-SP scans of AR 10930 that were taken close to disk center (\(\mu>0.9\)). The newly formed region also showed a slow counterclockwise rotation in the moat of the main spot of AR 10930. The penumbra of the newly formed spot reached its maximum area around 20 UT on December 11, before it started to decay to a small pore on December 14. This pore was present for at least six days, before disappearing behind the west limb. The full temporal evolution can be seen in the middle row of Animation 1, which is part of the
Figure 5: Maps of \(v_{\rm LOS}\) (top) and continuum intensity (bottom) during the expulsion of CEF-1 and after it left the penumbra. \(v_{\rm LOS}\) is clipped at \(\pm\)4 km s\({}^{-1}\). In the first two columns, AR 10930 was located on the eastern solar hemisphere, while in the last four columns AR 10930 is on the western hemisphere. This change in viewing geometry between hemispheres causes the normal Evershed flow to appear blueshifted and the CEF redshifted in panels (a) and (b), while in panels (e) to (h) this pattern is reversed. The time and heliocentric coordinates of each scan are marked at the top of each column. The full temporal evolution of CEF-1 is shown in the middle row of Animation 1, part of the online material.
online material. These observations suggest that the origin of the small spot is closely related to the magnetic structure that harbored CEF-1.
CEF-3 was expelled into a region where the penumbra appeared to be extended in a way suggesting a separate penumbra attached to the main penumbra of the spot (in particular, it suggests the same polarity and curvature of the field; see Animation 2). Once outside the main penumbra of the sunspot, CEF-3 appeared to form small patches of penumbra, moving radially outward from the host sunspot in the moat of AR 11967 and increasingly resembling an orphan penumbra (see Animation 2).
### B and \(v_{\rm LOS}\) inside the CEFs
The magnetic field strength and \(|v_{\rm LOS}|\) within the CEFs were taken from the spatially coupled inversion of the Hinode/SOT-SP data. The left and right columns of Figure 6 show the temporal evolution of the averaged \(B\) and \(|v_{\rm LOS}|\), respectively. The black lines are the averaged values within the region of interest (ROI) inside the sunspot group. We define the ROI as the full map displayed in Figures 1 and 2, where we masked out the quiet Sun (as the CEFs are present only in penumbrae) and the dark umbra (where the inversion results are less reliable due to blends with molecular lines that form at low umbral temperatures). From top to bottom, the color-coded lines are the averaged values of \(B\) and \(|v_{\rm LOS}|\) for CEF-1 to CEF-3. Color-coded marks indicate the different time stamps of the scans. For CEF-1 and CEF-2 some scans overlap because CEF-1 and -2 were partly present at the same time; hence some data points appear in both the top and middle panels.
The mean \(B\) value within the ROIs was on average \(\sim\)2 kG, and showed little variation over the course of the evolution of the active regions. In the case of CEF-1, superstrong fields were observed in the early stages, when the area of CEF-1 was largest and filled the entire penumbral sector. In this phase, CEF-1 reached the umbra-penumbra boundary (Figure 6(a)). In a later stage, CEF-1 showed moderate field strengths, similar to those observed in CEF-2 and -3, although CEF-2 also harbored individual pixels with field strengths reaching 6 kG.
For CEF-2 the strongest magnetic fields occurred at the time of its appearance, while the mean magnetic field of the spot was \(\sim\)3.2 kG (Figure 6(c)). Magnetic fields larger than 4 kG were seen inside CEF-2 during its formation
Figure 6: Temporal evolution of the magnetic field strength (left column) and \(|v_{\rm LOS}|\) (right column) at \(\tau_{5000\AA}=1\) inside the ROI in AR 10930 and AR 11967 and their CEFs. The black lines display the magnetic field strength and \(|v_{\rm LOS}|\) averaged over the entire ROI inside the sunspot group. Colored lines show the mean values within CEF-1 (top), CEF-2 (middle) and CEF-3 (bottom), while the colors indicate the Hinode/SOT-SP scan times, starting from red and progressing to blue. The light-blue curve in each panel (referring to the right axis) indicates the \(\mu-\)values of the scans.
phase. The mean \(B\) remained at a high value until about 20 hr after its appearance. Thereafter it decreased.
The magnetic evolution of CEF-3 is different compared to the other two CEFs. The mean value of \(B\) inside CEF-3 oscillated around \(\sim\)1.9 kG (Figure 6(e)). The general trend of decreasing mean field strength with time, as seen for CEF-1 and CEF-2, is not visible in CEF-3.
The \(v_{\rm LOS}\) values strongly depend on the projection (\(\mu\)-value), and therefore we do not compare their values one-to-one between different scans, but rather provide a qualitative description of their evolution. For scans observed close in time, the \(\mu\)-variation between scans is small, which allows us to describe roughly the temporal evolution of the line-of-sight velocity.
The temporal evolution of the line-of-sight velocity shows that CEF-1 harbored considerably larger \(|v_{\rm LOS}|\) values than the other two CEFs (Figure 6, right column). Particularly during the early scans, CEF-1 was characterized by supersonic \(|v_{\rm LOS}|\); the photospheric sound speed typically lies in the range \(c_{s}\) \(\sim\) 6-8 km s\({}^{-1}\). These fast \(|v_{\rm LOS}|\) were co-temporal and roughly co-spatial with the superstrong magnetic fields found in CEF-1. In the late stages of CEF-1, the velocities returned to nominal penumbral values. CEF-2 and CEF-3 showed mainly low \(|v_{\rm LOS}|\) values, with CEF-2 having a few points with clearly supersonic flows (roughly similar in number to the points having \(B>4\) kG).
The early superstrong fields in CEF-1 were located in the same pixels as those first reported by Siu-Tapia et al. (2017, 2019). These strong magnetic fields within CEF-1 stayed mostly close to the umbra-penumbra boundary at all times (Figure 7). The number of pixels with strong fields decreased along with their maximum field strength (Figure A1) at the time when CEF-1 lost contact with the umbra. After the complete expulsion of CEF-1, the magnetic field strength, as well as the other atmospheric parameters in the patch of penumbra that had previously hosted it, returned to typical penumbral values (see e.g., Figure 1, SCANS-A14).
Figures 3.4 and 3.7 of Castellanos Duran (2022) show the distributions of \(B\) and \(|v_{\rm LOS}|\) inside these three CEFs and how they vary over time. As discussed previously, those figures show a high number of pixels with strong magnetic fields and fast LOS velocities when CEF-1 and CEF-2 were in contact with the umbra-penumbra boundary. CEF-3 did not touch the umbra-penumbra boundary, and strong magnetic fields were not present at any time on the side of CEF-3 closer to that boundary.
### Chromospheric response above the CEFs
While the continuum images of the CEFs look very similar to the normal penumbra, the chromosphere above these structures is much more dynamic. The chromospheric images in the Ca II H line taken by Hinode/BFI show brightening events that are co-spatial with the CEFs or appear at their boundaries (cf. Louis et al., 2014). These brightening events were observed repeatedly. To quantify this chromospheric activity, we calculated the radiative flux in the Ca II H line within three circular sectors for AR 10930, which hosted CEF-1 and CEF-2. The aperture of these sectors is 90\({}^{\circ}\) with a radius of 36\({}^{\prime\prime}\). We selected the areas to be of the same size for an unbiased comparison. The aperture and radius of the sectors were chosen to fully contain the CEFs during all phases of their evolution, covering also the strong elongation of CEF-2. In addition, tests were performed by varying the aperture and radius of the circular sectors (not shown). The similarity of the results obtained suggests that the discussion below does not depend on the selection of the sectors.
Figure 8 displays the temporal evolution within the three sectors, color-coded blue for CEF-1, green for CEF-2, and orange for a control region containing only typical penumbra without any CEF. The three light curves (h), (i), and (k) are normalized by dividing by the area inside the circular sector and by the averaged quiet-Sun intensity. Since we
Figure 7: Location of the strong magnetic field in CEF-1. The two rows show the temperature (top) and magnetic field strength (bottom) at \(\tau=1\). Contours mark the umbra-penumbra boundary (yellow) and CEF-1 (blue).
are interested in quantifying the brightenings, i.e., short peaks in the light curve, rather than the long-term evolution of the sunspot group, we fitted the background with a 10th-order polynomial and subtracted this fit from the light curve. We also included the GOES 1-8 Å flux showing the soft X-ray activity integrated over the entire solar disk. The light curves of the two CEF regions indeed showed enhanced chromospheric emission. Examples of associated brightenings appearing above or next to the location of CEFs range from small events (Figure 8(b), (c), (l), (n)) to a _large_ C-class flare seen in soft X-rays (Figure 8(g)).
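A minimal sketch of this normalization and detrending step, assuming a uniformly sampled light curve, is shown below; the variable names are illustrative, not the actual reduction code.

```python
import numpy as np

def detrended_light_curve(flux, sector_area, quiet_sun, order=10):
    """Normalize a Ca II H light curve and subtract the slowly varying
    background so that short brightenings stand out as peaks."""
    norm = flux / (sector_area * quiet_sun)        # normalized light curve
    t = np.arange(norm.size)
    background = np.polyval(np.polyfit(t, norm, order), t)
    return norm - background
```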
A similar analysis was carried out for CEF-3. Small brightening events were also observed above CEF-3 (see Figure 9); however, their frequency and intensity were lower than those of the high chromospheric activity above CEF-1 and CEF-2. The complex magnetic topology of AR 11967 and the continuous chromospheric activity all over AR 11967 make the chromospheric activity above CEF-3 only a minor contributor.
## 4 Discussion
We analyzed the photospheric properties inside three CEFs using spatially coupled inversions. We also considered the influence of the CEFs on the chromosphere. We followed the temporal evolution of the CEFs by inverting all the available spectropolarimetric maps taken by Hinode/SOT-SP of the sunspot groups harboring them. The response at chromospheric heights above the CEFs was characterized using the filtergraph images in the Ca ii H line. Table 1 summarizes the properties of the three CEFs analyzed. We found that the CEFs are expelled radially outwards from the location within the main sunspot of the group where they emerged, into the moat of the sunspot, at a velocity of \(\sim\)100 m s\({}^{-1}\). To our knowledge there is just one report showing the expulsion of a CEF (Kleint & Sainz Dalda, 2013); however, that study focused on the so-called _umbral filaments_ and did not provide further information about the CEF beyond its movement.
Figure 8: (a): Filtergram image of the chromospheric Ca ii H lines. (b)-(g): Examples of the chromospheric brightenings above CEF-1. (h), (i) and (k): Light curves of the Ca ii H mean intensity inside the three circular-sectors enclosing CEF-1 (blue), CEF-2 (green) and a control region without CEF (orange) marked in panel (a). See main text for how the circular sectors were selected. (j): GOES light curve at 1-8 Å. GOES classes A to C represent the X-ray flux integrated over the entire Sun in logarithmic scale ranging from 10\({}^{-8}\) to 10\({}^{-6}\) W m\({}^{-2}\), respectively. Images (b)-(g) and (l)-(q) show examples of brightening events observed above CEF-1 ((b)-(g)) and CEF-2 ((l)-(q)). Locations of CEFs are marked with yellow contours. The time is marked with respect to the light curves in panels (h) and (k). Images are normalized to the averaged quiet-Sun intensity and their dynamic ranges are shown on the bottom left of each panel.
The analyzed CEFs appear to be the result of two different processes. Although there were no Hinode/SOT data available during the appearance phase of CEF-1, Hinode/SOT-BFI and -NFI images of December 6 and early December 7, when AR 10930 appeared on the east limb and before CEF-1 was formed, were available. Siu-Tapia et al. (2017) suggested that CEF-1 resulted from the coalescence of a satellite spot with the main spot, which inherited the penumbra of the satellite spot. For CEF-2 and CEF-3 it was possible to follow their entire formation process. These two CEFs appeared as intrusions within a fully developed penumbra without merging with any external magnetic structure already visible on the solar surface. These intrusions mimic the appearance of new magnetic flux at the surface of the Sun. Similar emergence-like CEFs were observed in MHD simulations (Chen et al., 2017).
Using MHD simulations, Siu-Tapia et al. (2018) proposed that CEFs can be driven by siphon flows. The gas pressure difference required to drive these flows can originate from any process that leads to a field strength enhancement at the endpoint of the flow. For a CEF, this is at the boundary between the umbra and penumbra. Such field strength enhancements were indeed observed for CEF-1 and CEF-2, making the siphon flow a possible driver of these two flows. However, for CEF-3, no such field strength enhancement was observed.
The CEFs showed a slightly different inclination relative to the surrounding penumbrae. This indicates that the reversed flow direction, which is the signature of CEFs, is associated with and likely driven by a somewhat different magnetic structure. Indeed, this is consistent with the finding of Siu-Tapia et al. (2018) that CEFs are driven by a siphon flow, while the normal Evershed flow is not.
CEF-1 (Animation 1) and CEF-3 (Animation 2) travelled radially outwards through the penumbra. When these CEFs reached the outer boundary of the penumbra of the main spot, a satellite spot started forming. The Evershed flow of this newly formed spot was originally the CEF of the main spot and did not change its flow direction (Figure 5) when detaching from the main sunspot. This could suggest that the newly formed spot belonged to the same magnetic structure that formed the CEFs inside the penumbra of the main spot. The newly formed spot continued traveling into the moat of the main sunspot up to a distance similar to the length of the adjacent penumbra of the main sunspot. This distance coincides with the typical extension of the sunspot's moat, which often has the width of the penumbra in its vicinity (e.g., Brickhouse and Labonte, 1988; Sobotka and Roudier, 2007). After the new spot travelled this distance, it stopped its radially outward motion. During this process, the new spot started decaying, losing its penumbra in the process (Figure 5l).
There is evidence that connects the Evershed flow and the moat flow as its extension (Vargas Dominguez et al., 2007, 2008; Rempel, 2011). Also, there are previous reports of magnetic structures moving radially outwards from the penumbra into the moat, such as the expulsion of sea-serpent features and Evershed clouds (Rimmele, 1994; Sainz Dalda and Bellot Rubio, 2008; Cabrera Solana et al., 2006, 2007, 2008). Sea-serpent features and Evershed clouds have (proper motion) expulsion speeds of \(\sim\)300-500 m s\({}^{-1}\). These expulsion speeds are faster than the expulsion speeds of CEF-1 (\(\sim\)65 m s\({}^{-1}\)) and CEF-3 (\(\sim\)117 m s\({}^{-1}\)). The mean areas of sea-serpent features and Evershed clouds (\(\sim\)1.2-2.5 Mm\({}^{2}\)) tend to be smaller than the areas covered by CEF-1 (\(\sim\)10-25 Mm\({}^{2}\)) and CEF-3 (\(\sim\)2-13 Mm\({}^{2}\)). For all the features, the direction of the expulsion is parallel to the Evershed flow direction at this location. This suggests that the expulsion speed of a feature depends on its area, although the statistics are rather poor. We speculate that this may reflect a common mechanism responsible for the expulsion. This mechanism could be related to the Evershed flow itself, accelerating smaller features to higher velocities than larger ones. One possible test of this scenario would be to use the large sample of CEFs presented by Castellanos Duran et al. (2021). The sample covers a wide range of CEF areas. A common expulsion mechanism may show up in a correlation of the areas of CEFs with their expulsion speeds.
The process leading to the expulsion of CEF-2 appears to be different from that affecting CEFs 1 and 3. The temporal evolution of CEF-2 suggests that its disappearance is caused by the rotation of the satellite spot. CEF-2 was anchored in the satellite umbra and subsequently stretched by the satellite spot's rotation until it disappeared (Figure 1). Two studies found that the total rotation of the satellite spot in AR 10930 between 2006 December 10 and 2006 December 13 is 240\({}^{\circ}-440^{\circ}\)(Zhang et al., 2007; Minoshima et al., 2009). The rotation velocity of the spot increased almost linearly from \(\sim\)0.25\({}^{\circ}\) hr\({}^{-1}\) to \(\sim\)8\({}^{\circ}\) hr\({}^{-1}\)(Figure 8(c) of Min and Chae, 2009) at the time when CEF-2 vanished.
CEF-1 and CEF-2 showed downflows co-spatial with strong magnetic fields. Strong \(B\)-values were always present at the umbra-penumbra boundary as long as CEF-1 was in contact with it. The area covered by strong magnetic
Figure 9: Examples of chromospheric brightenings observed in the Ca ii H line above CEF-3. The dynamic range of the images, normalized to the averaged quiet-Sun intensity, is given in the bottom part of each panel. Yellow contours mark the location of CEF-3 in the underlying photosphere.
fields and the maximum field strengths within these areas decreased when CEF-1 lost contact with the umbra. After the complete expulsion of CEF-1, the magnetic field strength and other atmospheric conditions in the same penumbral patch returned to normal. In the case of CEF-2, the gas flowing towards the main umbra was compressed by the strong field at the boundary of the umbra. The compression subsequently amplified \(B\) and \(v_{\rm LOS}\) to the observed high values in CEF-2. As with CEF-1, the magnetic field and \(v_{\rm LOS}\) returned to nominal penumbral values after the expulsion of CEF-2 (cf. Figures 3.5 and 3.7 of Castellanos Duran 2022). The strong fields inside CEFs 1 and 2 could be related to the so-called "magnetic barrier" (van Noort et al., 2013) as proposed for CEF 1 by Siu-Tapia et al. (2019). This mechanism was first proposed to explain the superstrong fields found at the endpoints of penumbral filaments. In the case of CEFs 1 and 2, the material flowing in a penumbral filament towards the umbra is forced to descend rapidly because of the presence of the strong umbral field acting as the magnetic barrier and hindering the flow from continuing. The magnetic barrier scenario also explains why \(B\) and \(v_{\rm LOS}\) returned to nominal values after the CEFs moved away from this barrier.
CEF-3 harbored strong fields of up to 5 kG located at the endpoints of the penumbral filaments, similarly to the observations by van Noort et al. (2013). Contrary to CEF-1 and CEF-2, CEF-3 emerged \(\sim\)1'' away from the umbra-penumbra boundary. Therefore, no compression towards the umbra occurred there.
In concordance with previous works (e.g., Kleint, 2012; Kleint & Sainz Dalda, 2013; Louis et al., 2014, 2020), our data show many flares and brightenings associated with CEFs (Figures 8 and 9). In addition, we also found increased chromospheric activity that appears to depend on how far the inner part of the CEF is located from the umbra-penumbra boundary. Thus, CEFs 1 and 2 that reach this boundary show considerably higher activity than CEF-3.
The combination of the shear induced by the rotation of AR 10930 and the complexity of the polarity inversion line (PIL) was proposed to be crucial for triggering the X3.4 flare (SOL20061213T02:40; e.g., Kubo et al., 2007; Wang et al., 2008; Schrijver et al., 2008; Jing et al., 2008; Lim et al., 2010; Gosain & Venkatakrishnan, 2010; Fan, 2011; Ravindra et al., 2011; Inoue et al., 2011, 2012; He et al., 2014; Wang et al., 2022). However, to our knowledge, previous studies neglected the opposite direction of the flow along the penumbral filaments at the location where the major flare was triggered. CEF-2 appeared in the middle of the penumbra and was then dragged/expelled with a rotation rate of 4\({}^{\rm o}\) hr\({}^{-1}\)(Min & Chae, 2009) by the south satellite spot in AR 10930. The remnants of CEF-2, visible in the \(v_{\rm LOS}\) column of Animation 1, coincide exactly with the location at the PIL which previous studies recognized as the region where this major flare was triggered. The presence of various oppositely directed flows, remnants of CEF-2, in this region adds an extra factor to the complexity of the PIL and might therefore be another ingredient in triggering this X-class flare.
## 5 Summary and Conclusions
In this study, we analyzed three CEFs observed in two sunspot groups. We investigated their temporal evolution and their chromospheric impact. In the following, we summarize the main results of our study:
* CEFs first appear close to or at the umbra-penumbra boundary and they grow until they reach the outer penumbral boundary.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
ID & CEF-1 & CEF-2 & CEF-3 \\ \hline
NOAA AR ID & 10930 & 10930 & 11967 \\
Date first observed & 2006 Dec 8 & 2006 Dec 7 & 2014 Feb 01 \\
SP/SCAN ID & A & A & B \\
Observed in SP/SCANS & 00-16 & 08-20 & 00-10 \\
SDO/HMI & No & No & Yes \\
Lifetime\({}^{a}\) (hr) & 50 & 49 & 24 \\
\(\mu\) range & 0.56-0.92 & 0.76-0.98 & 0.83-0.96 \\
Maximum area\({}^{b}\) (Mm\({}^{2}\)) & 24.7 & 12.3 & 17.7 \\
Opposite polarity\({}^{c}\)? & No & Yes & Yes \\
Max \(B(\log\tau=0)\) (kG) & 8.4 & 6.7 & 5 \\
Max \(v_{\rm LOS}(\log\tau=0)\) (km s\({}^{-1}\)) & 22.2 & 8.2 & 12.1 \\
Location strongest \(B\) & UPB\({}^{d}\) & UPB\({}^{d}\) & EPPF\({}^{e}\) \\
New spot formed? & Yes & No & Yes \\
Ejection mechanism & Radial\({}^{f}\) & Rotation AR & Radial\({}^{f}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Properties of the three expelled CEFs. \({}^{a}\)Lifetime inside the penumbra. \({}^{b}\)Maximum area of the CEF. \({}^{c}\)Opposite polarity of the CEF with respect to the main spot. \({}^{d}\)UPB: umbra-penumbra boundary. \({}^{e}\)EPPF: end-point of a penumbral filament. \({}^{f}\)Radially outward towards the moat of the AR.
* Two different processes can explain the formation of the three CEFs that are part of this study. For CEF-1, Siu-Tapia et al. (2017) suggested that it could have resulted from the coalescence of a satellite spot and the main umbra. In contrast, CEF-2 and CEF-3 appeared as intrusions within a fully formed penumbra, independent of visible external magnetic structures (cf. Louis et al., 2014, 2020; Guglielmino et al., 2017). This behavior is compatible with the emergence of sub-surface magnetic flux within the penumbra, as discussed for a simulated, still-forming spot (Chen et al., 2017). In these circumstances, CEFs are related to new flux (either emerging directly in the penumbra or just outside it). However, the CEFs studied here are within mature spots.
* After a growth phase, CEFs 1 and 3 are seen to start moving parallel to the penumbral filaments. When they reach the outer part of the penumbra, a new spot starts forming in the moat of the main sunspot. The direction of the flow inside the penumbra of the newly formed spot is the same as in the CEFs and opposite to the adjacent penumbra of the main spot. This provides strong circumstantial evidence for a linkage between the CEFs and the newly formed spots.
* In the moat, the newly formed spot reached a maximum distance from the penumbra at the outer boundary of the moat flow.
* The expulsion speeds observed for CEF-1 and -3 in the penumbra are lower than those of Evershed clouds (Cabrera Solana et al., 2006) and sea-serpent magnetic features (Sainz Dalda & Bellot Rubio, 2008). Considering that CEFs are typically larger features (covering a larger area), one possible explanation is that these speeds depend on the size of the features. These photospheric features are often seen moving parallel to the penumbral filaments, similar to CEF-1 and CEF-3. Common to all of them (CEFs, Evershed clouds, and sea-serpent features) is the presence of the normal Evershed flow surrounding these features and parallel to the direction of the expulsion.
* Siu-Tapia et al. (2017, 2019) showed for one Hinode/SOT-SP scan that the superstrong \(B\) observed in CEF-1 was associated with these flows directed towards the umbra, and that it was located mainly at the umbra-penumbra boundary. We confirm the presence of the superstrong fields in several Hinode/SOT-SP scans at different \(\mu\)-values. This makes the alternative interpretation of a strongly Doppler-shifted component as a magnetic component of strongly Zeeman-split spectral lines less likely (cf. Castellanos Duran et al., 2020). The temporal evolution of these superstrong \(B\) showed that as soon as the expulsion of CEF-1 began and the contact with the umbra was lost, the maximum field strength dropped. This supports the interpretation of Siu-Tapia et al. (2019) that the origin of the superstrong fields in AR 10930 is related to compression at the magnetic barrier formed by the umbral field (van Noort et al., 2013).
* The expulsion mechanism of CEF-2 is influenced by the complex evolution of AR 10930, and it is completely different from that of CEFs 1 and 3. CEF-2 was apparently dragged and subsequently stretched by the rotation of the satellite spot with a rotation rate of \(\sim\)4\({}^{\circ}\) hr\({}^{-1}\).
Observations have identified three physical processes that can lead to CEF formation: flux emergence (e.g., Louis et al., 2014, 2020), adhesion of the penumbra from another spot after merging (Siu-Tapia et al., 2017), and the association of granular and filamentary light bridges with CEFs (Castellanos Duran et al., 2021). Further observations of CEFs and analyses of the deeper layers using simulated CEFs are needed to gain insight into the physical mechanisms responsible for their formation and maintenance.
A total of 19 CEFs were identified in AR 11967; however, in this study we focused on only one of them, for which multiple Hinode/SOT-SP observations were available. These 19 CEFs come on top of the 387 CEFs already reported by Castellanos Duran et al. (2021). An analysis of the known \(\sim\)400 CEFs could form the basis of an in-depth statistical study of CEF properties and evolution, to enhance not only our understanding of the nature of CEFs themselves, but also of their impact on sunspot dynamics and on the layers above CEFs.
In addition, the combination with new observations, in particular stereoscopic observations combining Hinode or SDO/HMI with SO/PHI (Solanki et al., 2020) onboard Solar Orbiter (Müller et al., 2020), will allow determining two components of the velocity vector instead of only the line-of-sight component. This will provide us with the necessary additional information to better understand CEFs.
## Acknowledgments
We would like to thank the anonymous referee for careful reading and suggestions that improved the quality of the manuscript. J. S. Castellanos Duran was funded by the Deutscher Akademischer Austauschdienst (DAAD) and the International Max Planck Research School (IMPRS) for Solar System Science at the University of Göttingen. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 695075). Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and UKSA as international partners. It is operated by these agencies in cooperation with ESA and NSC (Norway). SDO is a mission of NASA's Living With a Star program.
Figure A1: Examples of observed Stokes profiles and the location of the strong fields within CEF-1 as a function of time. Highly complex Stokes profiles were chosen to demonstrate the quality of the fits. The first two columns display the \(v_{\rm LOS}\) and continuum maps of CEF-1, where CEF-1 can be identified as a red patch in the first column. Time runs from the top row to the bottom. Green, blue, and yellow contours in columns 1 and 2 mark the locations harboring fields stronger
than 3.5 kG, 4 kG, and 5 kG, respectively. The number of pixels inside the green contours (N\({}_{\rm c}\)) are displayed in the second column. Columns 3 to 6 in Figure A1 show examples of observed Stokes profiles (gray open circles) and the fits using the spatially coupled inversions (blue lines). Notice that despite the high complexity of the observed Stokes profiles at these locations, the spatially coupled inversions obtained remarkably good fits (e.g., Castellanos Duran et al., 2020).
Animation 1: Temporal evolution of CEF-1 and CEF-2 as seen by Hinode/SOT-SP. This animation is composed of nine panels that mainly show the expulsion of CEF-1 and CEF-2. The columns display the continuum intensity, the magnetic field strength (clipped below 1 kG and above 5 kG), the LOS velocity (clipped between \(\pm 3\) km s\({}^{-1}\)), and the LOS inclination of the magnetic field. The top row shows the full AR 10930, while the second and third rows present close-ups of CEF-1 and CEF-2 (black arrows). The cadence of the animation varies depending on the availability of the Hinode/SOT-SP maps. The first frame starts on 2006 December 08 at 6:11 UT when AR 10930 was located in the solar eastern hemisphere at (-697'', -83''). The last frame was taken on 2006 December 15 at 13:02 UT when AR 10930 was located in the western hemisphere at (711'', -86''). The duration of the animation is 5 seconds.
Animation 2: Temporal evolution of CEF-3 as seen by SDO/HMI. The animation consists of four panels that show the expulsion of CEF-3. Panels display the continuum intensity (a), LOS magnetic field (b), LOS velocity (c), and the location of CEF-3 (d; enclosed by black contours). The field of view covers an area of \(\sim 50\times 50\) Mm. Thin contours in all panels mark the locations of the penumbra and umbra boundaries. The first frame starts on 2014 February 1 at 04:00 UT when the sunspot group was at (-540'', -130''). The last frame was taken on 2014 February 2 at 13:30 UT when AR 11967 was at (-220'', -125''). The cadence between images is 45 seconds. The duration of the animation is 27 seconds. For better visibility of the processes in the penumbra, we masked out umbral pixels in panels (b) and (c).
|
2302.14286 | HugNLP: A Unified and Comprehensive Library for Natural Language
Processing | In this paper, we introduce HugNLP, a unified and comprehensive library for
natural language processing (NLP) with the prevalent backend of HuggingFace
Transformers, which is designed for NLP researchers to easily utilize
off-the-shelf algorithms and develop novel methods with user-defined models and
tasks in real-world scenarios. HugNLP consists of a hierarchical structure
including models, processors and applications that unifies the learning process
of pre-trained language models (PLMs) on different NLP tasks. Additionally, we
present some featured NLP applications to show the effectiveness of HugNLP,
such as knowledge-enhanced PLMs, universal information extraction, low-resource
mining, and code understanding and generation, etc. The source code will be
released on GitHub (https://github.com/wjn1996/HugNLP). | Jianing Wang, Nuo Chen, Qiushi Sun, Wenkang Huang, Chengyu Wang, Ming Gao | 2023-02-28T03:38:26Z | http://arxiv.org/abs/2302.14286v1 | # HugNLP: A Unified and Comprehensive Library for Natural Language Processing
###### Abstract
In this paper, we introduce HugNLP, a unified and comprehensive library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers, which is designed for NLP researchers to easily utilize off-the-shelf algorithms and develop novel methods with user-defined models and tasks in real-world scenarios. HugNLP consists of a hierarchical structure including models, processors and applications that unifies the learning process of pre-trained language models (PLMs) on different NLP tasks. Additionally, we present some featured NLP applications to show the effectiveness of HugNLP, such as knowledge-enhanced PLMs, universal information extraction, low-resource mining, and code understanding and generation, etc. The source code will be released on GitHub ([https://github.com/wjn1996/HugNLP](https://github.com/wjn1996/HugNLP)).
## 1 Introduction
Recently, pre-trained language models (PLMs) have become the essential infrastructure for a series of downstream natural language processing (NLP) tasks Devlin et al. (2019); Liu et al. (2019); Yang et al. (2019), bringing substantial improvements through a two-stage training strategy consisting of _pre-training_ and _fine-tuning_. Benefiting from this strategy, a branch of PLM methods has arisen to improve the models' effectiveness, promoting NLP's development in both academia and industry Liu et al. (2023); Hu et al. (2022).
Yet, as many existing approaches follow different patterns and code architectures, it is not easy for researchers to obtain high-performing models and to develop them further. To fill this gap, this paper presents HugNLP, a unified and comprehensive open-source library that allows researchers to develop and evaluate NLP models more efficiently and effectively. To reach this goal, we utilize HuggingFace Transformers 1 as the prevalent backend, which provides abundant PLM backbones at different scales. For training, we integrate the well-designed tracking toolkit _MLFlow_ 2 into the backend, which makes it convenient to observe experimental progress and records. HugNLP consists of some well-designed components, such as _Models_, _Processors_, and _Applications_. Concretely, 1) for _Models_, we provide some popular PLMs, including BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), DeBERTa He et al. (2021), GPT-2 Radford et al. (2019) and T5 Raffel et al. (2020), etc. Based on these PLMs, we develop task-specific modules for pre-training (e.g., masked language modeling (MLM), causal language modeling (CLM)) and fine-tuning (e.g., sequence classification and matching, span extraction, text generation). We also provide some prompt-based fine-tuning techniques that enable parameter-efficient tuning of PLMs, including PET Schick and Schutze (2021), P-tuning Liu et al. (2021), Prefix-tuning Li and Liang (2021), and Adapter-tuning Houlsby et al. (2019). 2) In _Processors_, we develop relevant data processing tools 3 for some commonly used benchmark datasets and business-specific corpora. 3) In _Applications_, we present core capacities to support the upper-level applications. Specifically, our proposed KP-PLM Wang et al. (2022) enables plug-and-play knowledge injection during model pre-training and fine-tuning by converting structured knowledge into unified language prompts. We also develop HugIE, a universal information extraction toolkit built through instruction-tuning with extractive modeling (e.g., a global pointer) Su et al. (2022). HugNLP also integrates some novel algorithms and applications, such as uncertainty-aware self-training (Mukherjee
and Awadallah, 2020; Wang et al., 2023) and code understanding and generation (Feng et al., 2020; Wang et al., 2021).
Overall, HugNLP has the following features.
* HugNLP offers a range of pre-built components and modules (i.e., _Models_, _Processors_, _Applications_) that can be used to speed up the development process and simplify the implementation of complex NLP models and tasks.
* HugNLP can also be easily integrated into existing workflows and customized to meet the specific needs of individual researchers or projects, ensuring the framework's scalability and flexibility.
* HugNLP is equipped with some novel core capacities, such as knowledge-enhanced pre-training, prompt-based fine-tuning, instruction and in-context learning, uncertainty-aware self-training, and parameter-efficient learning. We thus develop some featured products or solutions on real-world application scenarios, e.g., KP-PLM, and HugIE.
* HugNLP is based on PyTorch and HuggingFace, two widely used tools and platforms in the NLP community, allowing researchers to leverage their strengths and apply the library to different academic and industrial scenarios Qiu et al. (2021); Wang et al. (2022).
## 2 Background
### Pre-trained Language Models
The goal of a PLM is to learn semantic representations over unsupervised corpora via well-designed self-supervised learning tasks in the pre-training stage. Notable PLMs can be divided into three main types: encoder-only Devlin et al. (2019); Liu et al. (2019); He et al. (2021); Yang et al. (2019); Lan et al. (2020), decoder-only Radford et al. (2018); Brown et al. (2020); Zhang et al. (2022), and encoder-decoder Lewis et al. (2020); Raffel et al. (2020). However, these PLMs may lack background knowledge when applied to some task-specific scenarios. To solve this problem, a branch of knowledge-enhanced PLMs Zhang et al. (2019); Wang et al. (2021); Pan et al. (2022) has been proposed to capture rich factual knowledge from external knowledge bases. In addition, some recent large-scale PLMs (e.g., GPT-3 Brown et al. (2020)) enable few/zero-shot in-context learning with language prompts or instructions. Thus, we can leverage cross-task learning to unify semantic knowledge from different NLP tasks.
### Fine-tuning for PLMs
A large number of applications in real scenarios focus on how to fine-tune a PLM to transfer the prior knowledge derived from the general domain to downstream task-specific domains Xu et al. (2020); Wang et al. (2018). We integrate task-oriented fine-tuning methods to allow users to develop and evaluate PLMs on different NLP tasks. We also implement some popular tuning algorithms to enable tuning in low-resource scenarios, such as prompt-tuning Liu et al. (2021) and in-context learning Brown et al. (2020).
## 3 HugNLP
### Overview
HugNLP is an open-source library with a hierarchical structure, as shown in Figure 1. The backend is the prevalent HuggingFace Transformers platform, which provides multiple transformer-based models and task trainers. In other words, HugNLP can be seen as a customized NLP platform for efficient training and evaluation. In addition, HugNLP integrates _MLFlow_, a novel tracking callback toolkit for model training and experiment result analysis. Users can simply add a single configuration parameter, tracking_uri, to the training script and observe the tracking records after starting the _MLFlow_ server.
HugNLP consists of three key components, including _Models_, _Processors_, and _Applications_. Users can directly select the pre-built settings for some common tasks, or develop special user-defined training solutions in real-world application scenarios. We will provide a detailed description in the following sections.
### Library Architecture
Models. In _Models_, we provide some popular transformer-based models as backbones, such as BERT, RoBERTa, GPT-2, etc. We also release our pre-built KP-PLM, a novel knowledge-enhanced pre-training model which leverages the _knowledge prompting_ paradigm Wang et al. (2022) to inject factual knowledge and can be easily applied to arbitrary PLMs. Apart from basic PLMs, we also implement some task-specific models, involving
sequence classification, matching, labeling, span extraction, multiple-choice, and text generation. Particularly, we develop standard fine-tuning (based on CLS Head 4) and prompt-tuning models 5 that enable PLM tuning on classification tasks. For few-shot learning settings, HugNLP provides a prototypical network Snell et al. (2017) for both few-shot text classification and named entity recognition (NER).
Footnote 4: For standard fine-tuning, we need to add a classification head (CLS head) on the PLM and obtain the probability distribution of each class. The parameters of the CLS head are randomly initialized.
Footnote 5: Different from fine-tuning, prompt-tuning reuses the pre-training objective (e.g., MLM, CLM) to perform classification on the masked token. It requires a task-oriented template (e.g., “It was [MASK].”) and a label word mapping (e.g., “great” maps to the “positive” class in a sentiment analysis task).
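As a minimal sketch of this prompt-tuning idea (not the exact HugNLP implementation), the snippet below classifies a sentence with an off-the-shelf masked LM using the template and label-word mapping from footnote 5; the backbone and the label words are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

text = "The movie was fantastic from start to finish."
prompt = f"{text} It was {tokenizer.mask_token}."       # task-oriented template
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

# Verbalizer: label words mapped to classes (leading space matters for RoBERTa).
label_words = {"positive": " great", "negative": " terrible"}
scores = {
    label: logits[0, mask_pos, tokenizer.encode(w, add_special_tokens=False)[0]].item()
    for label, w in label_words.items()
}
print(max(scores, key=scores.get))                      # -> "positive"
```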
In addition, we incorporate some _plug-and-play utils_ in HugNLP. 1) _Parameter Freezing_. If we want to perform parameter-efficient learning Mao et al. (2022), which freezes some parameters in PLMs to improve training efficiency, we can set the configuration flag use_freezing to freeze the backbone. A use case is shown in Code 1. 2) _Uncertainty Estimation_ aims to calculate the model certainty in semi-supervised learning Mukherjee and Awadallah (2020). 3) We also design _Prediction Calibration_, which can be used to further improve accuracy by calibrating the distribution and alleviating the semantic bias problem Zhao et al. (2021).
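Code 1 is not reproduced here, but the sketch below shows what a flag like use_freezing plausibly does under the hood: freeze the PLM backbone so that only the randomly initialized CLS head is trained. The parameter-name prefix is specific to BERT-style models.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Freeze the backbone; only the randomly initialized CLS head stays trainable.
for name, param in model.named_parameters():
    if name.startswith("bert."):          # backbone prefix for BERT-style models
        param.requires_grad = False

print([n for n, p in model.named_parameters() if p.requires_grad])
# -> ['classifier.weight', 'classifier.bias']
```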
Processors. In _Processors_, HugNLP loads datasets and processes task examples in a pipeline comprising sentence tokenization, sampling, and tensor generation. Specifically, users can directly obtain the data through load_dataset, which either downloads the data from the Internet or loads it from the local disk. For different tasks, users should define a task-specific data collator, which transforms the original examples into model input tensor features.
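The sketch below illustrates such a task-specific data collator for sequence classification; the example field names ("text", "label") and the class itself are illustrative assumptions, not HugNLP's actual API.

```python
import torch
from transformers import AutoTokenizer

class ClassificationCollator:
    """Transforms raw examples into model input tensor features."""

    def __init__(self, tokenizer, max_length=128):
        self.tokenizer = tokenizer
        self.max_length = max_length

    def __call__(self, examples):
        batch = self.tokenizer(
            [ex["text"] for ex in examples],
            padding=True, truncation=True,
            max_length=self.max_length, return_tensors="pt")
        batch["labels"] = torch.tensor([ex["label"] for ex in examples])
        return batch

collator = ClassificationCollator(AutoTokenizer.from_pretrained("bert-base-uncased"))
features = collator([{"text": "a great movie", "label": 1},
                     {"text": "a boring plot", "label": 0}])
```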
Figure 1: An overview of the HugNLP library.
Applications. It provides rich modules for users to build real-world applications and products by selecting among an array of settings from _Models_ and _Processors_. More details are shown in Section 3.4.
### Core Capacities
To further improve the effectiveness of HugNLP, we design multiple core capacities in the following.
Knowledge-enhanced Pre-training. Conventional pre-training methods lack factual knowledge Zhang et al. (2022); Pan et al. (2022). To deal with this issue, we present KP-PLM Wang et al. (2022) with a novel knowledge prompting paradigm for knowledge-enhanced pre-training. Specifically, we construct a knowledge sub-graph for each input text by recognizing entities and aligning them with a knowledge base (e.g., Wikidata5M 6), and decompose this sub-graph into multiple relation paths, which can be directly transformed into language prompts. KP-PLM can be easily applied to other PLMs without introducing extra parameters as knowledge encoders.
Footnote 6: [https://deepgraphlearning.github.io/project/wikidata5m](https://deepgraphlearning.github.io/project/wikidata5m).
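The sketch below illustrates the core verbalization step of knowledge prompting, turning one relation path of the sub-graph into a language prompt; the template wording and relation names are illustrative assumptions, not KP-PLM's actual templates.

```python
def path_to_prompt(head_entity, relation_path, templates):
    """Verbalize one relation path of the knowledge sub-graph as a prompt."""
    clauses, head = [], head_entity
    for relation, tail in relation_path:
        clauses.append(templates[relation].format(h=head, t=tail))
        head = tail                        # chain the hops along the path
    return " ".join(clauses)

templates = {
    "country_of_citizenship": "{h} is a citizen of {t}.",
    "capital": "The capital of {h} is {t}.",
}
prompt = path_to_prompt(
    "Marie Curie",
    [("country_of_citizenship", "Poland"), ("capital", "Warsaw")],
    templates)
# -> "Marie Curie is a citizen of Poland. The capital of Poland is Warsaw."
```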
Prompt-based Fine-tuning. Prompt-based fine-tuning aims to reuse the pre-training objective (e.g., MLM) and utilizes a well-designed template and verbalizer to make predictions, which has achieved great success in low-resource settings. We integrate some novel approaches into HugNLP, such as PET Schick and Schutze (2021), P-tuning Liu et al. (2021), etc.
Instruction-tuning and In-Context Learning. Instruction-tuning Wei et al. (2022) and in-context learning Brown et al. (2020) enable few/zero-shot learning without parameter updates: task-aware instructions or example-based demonstrations are concatenated to prompt GPT-style PLMs to generate reliable responses. In this way, all NLP tasks can be unified into the same format, which can substantially improve the models' generalization. Inspired by this idea, we extend it to two other paradigms: 1) the extractive-style paradigm, where we unify various NLP tasks into span extraction, in the same way as extractive question answering Keskar et al. (2019), and 2) the inference-style paradigm, where all tasks are viewed as natural language inference to match the relations between inputs and outputs Wang et al. (2021).
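As a sketch of the extractive-style paradigm (the exact HugNLP instruction wording may differ), the snippet below casts an NER query into a span-extraction example:

```python
def to_extractive_format(instruction, passage, answer=None):
    """Cast a task into the extractive paradigm: the model reads an
    instruction plus a passage and extracts the answer as a span."""
    source = f"Instruction: {instruction} Passage: {passage}"
    example = {"input": source}
    if answer is not None:                 # supervised example with a gold span
        start = source.find(answer)
        example.update(start=start, end=start + len(answer))
    return example

ner_example = to_extractive_format(
    "Find the person entity in the passage.",
    "Marie Curie won the Nobel Prize in 1903.",
    answer="Marie Curie")
```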
Uncertainty-aware Self-training. Self-training can address the labeled-data scarcity issue by leveraging large-scale unlabeled data in addition to labeled data, and is one of the mature paradigms in semi-supervised learning Qi and Luo (2022); Chawla and Karakoulas (2005); Amini et al. (2022). However, standard self-training may generate too much noise, inevitably degrading model performance due to confirmation bias. Thus, we present uncertainty-aware self-training. Specifically, we train a teacher model on few-shot labeled data, then use the Monte Carlo (MC) dropout technique from Bayesian neural networks (BNNs) Gal and Ghahramani (2016) to approximate the model certainty, and judiciously select the examples for which the teacher model has high certainty.
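A minimal sketch of this selection step follows; it uses the variance of the sampled class probabilities as a stand-in certainty score, whereas the actual criterion follows Mukherjee and Awadallah (2020), so the details here are illustrative.

```python
import torch

def mc_dropout_select(teacher, batch, n_samples=10, keep_ratio=0.5):
    """Monte Carlo dropout as a cheap BNN approximation: run several
    stochastic forward passes and keep the pseudo-labeled examples with
    the lowest predictive variance (highest teacher certainty)."""
    teacher.train()                        # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(teacher(**batch).logits, dim=-1)
            for _ in range(n_samples)])    # (n_samples, batch, n_classes)
    uncertainty = probs.var(dim=0).sum(dim=-1)          # per-example score
    keep = uncertainty <= uncertainty.quantile(keep_ratio)
    pseudo_labels = probs.mean(dim=0).argmax(dim=-1)
    return pseudo_labels[keep], keep       # confident pseudo-labels + mask
```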
Parameter-efficient Learning. To improve the training efficiency of HugNLP, we also implement parameter-efficient learning, which freezes some parameters in the backbone so that only a few parameters are tuned during model training. We implement several novel parameter-efficient learning approaches, such as Prefix-tuning Li and Liang (2021), Adapter-tuning Houlsby et al. (2019), BitFit Zaken et al. (2022) and LoRA Hu et al. (2022), etc.
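As one concrete instance, the sketch below applies the BitFit recipe, training only the bias terms plus the task head of a BERT-style model; the parameter-name patterns are assumptions tied to this architecture, and HugNLP's own implementation may differ.

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# BitFit: train only the bias terms (plus the new task head); freeze the rest.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith(".bias") or name.startswith("classifier.")

n_train = sum(p.numel() for p in model.parameters() if p.requires_grad)
n_total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {n_train / n_total:.2%}")    # well below 1%
```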
Figure 2: An application case of HugIE.
### Featured Applications
Benchmark Tuning. We develop the training application for some popular benchmarks, such as Chinese CLUE and GLUE. We use both standard fine-tuning and prompt-based fine-tuning paradigms to tune PLMs over these benchmarks. The case of this application is shown in Code 2.
Universal Information Extraction based on Extractive Instruction. We develop HugIE, a novel universal information extraction toolkit based on HugNLP. Specifically, we collect multiple Chinese NER and event extraction datasets from ModelScope 7 and QianYan 8. Then, we use the core capacity of extractive-style instruction with a global pointer Su et al. (2022) to pre-train a universal information extraction model. We also upload the trained model to HuggingFace 9. An example of using HugIE is shown in Figure 2.
Footnote 7: [https://modelscope.cn/datasets](https://modelscope.cn/datasets)
Footnote 8: [https://www.luge.ai](https://www.luge.ai)
Footnote 9: [https://huggingface.co/wjn1996/wjn1996-hughlp-hugie-large-zh](https://huggingface.co/wjn1996/wjn1996-hughlp-hugie-large-zh).
Low-resource Tuning for PLMs. For low-resource settings, we integrate the two core capacities of prompt-tuning and uncertainty-aware self-training to further improve performance with limited labeled data. Specifically, prompt-tuning fully reuses the prior knowledge in PLMs to achieve strong performance with few examples, while self-training leverages unlabeled data to further enhance effectiveness.
Code Understanding and Generation. In addition to traditional NLP tasks, we also consider the scenario of code understanding and generation, such as clone detection, defect detection, and code summarization Lu et al. (2021).
### Development Workflow
HugNLP is easy to use and develop. We draw a workflow in Figure 3 to show how to develop a new running task. It consists of five main steps, including library installation, data preparation, processor selection or design, model selection or design, and application design. This illustrates that HugNLP can simplify the implementation of complex NLP models and tasks.
## 4 Experimental Performances
In this section, we empirically examine the effectiveness and efficiency of the HugNLP toolkit on some public datasets.
### Performance of Benchmarks
To validate the effectiveness of HugNLP on both fine-tuning and prompt-tuning, we choose the Chinese CLUE Xu et al. (2020) and GLUE Wang et al. (2018) benchmarks. For Chinese CLUE, we choose different sizes of BERT, RoBERTa, and MacBERT Cui et al. (2020), and report the accuracy over the development sets of each task in Table 1. For GLUE, we perform full-resource fine-tuning (FT-full), few-shot prompt-tuning (PT-few), and zero-shot prompt-tuning (PT-zero) based on
| **PLMs** | **AFQMC** | **CMNLI** | **CSL** | **IFLYTEK** | **OCNLI** | **TNEWS** | **WSC** | **Avg.** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BERT-base | 72.30 | 75.91 | 80.83 | 60.11 | 78.52 | 57.18 | 75.89 | 72.04 |
| BERT-large | 72.91 | 77.62 | 81.30 | 60.77 | 78.71 | 57.77 | 78.28 | 72.60 |
| RoBERTa-base | 73.33 | 81.05 | 80.17 | 60.81 | 80.88 | 57.69 | 86.74 | 74.10 |
| RoBERTa-large | 74.66 | 80.50 | 82.60 | 61.37 | 82.19 | 58.54 | 87.53 | 75.33 |
| MacBERT-base | 74.23 | 80.65 | 81.63 | 61.14 | 80.65 | 57.65 | 80.26 | 73.80 |
| MacBERT-large | 74.66 | 81.19 | 83.70 | 62.05 | 81.92 | 59.03 | 86.74 | 75.46 |

Table 1: Accuracy (%) of different tasks in the CLUE benchmark.
Figure 3: The development workflow of HugNLP.
our proposed KP-PLM. We select RoBERTa as the strong baseline and report the accuracy results with standard deviation in Table 2. The comparable performance demonstrates the reliability of HugNLP in both full- and low-resource scenarios, achieving similar performance to other open-source frameworks and the original implementations Wang et al. (2022).
### Evaluation of Code-related Tasks
We use HugNLP to evaluate the performance on multiple code-related tasks, such as code clone detection, defect detection, code translation, and code refinement. We fine-tune two widely used models, CodeT5 Wang et al. (2021) and PLBART Ahmad et al. (2021), and then compare them with competitive parameter-efficient learning methods, including BitFit, Adapter, and P-tuning V2 Liu et al. (2021). Results in Table 3 and Table 4 demonstrate the effectiveness and efficiency of HugNLP.
### Effectiveness of Self-training
We end this section with an additional validation of self-training. We choose some recent methods (using uncertainty estimation) to evaluate the implementations in HugNLP, including UST Mukherjee and Awadallah (2020), CEST Tsai et al. (2022), and LiST Wang et al. (2022). Results in Table 5 show that self-training can make substantial improvements in low-resource scenarios.
## 5 Conclusion
In this paper, we introduce HugNLP, a unified and comprehensive library based on PyTorch and HuggingFace, allowing researchers to apply it to different academic and industrial scenarios. HugNLP consists of three key components (i.e., _Processors_, _Models_, and _Applications_) and multiple pre-built core capacities and plug-and-play utils. Finally, we evaluate different aspects of its applications, and the results demonstrate its efficiency and effectiveness. We believe HugNLP can promote research and development for NLP applications.
| **Methods** | **Params.** | **Java to C#** (bleu) | **Java to C#** (em) | **C# to Java** (bleu) | **C# to Java** (em) | **Refine Small** (bleu) | **Refine Small** (em) | **Refine Medium** (bleu) | **Refine Medium** (em) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **CodeT5** | | | | | | | | | |
| Fine-Tuning | 224M | **84.15** | **65.30** | **79.12** | **66.40** | 77.39 | **21.35** | **91.04** | **7.82** |
| BitFit | 0.001M | 0.25 | 0.00 | 0.24 | 0.00 | 1.28 | 0.00 | 5.14 | 0.00 |
| Adapter | 14.22M | 75.43 | 52.40 | 73.10 | 57.70 | 77.41 | 18.58 | 91.01 | 3.61 |
| P-Tuning V2 | 0.633M | 59.86 | 33.70 | 57.10 | 41.00 | **78.99** | 45.56 | 90.02 | 0.79 |
| **PLBART** | | | | | | | | | |
| Fine-Tuning | 139M | **77.05** | **62.60** | **79.29** | **62.80** | 73.32 | **12.71** | 83.88 | **4.24** |
| BitFit | 0.126M | 16.48 | 0.10 | 17.43 | 0.90 | **74.08** | 1.45 | **85.41** | 0.42 |
| Adapter | 7.11M | 66.72 | 42.10 | 68.70 | 51.00 | 73.58 | 10.90 | 84.72 | 3.12 |
| P-Tuning V2 | 0.329M | 22.87 | 1.00 | 48.08 | 33.80 | 73.87 | 2.07 | 73.58 | 0.03 |

Table 4: Performance (%) on code translation (Java ↔ C#) and code refinement tasks, reported as BLEU (bleu) and exact match (em).
| **Paradigms** | **Methods** | **SST-2** (acc) | **SST-5** (acc) | **MR** (acc) | **CR** (acc) | **MPQA** (acc) | **Subj** (acc) | **TREC** (acc) | **CoLA** (matt.) | **Avg.** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PT-Zero | RoBERTa | 82.57 | 29.46 | **65.10** | **82.15** | 49.90 | **69.20** | 20.80 | -4.89 | 49.29 |
| | KP-PLM | **84.15** | **30.67** | 64.15 | 81.60 | **53.80** | 68.70 | **24.80** | **-2.99** | **50.61** |
| PT-Few | RoBERTa | 86.35±1.3 | 36.79±2.0 | **83.35±0.9** | **88.85±1.4** | 66.40±1.9 | 89.25±2.6 | 76.80±5.0 | 6.61±6.9 | 66.80 |
| | KP-PLM | **90.71±1.0** | **44.21±2.9** | 82.00±1.5 | 85.35±0.4 | **67.30±1.2** | **91.45±0.4** | **81.00±3.3** | **24.28±1.13** | **70.79** |
| FT-Full | RoBERTa | 94.90 | 56.90 | **89.60** | 88.80 | 86.30 | **96.50** | **97.10** | 63.90 | 84.25 |
| | KP-PLM | **95.30** | **57.63** | 89.20 | **89.10** | **87.40** | 96.20 | **97.10** | **64.87** | **84.60** |

Table 2: The comparison between KP-PLM and RoBERTa-base over multiple natural language understanding (NLU) tasks in terms of acc/f1/matt. (%) and standard deviation under three paradigms: zero-shot prompt-tuning (PT-Zero), few-shot prompt-tuning (PT-Few), and full-data fine-tuning (FT-Full).
| **Methods** | **Params.** | **Defect** | **Clone** |
| --- | --- | --- | --- |
| **CodeT5** | | | |
| Fine-Tuning | 224M | **64.35** | **94.97** |
| BitFit | 1.183M | 55.05 | 69.52 |
| Adapter | 15.40M | 59.74 | 94.47 |
| P-Tuning V2 | 1.182M | 54.61 | 79.83 |
| **PLBART** | | | |
| Fine-Tuning | 139M | **62.27** | **92.85** |
| BitFit | 1.308M | 56.30 | 92.42 |
| Adapter | 8.29M | 61.60 | 92.74 |
| P-Tuning V2 | 1.182M | 53.81 | 75.88 |

Table 3: Performance (%) on defect detection and clone detection tasks.
| **Methods** | **RTE** | **CB** | **AGNews** | **Avg.** |
| --- | --- | --- | --- | --- |
| _Few Labeled Data (16-shot)_ | | | | |
| Fine-Tuning | 54.4±3.9 | 74.5±2.6 | 88.9±2.7 | 72.60 |
| _Few Labeled Data (16-shot) + Unlabeled Data_ | | | | |
| UST | 55.6±2.6 | 76.0±3.1 | 89.3±3.5 | 73.63 |
| CEST | 57.0±1.9 | 78.1±2.7 | 88.5±2.2 | 74.53 |
| LiST | **60.82±2.5** | **79.72±2.9** | **90.3±2.5** | **76.93** |

Table 5: Accuracy (%) of uncertainty-aware self-training with only 16 labeled examples per class.
## Ethics Statement
Our contribution in this work is to construct a unified and comprehensive library for NLP research and application. However, transformer-based models may have some negative impacts, such as gender and social bias. Our work would unavoidably suffer from these issues. We suggest that users should carefully address potential risks when models trained using the HugNLP library are deployed online.
## Acknowledgements
This work has been supported by the National Natural Science Foundation of China under Grants No. U1911203 and No. 61877018, Alibaba Group through the Alibaba Innovation Research Program, the Research Project of Shanghai Science and Technology Commission (20dz2260300), and the Fundamental Research Funds for the Central Universities.
|
2309.16913 | SIMD-ified R-tree Query Processing and Optimization | The introduction of Single Instruction Multiple Data (SIMD) instructions in
mainstream CPUs has enabled modern database engines to leverage data
parallelism by performing more computation with a single instruction, resulting
in a reduced number of instructions required to execute a query as well as the
elimination of conditional branches. Though SIMD in the context of traditional
database engines has been studied extensively, it has been overlooked in the
context of spatial databases. In this paper, we investigate how spatial
database engines can benefit from SIMD vectorization in the context of an
R-tree spatial index. We present vectorized versions of the spatial range
select, and spatial join operations over a vectorized R-tree index. For each of
the operations, we investigate two storage layouts for an R-tree node to
leverage SIMD instructions. We design vectorized algorithms for each of the
spatial operations given each of the two data layouts. We show that the
introduction of SIMD can improve the latency of the spatial query operators up
to 9x. We introduce several optimizations over the vectorized implementation of
these query operators, and study their effectiveness in query performance and
various hardware performance counters under different scenarios. | Yeasir Rayhan, Walid G. Aref | 2023-09-29T00:55:22Z | http://arxiv.org/abs/2309.16913v1 | # SIMD-ified R-tree Query Processing and Optimization
###### Abstract.
The introduction of Single Instruction Multiple Data (SIMD) instructions in mainstream CPUs has enabled modern database engines to leverage data parallelism by performing more computation with a single instruction, resulting in a reduced number of instructions required to execute a query as well as the elimination of conditional branches. Though SIMD in the context of traditional database engines has been studied extensively, it has been overlooked in the context of spatial databases. In this paper, we investigate how spatial database engines can benefit from SIMD vectorization in the context of an R-tree spatial index. We present vectorized versions of the spatial range select, and spatial join operations over a vectorized R-tree index. For each of the operations, we investigate two storage layouts for an R-tree node to leverage SIMD instructions. We design vectorized algorithms for each of the spatial operations given each of the two data layouts. We show that the introduction of SIMD can improve the latency of the spatial query operators up to 9\(\times\). We introduce several optimizations over the vectorized implementation of these query operators, and study their effectiveness in query performance and various hardware performance counters under different scenarios.
Single Instruction Multiple Data (SIMD), Spatial Query Processing, R-tree, Query Optimization
## 1. Introduction
With the popularity and ubiquity of smart phones and location-based services, the amount of location-based data has grown tremendously in recent years. Processing location data in a timely fashion has become a big challenge. Parallelism is one way to deal with this problem. It can be achieved in the form of _thread-level parallelism_, _instruction-level parallelism_, or _data-level parallelism_. In _thread-level parallelism_, multiple hardware threads work together in parallel to fully leverage the multi-core capabilities of modern CPU chips. In contrast, in _instruction-level parallelism_, a single core in a CPU chip executes multiple instructions, possibly out of order, in a single clock cycle. In _data-level parallelism_, a single core in a CPU chip applies a single instruction to multiple data units, i.e., integers, floats, or doubles, in a single clock cycle through Single Instruction Multiple Data (SIMD, for short) instructions.
Thread-level and instruction-level parallelism have been investigated extensively in spatial databases literature in the form of standalone query operators, query execution pipeline, and compilation of query plans (Sandes et al., 2017). However, this is not the case for data-level parallelism. Previous works (Sandes et al., 2017; Wang et al., 2018) in relational database management systems (RDBMS) on query operators, e.g., scan (Sandes et al., 2017; Wang et al., 2018), join (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018), sorting (Sandes et al., 2017; Wang et al., 2018; Wang et al., 2018), and query execution pipeline (Sandes et al., 2017; Wang et al., 2018; Wang et al., 2018) suggest that database engines benefit from SIMD, mainly through _raw processing power_ by working on multiple elements at once, _reduced instruction count_ by imposing minimum pressure on the processor's decode and execution unit, and _conditional branch elimination_ by relieving the processor from bad speculation and mispredicting branches. Hence, there exist parallel execution opportunities to utilize SIMD for improving query performance in spatial databases, which is the focus of this paper.
This is more so the case in modern CPUs, which have evolved to equip each CPU core with its own SIMD execution unit, whether on the same or a different chip. These SIMD execution units are increasingly being equipped with wider SIMD registers (e.g., 512 bits) and more complex instruction sets, e.g., AVX512F, AVX512BW, AVX512CD, AVX512PF.1 To benefit from SIMD capabilities and leverage per-core data parallelism for processing queries in spatial databases, spatial data has to be laid out in a SIMD-friendly manner, be it in main memory or on disk, to facilitate the best use of SIMD instructions and novel implementations of query processing algorithms. In this paper, we focus on a main-memory two-dimensional R-Tree (Chen et al., 2017), and investigate how range select and spatial join can benefit from SIMD vectorization. Furthermore, we study the effect of various storage layouts of R-Tree index nodes on the performance of these spatial operators.
Footnote 1: www.intel.com/content/www/us/en/docs/intrinsic-guide/index.html/techs-AVX, 512
With the advent of large-capacity main-memory chips, indexes can fit fully in main memory, and disk I/O is no longer the bottleneck for the index operators. Rather, the bottleneck has shifted to the computational efficiency of the CPU, e.g., the number of instructions executed per clock cycle (IPC, for short), and main-memory stalls, dominated by Last Level Cache misses (LLC misses, for short), Translation Lookaside Buffer misses (TLB misses, for short), and branch mispredictions. An LLC miss refers to the event of the processor attempting to access data that is not present in the last level cache, requiring the data to be fetched from memory. This miss incurs a penalty of 89 ns for Intel Ice Lake processors2. TLBs are small caches that store the virtual-to-physical address mapping to speed up the translation of the virtual addresses requested by the processor to access either data or instructions. A single TLB miss can incur a miss penalty ranging from 7 to 30 clock cycles for Intel Ice Lake processors2. Analogously, processors incur a significant penalty for a single branch misprediction, e.g., 17 clock cycles for Intel Ice Lake processors2. These misses not only impact query performance but also hinder the CPU from fully utilizing its SIMD capabilities, to the point that it can perform worse than its scalar counterpart. Thus, a hardware-conscious data layout is a must to fully utilize the SIMD capabilities of modern CPU architectures. We propose three different data layouts for the 2D R-Tree, and investigate their impact on various hardware performance counters, e.g., LLC and TLB misses, branch mispredictions, and the number of instructions executed. We implement range select and spatial join over a main-memory R-tree,
and present techniques to vectorize them using SIMD instructions with several optimizations. We study the performance trade-offs of these data layouts when performing these index operations based on the performance counters above.
We redesign the spatial select operator to better facilitate SIMD vectorization and reduce LLC misses. Performing a select on an index typically results in cold LLC misses equal to the number of nodes accessed. Hardware prefetchers cannot hide these cold LLC misses, which badly hurt index performance and the CPU's SIMD capabilities. Given that the R-Tree index nodes overlap each other, we introduce a queue to keep track of the nodes that need to be accessed as we perform a breadth-first search (BFS) over the index, and perform software prefetching to bring the index nodes that will be accessed into the cache in a timely manner. To reduce the overhead of the additional queue, we use SIMD instructions to insert multiple items into the queue with a single instruction. Also, we vectorize spatial join operations over a SIMD-based (or SIMD-ified) R-tree. We focus on the spatial nested-index join operator (Bordes and Tuck, 2017), where we start with the roots of 2 R-Tree indexes, and traverse both trees simultaneously in top-down fashion until we reach the leaf nodes. If the MBRs of both indexes are unsorted, then this is the same as applying the range select operator, where the outer index node is the query rectangle. However, if the MBRs of the index nodes are sorted on one of the dimensions, then the performance of the nested index join can be improved through several optimizations. We introduce two optimizations for the nested index spatial join operation, where the index node MBRs are sorted on a pre-determined dimension.
We compare the vectorized implementations of these spatial operators against their scalar counterparts. The experiments show speedups of up to 4\(\times\) and 9\(\times\) for select and spatial join, respectively. We study the performance of the proposed optimizations, and investigate their best use scenarios.
The contributions of the paper can be summarized as follows.
* We present vectorized algorithms for 2D range select and indexed spatial join over a SIMD-ified R-tree.
* We investigate 3 data layouts for the R-Tree, and study the tradeoffs of these layouts for the index query operators.
* We compare our vectorized query operators against their scalar counterparts, and achieve a speedup of up to 9\(\times\).
* We introduce 5 optimizations for the vectorized query operators, and study their effectiveness under various conditions.
The rest of the paper proceeds as follows. Section 2 overviews the SIMD and prefetch instructions used in the paper, and introduces the proposed layouts for the R-tree nodes. Sections 3 and 4 present the vectorized spatial range select, and spatial join, respectively. Section 5 presents the experimental study. Section 6 discusses the related work, and Section 7 concludes the paper.
## 2. Preliminaries
In this section, we overview SIMD and prefetching operations, and the various data layouts being investigated in this paper.
### SIMD Instructions
CPU vendors provide SIMD capabilities via different instruction sets, starting from SSE (operating on 4 32-bit data elements) and AVX2 (8 32-bit data elements) to the recent AVX512 that operates simultaneously on 16 32-bit data elements. Coupled with the special-purpose CPU SIMD registers, these instructions, e.g., AVX512, can theoretically provide up to \(W=16\times\) speedup over traditional scalar instructions. Below, we overview the SIMD instructions (AVX512) that we use to implement the vectorized spatial query operators. Let \(a\)-\(k\) be 32-bit data elements. Even though an AVX512 SIMD register can hold at most 16 32-bit data elements, for ease of illustration, we restrict the registers, i.e., the source, target, and index vectors, to contain only 4 elements, enclosed in \([\,]\). Data elements located in memory are shown as a sequence, with a \(\downarrow\) pointing to a particular memory location.
**1. Load Instructions:**
**1.1. Load:** Vector load takes as input a memory location, say \(m\), and loads contiguous elements starting at \(m\) into a target register, e.g.,
\[\underbrace{[f,g,h,i]}_{\text{Target vector}}\leftarrow\text{load}( \underbrace{[a,b,c,d,e,\mathbf{f},\mathbf{g},\mathbf{h},i,j,k]}_{\text{ Memory}})\]
**1.2. Gather:** Takes as input an array, say \(A\), with an index vector, \(\overrightarrow{idx}\), and stores the \(A\) elements specified by the \(\overrightarrow{idx}\) in order (as specified in the index vector) in a target register, e.g.,
\[\underbrace{[a,j,c,e]}_{\text{Target vector}}\leftarrow\text{gather}( \underbrace{[0,9,2,4]}_{\text{Index Vector}},\underbrace{[a,b,c,d,e,\mathbf{f},g,h,i,j,k]}_{ \text{In-Memory Array}})\]
**1.3. Expand Load:** Takes as input a memory location, say \(m\), along with a write mask \(k\), and stores contiguous elements starting at \(m\) into a target register using Mask \(k\), e.g.,
\[\underbrace{[\times,f,g,\times]}_{\text{Target vector}}\leftarrow\text{expandload}(\underbrace{0110}_{\text{Write Mask}},\underbrace{[a,b,c,d,e,\mathbf{f},\mathbf{g},h,i,j,k]}_{\text{Memory}})\]
**1.4. Broadcast:** Takes as input a data element, say \(e\), or a vector, say \(\vec{v}\), and replicates \(e\) or part of the \(\vec{v}\) across all lanes of a target register. There are many variants of Broadcast depending on what needs to be replicated, e.g., to duplicate \(\vec{v}\)'s two lower elements into all lanes of a target register, we perform:
\[\underbrace{[a,b,a,b]}_{\text{Target vector}}\leftarrow\text{broadcast}( \underbrace{[\mathbf{a},\mathbf{b},c,d]}_{\text{Source vector}})\]
**2. Store Instructions:**
**2.1. Store:** Takes as input a vector, say \(\vec{v}\), and a memory location, say \(m\), and stores \(\vec{v}\)'s data elements in memory starting at \(m\), e.g.,
\[\underbrace{[\times,\times,\times,\times,\mathbf{f},\mathbf{g},\mathbf{h},\mathbf{i},\times,\times,\times]}_{\text{Memory}}\leftarrow\text{store}(\underbrace{[f,g,h,i]}_{\text{Source vector}})\]
**2.2. Compress Store:** Takes as input a vector, say \(\vec{v}\), a write mask, say \(k\), and a memory location, say \(m\), and stores \(\vec{v}\)'s elements (indicated by \(k\)) into contiguous memory locations starting at \(m\), e.g.,
\[\underbrace{[\times,\times,\times,\mathbf{g},\mathbf{h},\times,\times,\times,\times,\times,\times]}_{\text{Memory}}\leftarrow\text{compress}(\underbrace{0110}_{\text{Write Mask}},\underbrace{[f,g,h,i]}_{\text{Source vector}})\]
**3. Permute:** Takes as input a vector, say \(\vec{v}\), an index vector, say \(\overrightarrow{idx}\), and shuffles \(\vec{v}\)'s elements using \(\overrightarrow{idx}\) into a target register, e.g.,
\[\underbrace{[d,c,c,b]}_{\text{Target vector}}\leftarrow\text{permute}( \underbrace{[3,2,2,1]}_{\text{Index vector}},\underbrace{[a,b,c,d]}_{\text{ Source vector}})\]
**4. Blend:** Takes as input 2 vectors, say \(\vec{v}_{1},\vec{v}_{2}\), and combines \(\vec{v}_{1}\)'s and \(\vec{v}_{2}\)'s elements into a target register using an input mask \(k\), e.g.,
\[\underbrace{[e,b,g,d]}_{\text{Target vector}}\leftarrow\text{ blend}(\underbrace{1010}_{\text{Mask}},\underbrace{[a,b,c,d]}_{\text{Source vector 1}},\underbrace{[e,f,g,h]}_{\text{ Source vector 2}})\]
**5. Arithmetic Instructions:**
**5.1. Compare:** Takes as input 2 vectors, say \(\vec{v}_{1},\vec{v}_{2}\), and compares \(\vec{v}_{1}\) and \(\vec{v}_{2}\) using a comparator, say \(op\), store the result as a bitmask.
\[\underbrace{0111}_{\text{Target Mask}}\leftarrow\text{compare}(\geq,\underbrace{ [a,b,c,d]}_{\text{Source vector 1}},\underbrace{[d,a,b,c]}_{\text{ Source vector 2}})\]
**5.2. Masked Addition:** Adds 2 vector registers, say \(\vec{v}_{1}\) and \(\vec{v}_{2}\), and stores the result in a target register using a write mask, \(k\), e.g.,
\[\underbrace{[a{+}e,b{+}f,c,d]}_{\text{Target vector}}\leftarrow\text{maskedadd}(\underbrace{1100}_{\text{Write mask}},\underbrace{[a,b,c,d]}_{\text{Source vector 1}},\underbrace{[e,f,g,h]}_{\text{Source vector 2}})\]
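As a toy illustration of how these primitives compose, the following sketch (our own example, assuming an AVX-512F capable CPU and a gcc/clang compiler with -mavx512f) compares 16 keys against a broadcast constant and compress-stores the qualifying ones:
```
// Minimal AVX-512 demo: broadcast, load, compare, compress store.
#include <immintrin.h>
#include <cstdio>

int main() {
    alignas(64) float keys[16], out[16];
    for (int i = 0; i < 16; ++i) keys[i] = (float)i;

    __m512 v = _mm512_load_ps(keys);                     // vector load (1.1)
    __m512 q = _mm512_set1_ps(7.0f);                     // broadcast (1.4)
    __mmask16 m = _mm512_cmp_ps_mask(v, q, _CMP_GE_OQ);  // compare (5.1)
    _mm512_mask_compressstoreu_ps(out, m, v);            // compress store (2.2)

    int n = __builtin_popcount((unsigned)m);             // # qualifying lanes
    for (int i = 0; i < n; ++i) printf("%.0f ", out[i]); // prints: 7 8 ... 15
    printf("\n");
    return 0;
}
```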
### Software Prefetch Instructions
CPUs, e.g., Intel's SSE extension of the x86-64 Instruction Set Architecture, provide prefetch intrinsics, e.g., _mm_prefetch(char const* p, int hint), that allow programmers or compilers to specify a virtual address that requires prefetching into a cache, e.g., L1, L2, or L3, as specified by _hint_. The CPU can load a cacheline worth of data containing the specified addressed byte or, if busy, ignore the request altogether. Although seemingly very effective in hiding cache miss latency, software prefetch intrinsics can introduce computational overhead on the processor's computation unit and stress the cache and memory bandwidth, resulting in performance degradation when the processor is busy or when the data is already in cache.
### Index Node Storage Layouts
An in-memory R-Tree index node contains up to a maximum fanout of \(F\) entries. Each entry is of the form \((key,ptr)\equiv(MBR,ptr)\). The key of each entry is the \(MBR=(MBR.low_{x},MBR.low_{y},MBR.high_{x},MBR.high_{y})\), assuming 2D space. In addition, each index node contains the depth and the number of child MBRs or entries associated with the node. Based on the packing strategies of these entries, we identify three index node layouts as follows. Refer to Figure 1 for illustration; a hypothetical C-style sketch of layouts D1 and D2 is given after the list.
1. **Node Layout D0:** The fields associated with each MBR entry, \((MBR,ptr)\) are stored contiguously as proposed in the original R-Tree (Cheng et al., 2017), and its variants, e.g., CR Tree (Cheng et al., 2017), MR Tree (Cheng et al., 2017) (cf. Figure (a)a). One disadvantage of this index layout is its inability to efficiently facilitate SIMD instructions.
2. **Node Layout D1:** We pack the \(MBR.low_{x}\) of all the child MBRs in an array, followed by separate arrays for the \(MBR.low_{y},MBR.high_{x},MBR.high_{y}\) of all the child MBRs and the addresses of all the child nodes \(MBR.ptr\) (see Figure (b)b). Packing the child MBR keys and the child MBR addresses allows applying SIMD instructions efficiently on the index nodes for the various query operators.
3. **Node Layout D2:** The leftmost point of each MBR, \(MBR.low:(MBR.low_{x},MBR.low_{y})\) is stored in an array followed by separate arrays for the rightmost point \(MBR.high:(MBR.high_{x},MBR.high_{y})\) of all child MBRs and the addresses of all child nodes \(MBR.ptr\) (see Figure (c)c).
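The following declarations are our own sketch of how layouts D1 and D2 could be laid out in C-style structs; the field names mirror the code snippets in Section 3, but the exact definitions are an assumption:
```
// Hypothetical node declarations (F = maximum fanout; 64-byte alignment
// keeps every key array aligned to cachelines and 512-bit registers).
constexpr int F = 64;                 // default maximum fanout in the paper

struct alignas(64) NodeD1 {           // D1: one array per MBR key excerpt
    float lx[F], ly[F], hx[F], hy[F];
    void* ptr[F];                     // child node addresses
    int   depth, count;               // node depth and number of children
};

struct alignas(64) NodeD2 {           // D2: corner points packed pairwise
    float lo[2 * F];                  // (low_x, low_y) per child
    float hi[2 * F];                  // (high_x, high_y) per child
    void* ptr[F];
    int   depth, count;
};
```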
## 3. Spatial Select
The scalar version of the spatial select algorithm follows a recursive approach, where the index is traversed depth-first starting from the root, and then following the child nodes that qualify the query predicate using the logical operators. The query predicate evaluation of the index nodes involve executing a compound selection condition with 4 separate selects, i.e., comparing the query predicate's high x, high y, low x and low y with the index node's low x, low y, high x and high y, respectively. When these select conditions are implemented using a logical operator, the assembly code generated by the compiler replaces the 4 select conditions with 4 conditional branches. If the selectivities of these select conditions are close to 0.5, it makes the processor's branch predictor unit's job of predicting accurate branches a lot harder. This may result in as many as 4 branch mispredictions impacting the query performance. In contrast, when the 4 select conditions are implemented using a bitwise operator, the compiler replaces them with a single conditional branch and evaluates all 4 of the select conditions (Kang et al., 2017). Even though this requires executing a greater number of instructions, the branch misprediction penalty associated with this approach is expected to be smaller due to the possibility of only one branch misprediction. We implement both these variants for the spatial select to compare their performance.
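A minimal sketch of the two scalar predicate variants is given below (the Rect type and function names are ours):
```
// Scalar intersection test between a child MBR a and the query rectangle q.
struct Rect { float lx, ly, hx, hy; };

// Logical variant: && short-circuits, so the compiler emits up to 4
// conditional branches -- fewer executed instructions, but up to 4
// possible branch mispredictions.
bool intersects_logical(const Rect& a, const Rect& q) {
    return a.lx <= q.hx && a.hx >= q.lx && a.ly <= q.hy && a.hy >= q.ly;
}

// Bitwise variant: & evaluates all 4 conditions branch-free, leaving a
// single conditional branch at the caller -- more instructions, but at
// most one misprediction.
bool intersects_bitwise(const Rect& a, const Rect& q) {
    return (a.lx <= q.hx) & (a.hx >= q.lx) & (a.ly <= q.hy) & (a.hy >= q.ly);
}
```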
In contrast, the vectorized version of the spatial select operator performs a breadth-first traversal of the R-Tree, and maintains a queue \(Q\) to track the addresses of the internal nodes that need to be visited. The algorithm starts by visiting the root, and inserts into \(Q\) the child nodes that qualify the query predicate. Then, each qualified index node is dequeued and evaluated until the queue becomes empty. Refer to Figures 2 and 3 for the illustration of the algorithms for node layouts D1 and D2, respectively.
Figure 1. Different storage layouts of index nodes.
**1. Query vector (\(\overrightarrow{q}\)) construction.** Given a 2D query rectangle, i.e., key, the key parts, e.g., \(q.low_{x}\) (for Node Layout D1) or \(q.low\) (for Node Layout D2), are broadcast to construct the query vectors. The layout of the query vector needs to exactly match the layout of the index node that it operates on. This is necessary so that when an index node is loaded from memory into a SIMD register for select predicate evaluation, a SIMD comparison can be performed efficiently. This step is performed exactly once at the start of query execution. For Node Layout D1, it takes 4 broadcast instructions to load the 4 key parts, i.e., \(q.low_{x}\), \(q.low_{y}\), \(q.high_{x}\), and \(q.high_{y}\), into 4 separate vector registers, while for Node Layout D2 it takes 2 instructions to load the corresponding 2 key excerpts, i.e., \(q.low\) and \(q.high\), accompanied by 2 extra masked load instructions for the query vector to match Node Layout D2.
```
// Query rectangle (q.low_x etc. denote the query key parts)
float q[4] = {q.low_x, q.low_y, q.high_x, q.high_y};
// Node Layout - D1: Broadcast q.low_x
__m512 qv_low_x = _mm512_set1_ps(q.low_x);
// Node Layout - D2: Extract (q.low_x, q.low_y); then broadcast
__m128 t = _mm_mask_load_ps(t, 0x03, q);
__m512 qv_low = _mm512_broadcast_f32x2(t);
```
**2. Child MBR vector (\(\overrightarrow{mbr}\)) construction.** We apply the vector load instructions to load contiguously stored key excerpts of all the child node MBRs. For Node Layout D1, this requires executing 4 separate load instructions \(\lceil\frac{n_{c}}{W}\rceil\) times to load the \(MBR.low_{x}\)s', \(MBR.low_{y}\)s', \(MBR.high_{x}\)s', and \(MBR.high_{y}\)s' of the child MBRs, respectively. For Node Layout D2, this requires executing 2 separate load instructions \(\lceil\frac{2n_{c}}{W}\rceil\) times to load the contiguously stored \(MBR.low\)s' and \(MBR.high\)s' of the child node MBRs. Here, \(n_{c}\) refers to the number of children of the index node.
```
// Node Layout - D1: Load 1st 16 MBR.low_x of n; f = max fanout
// struct node1 n: lx[f], ly[f], hx[f], hy[f], ptr[f];
__m512 mbr_lx = _mm512_load_ps(n.lx);
// Node Layout - D2: Load 1st 8 MBR.low of n; f = max fanout
// struct node2 n: lo[f*2], hi[f*2], ptr[f];
__m512 mbr_lo = _mm512_load_ps(n.lo);
```
**3. Predicate evaluation.** SIMD comparison instructions are executed on the constructed query and child MBR vectors to evaluate the predicates and generate a bitmask of the qualifying child nodes for further evaluation.
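A sketch of this step for Node Layout D1 is given below (not the paper's exact code; it reuses the hypothetical NodeD1 struct from Section 2.3, and `base` is the offset of the current 16-child chunk, a multiple of 16 to preserve alignment):
```
// Evaluate the select predicate for 16 children of a D1 node at once.
#include <immintrin.h>

__mmask16 qualify_d1(const NodeD1& n, int base, __m512 qv_lx, __m512 qv_ly,
                     __m512 qv_hx, __m512 qv_hy) {
    __m512 lx = _mm512_load_ps(n.lx + base);
    __m512 ly = _mm512_load_ps(n.ly + base);
    __m512 hx = _mm512_load_ps(n.hx + base);
    __m512 hy = _mm512_load_ps(n.hy + base);
    // a child intersects the query iff all four comparisons hold per lane
    __mmask16 m = _mm512_cmp_ps_mask(lx, qv_hx, _CMP_LE_OQ);
    m = _mm512_mask_cmp_ps_mask(m, hx, qv_lx, _CMP_GE_OQ);
    m = _mm512_mask_cmp_ps_mask(m, ly, qv_hy, _CMP_LE_OQ);
    m = _mm512_mask_cmp_ps_mask(m, hy, qv_ly, _CMP_GE_OQ);
    return m;   // bit i set => child (base + i) qualifies
}
```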
**4. Queue insertion.** We use a masked compress store instruction to store addresses of the qualified child nodes into \(\mathcal{Q}\). This leverages full SIMD capabilities by inserting into Q up to 8 addresses with a single instruction to make spatial select fully vectorized. This also improves cache locality as it requires the addresses of the child nodes to be loaded into SIMD registers only once, when they are in cache, ensuring full utilization of child addresses.
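A hedged sketch of this step follows (again reusing the hypothetical NodeD1; the 16-bit qualification mask is consumed in two 8-pointer halves, because 8 64-bit pointers fill one 512-bit register):
```
// Enqueue up to 8 qualifying child addresses with a single instruction.
#include <immintrin.h>
#include <cstddef>

size_t enqueue8(void** q_tail, const NodeD1& n, int base, __mmask8 m8) {
    __m512i ptrs = _mm512_loadu_si512((const void*)(n.ptr + base));
    _mm512_mask_compressstoreu_epi64(q_tail, m8, ptrs);   // pack + store
    return (size_t)__builtin_popcount((unsigned)m8);      // tail advance
}
```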
**5. Prefetching.** Unlike a B+ Tree, the overlap of the MBRs in the index nodes of an RTree may require to descend multiple index nodes at the same level when evaluating a select predicate. This feature along with the use of a queue exposes the need for prefetching to speedup spatial selects. To increase the likelihood that prefetching is beneficial, we maintain a parameter pf_distance to prefetch the index node that is pf_distance steps ahead in the queue. This scheme is effective when there are multiple nodes to evaluate at the same level as we traverse down the tree using a breadth-first traversal. This situation arises when the ratio of the nodes overlapping is relatively large in the R-tree and/or the queries are less selective. In such cases, the cold misses of the index nodes can be fully hidden irrespective of its being an internal or a leaf node. Based on the selectivity of the select operator, the number of instructions executed by spatial select is fairly small making it memory-bound, i.e., query execution time is spent on mostly the CPU's stalling on the LLC cache misses. Using the proposed prefetching scheme, these cache misses are reduced, and thus, improving query performance.
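A simplified sketch of the overall loop is shown below; `pf_distance` and the std::vector queue are illustrative stand-ins for the paper's queue (which is filled with compress stores as described above):
```
// BFS range select skeleton with software prefetching.
#include <xmmintrin.h>
#include <vector>

void range_select(NodeD1* root, int pf_distance /*, query vectors ... */) {
    std::vector<NodeD1*> queue{root};
    for (size_t head = 0; head < queue.size(); ++head) {
        // prefetch the node pf_distance slots ahead to hide its cold miss
        if (head + pf_distance < queue.size())
            _mm_prefetch((const char*)queue[head + pf_distance], _MM_HINT_T0);
        NodeD1* n = queue[head];
        // ... evaluate the select predicate on n (see qualify_d1) and
        // append qualifying internal children to `queue`; qualifying leaf
        // entries go to the result set instead ...
        (void)n;
    }
}
```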
**Avoiding recursion for SIMD-friendly tree traversal (O1).** Avoid recursion to give a tree traversal algorithm the best chance to benefit from SIMD vectorization by introducing external data structures, e.g., queue to mimic recursion call stack, and use SIMD masked compress store instruction to store addresses of multiple qualified nodes or objects into the queue to better utilize cache locality and memory bandwidth.
**Prefetching in tree-indexes that require traversing multiple nodes at the same level (O2).** Use an external data structure, e.g., queue to facilitate looking up the addresses of the next-to-be-visited index nodes, and use software prefetch intrinsics to bring these nodes in cache ahead of time to hide the expensive cold cache miss latency.
## 4. Spatial Join
A spatial join combines 2 spatial relations based on some spatial predicate, e.g., _intersects_. Many variants of spatial join algorithms exist. In this paper, we consider the _R-Tree Join_(Covington et al., 2017), where both input relations have R-tree indexes. The scalar version of this algorithm (Covington et al., 2017) traverses the two indexes simultaneously starting from both root nodes, and follows the child node pairs that intersect. For the vectorized implementation of the spatial join algorithm, we propose 2 approaches as discussed next.
Figure 2. Spatial Select for D1.
### Approach 1: One to Many Comparison
The vectorized implementation of this approach of our spatial join algorithm follows the same flow as the vectorized spatial select. The only difference is that we generate the outer index MBR vectors in place of the query MBR vectors. The key idea is to duplicate each MBR of an outer index child node across all the SIMD lanes and compare it with all the inner index child node MBRs, hence the term _one to many comparison_. Refer to Figures 4 and 5, which illustrate this approach for Node Layouts D1 and D2, respectively.
**1. Inner index MBR vector (\(\overrightarrow{mpbr}_{in}\)) construction**. The MBR key excerpts of all the child node MBRs of the inner index node, i.e., \(MBR.low_{x}\)s' \(MBR.low_{y}\)s', \(MBR.high_{x}\)s' and \(MBR.high_{y}\)s' for Node Layout D1 or \(MBR.low\)s' and \(MBR.high\)s' for Node Layout D2 are loaded from memory using the vector 10ad instruction.
**2. Outer index MBR vector (\(\overrightarrow{mpbr}_{out}\)) construction**. Each child MBR of the outer index node is considered one at a time, and during each iteration the MBR key excerpt of the considered child MBR is broadcast to construct the outer index MBR vectors. For Node Layout D1, we construct separate outer index MBR vectors with duplicate \(MBR.low_{x},MBR.low_{y},MBR.high_{x}\) and \(MBR.high_{y}\) values in all the SIMD lanes. For Node Layout D2, we construct separate MBR vectors with duplicate \(MBR.low\) and \(MBR.high\) values.
**3. Predicate evaluation.** SIMD comparison instructions are applied to the constructed inner and outer index MBR vectors to produce a bitmask of the qualified inner index child nodes that require further processing along with the corresponding outer child node as a pair. For Node Layout D1, predicates are evaluated in 4 stages with 4 sets of comparisons, \([\overrightarrow{mbr}_{out,lx}\leq\overrightarrow{mbr}_{in,hx}]\), \([\overrightarrow{mbr}_{out,hx}\geq\overrightarrow{mbr}_{in,lx}]\), \([\overrightarrow{mbr}_{out,ly}\leq\overrightarrow{mbr}_{in,hy}]\), and \([\overrightarrow{mbr}_{out,hy}\geq\overrightarrow{mbr}_{in,ly}]\). Node Layout D2 takes 2 stages, \([\overrightarrow{mbr}_{out,low}\leq\overrightarrow{mbr}_{in,high}]\) and \([\overrightarrow{mbr}_{out,high}\geq\overrightarrow{mbr}_{in,low}]\). Here, \(\overrightarrow{mbr}_{out/in,k}\) is the MBR vector of the outer or inner index for key excerpt \(k\), based on the data layout.
Several optimizations apply for early pruning assuming that the nodes are sorted on an MBR key excerpt, e.g., \(MBR.low_{x}\), \(MBR.low_{y}\), \(MBR.high_{x}\) or \(MBR.high_{y}\). These strategies result in savings in terms of (i) the number of outer index child node MBRs considered, and (ii) the number of inner child node MBRs considered for both node layouts.
Let the index nodes be sorted on \(MBR.low_{x}\), and let the outer index child node MBRs be considered one by one in ascending order of \(MBR.low_{x}\). If the predicate evaluation of \([\overrightarrow{mbr}_{out,lx}\leq\overrightarrow{mbr}_{in,hx}]\) produces a bitmask of all zeros (\(=0x0000\)), then all the child nodes of the outer index after the current one can be pruned because \(mbr_{out,lx}^{current}\leq mbr_{out,lx}^{next}\). This reduces the number of instructions executed, as it reduces the effective size of the outer index node.
Similarly, assuming that the inner index child MBRs are loaded into SIMD registers in ascending order of \(MBR.low_{x}\), early pruning can reduce the number of instructions executed. When \([\overrightarrow{mbr}_{out,hx}\geq\overrightarrow{mbr}_{in,lx}]\) generates a bitmask other than all ones (\(\neq 0xFFFF\)), the remaining inner index child MBRs can be skipped for the current outer child MBR, since every later inner child starts even further to the right, effectively reducing the size of the inner index node.
**4. Queue insertion.** To enqueue the qualified node pairs, we use one broadcast instruction
to duplicate the address of the outer index child across all the SIMD lanes and issue \(\lceil\frac{\hat{n}_{in,c}}{W}\rceil\) vector store instructions to store the addresses in the respective queue of the outer index, where \(\hat{n}_{in,c}\) denotes the number of qualified child nodes of the inner index.
It is worth mentioning that the optimization strategies proposed in the predicate evaluation step for the vectorized implementation of the join algorithm can be applied to the scalar version as well.
**Slicing off parts of an outer index node (O3).** Assume that the inner and outer index nodes are sorted on one of the MBR key excerpts, e.g., \(MBR\_low_{x}\). If for any child MBR of the outer index node, the join predicate on the sorted outer index key involving itself and all the child MBRs of the inner index node evaluates to no qualifying node-pairs, then the rest of the child MBRs can be skipped, i.e., part of the outer index node can simply be skipped.
**Shrinking the MBR of an inner index node (O4).** Assume that the index nodes are sorted on one of the MBR key excerpts, e.g., \(MBR\_low_{x}\). Given a _single_ child MBR of the outer index node, if for any child MBR of the inner index node, the join predicate on the sorted inner index key involving both the child MBRs evaluate to disqualification, i.e., they do not intersect; then the rest of the child MBRs of the inner index node can be skipped to check for qualification against the given outer index child MBR, effectively reducing the size of the inner index node.
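As noted above, O3 and O4 carry over to the scalar join as well. The following hypothetical scalar sketch over two nodes' child MBR arrays, both sorted on \(MBR.low_{x}\), shows both early exits (Rect, report, and the function name are ours):
```
// Scalar node-pair join with O3 and O4; both arrays sorted on lx.
#include <functional>

struct Rect { float lx, ly, hx, hy; };   // same shape as the earlier sketch

void join_sorted(const Rect* out, int n_out, const Rect* in, int n_in,
                 const std::function<void(int, int)>& report) {
    for (int i = 0; i < n_out; ++i) {
        bool alive = false;   // does any inner child reach out[i].lx in x?
        for (int j = 0; j < n_in; ++j) {
            if (in[j].lx > out[i].hx) { // O4: inner sorted on lx, so every
                alive = true;           // later child starts past out[i].hx
                break;                  // (and hence past out[i].lx too)
            }
            if (out[i].lx <= in[j].hx) {            // x-extents overlap
                alive = true;
                if (out[i].ly <= in[j].hy && out[i].hy >= in[j].ly)
                    report(i, j);                   // qualifying pair
            }
        }
        // O3: no inner child extends right to out[i].lx; since the outer
        // node is also sorted on lx, no later outer child can match either.
        if (!alive) break;
    }
}
```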
### Approach 2: Many to Many Comparison
This approach of our vectorized implementation of the spatial join algorithm is specific to Node Layout D1. The only difference from the _One to Many_ approach is in the predicate evaluation step, mainly, how the bitmask generation of \([\overrightarrow{mbr}_{out,hx}\geq\overrightarrow{mbr}_{in,lx}]\) is handled. The _one to many_ approach considers each child node MBR of the outer index one at a time, and broadcasts it across all the lanes of a SIMD register to construct the outer index MBR vector. This requires executing \(n_{out,c}\) broadcast and \(n_{out,c}\lceil\frac{n_{in,c}}{W}\rceil\) SIMD compare instructions for evaluating \([\overrightarrow{mbr}_{out,hx}\geq\overrightarrow{mbr}_{in,lx}]\), where \(n_{out/in,c}\) refers to the number of child MBRs of the outer or inner index node. To reduce this large number of executions, we propose the following approach, where multiple (_many_) outer index child nodes are compared against a selected set (_many_) of inner index child nodes at once. Refer to Figure 6 for illustration.
**1. Outer index \(MBR_{hx}\) vector (\(\overrightarrow{mbr}_{out,hx}\)) construction.** The \(MBR.high_{x}\) of all the child MBRs of the outer index are loaded into SIMD registers with the vector load instructions, 16 at a time. It takes \(\lceil\frac{n_{out,c}}{W}\rceil\) vector load instructions to fully load all the \(MBR.high_{x}\)s' of the outer index node's child MBRs. For each of these child MBR vectors, Steps 2-4 are carried out \(\log_{2}F+1\) times. Here, \(F\) is the maximum fanout of the index.
**2. Gather indices vector construction for inner index.** We use the gather instruction to load the \(MBR.low_{x}\) of the desired child MBRs of the inner index. However, this requires constructing a gather indices vector specifying the indices of the \(MBR.low_{x}\)s' of the desired child MBRs. Initially, \(n_{in,c}/2\) is duplicated across all SIMD lanes to generate the gather indices vector, as we want to load the \(MBR.low_{x}\) of the middle child MBR of the inner index, where \(n_{in,c}\) refers to the number of child MBRs of the inner index node. For the following iterations, the gather indices vector is updated based on the bitmask generated in Step 3. This requires performing 2 SIMD masked additions.
**3. Predicate evaluation.** Once both the outer and inner index MBR vectors are constructed, we evaluate the \([\overrightarrow{mbr}_{out,hx}\geq\overrightarrow{mbr}_{in,lx}]\) predicate to generate a bitmask during each iteration.
**4. Flip indices vector construction.** A flip indices vector is required to track the eligible inner child MBRs for all the outer index child MBRs. For each child MBR in the outer index, the corresponding entry in the flip indices vector indicates the index of the inner child MBR, beyond which the other child MBRs can be ignored, as they do not qualify. Initially, \(F\) is duplicated across all the SIMD lanes to generate the flip indices vector, denoting all the inner index child MBRs qualify the predicate. For the following iterations, a masked blend instruction is performed to update the indices vector. After the completion of \(\log_{2}F+1\) iterations for each outer index child MBR vector, the flip indices vector is stored in memory using the vector store instruction.
**5. Extracting bitmask.** Contrary to _Approach 1_, the predicate evaluation step of \([\overrightarrow{mbr}_{out,hx}\geq\overrightarrow{mbr}_{in,lx}]\) for _Approach 2_ produces bitmasks referring to the eligibility of inner index child MBRs for different sets of outer index child MBRs. Thus, it requires extracting the corresponding entry of an outer index child MBR from the flip indices vector and generating the bitmask. Once this predicate evaluation stage is completed, the rest of the algorithm remains the same as in the _one to many_ approach for evaluating the remaining 3 predicates, \([\overrightarrow{mbr}_{out,lx}\leq\overrightarrow{mbr}_{in,hx}]\), \([\overrightarrow{mbr}_{out,ly}\leq\overrightarrow{mbr}_{in,hy}]\), and \([\overrightarrow{mbr}_{out,hy}\geq\overrightarrow{mbr}_{in,ly}]\).
Compared to the _one to many_ approach that takes \(n_{out,c}\lceil\frac{n_{in,c}}{W}\rceil\) SIMD comparison instructions for evaluating the \([\overrightarrow{mbr}_{out,hx}\geq\overrightarrow{mbr}_{in,lx}]\) predicate, the _many to many_ approach takes at most \(\lceil\frac{n_{out,c}}{W}\rceil(\log_{2}F+1)\) comparison instructions for the same task. However, this comes with the additional cost of other SIMD instructions in the form of blend, masked add, and gather operations. It further reduces the number of broadcast instructions in the sense that any outer index child node that does not qualify, i.e., whose flip index remains undefined (refer to the first lane of the final flip indices vector in Figure 6), can be ignored for the next stage of processing. Notice that the optimization technique _O4_ proposed for the _one to many_ approach contrasts with the optimization technique discussed above for the _many to many_ approach. However, optimization _O3_ is also applicable to this approach.
**Batched shrinkage of an inner index node's MBR (O5).** To reduce the large number of SIMD broadcast and comparison instructions in O4, use gather instructions to selectively load the inner index node's child MBRs and compare them against a batch (\(W\)) of outer index child MBRs instead of one.
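A hypothetical sketch of the per-lane binary search behind this idea (Steps 2-4 above) is given below; the iteration schedule and boundary handling are simplified relative to the paper's \(\log_{2}F+1\) passes, and all names are ours:
```
// Per-lane binary search of 16 outer MBR.high_x values over the inner
// node's MBR.low_x array (sorted ascending); F is a power of two.
#include <immintrin.h>

__m512i flip_indices(__m512 out_hx, const float* in_lx, int F) {
    __m512i idx  = _mm512_set1_epi32(F / 2);   // gather indices: start middle
    __m512i flip = _mm512_set1_epi32(F);       // F = "all children qualify"
    for (int step = F / 4; step >= 1; step /= 2) {
        __m512 lx = _mm512_i32gather_ps(idx, in_lx, 4);           // step 2
        __mmask16 m = _mm512_cmp_ps_mask(out_hx, lx, _CMP_GE_OQ); // step 3
        flip = _mm512_mask_blend_epi32(m, idx, flip);   // step 4: where the
                                                        // probe fails, record
                                                        // idx as flip index
        __m512i vs = _mm512_set1_epi32(step);
        idx = _mm512_mask_add_epi32(idx, m, idx, vs);             // go right
        idx = _mm512_mask_sub_epi32(idx, (__mmask16)~m, idx, vs); // go left
    }
    // one final probe at the resting index
    __m512 lx = _mm512_i32gather_ps(idx, in_lx, 4);
    __mmask16 m = _mm512_cmp_ps_mask(out_hx, lx, _CMP_GE_OQ);
    return _mm512_mask_blend_epi32(m, idx, flip);
}
```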
Refer to Figure 6. Each node has exactly \(n_{c}=4\) child MBRs. The flip indices vector is set to all-undefined values as initially we consider all the child nodes to qualify and the outer index MBR vector contains all the MBRs from R1 to R4. The gather indices vector is set to all-\(2s\) (\(n_{c,in}/2=2\)) that generates the inner index MBR vector with all R3s' for the first iteration. Then, the outer index
MBR is compared with the inner index MBR vector to generate the bitmask (\(0b1001\)) that is fed to the blend instruction pipeline along with the gather indices vector \([2,2,2,2]\) to generate the flip indices vector for the next iteration, i.e., \([\times,\times,\times,\times]\xrightarrow{0b1001}[\times,2,2,\times]\). Similarly, the same bitmask is used to generate the gather indices for the next iteration, \([2,2,2,2]\xrightarrow{0b1001}[3,1,1,3]\). The same steps are repeated for the following iterations, as illustrated in the figure.
## 5. Experimental Evaluation
Experiments run on a server with Intel(R) Xeon(R) Gold 6330 CPU processors based on the Intel Ice Lake microarchitecture and Linux OS. The L1-D, L1-I, L2, and LLC cache sizes are 2.6MB, 1.8MB, 70MB, and 84MB, respectively. The DTLB cache contains 64 4KB pages. The machine supports 56 cores and 512-bit SIMD registers. The CPU clock frequency is 2.0 GHz. We compile with gcc 11.3.0 with the -funroll-loops and -O3 flags enabled. We use Linux's perf events API to collect the hardware performance counters. All query operations are evaluated on an R-tree index over 10M synthetically generated 2D points that follow a uniform distribution. We use 32-bit keys to represent each dimension of the data points. The default maximum fanout of the index is 64, and the default selectivity of range select is 0.1%.
### Spatial Select
#### 5.1.1. Effect of SIMD
Figure 7 gives the query processing time and hardware performance counters: the number of retired (i.e., executed) instructions, L1-D cache misses, LLC misses, and branch mispredictions of the R-tree's scalar and vectorized range select operator with maximum fanout 64 and dataset size 10M. We examine the logical and bitwise scalar variants of range select. For the vectorized implementation, we examine 3 variants for each data layout, D1 and D2. Variants V(D1) and V(D2) traverse the index recursively, Variants V(D1)-O1 and V(D2)-O1 traverse the index via a queue, and Variants V(D1)-O1+O2 and V(D2)-O1+O2 issue prefetch instructions along with the queue. Data Layout D2 with both optimizations O1 and O2 performs best, achieving a speedup of 2.97\(\times\) over the best-performing scalar variant. It reduces the number of instructions 3.12\(\times\), LLC misses 2.18\(\times\), and branch mispredictions 18.30\(\times\). These factors boost the performance of the vectorized implementation. Even the worst-performing vectorized variant, Layout D2 with no optimizations, outperforms the best-performing scalar variant by 1.91\(\times\). One consistent aspect of all vectorized variants with no prefetching is that they experience more LLC misses than the scalar variants, e.g., Layout D2 with no prefetching, V(D2)-O1, incurs 6.70\(\times\) more LLC misses than the scalar implementation with bitwise operators.
Between the 2 scalar variants, the one with logical operators performs better. Even though the introduction of bitwise operators reduce the number of branch mispredictions by 1.10\(\times\), it comes at the cost of evaluating all the conditions of the select predicate, i.e., resulting in more instructions (1.27\(\times\)). Thus, the benefit of the reduced number of branch mispredictions cannot mitigate the overhead due to the increased number of retired instructions.
#### 5.1.2. Effect of optimizations
O1 reduces query latency by 1.32\(\times\) and 1.40\(\times\) for Data Layouts D1 and D2, respectively. O1 avoids recursion and uses one instruction to enqueue the addresses of up to 8 index nodes, thus reducing the number of retired instructions by up to 2\(\times\) for both data layouts. Also, it reduces branch mispredictions by 14.80\(\times\) and 5.71\(\times\) for Layouts D1 and D2, respectively, compared to the partially vectorized variant (V). But the introduction of the queue worsens cache performance, as it results in 1.52\(\times\) and 1.73\(\times\) more LLC misses for the two data layouts, respectively. This is expected, as it requires an extra lookup to dequeue the address of the next qualifying index node. To mitigate the effect of bad cache performance, when O2 is applied on top of O1, it further improves query performance by 11.14% and 10.46% by reducing the LLC misses by 13.51\(\times\) and 10.20\(\times\) for Layouts D1 and D2, respectively. This reduced number of LLC misses comes at the cost of an increased number of retired instructions, i.e., 1.68\(\times\) for Layout D1 and 1.43\(\times\) for Layout D2, in the form of software prefetch instructions. This prefetching scheme not only improves over the vectorized variants that suffer from heavy LLC misses, it outperforms the scalar versions as well in terms of LLC misses. Compared to the scalar (logical) select operator, the prefetching-enabled vectorized operator exhibits 2.80\(\times\) and 2.18\(\times\) fewer LLC misses for storage Layouts D1 and D2, respectively.
#### 5.1.3. Effect of node layouts
Between the 2 node layouts, D2 outperforms D1 by only 3.62%, with both optimizations, O1 and O2, enabled. After optimizing for LLC cache misses, L1-D misses become the bottleneck for range selects on the in-memory R-tree (cf. Figure 7(a)). This is why D2 slightly outperforms D1, as it shows better L1-D cache performance despite the larger number of retired instructions. D2 has 1.06\(\times\) fewer L1-D misses than D1. If we exclude prefetching and focus on the prefetching-disabled vectorized variants, i.e., the partially vectorized implementation, D2 has better LLC performance with 1.18\(\times\) fewer cache misses.
Figure 6. Spatial Join - _Many_ to _Many_ Comparison for D1. Each row is a different iteration.
Figure 7. Spatial select: Effect of SIMD and optimizations. (Dataset size = 10M, Maximum fanout = 64, Selectivity = 0.1%)
#### 5.1.4. Effect of maximum fanout
As the maximum fanout of the R-tree increases, the performance of range select improves until it plateaus at fanout 64, after which performance starts to degrade. This holds for all scalar and vector implementations. The number of retired instructions, L1-D cache misses, and DTLB misses follow the same trend; LLC misses and branch mispredictions are the exception (Figure 9). Excluding the range select variants (V-O1+O2) with software prefetching, all other variants with higher maximum fanout have fewer LLC misses. As node size increases, the number of nodes probed by a range select shrinks, resulting in fewer cold cache misses. In addition, larger node sizes enable the hardware prefetcher to fully kick in, as the data addresses to be prefetched are more predictable, and more child MBRs located contiguously in memory are prefetched into the cache ahead of their use.
The fanout impacts the performance of the optimizations. For indexes with smaller fanout, as observed in Section 5.1.2, the incremental introduction of O1 and O2 improves query performance over the partially vectorized technique. However, as node size increases, the effect of both optimizations starts to diminish; e.g., from maximum fanout 1024 onwards, prefetching actually decreases query performance, and O1 outperforms O1+O2. From maximum fanout 512 onwards, the partially vectorized operator performs better. Even though prefetching still reduces the number of LLC misses for these indexes with larger fanouts, the increased number of retired instructions and L1-D cache misses outweigh the benefits gained from it. Notice that prefetching increases the number of L1-D cache misses for indexes with larger fanout. The reason is that we use the hint _MM_HINT_T0 when issuing prefetch requests to bring node data that will be required in the future into the L1-D cache. With larger node sizes, this evicts active node data that is being worked on.
From fanout 512 onwards, the partially vectorized operator outperforms its vectorized counterparts with the optimizations by almost \(1.20\times\). As node size increases, the probability that the addresses of the qualifying entries get enqueued with the same instruction decreases; the enqueue then resembles a normal store instruction, but with increased latency, degrading performance (Figure 9(a)).
#### 5.1.5. Effect of selectivity
The observations made in Sections 5.1.1 to 5.1.4 remain valid for varying selectivity of the range select operation. Figure 10(b) compares the performance of the different optimization techniques and data layouts under varying selectivity.
### Spatial Join
#### 5.2.1. Effect of SIMD
Figure 11 gives the performance of the scalar and vectorized implementations of R-tree spatial join in terms of query latency and hardware performance counters. The maximum index fanout is 64, and the data size is 10M points. We examine 2 variants of the scalar implementation: one with no optimizations (S-D0) and one with O3, S-D0(O3), where the index is sorted on one of the MBR keys. Similarly, we examine 7 variants of the vectorized implementation: 4 and 3 for Data Layouts D1 and D2, respectively. O4 and O5 are orthogonal to each other, hence only one of them can be applied together with O3 for Data Layout D1. For Data Layout D2, it is not possible to apply O5: for O5 to take effect, the consecutive elements of an index node must be sorted, but Data Layout D2 packs \(MBR.low_{x}\) and \(MBR.high_{x}\) consecutively, so the index node can be sorted on only one of \(MBR.high_{x}\) or \(MBR.low_{x}\).
Figure 11 illustrates that all 7 SIMD variants of the join operator outperform the scalar variants. At worst, the vectorized implementation (V-D1) achieves a \(2.12\times\) speedup over the best-performing scalar variant. The speedup increases up to \(5.53\times\) for the best-performing vectorized implementation, i.e., layout D1 with O3 and O5 (V-D1+O3+O5). A reduced number of executed instructions and branch mispredictions contribute to this speedup: compared to S-D0(O3), variant V-D1(O3+O5) executes \(7.55\times\) fewer instructions and incurs \(14.50\times\) fewer branch mispredictions. The cache performance of these vectorized implementations, however, is worse. The best case (V-D1+O3+O5) incurs \(1.62\times\) and \(3.00\times\) more L1-D and LLC cache misses, respectively. Its DTLB cache performance is also \(2\times\) worse than that of S-D0(O3).
Figure 8. An approximation of the clock cycles breakdown (Dataset size = 10M, Max fanout = 64, Selectivity = 0.1%)
Figure 9. Spatial select: Effect of maximum fanout on h/w performance counters. (Dataset size = 10M, Selectivity = 0.1%)
#### 5.2.2. Effect of optimizations
Out of the 3 optimizations for spatial join, O3 is the most effective. It reduces the effective size of the outer index node by pruning outer index child nodes, which positively impacts all hardware performance counters. For the scalar version it improves query latency by 1.28\(\times\), while for the vectorized layouts D1 and D2 the speedup is 1.50\(\times\) and 1.49\(\times\), respectively. We can achieve an additional 1.00\(\times\) and 1.31\(\times\) improvement in query latency by applying O4 resp. O5 on top of O3 for layout D1, which prunes multiple inner index child nodes.
Introducing O4 on top of O3 requires executing additional branches to check whether the pruning condition is satisfied. This exposes the processor to further speculation, resulting in more branch mispredictions and hence more executed instructions; e.g., O4 incurs 1.85\(\times\) more branch mispredictions for D1 and 2.84\(\times\) for D2. However, these pruning strategies reduce the data footprint of the query, i.e., less data needs to be fetched from memory. This can be observed in the improved number of cache misses of O3 over no optimizations, and of O4+O3 and O5+O3 over O3. Hence, for O3+O4 there is a trade-off between the number of instructions executed and the number of cache misses. For D1, the performance remains the same as with O3 alone, while for D2 it degrades (1.11\(\times\)).
For D1, although O3+O5 incurs more branch mispredictions than O3, it executes 1.80\(\times\) fewer instructions than O3. This is due to the _many to many_ comparison strategy, which requires fewer comparison instructions and prunes the inner index node early. Hence, the performance gain of O3+O5 over O3+O4 is 1.32\(\times\) in terms of query latency for layout D1. A better instruction count, cache performance, and speculation behavior contribute to this speedup.
#### 5.2.3. Effect of node layouts
Contrary to range selects, in-memory spatial join is CPU-bound (cf. Figure 8(b)). Generally, the data layout with the lower retired-instruction count outperforms the other. Thus, D1 outperforms D2 in all scenarios of spatial join.
#### 5.2.4. Effect of maximum fanout
The common trend is that, as the maximum fanout of the R-tree increases, the performance of spatial join degrades significantly, irrespective of the use of SIMD or the optimization techniques; e.g., for the best-performing vector algorithm, V-(D1)+O3+O5 on an R-tree of 10M points, a join is 54.90\(\times\) slower for maximum fanout 2048 than for maximum fanout 64. The increase in the number of instructions executed and in L1-D, L1-I, and LLC misses causes this degradation (Figure 12).
The trends in Sections 5.2.1 and 5.2.2 for an R-tree with fanout 64 hold for the other fanouts, with a few exceptions. Node layout D1 with O3 (V-D1+O3) slightly outperforms the other D1 variants for smaller fanouts, i.e., 16 and 32: the clock cycles saved by minimizing cache misses cannot outweigh the increase in the number of instructions due to the additional pruning strategy. Similarly, V(D1)-O3+O4 outperforms V(D1)-O3+O5 for smaller fanouts on D1. To compensate for the reduction in comparison instructions, O5 requires multiple costly gather, permute, and blend instructions. Due to the logarithmic nature of this strategy, the savings only show when the fanout is sufficiently large, i.e., starting from 64.
## 6. Related Work
Figure 10. Comparing node layouts and optimizations for spatial select and join. (Default max fanout = 64, selectivity = 0.1%)
Figure 11. Spatial join: Effect of SIMD and optimizations. (Dataset size = 10M, Maximum fanout = 64)
Figure 12. Spatial join: Effect of maximum fanout on h/w performance counters. (Dataset size = 10M)
**Spatial Operators.** A large body of work studies query processing in the context of spatial databases. (Beng et al., 2017; Wang et al., 2017) study the nearest-neighbor (NN) and k-nearest-neighbor (kNN) queries in spatial databases following depth-first and best-first approaches, respectively. Several spatial join algorithms exist, and differ based on whether both (Bellez and Cheung, 2019), only one (Bellez and Cheung, 2019; Chen et al., 2019), or none (Bellez and Cheung, 2019; Chen et al., 2019; Chen et al., 2019) of the input relations are indexed. While (Bellez and Cheung, 2019) traverses both indexes synchronously, (Bellez and Cheung, 2019) follows a plane-sweep approach, sweeping through the query rectangles and data points that are sorted on one of the dimensions. (Chen et al., 2019) partitions the two spatial datasets into the same grid and extends (Bellez and Cheung, 2019) to perform the join operation. In contrast to our work, none of the algorithms proposed in the spatial databases literature use SIMD to vectorize the algorithms.
**SIMD DB operators.** An extensive list of vectorized operators exists in the database literature to utilize the SIMD capabilities of the hardware, ranging from scan (Han et al., 2017; Chen et al., 2019) and join (Bellez and Cheung, 2019; Chen et al., 2019; Chen et al., 2019) to compression (Chen et al., 2019), sorting (Chen et al., 2019; Chen et al., 2019; Chen et al., 2019), and bloom filters (Chen et al., 2019). Some of these operators exhibit linear access patterns, e.g., scan and sorting, while others, e.g., bloom filters, exhibit random access patterns. Our work falls in the category of random-access vectorized operators.
**SIMD and prefetching in tree indexes.** There exists a branch of work with the same philosophy as ours that designs index node layouts for a limited number of index operations, i.e., tree traversal and search, to benefit from SIMD vectorization, e.g., FAST (Chen et al., 2019), VAST (Chen et al., 2019), ART (Chen et al., 2019). (Chen et al., 2019) studies prefetching in the context of SIMD to reduce cache misses and enhance the benefits gained from vectorization, while (Chen et al., 2019) studies prefetching in the context of tree indexes, i.e., the B+ tree. In this paper, we study both SIMD and prefetching in the context of the R-tree. (Chen et al., 2019) proposes multiple partially vectorized search algorithms that traverse tree-like indexes, e.g., B+ tree and Quad tree, using SIMD instructions. In contrast, our proposed algorithms are fully vectorized.
## 7. Conclusion
In this paper, we vectorize spatial range select and join operators, and investigate how spatial operators for an in-memory R-tree benefit from SIMD vectorization. The key findings can be summarized as follows.
* Vectorized range select operator outperforms the best performing scalar variant from \(2\times\) to \(4\times\).
* Vectorized spatial join outperforms the best performing scalar variant from \(4\times\) to \(9\times\).
* Vectorized select can benefit from avoiding recursion (O1) and prefetching (O2) by up to \(1.63\times\) and \(1.84\times\), respectively.
* Vectorized join can benefit from slicing the outer index node (O3) by up to \(1.63\times\).
* Shrinking the MBR of the inner index node _in batches_ (O5) can speed up the vectorized join by up to \(2.09\times\).
* Data Layout D1 is favorable for CPU-bound operators, e.g., join, whereas Data Layout D2 is favorable for memory-bound operators.
* The vectorized R-tree with smaller maximum fanout performs better than the one with larger fanout.
## 8. Acknowledgements
Walid G. Aref acknowledges the support of the National Science Foundation under Grant Number IIS-190216.
|
2309.09833 | Predictive Uncertainty-based Bias Mitigation in Ranking | Societal biases that are contained in retrieved documents have received
increased interest. Such biases, which are often prevalent in the training data
and learned by the model, can cause societal harms, by misrepresenting certain
groups, and by enforcing stereotypes. Mitigating such biases demands algorithms
that balance the trade-off between maximized utility for the user with fairness
objectives, which incentivize unbiased rankings. Prior work on bias mitigation
often assumes that ranking scores, which correspond to the utility that a
document holds for a user, can be accurately determined. In reality, there is
always a degree of uncertainty in the estimate of expected document utility.
This uncertainty can be approximated by viewing ranking models through a
Bayesian perspective, where the standard deterministic score becomes a
distribution.
In this work, we investigate whether uncertainty estimates can be used to
decrease the amount of bias in the ranked results, while minimizing loss in
measured utility. We introduce a simple method that uses the uncertainty of the
ranking scores for an uncertainty-aware, post hoc approach to bias mitigation.
We compare our proposed method with existing baselines for bias mitigation with
respect to the utility-fairness trade-off, the controllability of methods, and
computational costs. We show that an uncertainty-based approach can provide an
intuitive and flexible trade-off that outperforms all baselines without
additional training requirements, allowing for the post hoc use of this
approach on top of arbitrary retrieval models. | Maria Heuss, Daniel Cohen, Masoud Mansoury, Maarten de Rijke, Carsten Eickhoff | 2023-09-18T14:52:28Z | http://arxiv.org/abs/2309.09833v1 | # Predictive Uncertainty-based Bias Mitigation in Ranking
###### Abstract.
Societal biases that are contained in retrieved documents have received increased interest. Such biases, which are often prevalent in the training data and learned by the model, can cause societal harms, by misrepresenting certain groups, and by enforcing stereotypes. Mitigating such biases demands algorithms that balance the trade-off between maximized utility for the user with fairness objectives, which incentivize unbiased rankings. Prior work on bias mitigation often assumes that ranking scores, which correspond to the utility that a document holds for a user, can be accurately determined. In reality, there is always a degree of uncertainty in the estimate of expected document utility. This uncertainty can be approximated by viewing ranking models through a Bayesian perspective, where the standard deterministic score becomes a distribution.
In this work, we investigate whether uncertainty estimates can be used to decrease the amount of bias in the ranked results, while minimizing loss in measured utility. We introduce a simple method that uses the uncertainty of the ranking scores for an uncertainty-aware, post hoc approach to bias mitigation. We compare our proposed method with existing baselines for bias mitigation with respect to the utility-fairness trade-off, the controllability of methods, and computational costs. We show that an uncertainty-based approach can provide an intuitive and flexible trade-off that outperforms all baselines without additional training requirements, allowing for the post hoc use of this approach on top of arbitrary retrieval models.
Mitigating bias, Fairness, Uncertainty, Utility-fairness trade-off
Our method reduces the impact of biased documents while adhering to the PRP as closely as possible, intervening only in places where the ranking model was not very certain to begin with.
Additionally, we introduce an entirely post hoc uncertainty quantification procedure, based on Laplace approximation, that allows PUFR to approximate the uncertainty for any off-the-shelf model without access to the training data or optimization procedure. This is in contrast to past work that requires a specific training regime to produce the uncertainty scores for each candidate (Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020).
**Motivating example.** In Fig. 1, we visualize our approach to predictive uncertainty-based fairness, PUFR. In this example, the objective is to promote the unbiased documents (marked in green) so that they appear at the top of the ranked result. We start by considering not only the mean ranking score but also the score distribution (uncertainty), as visualized with the cross resp. curve in Fig. 1a. We choose confidence intervals relative to the standard deviation, within which we allow PUFR to adjust the score of each document, as can be seen in Fig. 1b. Depending on whether a document is biased or not, we increase the score within this confidence interval if the document is unbiased, or decrease it otherwise, as visualized with the green resp. red crosses in Fig. 1c. As the confidence intervals of the second (D2) and third (D3) documents _intersect_, this changes the order of these scores. After re-ranking with respect to the newly obtained scores, the protected document D3 has swapped places with the non-protected document D2, as seen in Fig. 1d. As the computational cost of PUFR is minimal, developers and users have the freedom to modify the trade-off between utility and fairness at little cost for their use cases.
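As a small worked example of this swap, the scores below are invented to mimic Fig. 1; adjusting each score by one standard deviation (up for the unbiased document, down for the biased ones) is already enough to reorder D2 and D3:

```
# Invented scores mirroring Fig. 1: doc -> (mean, std, unbiased?)
docs = {"D1": (0.90, 0.02, False), "D2": (0.70, 0.03, False),
        "D3": (0.65, 0.06, True)}
alpha = 1.0
adjusted = {d: mu + alpha * sd if unbiased else mu - alpha * sd
            for d, (mu, sd, unbiased) in docs.items()}
print(sorted(adjusted, key=adjusted.get, reverse=True))
# -> ['D1', 'D3', 'D2']: D3's interval overlaps D2's, so they swap,
#    while D1's interval overlaps neither, so the top stays in place.
```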
**Our contributions.** We summarize our contributions as follows:
* We introduce the notion of uncertainty-based fair ranking and analyze the potential of using the model uncertainty w.r.t. the ranking scores for bias mitigation.
* We define PUFR, an intuitive re-ranking approach that takes as input the ranking score distribution and calculates new ranking scores that can be used to create a less biased ranked list, while still preserving some certainty guarantees.
* We compare PUFR to several in- and post-processing bias mitigation methods and show that it outperforms all baselines, while being computationally much less expensive than some of them. Moreover, we demonstrate that PUFR is easily controllable with respect to the trade-off between fairness and utility, making it practical for use in real-life ranking applications.
## 2. Related Work
### Uncertainty in ranking
Zhu et al. (Zhu et al., 2020) introduce the notion of considering a model's confidence when ranking documents. The authors view the confidence of a score based on the probabilistic model's own estimate - the variance. Alternatively, we can assume a Bayesian perspective that considers how well the training data support the current model. As this approach does not rely on a probabilistic ranking model, it complements current ranking regimes. Penha and Hauff (Penha and Hauff, 2019) first introduce this notion of uncertainty into conversational retrieval by incorporating dropout into a BERT architecture at inference time. The ranking score is then modified by an uncertainty measure to improve the final re-ranking. Cohen et al. (Cohen et al., 2019) suggest a similar approach for ad hoc retrieval where only the last layer's uncertainty is measured to offset both the complexity of a neural model and the size of the document set with similar re-ranking improvements. Yang et al. (Yang et al., 2020) extend the above work by leveraging the uncertainty estimate to improve the exploration of an online learning to rank model. Rather than performing uncertainty-aware re-ranking, the uncertainty estimate is used to take an optimistic perspective on candidate documents to reduce the exploitation bias commonly found in an online learning to rank setting.
### Mitigating bias and fair ranking
Recent years have seen a broad range of research on uncovering and mitigating biases in different information retrieval systems, such as biases in talent pool (Krause et al., 2019) and resume search (Krause et al., 2019) and the reinforcement of gender biases through search engines (Krause et al., 2019). Rekabsaz and Schedl (Rekabsaz and Schedl, 2020) explore the extent to which documents with gender bias can be found in the retrieved results of different neural retrieval models. Other work focuses more on the mitigation of such biases (e.g., (Zhu et al., 2020; Wang et al., 2020), where models are optimized to contain fewer biased documents for queries that are inherently unbiased. Rekabsaz et al. (Rekabsaz and Schedl, 2020) use adversarial learning to remove gender bias from the trained model, Zerwas et al. (Zerwas et al., 2020) optimize the query representation from a previously trained architecture instead.
Figure 1. Visualization of our method PUFR. Next to the mean ranking scores, PUFR also considers the score distribution obtained from the ranking model (1a). Intersecting confidence intervals (1b) allow us to adjust the scores (1c) such that an unbiased document, visualized in green, swaps places with a higher-ranked, biased document (1d).
Mitigating biases is often framed as a fairness task. Zehlike et al. (Zehlike et al., 2019; Zehlike et al., 2019) introduce a classification framework for fair ranking approaches, which we partly use to position our work in the existing fair ranking literature. As opposed to score-based fairness (Zerwas et al., 2020; Zehlike et al., 2019; Zerwas et al., 2020; Wang et al., 2020), where the ranking scores are assumed to be known, in this work we focus on supervised learning to rank, where the ranking scores need to be determined with a ranking model.
A large body of work focuses on _merit-based_ fairness, where the goal is to distribute the user attention in some way proportional to the merit of either individual documents (individual fairness (e.g., 19; 25; 38)) or groups of documents (e.g., 3; 44; 39). In contrast, other work (e.g., 48; 50) focuses on _representational_ fairness, which is concerned with removing historical biases from the ranking or representing documents from different groups fairly w.r.t. some demographic within the ranking.
Independently of the notion of fairness, we differentiate between pre-processing (24), in-processing (2; 3; 33; 39; 40; 48; 49; 53), and post-processing (11; 22; 50) approaches to fairness interventions. These methods come into play either before the model is being trained, adjust the model or training process itself, or intervene after the model has been trained and the ranking scores are determined.
PUFR is a _post-processing_ approach that aims to mitigate bias (_representational_ unfairness) as opposed to prior in-processing work on the same task (33; 53). While other work on post-processing approaches (such as, e.g., 6; 48) intervene at the ranked output, our approach instead adjusts the score distribution. What distinguishes PUFR from prior work on fair ranking is that we aim to exploit the uncertainty that the ranking model has on the predicted relevance scores to increase the fairness of the rankings.
### Uncertainty in fair ranking
Prior work at the intersection of uncertainty and fairness can be grouped into two categories. The first category deals with uncertainty introduced when group membership cannot be determined with confidence. Ghosh et al. (2017) discover that, when group labels are inferred from data, the usage of fair ranking methods can invalidate fairness guarantees and even increase the disadvantage that protected groups might receive. Mehrotra and Vishnoi (2018) follow up on this work and develop a fair ranking framework for cases where socially-salient group attributes cannot be determined with certainty but are assumed to follow a given probability distribution.
The other category, which contains, among others, our work, considers the predictive uncertainty stemming from imperfect prediction of merits and ranking scores. Yang et al. (2017) are concerned with uncertainty in the relevance estimation. Unlike our work, the authors study an online setting where the relevance estimation is constantly updated. We target a static setting, not aiming to reduce the uncertainty for some exploration strategy but to exploit the uncertainty to obtain a better trade-off between fairness and utility.
Lastly, Singh et al. (2018) are concerned with uncertainty in merit due to observations of secondary attributes instead of directly observing the merit. The authors suggest a probabilistic fairness framework in the presence of such uncertainty. Their work defines a notion of fairness that takes the uncertainty in the merit prediction into account, while we exploit uncertainty to, for example, correct for historic biases in the data and ranking model.
In summary, where existing methods either ignore the predictive uncertainty of ranking scores, aim to reduce it, or take it into account when defining fairness, our work is the first to harness uncertainty to improve the fairness-utility trade-off.
## 3. Method
We take an uncertainty-based approach to post hoc bias mitigation in ranking. We exploit the model's uncertainty over the predicted ranking scores to manipulate the ranking in a way that benefits documents that do not contain biases, which results in a fairer ranked list. By staying within a certain confidence range, we minimize the potential cost to utility. Following prior work (28; 33), we frame the task as a fair ranking problem.
Our method operates entirely through principled machinery and allows us to trade off user utility against fairness by adjusting a single coefficient. Furthermore, an existing ranker can be used as-is, without the need to retrain it, making it possible to use and adjust it for various levels of fairness at little additional cost.
Below, in Section 3.1, we start by defining our notation and the fair ranking task. In Section 3.2, we introduce our method PUFR that, assuming that the predictive uncertainty over the ranking scores is given, uses those uncertainty values to develop a fair ranking approach. Finally, in Section 3.3 we follow with a description of how to attain the uncertainty of a given deterministic ranking model over its scores at inference time.
### Notation and preliminaries
Given a query \(q\), we consider the task of ranking documents from a candidate set \(\mathcal{D}_{q}=\{d_{q,i}\}_{i}\) w.r.t. their relevance to \(q\). Regarding measured user utility only, an ideal ranked list would be ordered by decreasing document relevance. We assume a ranking model has been trained to order the documents w.r.t. their relevance to the query by predicting relevance scores. Most rankers are deterministic, outputting only a single predicted relevance score, \(\mu_{q,i}\). In Section 3.3 we will describe how to approximate the uncertainty of predicted scores for such a model. We write \(\sigma_{q,i}\) for the standard deviation of the predicted score \(\mu_{q,i}\) for document \(d_{q,i}\). Note that we implicitly assume the score distribution to be Gaussian.
Prior work has shown that models that are trained solely to maximize measured utility can be biased and produce unfair representations in the resulting ranked lists (34). In this work, as an additional objective, we aim to decrease the presence of biased documents in the ranked lists. We treat the task as a fair ranking problem, where we want to increase the exposure of the protected group \(\mathcal{D}_{q}^{P}\subset\mathcal{D}_{q}\) of documents without biases and decrease the exposure of the non-protected group \(\mathcal{D}_{q}^{N}\subset\mathcal{D}_{q}\) of documents that contain biases.
### PUFR: Uncertainty-aware fairness
In this section, we introduce our post-processing fairness intervention method **P**redictive **U**ncertainty based **F**air **R**anking, PUFR. The core idea of PUFR is to take advantage of the uncertainty of the model over the predicted ranking scores to adjust these scores proportional to the standard deviation of the predictive distribution for each document, allowing fairness adjustments with minimal cost to the utility. For now, we treat the score distribution for each document, \(\mathcal{N}(\mu_{q,i},\sigma_{q,i}^{2})\), as being given, but in Section 3.3 we describe how to obtain it for a deterministic ranker.
As the goal of PUFR is to mitigate bias and hence increase the fairness of the ranking system, PUFR accomplishes this by swapping some of the documents of the protected group, \(\mathcal{D}_{q}^{P}\), with higher ranked documents of the non-protected group, \(\mathcal{D}_{q}^{N}\). Since the uncertainty of the scores for the documents within the same group can differ greatly, this allows for a tuned adjustment of the ranking scores where swaps only occur in settings where there exists a reasonable chance of the documents being equally relevant, quantified by the model's uncertainty, \(\sigma_{q,i}\).
In other words, we allow PUFR to pick ranking scores that maximize fairness in intervals \([\mu_{q,i}-\alpha\cdot\sigma_{q,i},\mu_{q,i}+\alpha\cdot\sigma_{q,i}]\), without re-ordering the documents within the same group. Here, \(\alpha\) is a user-defined hyper-parameter that quantifies the chance of a utility violation when performing this procedure. A higher value of \(\alpha\) will result in a fairer ranking, but at the cost of less accurate predicted scores, and hence potentially a drop in utility.
As shown in Algorithm 1, PUFR initially loops over all documents of the protected group \(\mathbf{d}_{q,i}\in\mathcal{D}_{\mathbf{q}}^{P}\), sorted w.r.t. decreasing ranking score, \(\mu_{q,i}\), see line 1. PUFR then increases the score as much as possible while staying within the confidence bounds, i.e.,
\[\tilde{\mu}_{q,i}=\mu_{q,i}+\alpha\cdot\sigma_{q,i}. \tag{1}\]
See line 2. To avoid intra-group swapping of documents, modified ranking scores are bounded by the lowest score of any higher ranked document within the same group:
\[\tilde{\mu}_{q,i}\leq\min_{\mathcal{D}_{\mathbf{q}}^{P},j\leq i}(\tilde{\mu}_{q,j }), \tag{2}\]
where \(j,i\) are rank positions, see line 3. Equivalently, for all documents of the non-protected group, \(\mathbf{d}_{q,i}\in\mathcal{D}_{\mathbf{q}}^{N}\), we decrease the score as follows, this time starting with the document with the lowest ranking score (see line 5):
\[\tilde{\mu}_{q,i}=\mu_{q,i}-\alpha\cdot\sigma_{q,i}, \tag{3}\]
see line 6. Again, to avoid the same intra-group swapping for the non-protected group, we lower bound the adjusted scores by the maximum score of all documents in the same group that are ranked lower in the original ranking:
\[\tilde{\mu}_{q,i}\geq\max_{\mathcal{D}_{\mathbf{q}}^{N},j\geq i}(\tilde{\mu}_{q,j }). \tag{4}\]
See line 7. PUFR then uses these adjusted scores \(\tilde{\mu}_{q,i}\) to re-rank the documents (line 9).
Note that even though we define PUFR for a setting with only one protected document group, it can be extended to several protected groups that need to receive different treatments. Our approach allows us to adjust the strength of the score adjustment individually for each group, e.g., enabling a stronger correction for more disadvantaged groups, by allowing a group-wise choice of the hyper-parameter \(\alpha_{q}\).
Many pre-trained ranking models do not output the uncertainty scores \(\sigma_{q,i}\) that PUFR employs to reorder rankings. Thus we need a way to approximate the uncertainty scores \(\sigma_{q,i}\) in a post-processing manner. Next, we show how to do this with the help of Laplace approximation.
### Attaining uncertainty scores from a deterministic ranking model
The goal is to attain effective uncertainty scores, \(\sigma\), from a ranking model at inference time; conventional uncertainty approaches fail to satisfy this condition (Hewis and Wasserman, 2011; Goyal et al., 2017; Goyal et al., 2017). Past approaches have relied on a specific training regime - Monte Carlo (MC) dropout - to achieve an effective Bayesian model. As PUFR is a post hoc method, we leverage an alternative form of uncertainty, _Laplace approximation_, that can be applied to any already trained ranking model.
The standard approach to training a deterministic model \(f\), where there exists a single output for each input, is to learn a set of parameters, \(\theta_{\text{MAP}}\), that minimizes the loss function
\[\mathcal{L}(\theta)=-\ln P(\theta\mid\mathcal{D})+r(\theta), \tag{5}\]
where \(r\) is some regularization on \(\theta\) and \(\mathcal{D}\) is the training dataset. While this is a probabilistic interpretation of the loss function and optimization process, prior work has mapped margin-based ranking losses to this framework (Hewis and Wasserman, 2011). At inference time, the model, \(f\), is evaluated using the single point \(\theta_{\text{MAP}}\), which minimizes \(\mathcal{L}(\theta)\). Alternatively, a Bayesian perspective captures the uncertainty of the model by considering all possible \(\theta\) values weighed by how likely they are based on the training data using the posterior \(P(\theta\mid\mathcal{D})\), with \(\theta_{\text{MAP}}\) as the most likely value. This produces a distribution over outputs, of which the variance \(\sigma^{2}\) represents the uncertainty present within the model and \(\mathcal{D}\):
\[P(y\mid x,\mathcal{D})=\int_{\theta}P(y\mid x,\theta)P(\theta\mid\mathcal{D})d\theta, \tag{6}\]
with \(x\) as the input and \(y\) as the output of the model. Unfortunately, capturing this distribution is intractable for all but the smallest models due to the nature of computing the posterior \(P(\theta\mid\mathcal{D})\). There exists prior work that approximates this distribution using MC Dropout (Hewis and Wasserman, 2011; Goyal et al., 2017; Goyal et al., 2017; Goyal et al., 2017). However, this requires a specific training regime, which would prevent the general application of PUFR to off-the-shelf architectures or previously trained ranking models.
```
0: mean ranking scores \(\{\mu_{q,i}\}_{d_{q,i}\in\mathcal{D}_{\mathbf{q}}}\), standard deviation \(\{\sigma_{q,i}\}_{d_{q,i}\in\mathcal{D}_{\mathbf{q}}}\), control parameter \(\alpha\), groups \(\mathcal{D}_{\mathbf{q}}^{P}\), \(\mathcal{D}_{\mathbf{q}}^{N}\)
1:for all \(d_{q,i}\in\mathcal{D}_{\mathbf{q}}^{P}\), sorted by decreasing \(\mu_{q,i}\), do
2:\(\tilde{\mu}_{q,i}\leftarrow\mu_{q,i}+\alpha\cdot\sigma_{q,i}\)
3:\(\tilde{\mu}_{q,i}\leftarrow\min_{\mathcal{D}_{\mathbf{q}}^{P},j\leq i}(\tilde{\mu}_{q,j})\)
4:endfor
5:for all \(d_{q,i}\in\mathcal{D}_{\mathbf{q}}^{N}\), sorted by increasing \(\mu_{q,i}\), do
6:\(\tilde{\mu}_{q,i}\leftarrow\mu_{q,i}-\alpha\cdot\sigma_{q,i}\)
7:\(\tilde{\mu}_{q,i}\leftarrow\max_{\mathcal{D}_{\mathbf{q}}^{N},j\geq i}(\tilde{\mu}_{q,j})\)
8:endfor
9: Obtain ranking \(L\) by sorting documents \(d_{q,i}\in\mathcal{D}_{\mathbf{q}}\) with respect to scores \(\tilde{\mu}_{q,i}\)
10:return\(L\)
```
**Algorithm 1** Predictive Uncertainty based **Fair Ranking** (PUFR)
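A minimal NumPy rendition of Algorithm 1 might look as follows; this is a sketch rather than the authors' code, with the per-group clamping of lines 3 and 7 implemented via a running bound:

```
import numpy as np

def pufr_rerank(mu, sigma, protected, alpha):
    """Sketch of Algorithm 1. mu/sigma: per-document score mean and std;
    protected: boolean mask of the protected group; alpha: interval width.
    Returns document indices ordered by the adjusted scores."""
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    protected = np.asarray(protected, bool)
    adj = mu.copy()

    prot = np.flatnonzero(protected)
    bound = np.inf                              # running min of Eq. (2)
    for i in prot[np.argsort(-mu[prot])]:       # decreasing mu (line 1)
        adj[i] = min(mu[i] + alpha * sigma[i],  # line 2: push the score up
                     bound)                     # line 3: no intra-group swaps
        bound = adj[i]

    nonp = np.flatnonzero(~protected)
    bound = -np.inf                             # running max of Eq. (4)
    for i in nonp[np.argsort(mu[nonp])]:        # increasing mu (line 5)
        adj[i] = max(mu[i] - alpha * sigma[i],  # line 6: push the score down
                     bound)                     # line 7: no intra-group swaps
        bound = adj[i]

    return np.argsort(-adj)                     # line 9: re-rank
```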
**Using Laplace approximation for post-hoc uncertainty approximation.** We propose using Laplace approximations (LA), which can turn any conventionally trained deterministic model into a Bayesian model at inference time to produce the necessary \(\sigma\) values for PUFR (Goyal et al., 2017). LA encompass a family of approaches that fit a local Gaussian around the MAP estimate (5) via a second-order Taylor expansion of the log posterior:
\[\ln P(\theta\mid\mathcal{D})\approx\ln P(\theta_{\text{MAP}}\mid\mathcal{D})-\frac{1}{2}(\theta-\theta_{\text{MAP}})^{\intercal}\bar{H}(\theta-\theta_{\text{MAP}}), \tag{7}\]
where \(\bar{H}\) is the expected Hessian at \(\theta_{\text{MAP}}\). The key observation is that the right side only requires the deterministic model, \(\theta_{\text{MAP}}\),
to produce the log Bayesian posterior distribution on the left side. Then, to recover the full posterior, exponentiating both sides reveals the Gaussian functional form for \(\theta\),
\[\begin{split} P(\theta\mid\mathcal{D})&\approx P(\theta_{\text{MAP}}\mid\mathcal{D})\exp\left(-\frac{1}{2}(\theta-\theta_{\text{MAP}})^{\intercal}\bar{H}(\theta-\theta_{\text{MAP}})\right)\\ &\approx\mathcal{N}(\theta_{\text{MAP}},\bar{H}^{-1}).\end{split} \tag{8}\]
Thus, this approximation can take any twice differentiable off-the-shelf model and conveniently convert it to a Bayesian model at inference time by inverting the Hessian. While inverting to produce the covariance matrix is intractable for most models, we leverage past work by only inverting the last layers of a neural model to achieve actionable uncertainty estimates with near-zero cost (Beng et al., 2019; Chen et al., 2020) (Algorithm 2, lines 2-3). While there exists a closed form linearization of Eq. 8, we are able to achieve sufficient efficiency using Monte Carlo sampling to capture the predictive distribution \(P(y\mid x,f)\) by sampling from the Gaussian (line 5), \(\mathcal{N}(\theta_{\text{MAP}},\bar{H}^{-1})\)(Chen et al., 2020),
\[\begin{split} P(y\mid x,\mathcal{D})=&\int_{ \theta}P(y\mid x,\theta)P(\theta\mid\mathcal{D})d\theta\\ \approx&\frac{1}{N}\sum_{t=1}^{N}p(y\mid x,\theta_{t} ),\theta_{t}\sim\mathcal{N}(\theta_{\text{MAP}},\bar{H}^{-1}).\end{split} \tag{9}\]
Furthermore, as the covariance matrix \(H^{-1}\) is viewed as independent of the training process, we do not need to use the original loss function either (Srivastava et al., 2015). Lastly, for further efficiency, we exploit the property that the Hessian, \(H\), is equivalent to the Fisher information matrix, \(F\), at \(\theta_{\text{MAP}}\). As shown in Algorithm 2, we therefore approximate \(H\) by taking the diagonal of \(F\), which is a common approximation regime (line 3) (Srivastava et al., 2015; Srivastava et al., 2015).
After estimating \(\mathcal{N}(\theta_{\text{MAP}},\bar{H}^{-1})\) for the last layer of a neural model, we sample this distribution \(N\) times to produce \(N\) versions of the last layer, in order to produce \(\mu_{q,i}\) and \(\sigma_{q,i}^{2}\) as parameters of the predictive distribution \(P(y\mid x,\mathcal{D})=\mathcal{N}(\mu_{q,i},\sigma_{q,i}^{2})\) (lines 7-8). These parameters are then used by PUFR as described in Section 3.2 to debias the ranked list.
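The sketch below illustrates this post hoc procedure for a PyTorch `nn.Linear` scoring head; the binary-relevance log-likelihood used to form the diagonal Fisher is our own illustrative assumption (the text above only prescribes a diagonal-Fisher Laplace approximation over the last layer, followed by the MC sampling of Eq. 9):

```
import torch

def laplace_last_layer(head, feats, labels, n_samples=50, prior_prec=1.0):
    """Diagonal-Fisher Laplace approximation over a trained nn.Linear
    scoring head (theta_MAP), fitted post hoc from held-out features."""
    w = torch.cat([head.weight.detach().flatten(), head.bias.detach()])
    fisher = torch.zeros_like(w)
    for x, y in zip(feats, labels):          # accumulate diag(F) ~ diag(H)
        x1 = torch.cat([x, torch.ones(1)])   # fold the bias into the weights
        p = torch.sigmoid(w @ x1)            # assumed Bernoulli likelihood
        g = (p - y) * x1                     # per-example gradient
        fisher += g * g
    var = 1.0 / (fisher + prior_prec)        # diagonal of H^-1

    def predict(x):
        """MC estimate of Eq. (9): returns (mu, sigma) for one document."""
        x1 = torch.cat([x, torch.ones(1)])
        theta = w + var.sqrt() * torch.randn(n_samples, w.numel())
        scores = theta @ x1                  # N sampled last layers
        return scores.mean().item(), scores.std().item()

    return predict
```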
## 4. Experimental Setup
We aim to answer the following research questions with our experiments: (RQ1) Based on empirical findings, are the uncertainty intervals around the ranking scores of a Bayesian ranking model sufficiently intersecting to allow for a re-ranking of documents, while staying within reasonable certainty bounds? (RQ2) Can PUFR be used to reduce the number of biased documents that are ranked on top of the list more effectively than prior methods? (RQ3) How do the various methods for fairness interventions compare with respect to controllability and computational efficiency?
There are four properties that we consider relevant to answer these questions: (i) We want to improve the fairness within the rankings. (ii) We want to do so with the least loss in utility possible. (iii) The next property is the controllability of the approach at hand. A human user/engineer should be able to easily adjust the trade-off between fairness and utility to fit their purposes. (iv) The last property is computational efficiency since this can also play a role when choosing a fairness method.
Next, we detail our experimental design. Then we discuss the evaluation metrics that we use to measure the four properties mentioned above (Section 4.2) and the dataset that we use (Section 4.3). Section 4.4 summarizes the baselines that we compare against.
### Experimental design
We perform our experiments on a web search task, where for each query, the objective is to rank documents that might be relevant to that query. In addition to the requirement of being relevant to the user, the ranked list should not contain any gender biases for queries that are naturally non-gendered (Srivastava et al., 2015). Therefore, we consider only non-gendered queries and expect a fair ranking model to not promote any documents that are biased towards some gender. See Section 4.3 for a discussion on the data used for this task.
To get an effective impression of the trade-off between utility and fairness, we perform a range of experiments per baseline, by varying some hyperparameter \(\alpha\). We define this hyperparameter individually for each baseline, based on the respective underlying algorithms (see Section 4.4).
To demonstrate the efficacy of PUFR on current search models, we use the BERT ranker introduced by Nogueira and Cho (Nogueira and Cho, 2019) as it represents a common language model architecture in current ranking regimes (Srivastava et al., 2015; Chen et al., 2020; Chen et al., 2020; Li et al., 2020). Due to hardware constraints, we use Bert-Mini (Srivastava et al., 2015), a distilled four-layer version of BERT that performs comparably to the full model in search and other related tasks. We note that in the case of uncertainty modeling, Cohen et al. (Cohen et al., 2020) demonstrate that a distilled model results in less expressive ranking uncertainty compared to larger variants of the same architecture on the same data. Thus, Bert-Mini represents a challenging setting and a conservative estimate of PUFR's performance.
To facilitate reproducibility of our work, all code and parameters are made available; see Section 7.
### Evaluation
User utility and fairness are measured per query. To get a single score to compare across methods, we report the mean over all queries. We measure significance with paired t-tests, where we treat the results of each query as one sample.
**User utility.** To measure user utility, we use the nDCG metric (normalized discounted cumulative gain). We use different cut-offs to measure the user utility in the top-10 documents, as well as for the first 100 documents.
**Fairness.** As discussed in Section 4.1, our task entails reducing the impact of strongly biased documents in the presented rankings. Therefore, we use the nFaiRR metric introduced by Rekabsaz et al. (2019) as a measure of fairness. For a ranked list \(L\), the FaiRR score at cut-off \(k\) is defined as:
\[\text{FaiRR@}k(L)=\sum_{d_{i}:\,\text{rank}_{L}(d_{i})\leq k}n_{d_{i}}\cdot\frac{1}{\text{rank}_{L}(d_{i})}, \tag{10}\]
where \(\text{rank}_{L}(d_{i})\) denotes the rank of candidate document \(d_{i}\) in \(L\), and the neutrality score \(n_{d_{i}}\in[0,1]\) is lower the more biased a document is. Since the range of attainable FaiRR scores depends on the distribution of neutrality scores of the candidate documents, we use the _normalized FaiRR score_ (nFaiRR) to make the results easier to interpret and better comparable among queries. For this, we normalize the FaiRR score by the highest FaiRR score attainable with the document candidates for this query, similar to how nDCG is calculated from DCG. In our experiments we measure nFaiRR at cut-off values of 10 and 50. We select a different cut-off than the utility measure (@100) so as to compare with reported values from the baseline evaluations.
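In code, FaiRR@k and its normalization might be computed as in the following sketch; for simplicity, the ideal ordering is taken over the same candidate list that is being evaluated:

```
def fairr_at_k(ranking, neutrality, k):
    """Eq. (10): ranking is a list of doc ids, neutrality maps id -> [0, 1]."""
    return sum(neutrality[d] / rank
               for rank, d in enumerate(ranking[:k], start=1))

def nfairr_at_k(ranking, neutrality, k):
    """Normalize by the best FaiRR@k attainable with the same candidates,
    i.e., the most neutral documents first (analogous to the ideal DCG)."""
    ideal = sorted(ranking, key=lambda d: neutrality[d], reverse=True)
    best = fairr_at_k(ideal, neutrality, k)
    return fairr_at_k(ranking, neutrality, k) / best if best > 0 else 0.0
```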
**Controllability.** We follow prior work (Rajaj et al., 2019), and focus on a qualitative analysis of the results by investigating the predictability of the utility-fairness trade-off when adjusting the controllable hyperparameter of each of the methods. An ideal approach should have small change in utility and fairness for a small change in \(\alpha\). To this end, we compare the plots in Fig. 6 below.
**Computational efficiency.** For computational efficiency, we measure the run time of our implementation for each approach. We acknowledge that method-specific performance optimization might be able to further improve on the run times observed for the generic implementations used here, but assume that at least a rough execution time comparison can be gleaned. We measure the run time of each query and report the mean run time in Table 1.
**Significance testing.** To test the significance of observed differences in evaluation scores, we perform two-tailed paired t-tests on the metrics, treating the results of an approach of each query as a measurement of the same random variable. In Table 1, we mark results with an asterisk if they are significantly different from those of PUFR.
### Dataset
The retrieval models that we use are trained on the MS MARCO Passage Retrieval collection (Rekabsaz et al., 2019). For evaluation, we use MSMARCOFair, a subset consisting of 215 queries from the validation set that are non-gendered in nature, i.e., not containing any words or concepts that could be attributed to some gender (Rajaj et al., 2019). However, the top candidate documents for these queries are highly associated with gender (Rajaj et al., 2019; Rajaj et al., 2019). We quantify the degree of gender bias for each document using the neutrality scores provided by Rekabsaz and Schedl (2019) in order to measure fairness. We define documents with neutrality score 1 as the protected group for the post-processing baselines and PUFR.
### Baselines
The baseline fairness intervention methods that we consider include the two in-processing approaches that have been introduced for the same bias mitigation task and dataset used here (Rajaj et al., 2019; Rajaj et al., 2019). Since PUFR is a post-processing approach, we add two commonly used post-processing fairness approaches that have been slightly adjusted to fit the task. Both post-processing baselines, as well as UNFAIR, use the mean scores \(\mu_{q,i}\) produced by Algorithm 2 in Section 3.3 for the BERT-based ranker (see Section 4.1) as ranking scores. For each baseline, the hyper-parameter \(\alpha\) that allows us to control the trade-off between utility and fairness is defined individually.
**UNFAIR.** The ranking resulting from ordering the documents with respect to the mean scores \(\mu_{q,i}\), without considering fairness.
**ADV.** The (in-processing) adversarial fairness optimization from (Rajaj et al., 2019), which shares the same underlying BERT re-ranking architecture as discussed in Section 4.1. However, training is done using an adversarial discriminator head that attempts to predict whether the document is gendered or neutral by optimizing a classification loss function. The gradient from this loss is reversed within the main BERT architecture, therefore moving the parameters away from regions that can effectively capture gender (Rajaj et al., 2019). We implement this model using the source code and suggested hyperparameters provided by the authors. The controlling hyperparameter \(\alpha\) (originally \(\lambda\)) is defined by the scale of the reversed gradient.
**CODER.** This (in-processing) baseline (Rajaj et al., 2019) is intended for dense retrieval architectures. The method directly optimizes the query representation from a previously trained architecture, TAS-B (Kirk et al., 2019), by jointly optimizing thousands of candidate documents in a list-wise manner. While improving overall ranking performance, the large candidate pool within a list-wise loss provides a stable and competitive way to incorporate fairness directly during training. We include this baseline not as a direct comparison with respect to ranking performance, but to provide context on how a direct list-based fairness optimization approach compares to methods that operate entirely within a post hoc framework when viewed from a utility-fairness trade-off perspective. Here, the hyperparameter \(\alpha\) (in the original paper \(\lambda_{r}\)) is defined as the regularization coefficient for the neutrality loss.
**CVXOPT.** A (post-processing) convex optimization approach similar to (Brock et al., 2019). For each query we optimize the ranking \(L\) for utility, measured by nDCG, under a constraint on the nFaiRR score, \(\text{nFaiRR}(L)\geq\alpha\). To keep computational costs within a reasonable range, we only re-rank the first 50 documents of each query.
**FA*IR.** A (post-processing) approach suggested in (Rajaj et al., 2019). We use a significance parameter of 0.1, as suggested in (Raj et al., 2019), and vary \(p\), the desired minimal proportion of documents with the protected attribute in the top-\(k\) for any value of \(k\). In the remainder of this paper we use \(\alpha:=p\), not to be confused with the significance parameter in the original paper, to match the other methods. For a fair comparison w.r.t. computational efficiency, we use an efficient implementation that pre-computes the required number of protected documents for each rank upfront via an iterative algorithm.
## 5. Experimental Results
We present and discuss answers to our research questions.
### Intersections of uncertainty intervals
Recall (RQ1): _Based on empirical findings, are the uncertainty intervals around the ranking scores of a Bayesian ranking model sufficiently intersecting to allow for a re-ranking of documents, while staying within reasonable certainty bounds?_ To answer (RQ1), we analyze the confidence intervals of the ranking scores. If the uncertainty intervals do not intersect much, the ranking model is very certain about the ordering of its ranking scores. In such a case, our approach, or any uncertainty-aware approach in general, would not be able to re-rank the documents within an acceptable utility bracket. Previous work has shown that ranking models tend to be very certain about the ranking scores of highly ranked documents (Bordes and McAllester, 2017), but certainty decays when going down the ranked list. We are interested in how much flexibility an uncertainty-aware fairness approach would have in swapping documents by allowing the ranking scores to take values in a given certainty interval \([\mu_{q,i}-\alpha\cdot\sigma_{q,i},\mu_{q,i}+\alpha\cdot\sigma_{q,i}]\) around the mean score value \(\mu_{q,i}\). Fig. 2 shows the median number of documents with intersecting confidence intervals (i.e., the median number of documents that the document at that rank could swap position with) for \(\alpha=1\) resp. \(\alpha=2\) standard deviations.
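The per-rank counts underlying Fig. 2 can be computed per query as in the following sketch (the figure then reports the median of these counts over all queries):

```
import numpy as np

def n_swappable(mu, sigma, alpha):
    """For each rank position, count how many other documents have a
    [mu - a*s, mu + a*s] interval intersecting this document's interval."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    order = np.argsort(-mu)            # order documents by mean score (rank)
    lo = (mu - alpha * sigma)[order]
    hi = (mu + alpha * sigma)[order]
    # intervals i and j intersect iff lo_i <= hi_j and lo_j <= hi_i
    overlap = (lo[:, None] <= hi[None, :]) & (lo[None, :] <= hi[:, None])
    return overlap.sum(axis=1) - 1     # exclude the trivial self-overlap
```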
Even for documents ranked at higher positions, there is flexibility to change the order of the ranking. For a confidence interval of 1 standard deviation, most documents in the top-10 each have at least 6 documents that they could swap rank with. If we look at confidence intervals of two standard deviations, this number increases to \(\sim\)10 documents that the document at rank 10 can swap places with. We therefore answer (RQ1) positively: the uncertainty intervals around the ranking scores of the Bayesian ranking model intersect sufficiently to allow for a re-ranking of documents, while staying within acceptable certainty bounds for utility.
Having confirmed that within the uncertainty of the model there is flexibility for an uncertainty-based fairness approach to change the order of documents, we address our second research question that asks whether the proposed approach can improve fairness.
### The fairness utility trade-off
Recall (RQ2): _Can PUFR be used to reduce the number of biased documents that are ranked on top of the list more effectively than prior methods?_ To answer this question we refer to Fig. 3 and 4, where we plot fairness on the x-axis against utility on the y-axis, for PUFR and the baselines discussed in Section 4.4, for different values of the respective hyper-parameter \(\alpha\) that controls the trade-off. In addition we use Table 1, where we compare the experimental outcomes with the best nFaiRR value for a given minimum utility requirement.
**Utility-fairness trade-off.** In Fig. 3 and 4, we observe that the CODER baseline starts with a better trade-off for the top-10 documents, which can be attributed to the better ranking scores it starts out with (PUFR uses a BERT-based model to obtain ranking scores). CODER's advantage quickly vanishes as the balancing parameter \(\alpha\) increases, putting more weight on fairness. Overall, PUFR offers a better trade-off between fairness and utility than both the CODER-based and the adversarial fairness optimization baseline (ADV).
If we compare PUFR to the post-processing baselines (CVXOPT and FA*IR), it clearly outperforms those baselines. Once an nFaiRR value of 0.96 is reached, the advantage of PUFR over these baselines becomes smaller. For a possible explanation, see Section 6.
Overall, PUFR outperforms all baselines for a large range of nFaiRR values, which we also highlight by comparing the fairness of the different approaches at two different utility levels (nDCG@100 = 0.31 and nDCG@100 = 0.30) in Table 1. We chose these levels of utility assuming that, when deploying a fair ranking approach in production, there is a certain (small) allowance for a drop in utility, within which the best possible fairness value should be reached. We see that at these levels PUFR reaches significantly higher nFaiRR scores than all baselines.
Figure 2. MSMARCOFair: Median number of documents that have intersecting uncertainty intervals with the document placed at each rank, for uncertainty intervals of 1 (left) resp. 2 (right) standard deviations.
Figure 3. Trade-off between fairness and utility evaluated on the first 10 documents.
Figure 4. Trade-off between fairness and utility evaluated on the first 50 resp. 100 documents.
**Ablation study.** To ensure that the uncertainty estimates indeed contribute to the success of PUFR, we conduct an ablation study. We compare PUFR with a similar approach that, instead of adjusting the scores relative to the standard deviation, in- or decreases all scores by the same constant value. In our experiments we use the mean uncertainty score over all queries and candidate documents, \(\sigma_{\text{mean}}=\text{mean}_{q,i}(\sigma_{q,i})\). The results of this ablation study are presented in Fig. 5. We see that by using the uncertainty scores instead of a uniform correction factor, we obtain a better trade-off. For the top-10, these improvements are less visible (see Fig. 5(a)). When considering the top-100 documents instead, the advantages of using uncertainty become much clearer (see Fig. 5(b)). This might be due to the fact that, as also noted by Cohen et al. (2018), for the top-10 documents the uncertainty scores tend to be fairly similar to each other, making our approach, if we only look at a small window, seem similar to the ablation approach. When we look at a larger window, the uncertainty scores deviate more, emphasizing the advantages of PUFR.
We conclude this section and answer (RQ2) in the affirmative. PUFR performs competitively with baselines. In terms of fairness-utility trade-offs it significantly outperforms other post-processing schemes, and clearly beats the two state-of-the-art in-processing baselines. The ablation study confirms that this result is at least partially due to the use of the model's uncertainty in its scores. Hence, PUFR can be used to reduce the number of biased documents that are ranked on top of the list more effectively than prior methods.
Since a good utility-fairness trade-off is not the only relevant criterion when choosing a fair ranking method, our next research question (RQ3) concerns the degree of controllability and computational costs of the different methods.
### Controllability and computational efficiency
Next, we address (RQ3): _How do the various methods compare with respect to controllability and computational efficiency?_ As discussed in Section 4.2, we focus on a qualitative analysis of the \(\alpha\)-fairness and \(\alpha\)-utility curves, evaluating how predictable, and hence controllable, the utility-fairness trade-off is. Fig. 6 shows that for PUFR the nFaiRR score monotonically increases with increasing \(\alpha\). At the same time, utility, measured by nDCG, decreases. Both curves are highly predictable. Furthermore, since re-ranking is computationally very efficient, a broad range of rankings with different trade-offs can be explored to find the right choice of hyper-parameter for the desired trade-off between nFaiRR and nDCG. The CODER-based approach has similarly predictable trade-off curves to PUFR (Zhu et al., 2017). However, CODER is an in-processing approach, meaning that the model needs to be re-trained for each choice of hyper-parameter \(\alpha\), making it much less controllable in practice. The ADV method, on the other hand, appears to be highly unpredictable, on top of the downsides that come with in-processing methods as discussed above. For the FA*IR baseline, although its curve seems fairly controllable, the granularity with which we can produce results is much coarser. Due to space constraints we omit the figure for the convex optimization approach; for reasons of computational efficiency, FA*IR or PUFR should be preferred over it.
With regard to computational efficiency, we recall that both in-processing approaches, ADV and CODER, once trained, do not have the post-processing overhead of the other methods. However, these methods need a large amount of training to gain a reasonable level of performance (Zhu et al., 2017; Zhu et al., 2017). Looking at Table 1, re-ranking with PUFR is much faster than with the other two post-processing approaches. Obtaining uncertainty labels can be done within microseconds. After adjusting the ranking scores there is a single re-sorting of the documents that dominates the execution time. Hence, when using PUFR in production and adjusting the score before the initial ordering of the documents, the execution of PUFR is nearly free.
## 6. Discussion
**Exploiting model uncertainty for the fairness-utility trade-off.** To increase the fairness of a ranking, we commonly need to trade off some predicted utility. Encouraging this trade-off to take place when the ranking model is less certain about the ranking scores will cause roughly equally relevant documents, which the model cannot confidently rank, to swap places. Assuming that the ranking model is well calibrated, this might be the reason for the overall better trade-off that PUFR achieves compared to models that do not consider predictive uncertainty. This quality is highlighted in Fig. 7, where we show the score distributions of the top-5 documents of two queries in the MSMARCOFair dataset. In the case of Fig. 7a and 7b, the larger variance leads to overlapping score distributions, allowing PUFR to swap documents in the re-ranked list. On the other hand, Fig. 7c and 7d show a query where the model is very certain about the order of the documents. PUFR hence does not change the order of the documents, whereas FA*IR and CVXOPT both do adjust the ranking, leading to decreased user utility for those baselines.
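The role of the overlap can be made concrete: two documents are plausibly swappable exactly when their intervals of \(k\) standard deviations around the mean score intersect, which is what the intersection counts of Figure 2 measure. A minimal sketch with illustrative numbers:

```python
import numpy as np

def intervals_intersect(mu, sigma, k=1.0):
    # For each pair of documents, check whether the confidence intervals
    # [mu - k*sigma, mu + k*sigma] overlap; overlapping pairs are those
    # the ranker cannot confidently order.
    lo, hi = mu - k * sigma, mu + k * sigma
    n = len(mu)
    return np.array([[lo[i] <= hi[j] and lo[j] <= hi[i]
                      for j in range(n)] for i in range(n)])

mu = np.array([2.1, 2.0, 1.2])       # mean ranking scores
sigma = np.array([0.3, 0.25, 0.05])  # predictive uncertainties
print(intervals_intersect(mu, sigma, k=1.0))
# documents 0 and 1 overlap (swappable); document 2 overlaps neither
```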
| Method | \(\alpha\) | nDCG@10 [\(\uparrow\)] | nDCG@100 [\(\uparrow\)] | nFaiRR@10 [\(\uparrow\)] | nFaiRR@50 [\(\uparrow\)] | re-rank time (s) [\(\downarrow\)] | req. train |
|---|---|---|---|---|---|---|---|
| UNFAIR | 0.0 | 0.26 | 0.32 | 0.858 | 0.873 | 0.00 | No |
| ADV | 2.0 | 0.21 | 0.26 | 0.91 | 0.896 | - | Yes |
| **PUFR** | 2.5 | **0.25** | **0.31** | **0.938** | **0.932** | 0.014 | No |
| CODER | 3.0 | **0.25** | **0.31** | 0.920* | 0.920* | - | Yes |
| CVXOPT | 0.8 | **0.25** | **0.31** | 0.906* | 0.905* | 0.123 | No |
| FA*IR | 0.7 | **0.25** | **0.31** | 0.898* | 0.901* | 0.058 | No |
| **PUFR** | 7.0 | 0.23 | **0.30** | **0.970** | **0.960** | 0.014 | No |
| CODER | 4.0 | **0.24** | **0.30** | 0.927* | 0.926* | - | Yes |
| CVXOPT | 0.91 | 0.23 | **0.30** | 0.949* | 0.931* | 0.123 | No |
| FA*IR | 0.85 | 0.23 | **0.30** | 0.944* | 0.935* | 0.058 | No |

Table 1. Results for the experiment with the best nFaiRR value for an nDCG decrease of not more than 0.01 and 0.02, respectively; the upper block of four re-ranking methods (PUFR through FA*IR) corresponds to the 0.01 allowance and the lower block to the 0.02 allowance. The ADV baseline does not fulfill the criterion of being at most 0.01 nDCG points worse than UNFAIR. * denotes significance w.r.t. PUFR via a two-tailed paired Student's t-test with \(p<.05\).
Figure 5. Ablation study comparing PUFR (score adjustment proportional to the ranker's uncertainty) with a variant that applies a uniform score adjustment.
**Using PUFR outside the model's confidence.** Our empirical results show that if we allow PUFR to adjust the scores too far outside of its confidence, its performance starts to decay (see Fig. 3). If \(\alpha\) is too high, the natural interpretation of adjusting the scores within plausible error bounds gets lost and we cannot exploit the model's knowledge of its own certainty any further. Without the certainty to back it up, PUFR becomes more arbitrary in its decisions about where to trade off predicted utility for fairness. Hence, PUFR is most effective for small values of \(\alpha\), roughly up to \(\alpha=4\) (see Fig. 6).
This observation means that a purely uncertainty-based fairness method might not be the best choice when the bias we want to correct for is too strong. In such cases, it might be beneficial to use uncertainty in combination with another approach that has proven effective for the task at hand.
## 7. Conclusion
We have introduced the notion of predictive uncertainty-based ranking fairness, aiming to exploit a ranking model's uncertainty as an indicator of which documents we should focus on when re-ordering for a fairer ranking which de-emphasizes documents containing biases. Through our empirical analysis we have found that the uncertainty intervals of the ranking scores are sufficiently intersecting to allow us to swap the position of some documents. We have also introduced an intuitive and principled post-processing method, PUFR, that adjusts the predicted ranking scores within some desired confidence bound. We have shown that by considering uncertainty, PUFR can achieve the best utility-fairness trade-off and has superior time complexity and good controllability.
We hope that our contribution makes the adoption of methods to remove bias in ranked results more attractive to practitioners working on real-world search and recommendation systems.
More experimentation is needed to confirm our findings in more settings. We see limitations of our approach as twofold. Firstly, PUFR allows a re-ordering of the documents only within the uncertainty of the model. This might make our method less effective in reducing unfairness when the model is very skewed towards documents containing biases. As a second limitation, we rely on uncertainty scores containing accurate information on which documents are more likely to be in the wrong order. Furthermore, the uncertainty intervals around the scores need to intersect sufficiently. In our experiments, we are using a neural ranking model on text data, which is a task that inherently carries a fair amount of uncertainty. For other tasks and fairness definitions, more research will be necessary to evaluate whether an uncertainty-based approach can be beneficial for the utility-fairness trade-off.
As to future work, an important next step would be to define ways to evaluate uncertainty scores in a listwise manner for ranking models. Without proper evaluation of the predictive uncertainty, we are unable to put trust in the score distribution and hence in an uncertainty-based fairness approach. Moreover, more work is needed to investigate whether PUFR could be extended to, for example, Bayesian learning-to-rank models or recommender systems. Finally, we see a clear need to create more datasets for large language models with fairness labels, on which methods such as ours can be tested.
**Data and code.** To facilitate reproducibility of our work, all code and parameters are shared at [https://github.com/MariaHeuss/2023-CIKM-uncertainty-based-bias-mitigation](https://github.com/MariaHeuss/2023-CIKM-uncertainty-based-bias-mitigation).
###### Acknowledgements.
The research was partially funded by the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, [https://hybrid-intelligence-centre.nl](https://hybrid-intelligence-centre.nl), and project LESSEN with project number NWA.1389.20.183 of the research program NWA ORC 2020/21, which is (partly) financed by the Dutch Research Council (NWO). All content represents the opinion of the authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
Figure 6. Controllability of different approaches visualized by plotting utility and fairness against the controlling hyperparameter \(\alpha\) on the x-axis (see Section 4.4 for a description of \(\alpha\) for each approach).
Figure 7. Examples of score distributions for the top-5 documents for two queries of the MSMARCOFair dataset. Protected documents in green, non-protected in red. Subfigs. 7a and 7c show the ranking scores before PUFR adjusts them, 7b and 7d show them after. Query 1089383 was scaled before plotting. |
2309.14862 | Embedding dimension gaps in sparse codes | We study the open and closed embedding dimensions of a convex 3-sparse code
$\mathcal{FP}$, which records the intersection pattern of lines in the Fano
plane. We show that the closed embedding dimension of $\mathcal{FP}$ is three,
and the open embedding dimension is between four and six, providing the first
example of a 3-sparse code with closed embedding dimension three and differing
open and closed embedding dimensions. We also investigate codes whose canonical
form is quadratic, i.e. ``degree two" codes. We show that such codes are
realizable by axis-parallel boxes, generalizing a recent result of Zhou on
inductively pierced codes.
We pose several open questions regarding sparse and low-degree codes. In
particular, we conjecture that the open embedding dimension of certain 3-sparse
codes derived from Steiner triple systems grows to infinity. | R. Amzi Jeffs, Henry Siegel, David Staudinger, Yiqing Wang | 2023-09-26T11:39:01Z | http://arxiv.org/abs/2309.14862v1 | # Embedding dimension gaps in sparse codes
###### Abstract
We study the open and closed embedding dimensions of a convex 3-sparse code \(\mathcal{FP}\), which records the intersection pattern of lines in the Fano plane. We show that the closed embedding dimension of \(\mathcal{FP}\) is three, and the open embedding dimension is between four and six, providing the first example of a 3-sparse code with closed embedding dimension three and differing open and closed embedding dimensions. We also investigate codes whose canonical form is quadratic, i.e. "degree two" codes. We show that such codes are realizable by axis-parallel boxes, generalizing a recent result of Zhou on inductively pierced codes.
We pose several open questions regarding sparse and low-degree codes. In particular, we conjecture that the open embedding dimension of certain 3-sparse codes derived from Steiner triple systems grows to infinity.
## 1 Introduction
A _(combinatorial) code_ is any set system \(\mathcal{C}\subseteq 2^{[n]}\). Elements of a code are called _codewords_. We typically abbreviate codewords by listing out their elements, e.g. \(\{1,2,3\}\) is expressed more concisely as 123. We also typically express inclusion-maximal codewords in boldface.
Given a collection \(\mathcal{U}=\{U_{1},\ldots,U_{n}\}\) of sets in \(\mathbb{R}^{d}\), we can use a code to record how the sets intersect and cover one another:
\[\text{code}(\mathcal{U})\stackrel{{\text{def}}}{{=}}\big{\{} \sigma\subseteq[n]\ \big{|}\text{ there exists }p\in\mathbb{R}^{d}\text{ such that }p\in U_{i}\text{ if and only if }i\in\sigma\big{\}}.\]
In other words, we label every point \(p\in\mathbb{R}^{d}\) according to which \(U_{i}\) contain it, then collect all such labels to form \(\text{code}(\mathcal{U})\). The collection \(\mathcal{U}\) is said to _realize_ a code \(\mathcal{C}\) when \(\text{code}(\mathcal{U})=\mathcal{C}\). We are particularly interested in studying codes that are _convex_, meaning that they can be realized by a collection of convex subsets of Euclidean space. For example, the code \(\mathcal{C}=\{\mathbf{124},\mathbf{13},\mathbf{234},12,23,24,1,2,3,4,\emptyset\}\) is convex and has a realization in \(\mathbb{R}^{2}\), as shown in Figure 1.
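Since a codeword is simply the membership pattern of some point, the code of a realization can be computed by sampling one point in every cell of the arrangement. A minimal sketch for closed intervals in \(\mathbb{R}^{1}\), with intervals of our own choosing:

```python
from itertools import chain

def interval_code(intervals):
    # Compute code(U) for closed intervals U_i = [a_i, b_i] in R^1 by
    # testing one sample point in every cell cut out by the endpoints.
    ends = sorted(set(chain.from_iterable(intervals)))
    samples = ends + [(a + b) / 2 for a, b in zip(ends, ends[1:])]
    samples += [ends[0] - 1, ends[-1] + 1]   # one point beyond each side
    code = set()
    for p in samples:
        code.add(frozenset(i + 1 for i, (a, b) in enumerate(intervals)
                           if a <= p <= b))
    return code

# U_1 = [0, 2] and U_2 = [1, 3] realize the code {12, 1, 2, empty set}
print(sorted(map(sorted, interval_code([(0, 2), (1, 3)]))))
```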
Convex codes were introduced by Curto, Itskov, Veliz-Cuba, and Youngs [5] to mathematically model hippocampal place cells. In this applied context, we are interested in _open convex_ codes--meaning that sets in a realization should be both convex and open--since the regions observed in experimental work are full-dimensional. One can analogously define _closed convex_ codes, and perhaps
surprisingly these two classes of codes differ. Lienkaemper, Shiu, and Woodstock [14] described a code that is closed convex but not open convex, and Cruz, Giusti, Itskov, and Kronholm [1] gave an example with the opposite behavior. On the other hand, Franke and Muthiah [6] showed that every code can be realized by convex sets in a large enough dimension when no topological requirements are placed on the sets. The disparity between open and closed realizations motivated the introduction and study of _open and closed embedding dimensions_ of a code \(\mathcal{C}\subseteq 2^{[n]}\), defined respectively as
\[\operatorname{odim}(\mathcal{C})\stackrel{{\mathrm{def}}}{{=}}\min\{d\mid\mathcal{C}\text{ has an open convex realization in }\mathbb{R}^{d}\},\quad\text{and}\] \[\operatorname{cdim}(\mathcal{C})\stackrel{{\mathrm{def}}}{{=}}\min\{d\mid\mathcal{C}\text{ has a closed convex realization in }\mathbb{R}^{d}\}.\]
Above, the minimum over the empty set is defined to be \(\infty\).
Recent work of Jeffs [12] shows that there can be arbitrary differences between the open and closed embedding dimensions of a code. In particular, for any \(2\leq a,b\leq\infty\), there exists a code \(\mathcal{C}\) with \(\operatorname{odim}(\mathcal{C})=a\) and \(\operatorname{cdim}(\mathcal{C})=b\). We are interested in whether or not such behavior remains present when we restrict to "simple" codes. This paper investigates two distinct notions of being "simple"--codes with low sparsity, and codes with low degree. Both notions are introduced below.
Sparse codes and \(\mathcal{FP}\).We say that \(\mathcal{C}\subseteq 2^{[n]}\) is _\(k\)-sparse_ if every codeword in \(\mathcal{C}\) has cardinality at most \(k\). For example, the code realized in Figure 1 is \(3\)-sparse. Jeffs, Omar, Suaysom, Wachtel, and Youngs [13] investigated \(2\)-sparse codes, in particular showing that if \(\mathcal{C}\) is open or closed convex and \(2\)-sparse, then \(\operatorname{cdim}(\mathcal{C})=\operatorname{odim}(\mathcal{C})\leq 3\). On the other hand, there are \(3\)-sparse convex codes with \(\operatorname{odim}(\mathcal{C})\neq\operatorname{cdim}(\mathcal{C})\), the first example being a code \(\mathcal{S}_{3}\) described in work of Jeffs [11] but implicit in earlier work of Lienkaemper, Shiu, and Woodstock [14].
We are interested in the open and closed embedding dimensions of the \(3\)-sparse _Fano plane code_,
\[\mathcal{FP}\stackrel{{\mathrm{def}}}{{=}}\{\mathbf{123}, \mathbf{145},\mathbf{167},\mathbf{246},\mathbf{257},\mathbf{347},\mathbf{356},1,2,3,4,5,6,7,\emptyset\}.\]
This is precisely the code obtained by recording the intersection pattern of lines in the Fano plane. Alternatively, this code arises from the unique Steiner triple system on seven indices by adding all singleton codewords and the empty codeword. See Section 7 for discussion on this latter perspective.
For now, observe that \(\mathcal{FP}\) is _intersection complete_, meaning that the intersection of any two codewords is again a codeword. This fact, together with the observations that \(\mathcal{FP}\) is \(3\)-sparse and has seven maximal codewords, and results of Cruz, Giusti, Itskov, and Kronholm [1] and Jeffs [11] imply that \(\operatorname{cdim}(\mathcal{FP})\leq 5\) and \(\operatorname{odim}(\mathcal{FP})\leq 6\). In fact, we can refine these bounds significantly.
**Theorem 1**.: The open and closed embedding dimensions of \(\mathcal{FP}\) satisfy
\[\operatorname{cdim}(\mathcal{FP})=3\qquad\text{and}\qquad 4\leq\operatorname{odim }(\mathcal{FP})\leq 6.\]
Theorem 1 provides the first example of a 3-sparse code with \(\operatorname{cdim}(\mathcal{C})=3\) and \(\operatorname{odim}(\mathcal{C})>\operatorname{cdim}(\mathcal{C})\). The general behavior of embedding dimensions for 3-sparse codes remains a wide open question. In particular, it is not known if there can be a gap of more than one between the open and closed embedding dimensions of 3-sparse codes, nor whether the closed dimension can exceed the open dimension. Perhaps most crucially, it remains unclear whether or not there is any uniform upper bound on the open or closed embedding dimensions of 3-sparse convex codes. These questions and potential avenues of progress will be further discussed in Section 7.
Receptive field relations, the canonical form, and the degree of a code.Curto, Itskov, Veliz-Cuba, and Youngs [5] took an algebraic approach to the study of codes and their realizations, generalizing the theory of Stanley-Reisner rings to arbitrary set systems. Their approach allows one to isolate minimal set-theoretic relationships that sets in a realization must satisfy. Below, we give a combinatorial and geometric account of their approach, which is equivalent to their algebraic framework.
Given a code \(\mathcal{C}\subseteq 2^{[n]}\) with a (not necessarily open, closed, or convex) realization \(\mathcal{U}=\{U_{1},\ldots,U_{n}\}\) in \(\mathbb{R}^{d}\), we say that a pair \((\sigma,\tau)\) with \(\sigma,\tau\subseteq[n]\) is a _receptive field relation_ or _RF relation_ if
\[\bigcap_{i\in\sigma}U_{i}\ \subseteq\ \bigcup_{j\in\tau}U_{j}.\]
As usual, the empty union is the empty set, and we adopt the convention that \(\bigcap_{i\in\emptyset}U_{i}\) is all of \(\mathbb{R}^{d}\). We will only consider codes realizable by bounded sets, and so we never have \((\emptyset,\tau)\) as an RF relation. Also, we note that the containment above is only interesting when \(\sigma\) and \(\tau\) are disjoint, and from here on we only work with RF relations where \(\sigma\cap\tau=\emptyset\). The RF relations of \(\mathcal{C}\) do not depend on the realization \(\mathcal{U}\), since \(\mathcal{C}\) fully encodes the intersection and covering information of any of its realizations.
We say that an RF relation \((\sigma,\tau)\) for \(\mathcal{C}\) is _minimal_ if \((\sigma\setminus\{i\},\tau)\) and \((\sigma,\tau\setminus\{j\})\) are not RF relations for any \(i\in\sigma\) or \(j\in\tau\). The _canonical form_ of a code \(\mathcal{C}\) is
\[\operatorname{CF}(\mathcal{C})\stackrel{{\text{def}}}{{=}}\{( \sigma,\tau)\ |\ (\sigma,\tau)\text{ is a minimal RF relation for }\mathcal{C}\}.\]
The canonical form exactly captures the minimal set-theoretic (i.e. intersection and covering) relationships between sets in any realization of \(\mathcal{C}\), and has been studied extensively from an algebraic perspective [5, 3, 7, 9, 4, 8]. We say that the _degree_ of an RF relation \((\sigma,\tau)\) is \(|\sigma|+|\tau|\), and the degree of a code \(\mathcal{C}\) is the maximum degree of the relations in \(\operatorname{CF}(\mathcal{C})\).
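The canonical form can be computed by brute force directly from the definitions: \((\sigma,\tau)\) is an RF relation if and only if every codeword containing \(\sigma\) meets \(\tau\). The sketch below is exponential in \(n\), hence only suitable for small codes, and assumes the empty codeword is present (as it is for codes realized by bounded sets), so that \((\emptyset,\tau)\) is never reported:

```python
from itertools import combinations

def is_rf(code, sigma, tau):
    # (sigma, tau) is an RF relation iff every codeword containing sigma
    # intersects tau; for tau empty this says no codeword contains sigma.
    return all(c & tau for c in code if sigma <= c)

def canonical_form(code, n):
    subsets = [frozenset(s) for k in range(n + 1)
               for s in combinations(range(1, n + 1), k)]
    rels = []
    for sigma in subsets:
        if not sigma:
            continue
        for tau in subsets:
            if sigma & tau or not is_rf(code, sigma, tau):
                continue
            # minimality: no element of sigma or tau can be dropped
            if all(not is_rf(code, sigma - {i}, tau) for i in sigma) and \
               all(not is_rf(code, sigma, tau - {j}) for j in tau):
                rels.append((set(sigma), set(tau)))
    return rels

FP = [frozenset(c) for c in
      [{1,2,3},{1,4,5},{1,6,7},{2,4,6},{2,5,7},{3,4,7},{3,5,6},
       {1},{2},{3},{4},{5},{6},{7},set()]]
rels = canonical_form(FP, 7)
print(len(rels), rels[:3])   # 21 relations of the form ({i,j},{k})
```

Running this on \(\mathcal{FP}\) returns one relation \((\{i,j\},\{k\})\) for each of the 21 pairs \(\{i,j\}\) and the unique line \(ijk\) of the Fano plane containing it, so \(\mathcal{FP}\) itself has degree three.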
Curry, Jeffs, Youngs, and Zhao [2] showed that "inductively pierced" codes have degree two, and can be realized not just by convex sets, but by open balls. In a recent master's thesis, Zhou [16] showed that inductively pierced codes can also be realized by axis-parallel boxes. These results suggest a relationship between the degree of a code, and the complexity of its possible geometric realizations: codes with low degree should have simpler realizations. We add to the evidence of this trend for degree two codes with the following theorem, proved in Section 6.
**Theorem 2**.: Let \(\mathcal{C}\subseteq 2^{[n]}\) be a degree two code. Then \(\mathcal{C}\) can be realized by axis-parallel boxes in dimension \(\max\{1,n-1\}\).
## 2 Background and supporting lemmas
Before proving our results, we first recall some useful geometric and combinatorial results, and justify several supporting lemmas. Throughout, we will use the notation \(\overline{pq}\) to denote the line segment between points \(p\) and \(q\) in \(\mathbb{R}^{d}\). We start by noting that a line spanned by a vertex of a simplex and one of its interior points must pass through the facet opposite the vertex in question. We set this apart as a lemma since we make use of it in several cases, but omit the proof.
**Lemma 3**.: Let \(P=\{p_{1},\ldots,p_{k+1}\}\subseteq\mathbb{R}^{d}\) with \(k\leq d\) such that its points are in general position, and let \(q\) be in the relative interior of the \(k\)-simplex \(\operatorname{conv}(P)\). Then the line \(L\) which passes through \(p_{i}\) and \(q\) intersects the facet \(\operatorname{conv}(P\setminus\{p_{i}\})\).
Radon partitions.Given a set \(P\subseteq\mathbb{R}^{d}\) with \(|P|=d+2\), Radon's theorem guarantees that there exists a partition \(P=P_{1}\sqcup P_{2}\) so that \(\operatorname{conv}(P_{1})\cap\operatorname{conv}(P_{2})\neq\emptyset\). In fact, when \(P\) is in general position such a partition is unique, and \(\operatorname{conv}(P_{1})\cap\operatorname{conv}(P_{2})\) consists of a single point, which we call the _Radon point_ of \(P\).
We will be interested in various cases based on the sizes of \(P_{1}\) and \(P_{2}\). To this end, the Radon partition \(P_{1}\sqcup P_{2}\) is called an \(n\)-\(m\)_split_ when \(|P_{1}|=n\) and \(|P_{2}|=m\) (we will assume \(n\geq m\) throughout). Given enough points in general position, we can always find a split among a subset of them that is as even as possible. The following lemma explains such a situation in \(\mathbb{R}^{3}\).
**Lemma 4**.: Let \(P=\{p_{1},\ldots,p_{6}\}\subseteq\mathbb{R}^{3}\) be in general position. Then there exists a \(5\)-element subset of \(P\) whose Radon partition is a \(3\)-\(2\) split.
Proof.: Consider the first five points. If these form a \(3\)-\(2\) split, we are done. Otherwise, they form a \(4\)-\(1\) split. Without loss of generality, let \(p_{5}\in\operatorname{conv}(p_{1},\ldots,p_{4})\). We consider two cases.
**Case 1:**\(p_{6}\notin\operatorname{conv}(p_{1},\ldots,p_{4})\).
The line segment \(\overline{p_{5}p_{6}}\) intersects a facet of the \(3\)-simplex \(\operatorname{conv}(p_{1},\ldots,p_{4})\). Without loss of generality, let this facet be \(\operatorname{conv}(p_{1},p_{2},p_{3})\). Then \(\{p_{1},p_{2},p_{3}\}\sqcup\{p_{5},p_{6}\}\) is a \(3\)-\(2\) Radon partition.
**Case 2:**\(p_{6}\in\operatorname{conv}(p_{1},\ldots,p_{4})\).
We may consider \(\operatorname{conv}(p_{1},\ldots,p_{4})\) as the union of four smaller simplices, each with \(p_{5}\) as a vertex, and the other three vertices coming from \(\{p_{1},p_{2},p_{3},p_{4}\}\). The point \(p_{6}\) lies in one of these simplices, say without loss of generality \(p_{6}\in\operatorname{conv}(p_{1},p_{2},p_{3},p_{5})\). Then the line segment \(\overline{p_{4}p_{6}}\) intersects a facet of this \(3\)-simplex, and as in the first case we obtain a \(3\)-\(2\) Radon partition.
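Radon partitions of points in general position can be computed from the (one-dimensional) space of affine dependences: the partition is given by the signs of the dependence coefficients. A minimal sketch, with sample points of our own choosing (four simplex vertices, one interior point, one exterior point), which also searches for the 3-2 split promised by Lemma 4:

```python
import numpy as np
from itertools import combinations

def radon_partition(points):
    # Split d+2 points in general position in R^d by the signs of their
    # (unique up to scale) affine dependence sum(lam_i * p_i) = 0 with
    # sum(lam_i) = 0.
    P = np.asarray(points, dtype=float)
    A = np.vstack([P.T, np.ones(len(P))])
    lam = np.linalg.svd(A)[2][-1]          # null vector of A
    return ([i for i, l in enumerate(lam) if l > 0],
            [i for i, l in enumerate(lam) if l <= 0])

pts = [(0, 0, 0), (4, 0, 0), (0, 4, 0), (0, 0, 4), (1, 1, 1), (3, 3, 3)]
print(radon_partition(pts[:5]))            # a 4-1 split: (1,1,1) is interior

for sub in combinations(range(6), 5):      # Lemma 4: some 5-subset is 3-2
    P1, P2 = radon_partition([pts[i] for i in sub])
    if {len(P1), len(P2)} == {2, 3}:
        print("3-2 split on subset", sub)
        break
```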
Symmetries in the Fano plane.The Fano plane has a great deal of symmetry. For our purposes, the most important fact is that the symmetry group of the Fano plane is "doubly transitive." In our language, this means that any pair of maximal codewords can be mapped to any other pair of maximal codewords by a symmetry. We record this in a lemma below, which we will make extensive use of to reduce the casework in our arguments.
**Lemma 5**.: Let \(m_{1},\ldots,m_{7}\) be the maximal codewords of \(\mathcal{FP}\). Then for every choice of \((m_{i},m_{j})\), \((m_{k},m_{\ell})\) where \(i\neq j\), \(k\neq\ell\), there exists a permutation \(\Pi\) of [7] which maps \(\Pi(m_{i})=m_{k}\) and \(\Pi(m_{j})=m_{\ell}\), and under which \(\mathcal{FP}\) is invariant.
Sunflowers of convex open sets.We say that \(\mathcal{U}=\{U_{1},U_{2},\ldots,U_{n}\}\) is a _sunflower_ if
\[\operatorname{code}(\mathcal{U})=\{[n],1,2,\ldots,n,\emptyset\},\]
i.e. if all the sets have a common intersection, and each set only appears alone outside this intersection. Sunflowers of convex open sets have played an important role in the study of convex codes, in particular serving as building blocks to describe rich families of codes with gaps between open and closed embedding dimensions [10, 11]. The first implementation of these ideas was given by Lienkaemper, Shiu, and Woodstock [14] who proved the following result.
**Lemma 6** (Lemma 3.2 of [14]).: Let \(\{U_{1},U_{2},U_{3}\}\) be a sunflower of convex open sets in \(\mathbb{R}^{d}\). If a line \(L\) intersects each \(U_{i}\), then in fact \(L\) intersects \(U_{1}\cap U_{2}\cap U_{3}\).
The following lemma is an immediate consequence of this result, and will be used extensively in our analysis of the open embedding dimension of \(\mathcal{FP}\).
**Lemma 7**.: Let \(\{U_{1},U_{2},U_{3}\}\) be a sunflower of convex open sets in \(\mathbb{R}^{d}\). Let \(p_{1}\in U_{1},p_{2}\in U_{2}\), and \(p_{3}\in U_{3}\) be points such that \(p_{3}\in\overline{p_{1}p_{2}}\). Then \(p_{3}\in U_{1}\cap U_{2}\cap U_{3}\).
Proof.: Lemma 6 guarantees that \(\overline{p_{1}p_{2}}\) contains a point \(p_{123}\) in \(U_{1}\cap U_{2}\cap U_{3}\). We then have \(p_{3}\in\overline{p_{1}p_{123}}\) or \(p_{3}\in\overline{p_{123}p_{2}}\). The former case implies \(p_{3}\in U_{1}\) by convexity of \(U_{1}\), and similarly the latter case implies \(p_{3}\in U_{2}\). Since \(\{U_{1},U_{2},U_{3}\}\) is a sunflower, both cases in fact imply that \(p_{3}\) lies in \(U_{1}\cap U_{2}\cap U_{3}\), as desired.
A realization of \(\mathcal{FP}\) contains seven different sunflowers, one for each maximal codeword. In fact, \(\mathcal{FP}\) contains many induced copies of the previously mentioned code \(\mathcal{S}_{3}=\{\mathbf{123},\mathbf{14},\mathbf{24},\mathbf{34},1,2,3,4,\emptyset\}\) implicit in Lienkaemper, Shiu, and Woodstock's work [14]. One can use Lemma 6 to prove that \(\operatorname{cdim}(\mathcal{S}_{3})=2\) and \(\operatorname{odim}(\mathcal{S}_{3})=3\) (dimension-minimal realizations are shown in Figure 2). The Fano plane code \(\mathcal{FP}\) contains \(28\) different isomorphic copies of \(\mathcal{S}_{3}\): one may take any of the seven maximal codewords, and add any of the remaining four indices to find such a copy. Our Theorem 1 can be thought of as saying that these \(28\) copies of \(\mathcal{S}_{3}\) are "sufficiently entangled" in \(\mathcal{FP}\) that the closed dimension increases by one, and the open dimension increases by at least one.
Figure 2: Dimension-minimal closed and open realizations of \(\mathcal{S}_{3}\) in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\).
## 3 The closed dimension of \(\mathcal{FP}\)
Figure 3 shows a closed realization of \(\mathcal{FP}\) in \(\mathbb{R}^{3}\). The sets in this realization are four facets of the octahedron, and three axis-parallel line segments passing through opposite vertices of the octahedron. Formally, this realization is defined by
\[X_{1}=\operatorname{conv}\{e_{1},-e_{1}\},\qquad X_{4}=\operatorname{conv}\{-e_{1},-e_{2},e_{3}\},\] \[X_{2}=\operatorname{conv}\{e_{2},-e_{2}\},\qquad X_{5}=\operatorname{conv}\{-e_{1},e_{2},-e_{3}\},\] \[X_{3}=\operatorname{conv}\{e_{3},-e_{3}\},\qquad X_{6}=\operatorname{conv}\{e_{1},-e_{2},-e_{3}\},\] \[X_{7}=\operatorname{conv}\{e_{1},e_{2},e_{3}\}.\]
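Whether a family of polytopes given by vertex lists has a common point is a linear feasibility problem, so the realization above can be checked mechanically. The following sketch, using scipy's linear programming routine, performs a partial verification: each of the seven prescribed triple intersections is nonempty, and no four of the sets meet.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

e1, e2, e3 = np.eye(3)
V = {1: [e1, -e1], 2: [e2, -e2], 3: [e3, -e3],
     4: [-e1, -e2, e3], 5: [-e1, e2, -e3],
     6: [e1, -e2, -e3], 7: [e1, e2, e3]}

def intersect(idx):
    # Feasibility LP: is there a point common to conv(V[i]) for i in idx?
    # Variables: a shared point p in R^3 plus convex weights per set.
    sizes = [len(V[i]) for i in idx]
    nvar = 3 + sum(sizes)
    A_eq, b_eq, off = [], [], 3
    for i, m in zip(idx, sizes):
        W = np.array(V[i]).T               # 3 x m matrix of vertices
        for r in range(3):                 # W @ lam - p = 0, row by row
            row = np.zeros(nvar)
            row[r] = -1.0
            row[off:off + m] = W[r]
            A_eq.append(row); b_eq.append(0.0)
        row = np.zeros(nvar)               # weights of each set sum to one
        row[off:off + m] = 1.0
        A_eq.append(row); b_eq.append(1.0)
        off += m
    bounds = [(None, None)] * 3 + [(0, None)] * sum(sizes)
    res = linprog(np.zeros(nvar), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.status == 0                 # 0 means a feasible point exists

triples = [{1,2,3},{1,4,5},{1,6,7},{2,4,6},{2,5,7},{3,4,7},{3,5,6}]
assert all(intersect(sorted(t)) for t in triples)
assert not any(intersect(q) for q in combinations(range(1, 8), 4))
print("all seven triples meet; no four sets have a common point")
```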
The remainder of this section is devoted to proving that the realization in Figure 3 is dimension-minimal. Throughout the proof below, we use the notation \(L(p,q)\) to denote the line through points \(p\) and \(q\).
**Theorem 8**.: There is no closed convex realization of \(\mathcal{FP}\) in the plane. Consequently, \(\operatorname{cdim}(\mathcal{FP})=3\).
Proof.: Suppose for contradiction that \(\{X_{1},\ldots,X_{7}\}\) is a closed convex realization of \(\mathcal{FP}\) in \(\mathbb{R}^{2}\). By intersecting our sets with a sufficiently large closed ball, we may assume that the realization is compact. Choose a set of seven points
\[P=\{p_{ijk}\in X_{i}\cap X_{j}\cap X_{k}\mid\{i,j,k\}\in\mathcal{FP}\}.\]
Observe that any such \(P\) must have positive area. If not, \(P\) would be contained in a line \(L\), and applying Helly's theorem to the collection of segments \(L\cap X_{i}\) would yield a point in all seven sets, a contradiction. By compactness, we may choose \(P\) to minimize the area of \(\operatorname{conv}(P)\): we are choosing seven points from disjoint compact subsets of \(\mathbb{R}^{2}\), and the area of their convex hull varies continuously with the choices of points. From here on, we assume that \(P\) has minimum area among all possible choices of \(P\), and consider two cases.
**Case 1: \(\operatorname{conv}(P)\) is not a triangle.** It will suffice to restrict our attention to four vertices on the boundary of \(\operatorname{conv}(P)\). Consider the crossing point of the diagonals determined by these vertices. Up to symmetry (i.e. by Lemma 5), we may assume that \(p_{123}\) and \(p_{145}\) are the endpoints of one of these diagonals. Since the line segment between any two vertices is contained in some \(X_{i}\), the crossing point is contained in two sets, and hence also in a third. The possible codewords arising at the crossing point are \(123,145\), and \(167\), but the former two are impossible, for otherwise we could replace \(p_{123}\) or \(p_{145}\) by the crossing point, obtaining a smaller area for the convex hull of \(P\). Thus the codeword arising at the crossing point is \(167\), and since this point lies in the convex hull of the other points, we can assume without loss of generality that it is in fact equal to \(p_{167}\).
Observe that \(\mathcal{FP}\) is invariant under the permutation \((1)(2435)(67)\), and this permutation induces a cyclic permutation of the maximal codewords not containing \(1\), while transposing \(123\) and \(145\) and leaving \(167\) fixed. Applying this permutation, we can assume that \(p_{246}\) is one of our remaining two boundary points of interest, which leaves \(p_{356}\) as the only valid choice for the opposite point. The situation is shown in Figure 4.
Now, we claim that there is a point giving rise to one of the codewords \(257\) or \(347\) in the closed regions \(A\), \(B\), \(C\), or \(D\) in Figure 4. If \(p_{257}\) lies in one of these regions the claim is immediate. If \(p_{257}\) lies outside of the quadrilateral, then the segment \(\overline{p_{167}p_{257}}\) is contained in \(X_{7}\) and must cross one of the edges of the quadrilateral. The edges lie in \(X_{2}\), \(X_{3}\), \(X_{4}\), and \(X_{5}\) respectively, so this crossing point must give rise to one of the codewords \(257\) or \(347\).
Considering the various cases, we see that we have arrived at a contradiction. If the codeword \(257\) appears in region \(A\) at a point \(q\), then the crossing point of \(\overline{qp_{123}}\) and \(\overline{p_{167}p_{246}}\) lies in \(X_{2}\), \(X_{4}\), and \(X_{6}\), and we could have replaced \(p_{246}\) with this point to obtain a smaller convex hull. Similar contradictions arise in regions \(B\), \(C\), and \(D\). For example, in region \(B\) the crossing point of the segments \(\overline{qp_{145}}\) and \(\overline{p_{167}p_{246}}\) will lie in \(X_{3}\), \(X_{5}\), and \(X_{6}\), contradicting our choice of \(p_{356}\). If the codeword \(347\) arises in region \(A\), \(B\), \(C\), or \(D\) then examining crossing points of appropriate line segments yields analogous contradictions.
Figure 4: Case 1 in the proof of Theorem 8.

**Case 2: \(\operatorname{conv}(P)\) is a triangle.** Up to symmetry, we may assume that two of the vertices of this triangle are \(p_{123}\) and \(p_{145}\). The third vertex cannot be \(p_{167}\), since this would mean \(P\subseteq X_{1}\), a contradiction. We may assume without loss of generality that the third vertex is \(p_{246}\) by applying the permutation \((1)(2435)(67)\) that we used in the previous case. Applying an affine transformation, we can assume that \(\operatorname{conv}(P)\) is an equilateral triangle with \(p_{246}\) at its apex.
We now claim that no choice of the set \(Q=\{p_{167},p_{257},p_{347}\}\) can be collinear. Suppose for contradiction that \(Q\) can be chosen to lie on a line. The permutation \((124)(365)(7)\) is a symmetry of \(\mathcal{FP}\), and cyclically permutes the vertices of our equilateral triangle as well as the points in \(Q\). Applying this permutation, we may assume that \(p_{167}\) lies between \(p_{257}\) and \(p_{347}\). But \(p_{257}\) and \(p_{347}\) must lie outside the triangle \(\operatorname{conv}\{p_{123},p_{145},p_{167}\}\subseteq X_{1}\), and so one of the line segments \(\overline{p_{257}p_{123}}\) or \(\overline{p_{347}p_{123}}\) crosses the line segment \(\overline{p_{145}p_{167}}\) (the first case is shown in Figure 5). This crossing point is contained in \(X_{1}\), \(X_{2}\), and \(X_{3}\), and could have been chosen as \(p_{123}\) to yield a smaller area for \(\operatorname{conv}(P)\), a contradiction. Hence \(Q\) comprises the vertices of a triangle.
But now consider the point \(p_{356}\). This point cannot be contained in \(\operatorname{conv}(Q)\subseteq X_{7}\). Hence the line segment from \(p_{356}\) to one of the vertices of \(Q\) crosses the edge determined by the other two vertices of \(Q\). Again using the symmetry \((124)(365)(7)\), we can reduce to the situation shown in Figure 6. But here the crossing point of \(\overline{p_{167}p_{356}}\) and \(\overline{p_{257}p_{347}}\) lies in \(X_{6}\), \(X_{7}\), and hence also \(X_{1}\). Thus we could have chosen \(Q\) to be collinear, and we have arrived at a final contradiction.
Figure 6: The final contradiction in Case 2: the set \(Q\) could have been chosen to be collinear.
**Remark 9**.: We have shown that \(\mathcal{FP}\) cannot be realized by closed convex sets in \(\mathbb{R}^{2}\). Our investigations strongly indicate that in fact \(\mathcal{FP}\) cannot be realized by any convex sets (not necessarily open or closed) in \(\mathbb{R}^{2}\), but it seems that a complete proof in the style of the one given above would entail significant casework. We in fact conjecture the even stronger result that there is no collection of convex sets \(\{C_{1},\ldots,C_{7}\}\) with \(C_{i}\cap C_{j}\cap C_{k}\neq\emptyset\) precisely when \(ijk\in\mathcal{FP}\). In other words, we conjecture that the abstract simplicial complex \(\Delta(\mathcal{FP})\), which is obtained by adding every pair to \(\mathcal{FP}\), is not "2-representable." For background on \(d\)-representable complexes, we recommend Tancer's 2011 survey paper [15].
## 4 The open dimension of \(\mathcal{FP}\)
Cruz, Giusti, Itskov, and Kronholm [1] showed that every intersection complete code with \(m\) maximal codewords has open embedding dimension at most \(\max\{2,m-1\}\), and hence it follows that \(\operatorname{odim}(\mathcal{FP})\leq 6\). All that remains to establish Theorem 1 is to prove that \(\operatorname{odim}(\mathcal{FP})>3\), which we will do below.
**Theorem 10**.: The Fano plane code has no open convex realization in \(\mathbb{R}^{3}\). That is, \(\operatorname{odim}(\mathcal{FP})\geq 4\).
Proof.: Suppose for the sake of contradiction that \(\mathcal{U}=\{U_{1},\ldots,U_{7}\}\) realizes \(\mathcal{FP}\) with convex open sets in \(\mathbb{R}^{3}\). Choose \(P=\{p_{123},p_{145},p_{167},p_{246},p_{257},p_{347},p_{356}\}\) such that \(p_{ijk}\in U_{i}\cap U_{j}\cap U_{k}\) and the points are in general position. This can be done because each \(U_{i}\cap U_{j}\cap U_{k}\) is open and nonempty.
Employing Lemma 4, we may choose a 3-2 Radon partition \(P_{1}\sqcup P_{2}\) on five out of the first six points. Let \(P_{1}\) be the set consisting of three points and \(P_{2}\) be the set consisting of two points. Let \(q\in\operatorname{conv}(P_{1})\cap\operatorname{conv}(P_{2})\) be the Radon point of this partition. By Lemma 5, we can assume without loss of generality that \(P_{2}=\{p_{123},p_{145}\}\) (note using this symmetry may mean that \(p_{356}\) ends up in \(P_{1}\), however this will not cause any problems). By convexity of \(U_{1}\), \(\operatorname{conv}(P_{2})\subseteq U_{1}\) so \(q\in U_{1}\). We now consider two possible cases, both of which will establish that \(q\) in fact lies in \(U_{1}\cap U_{6}\cap U_{7}\). These cases are illustrated in Figure 7.
**Case 1.** All elements of \(P_{1}\) are in a common set \(U_{6}\) or \(U_{7}\).
Without loss of generality, let \(P_{1}=\{p_{167},p_{246},p_{356}\}\subseteq U_{6}\). A similar argument can be applied to \(P_{1}=\{p_{167},p_{257},p_{347}\}\subset U_{7}\). By convexity of \(U_{6}\), \(\operatorname{conv}(P_{1})\subseteq U_{6}\) so \(q\in U_{6}\). Since \(\mathcal{U}\) realizes \(\mathcal{FP}\), the fact that \(q\) lies in \(U_{1}\cap U_{6}\) implies \(q\in U_{1}\cap U_{6}\cap U_{7}\).
**Case 2.** Not all elements of \(P_{1}\) lie in a common set.

Without loss of generality, let \(P_{1}=\{p_{167},p_{246},p_{347}\}\). This can be assumed because the following argument relies only on two points of \(P_{1}\) being in a common set \(U_{6}\) and the remaining point being in \(U_{7}\) (or vice versa). Indeed, any 3-element subset of \(\{p_{246},p_{356},p_{167},p_{257},p_{347}\}\) contains 2 elements from \(U_{6}\) or from \(U_{7}\) by the pigeonhole principle. By assumption, the remaining point will not be in a common set with these two and will therefore be in the opposing set (\(U_{7}\) or \(U_{6}\), respectively).
Now, consider the line \(L\) through \(p_{246}\) and \(q\). By Lemma 3, we may choose \(q^{\prime}\in L\,\cap\,\overline{p_{167}p_{347}}\). By convexity of \(U_{7}\), \(q^{\prime}\in U_{7}\). Now, noting that \(q\in\overline{p_{246}q^{\prime}}\), \(p_{246}\in U_{6}\), \(q\in U_{1}\), \(q^{\prime}\in U_{7}\), and that \(\{U_{1},U_{6},U_{7}\}\) forms a sunflower, we can apply Lemma 7 and conclude that \(q\in U_{1}\cap U_{6}\cap U_{7}\).
We have shown that in either case we have \(q\in U_{1}\cap U_{6}\cap U_{7}\), and we are ready to derive our final contradiction. Since \(q\in\overline{p_{123}p_{145}}\), \(p_{123}\in U_{2}\), \(p_{145}\in U_{4}\), and \(q\in U_{6}\) and the collection \(\{U_{2},U_{4},U_{6}\}\) forms a sunflower, we can apply Lemma 7 to conclude that \(q\in U_{2}\cap U_{4}\cap U_{6}\). But then \(q\) lies in \(U_{1}\cap U_{2}\cap U_{4}\cap U_{6}\cap U_{7}\), which contradicts the assumption that \(\mathcal{U}\) realizes \(\mathcal{FP}\).

Figure 7: The two cases in the proof of Theorem 10.
## 5 Studying \(\mathcal{FP}\) in dimension four.
We are not sure whether or not \(\operatorname{odim}(\mathcal{FP})\) exceeds four. This section will discuss the limitations of the argument used to prove Theorem 10 when applied in \(\mathbb{R}^{4}\). One may suppose for contradiction that \(\mathcal{U}=\{U_{1},\ldots,U_{7}\}\) is an open convex realization of \(\mathcal{FP}\) in \(\mathbb{R}^{4}\), and similarly choose a point set \(P=\{p_{123},p_{145},p_{167},p_{246},p_{257},p_{347},p_{356}\}\) with each point chosen from the intersection of sets it is labeled by. We still have sufficiently many points to apply Radon's theorem, but now we have three cases to consider: a 5-1 split, a 4-2 split, and a 3-3 split.
The first two cases are easier to consider, since the smaller part of the split will be contained in some \(U_{i}\), and hence the Radon point will be contained in some \(U_{i}\). In fact, similar--albeit lengthier and more technical--geometric arguments to the ones used to prove Theorem 10 are sufficient to rule these cases out.
When a Radon partition of points in \(P\) is a 3-3 split, the Radon point may or may not be contained in some \(U_{i}\). For example, if \(P=\{p_{123},p_{246},p_{347}\}\sqcup\{p_{145},p_{167},p_{356}\}\), then neither part of the partition is contained in any \(U_{i}\), and it is not clear how to proceed. However, since \(P\) has seven points, there are seven different subsets of size six which we can try. Could all of these lead to the "bad" 3-3 split case?
The answer is in fact yes, with the relevant arrangement being given by points along the moment curve in \(\mathbb{R}^{4}\). In particular, suppose that the points of \(P\) appear in the following order along the moment curve:
\[p_{123}<p_{145}<p_{246}<p_{167}<p_{347}<p_{356}<p_{257}.\]
The relevant property of this ordering is that all six pairs of adjacent points, as well as the pair of endpoints \(p_{123}\) and \(p_{257}\), are contained in a unique set \(U_{i}\). Radon partitions of points along the moment curve are always 3-3 splits whose parts "interlace," i.e. alternate with one another. For example, the Radon partition of the last six points is given by \(P_{1}=\{p_{145},p_{167},p_{356}\}\) and \(P_{2}=\{p_{246},p_{347},p_{257}\}\). One may check that all seven Radon partitions which follow this alternating pattern will yield a 3-3 split such that neither \(P_{1}\) nor \(P_{2}\) is a subset of any \(U_{i}\). Hence we are not able to derive a contradiction analogous to our previous arguments in this case.
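The interlacing behaviour just described is easy to verify numerically: place the seven labelled points on the moment curve \(t\mapsto(t,t^{2},t^{3},t^{4})\) at parameters in the stated order (the particular parameter values below are our own choice, as only their order matters) and compute the Radon partition of every six-point subset from its affine dependence.

```python
import numpy as np
from itertools import combinations

# labels in the stated moment-curve order
labels = [{1,2,3},{1,4,5},{2,4,6},{1,6,7},{3,4,7},{3,5,6},{2,5,7}]
pts = np.array([[t, t**2, t**3, t**4] for t in range(1, 8)], dtype=float)

def radon(points):
    A = np.vstack([points.T, np.ones(len(points))])
    lam = np.linalg.svd(A)[2][-1]          # affine dependence coefficients
    return ([i for i, l in enumerate(lam) if l > 0],
            [i for i, l in enumerate(lam) if l <= 0])

for sub in combinations(range(7), 6):
    P1, P2 = radon(pts[list(sub)])
    assert len(P1) == 3 and len(P2) == 3   # always a 3-3 split
    for part in (P1, P2):                  # neither part lies in one U_i
        assert not set.intersection(*[labels[sub[i]] for i in part])
print("all seven 6-subsets give 3-3 splits with no common index per part")
```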
It is reasonable to speculate that such an arrangement could be used to construct an open convex realization of \(\mathcal{FP}\) in \(\mathbb{R}^{4}\). Starting with the seven triangles which are convex hulls of the points containing a given index, we have a closed convex realization of \(\mathcal{FP}\). Could this be "thickened" appropriately to obtain an open realization, for example by taking a Minkowski sum with a carefully chosen open convex set? This may be the most promising path towards determining the exact open embedding dimension of \(\mathcal{FP}\).
## 6 Realizing degree two codes with boxes
We will use an inductive approach to prove Theorem 2, facilitated by the lemmas below. Before proceeding, we note several features of degree two codes that will be useful in our proofs. First, there are only two types of RF relation with degree two:
\[(\{i,j\},\emptyset)\text{ which corresponds to }U_{i}\cap U_{j}=\emptyset, \text{ and }\] \[(\{i\},\{j\})\text{ which corresponds to }U_{i}\subseteq U_{j}.\]
We will write these relations more concisely as \((ij,\emptyset)\) and \((i,j)\) respectively. We pause briefly to justify that deleting an index does not increase the degree of a code.
**Lemma 11**.: Let \(\mathcal{C}\subseteq 2^{[n]}\) be a degree two code. For any \(i\in[n]\), the code
\[\mathcal{C}\setminus i\stackrel{{\text{def}}}{{=}}\{c\setminus \{i\}\ |\ c\in\mathcal{C}\}\]
is degree at most two.
Proof.: Let \((\sigma,\tau)\) be a minimal RF relation of \(\mathcal{C}\setminus i\). Every RF relation for \(\mathcal{C}\setminus i\) is an RF relation for \(\mathcal{C}\), and hence \((\sigma,\tau)\) is in fact a minimal RF relation for \(\mathcal{C}\). Hence \((\sigma,\tau)\) has degree at most two, and it follows that \(\mathcal{C}\setminus i\) is degree at most two.
The only RF relations with degree one are of the form \((i,\emptyset)\), corresponding to \(U_{i}=\emptyset\). By relabeling such indices to the end, and then forgetting about them, we can always reduce a degree two code \(\mathcal{C}\) to an equivalent code where every RF relation has degree _exactly_ two, and in particular all sets in a realization of \(\mathcal{C}\) are nonempty.
We say that an index \(i\in[n]\) is _inclusion minimal_ in a code \(\mathcal{C}\) if there is no \(j\neq i\) with \((j,i)\) an RF relation of \(\mathcal{C}\) (equivalently, if \(U_{i}\) is inclusion-minimal among all sets in any realization of \(\mathcal{C}\)). Note that inclusion minimal indices always exist, provided that \(\mathcal{C}\) does not have any indices which appear in identical sets of codewords. If two indices \(i\) and \(j\) do appear in identical sets of codewords, then we must have \(U_{i}=U_{j}\) in every realization of \(\mathcal{C}\), and thus we must have \((i,j)\) and \((j,i)\) both as RF relations for \(\mathcal{C}\). If we are only interested in forming a realization of \(\mathcal{C}\) of a certain type (by convex sets, or boxes, for example) then we can apply Lemma 11 to delete one of these indices. In this way, we can reduce to the case where every two indices in \(\mathcal{C}\) have distinct behavior.
One last important feature of degree two codes is that they are _intersection complete_: the intersection of any two codewords is again a codeword. This fact is nontrivial to prove, but can be inferred
as an immediate consequence of work of Curto, Gross, Jeffries, Morrison, Rosen, Shiu, and Youngs [4, Proposition 3.7]. This fact streamlines the proof of the following lemma.
**Lemma 12**.: Let \(\mathcal{C}\subseteq 2^{[n]}\) be a degree two code, and let \(i\in[n]\) be an inclusion minimal index. Then \(\mathcal{C}\setminus i\) is a subset of \(\mathcal{C}\).
Proof.: Let \(c\in\mathcal{C}\) be a codeword with \(i\in c\). We must argue that \(c\setminus\{i\}\) is also a codeword of \(\mathcal{C}\). For contradiction, suppose not. Observe that every codeword containing \(c\setminus\{i\}\) must also contain \(i\): otherwise we could intersect such a codeword with \(c\) to obtain \(c\setminus\{i\}\) as a codeword, since degree two codes are intersection complete. This means that \((c\setminus\{i\},i)\) is an RF relation. Since \(\mathcal{C}\) is degree two, this relation must reduce to a minimal relation of degree at most two. A relation of the form \((\{j,k\},\emptyset)\) with \(j,k\in c\setminus\{i\}\) is not possible since \(c\) is a codeword, and so we must have a relation \((j,i)\) where \(j\in c\setminus\{i\}\). This contradicts the fact that \(i\) is inclusion minimal.
The most important tool for our proof is the following lemma, which allows us to extend a realization of \(\mathcal{C}\setminus n\) in \(\mathbb{R}^{d}\) to a realization of \(\mathcal{C}\) in \(\mathbb{R}^{d+1}\) whenever \(\mathcal{C}\) is degree two and \(n\) is inclusion-minimal.
**Lemma 13**.: Let \(\mathcal{C}\subseteq 2^{[n]}\) be a degree two code, and suppose \(n\) is an inclusion-minimal index. Define
\[\sigma =\{i\in[n-1]\mid(n,i)\text{ is an RF relation}\},\quad\text{and}\] \[\tau =\{i\in[n-1]\mid(\{i,n\},\emptyset)\text{ is not an RF relation}\}.\]
Given a realization \(\mathcal{U}=\{U_{1},\dots,U_{n-1}\}\) of \(\mathcal{C}\setminus n\) in \(\mathbb{R}^{d}\), the collection \(\mathcal{V}=\{V_{1},\dots,V_{n}\}\) given by
\[V_{i}=\begin{cases}U_{i}\times[0,1]&\text{if }i\in[n-1]\setminus\tau,\\ U_{i}\times[0,3]&\text{if }i\in\tau,\\ \left(\bigcap\nolimits_{j\in\sigma}U_{j}\right)\times[2,3]&\text{if }i=n.\end{cases}\]
is a realization of \(\mathcal{C}\) in \(\mathbb{R}^{d+1}\).
Proof.: Fix \(p\in\mathbb{R}^{d+1}\), and consider the codeword that arises at \(p\) in the realization \(\mathcal{V}\). If the last coordinate of \(p\) lies outside the range \([0,3]\), then we simply obtain the empty codeword at \(p\). Let \(q\) denote the projection of \(p\) to \(\mathbb{R}^{d}\), i.e. the point obtained by setting the last coordinate of \(p\) to zero. Let \(c\in\mathcal{C}\setminus n\) denote the codeword that arises at \(q\) in the realization \(\mathcal{U}\). If the last coordinate of \(p\) lies in the range \([0,1]\), then the codeword arising at \(p\) in the realization \(\mathcal{V}\) is exactly \(c\). In particular, the codewords arising for such \(p\) are exactly those in \(\mathcal{C}\setminus n\), which is a subset of \(\mathcal{C}\) by Lemma 12.
It remains to consider the case that the last coordinate of \(p\) lies in the range \((1,3]\). Here we must carefully consider several cases.
**Case 1:**\(p\notin V_{n}\).
The codeword arising at \(p\) in \(\mathcal{V}\) will be precisely \(c\cap\tau\). Let \(\gamma\) denote \(c\cap\tau\) and suppose for contradiction that \(\gamma\) is not a codeword of \(\mathcal{C}\), and in particular not a codeword of \(\mathcal{C}\setminus n\). This means that \((\gamma,\delta)\) is an RF relation of \(\mathcal{C}\setminus n\) for some \(\delta\subseteq[n-1]\setminus\tau\). Since \(\mathcal{C}\setminus n\) is degree two and \(c\) is a codeword of \(\mathcal{C}\setminus n\) this reduces to an RF relation \((i,j)\) where \(i\in\gamma\) and \(j\in[n-1]\setminus\tau\). The latter condition implies
that \((\{j,n\},\emptyset)\) is an RF relation in \(\mathcal{C}\). But these two relations together imply that \((\{i,n\},\emptyset)\) is an RF relation in \(\mathcal{C}\), contradicting the fact that \(i\in\tau\).
**Case 2:**\(p\in V_{n}\). Observe that the codewords arising at such \(p\) in \(\mathcal{V}\) are precisely of the form \((c\cap\tau)\cup\{n\}\) where \(c\) is a codeword of \(\mathcal{C}\setminus n\) that contains \(\sigma\). It thus suffices to show that these are precisely the codewords of \(\mathcal{C}\) that contain \(n\).
First suppose that \(\tilde{c}\) is a codeword of \(\mathcal{C}\) containing \(n\). Then \(\sigma\subseteq\tilde{c}\) because \(\sigma\) by definition records the indices in \([n-1]\) that appear in every codeword of \(\mathcal{C}\) containing \(n\), and \(\tilde{c}\setminus\{n\}\subseteq\tau\) since every index in \(\tilde{c}\setminus\{n\}\) appears together with \(n\) in the codeword \(\tilde{c}\). Setting \(c=\tilde{c}\setminus\{n\}\), we see that \(c\) is a codeword of \(\mathcal{C}\setminus n\) containing \(\sigma\), and \(\tilde{c}=(c\cap\tau)\cup\{n\}\) as desired.
For the converse, let \(c\) be a codeword of \(\mathcal{C}\setminus n\) that contains \(\sigma\). Let \(\gamma=c\cap\tau\), and note that the argument from Case 1 shows that \(\gamma\) is a codeword of \(\mathcal{C}\setminus n\). Suppose for contradiction that \(\gamma\cup\{n\}\) is not a codeword of \(\mathcal{C}\). Then \((\gamma\cup\{n\},\delta)\) is an RF relation for \(\mathcal{C}\), for some \(\delta\subseteq[n-1]\setminus\sigma\). This reduces to a degree two relation, but each possibility leads to a contradiction. A relation \((\{i,j\},\emptyset)\) where \(\{i,j\}\subseteq\gamma\) is not possible since \(\gamma\) is a codeword. A relation \((\{i,n\},\emptyset)\) with \(i\in\gamma\) is not possible since \(\gamma\subseteq\tau\). A relation \((i,j)\) where \(i\in\gamma\) and \(j\in\delta\) is not possible since \((\gamma,\delta)\) is not a relation. Finally, a relation \((n,i)\) where \(i\in\delta\) is not possible since \(\delta\) is disjoint from \(\sigma\).
We have thus shown that the codewords arising inside \(V_{n}\) in \(\mathcal{V}\) are exactly the codewords of \(\mathcal{C}\) that contain \(n\), concluding the proof.
**Example 14**.: Consider the code
\[\mathcal{C}=\{\mathbf{123},\mathbf{1345},135,145,134,12,13,1,4,\emptyset\}.\]
The minimal RF relations for this code are
\[(\{2,4\},\emptyset),\ (\{2,5\},\ \emptyset),\ (2,1),\ (3,1),\ (5,1),\ (5,3).\]
In particular, it is a degree two code. Moreover, \(\mathcal{C}\setminus 5\) has a realization by intervals in \(\mathbb{R}^{1}\). Figure 8 shows the construction of Lemma 13 applied to this realization, in order to obtain a realization of \(\mathcal{C}\) by axis-parallel boxes in \(\mathbb{R}^{2}\). The sets \(\sigma\) and \(\tau\) of Lemma 13 are \(\sigma=\{3\}\) and \(\tau=\{1,3,4\}\) in this case.
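The construction of Lemma 13 is directly implementable. The sketch below lifts an interval realization of \(\mathcal{C}\setminus n\) to a box realization of \(\mathcal{C}\), then recomputes the code of the boxes by sampling one point in every cell of the coordinate grid. The two-dimensional demonstration at the end uses a small degree two code of our own choosing (\(\mathcal{C}=\{12,13,1,2,\emptyset\}\) with \(n=3\) and \(\sigma=\tau=\{1\}\)), rather than the code of Example 14.

```python
from itertools import product

def lemma13_boxes(intervals, sigma, tau, n):
    # Lift a realization of C \ n by intervals U_1, ..., U_{n-1} to boxes:
    # V_i = U_i x [0,3] if i is in tau, U_i x [0,1] otherwise, and
    # V_n = (intersection of U_j over j in sigma) x [2,3].
    boxes = {}
    for i, (a, b) in enumerate(intervals, start=1):
        boxes[i] = ((a, b), (0.0, 3.0) if i in tau else (0.0, 1.0))
    lo = max(intervals[j - 1][0] for j in sigma)
    hi = min(intervals[j - 1][1] for j in sigma)
    boxes[n] = ((lo, hi), (2.0, 3.0))
    return boxes

def box_code(boxes):
    # Enumerate code(V) by sampling a point in every cell of the grid
    # cut out by the box coordinates in each axis.
    def samples(coords):
        cs = sorted(set(coords))
        return cs + [(a + b) / 2 for a, b in zip(cs, cs[1:])] \
                  + [cs[0] - 1, cs[-1] + 1]
    xs = samples([c for (x, _) in boxes.values() for c in x])
    ys = samples([c for (_, y) in boxes.values() for c in y])
    return {frozenset(i for i, ((a, b), (c, d)) in boxes.items()
                      if a <= px <= b and c <= py <= d)
            for px, py in product(xs, ys)}

# C \ 3 = {12, 1, 2, empty} is realized by U_1 = [0,2], U_2 = [1,3]
V = lemma13_boxes([(0, 2), (1, 3)], sigma={1}, tau={1}, n=3)
print(sorted(map(sorted, box_code(V))))    # [[], [1], [1, 2], [1, 3], [2]]
```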
**Theorem 2**.: Let \(\mathcal{C}\subseteq 2^{[n]}\) be a degree two code. Then \(\mathcal{C}\) can be realized by axis-parallel boxes in dimension \(\max\{1,n-1\}\).
Proof.: We proceed by induction on \(n\). The base cases \(n=1\) and \(n=2\) can be verified straightforwardly, since every code on one or two indices is degree two and also realizable by intervals in \(\mathbb{R}^{1}\). For the inductive step with \(n\geq 3\), fix a degree two code \(\mathcal{C}\subseteq 2^{[n]}\). By Lemma 11, \(\mathcal{C}\setminus n\) is also degree two, and by inductive hypothesis there exists a realization \(\{U_{1},\ldots,U_{n-1}\}\) of \(\mathcal{C}\setminus n\) in \(\mathbb{R}^{n-2}\) by axis-parallel boxes. The realization of \(\mathcal{C}\) in \(\mathbb{R}^{n-1}\) provided by Lemma 13 consists of products of the various \(U_{i}\) and their intersections with intervals, and hence also consists of axis-parallel boxes.
## 7 Conclusion
Several lines of investigation remain open. Perhaps the most pressing question is to resolve the ambiguity regarding the open embedding dimension of \(\mathcal{FP}\).
**Question 15**.: What is the precise value of \(\operatorname{odim}(\mathcal{FP})\)?
Our study of \(\mathcal{FP}\) was motivated by the broader question of studying 3-sparse codes. A more general family of 3-sparse codes can be obtained from "Steiner triple systems," which are sets of triples in \([n]\) where every pair in \([n]\) appears in a unique triple in the system. A Steiner triple system on \(n\) exists precisely when \(n\equiv 1\) or \(n\equiv 3\) modulo 6, and the maximal codewords of \(\mathcal{FP}\) form the smallest nontrivial Steiner triple system, which is in fact the unique such system on seven indices. Given any Steiner triple system, one can form an associated convex code by adding the singletons and the empty codeword. Call such a code a _Steiner triple code_. Every Steiner triple code is 3-sparse and intersection complete, and so has closed embedding dimension at most five (see [11, Theorem 1.9]). However, our Theorem 1 shows that the open embedding dimension can exceed the closed embedding dimension in a Steiner triple code. As we have seen, a reason for this is that realizations of such codes contain many sunflowers of three sets, to which we can potentially apply Lemma 7. We posit that as the Steiner systems in question grow, so must the open embedding dimension.
**Conjecture 16**.: For every \(d\geq 1\) there exists a Steiner triple code \(\mathcal{C}\) with \(\operatorname{odim}(\mathcal{C})\geq d\).
The route to establishing this conjecture is not at all straightforward. Our proof that \(\mathcal{FP}\) is not open convex in \(\mathbb{R}^{3}\) made frequent use of the Fano plane's symmetries, and also the property that any two maximal codewords share a unique index. This poses a challenge to generalizing our methods to higher order Steiner triple codes, and new techniques may be needed.
As regards codes of low degree, a natural next step is to investigate degree three codes. Another interesting question is to study codes which are both sparse _and_ low degree.
**Question 17**.: Among convex degree three codes, what pairs of embedding dimensions can arise?
**Question 18**.: Can we determine bounds on the embedding dimensions of \(k\)-sparse, degree \(\ell\) codes, in terms of \(k\) and \(\ell\)?
|
2307.16501 | On the depth of simplicial affine semigroup rings | We recall and delve into the different characterizations of the depth of an
affine semigroup ring, providing an original characterization of depth two in
three and four dimensional cases which are closely related to the existence of
a maximal element in certain Apery sets. | Raheleh Jafari, Ignacio Ojeda | 2023-07-31T08:55:18Z | http://arxiv.org/abs/2307.16501v1 | # On the depth of simplicial affine semigroup rings
###### Abstract.
We recall and delve into the different characterizations of the depth of an affine semigroup ring, providing an original characterization of depth two in three and four dimensional cases which are closely related to the existence of a maximal element in certain Apery sets.
Key words and phrases: Affine semigroups, simplicial affine semigroups, semigroup rings, depth, projective dimension, Betti numbers, Apery sets.

2020 Mathematics Subject Classification: 13F65, 20M14, 13C15.

The first author was in part supported by a grant from IPM (No. 1402130111). The second author is partially supported by project PID2022-138906NB-C21 funded by MCIN/AEI/10.13039/501100011033 and by European Union NextGenerationEU/PRTR, by research group FQM024 funded by Junta de Extremadura (Spain)/FEDER funds, by the Proyecto de Excelencia de la Junta de Andalucía (ProyExcel_00868) and by the Proyecto de investigación del Plan Propio - UCA 2022-2023 (PR2022-011).
for the Apery set of a simplicial affine semigroup with respect to a subset of extremal rays to have a maximal element with respect to the partial order determined by the semigroup (Proposition 3.8).
In Section 4 we show the combinatorial characterization of the Betti numbers graded by an affine semigroup that will be fundamental in Section 5 and to a lesser extent in Section 6. Now, in Section 5, we give a necessary and sufficient condition for a simplicial affine subsemigroup of \(\mathbb{N}^{3}\) to have depth two (Theorem 5.2). Notice that this completes all possible depth cases for dimension three, since depth one and depth three are already characterized. In Section 6, we use Koszul complexes to characterize depth two in dimension four (Theorem 6.4) and we provide a combinatorial interpretation of our result in terms of the simplicial complexes introduced in Section 4 (Proposition 6.6).
We end the paper with Conjecture 6.7 which claims that if the depth of a simplicial affine semigroup is two, then there exists a subset of extremal rays of cardinality two with respect to which the corresponding Apery set has a maximal element. Our conjecture is optimistically motivated by the cases of extreme depth (Proposition 3.4 and Proposition 3.5) and the results obtained in Section 5 and Section 6 where it is shown to be true for \(d\leq 4\). We close by discussing why the conjecture cannot be extended to higher depths, in general.
## 2. Generalities and notation
Throughout this paper \(\mathcal{S}\) denotes a simplicial affine semigroup with (fixed) minimal generating set \(\mathcal{A}:=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{e}\}\subset\mathbb{N}^{d}\). Without loss of generality, we suppose that \(\operatorname{rank}\mathbb{Z}\mathcal{A}=d\), where \(\mathbb{Z}\mathcal{A}=\sum_{i=1}^{e}\mathbb{Z}\mathbf{a}_{i}\) is the subgroup of \(\mathbb{Z}^{d}\) generated by \(\mathcal{A}\).
Recall that the fact that \(\mathcal{S}\) is _simplicial_ means that the rational cone
\[\operatorname{pos}(\mathcal{A}):=\left\{\sum_{i=1}^{e}q_{i}\mathbf{a}_{i}\mid q _{i}\in\mathbb{Q}_{\geq 0},\ i=1,\ldots,e\right\}\subset\mathbb{Q}^{d}\]
has \(d\) minimal generators also called _extremal rays_. Without loss of generality, from now on we suppose that \(E:=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{d}\}\) is \(\mathbb{Q}-\)linearly independent, generates \(\operatorname{pos}(\mathcal{A})\) and \(\mathbf{a}_{i},\ i=1,\ldots,d\), is the component-wise smallest vector of \(\mathcal{A}\) in the corresponding extremal ray.
Let \(\Bbbk[\mathbf{x}]=\Bbbk[x_{1},\ldots,x_{e}]\) be the polynomial ring in \(e\) indeterminates over an arbitrary field \(\Bbbk\) and let
\[\Bbbk[S]=\bigoplus_{\mathbf{a}\in\mathcal{S}}\Bbbk\{\mathbf{t}^{\mathbf{a}}\}\]
be the _affine semigroup ring of \(\mathcal{S}\)_.
The ring \(\Bbbk[\mathbf{x}]\) has a natural \(\mathcal{S}-\)graded structure given by assigning degree \(\mathbf{a}_{i}\) to \(x_{i},\ i=1,\ldots,e\); indeed,
\[\Bbbk[\mathbf{x}]=\bigoplus_{\mathbf{a}\in\mathcal{S}}\Bbbk[\mathbf{x}]_{ \mathbf{a}},\]
where \(\Bbbk[\mathbf{x}]_{\mathbf{a}}\) denotes the \(\Bbbk-\)vector space generated by the monomials \(\mathbf{x}^{\mathbf{u}}:=x_{1}^{u_{1}}\cdots x_{e}^{u_{e}}\) such that \(\sum_{i=1}^{e}u_{i}\mathbf{a}_{i}=\mathbf{a}\), and \(\Bbbk[\mathbf{x}]_{\mathbf{a}}\cdot\Bbbk[\mathbf{x}]_{\mathbf{a}^{\prime}}= \Bbbk[\mathbf{x}]_{\mathbf{a}+\mathbf{a}^{\prime}}\). The surjective \(\mathcal{S}-\)graded ring homomorphism
\[\varphi_{0}:\Bbbk[\mathbf{x}]\longrightarrow\Bbbk[S];x_{i}\mapsto\mathbf{ t}^{\mathbf{a}_{i}}\]
endows \(\Bbbk[S]\) with a structure of \(\mathcal{S}-\)graded \(\Bbbk[\mathbf{x}]-\)module. The kernel of \(\varphi_{0}\), denoted \(I_{\mathcal{A}}\), is the _toric ideal of \(\mathcal{S}\)_; clearly \(\Bbbk[S]\cong\Bbbk[\mathbf{x}]/I_{\mathcal{A}}\). Thus, minimal generating systems of \(I_{\mathcal{A}}\) give rise to minimal representations of \(\Bbbk[S]\) as a \(\Bbbk[\mathbf{x}]-\)module. Indeed, if \(M_{\mathcal{A}}:=\{f_{1},\ldots,f_{\beta_{1}}\}\) is a minimal system of generators of \(I_{\mathcal{A}}\), then
\[\Bbbk[\mathbf{x}]^{\beta_{1}}\xrightarrow{\varphi_{1}}\Bbbk[\mathbf{x}] \xrightarrow{\varphi_{0}}\Bbbk[\mathcal{S}]\to 0\]
is an exact sequence, where \(\varphi_{1}\) is the homomorphism of \(\Bbbk[\mathbf{x}]-\)modules whose matrix with respect to the corresponding standard bases is \((f_{1},\ldots,f_{\beta_{1}})\). Since \(I_{\mathcal{A}}\) is \(\mathcal{S}-\)homogeneous (equivalently, a binomial ideal, see e.g. [12, Theorem 1]), then \(\varphi_{1}\) is also \(\mathcal{S}-\)graded.
Now, if \(\ker(\varphi_{1})\neq 0\), we can consider a minimal system of \(\mathcal{S}-\)graded generators of \(\ker(\varphi_{1})\), proceed as above defining a \(\mathcal{S}-\)graded homomorphism of \(\Bbbk[\mathbf{x}]-\)modules \(\varphi_{2}\) and so on. By the Hilbert Syzygy Theorem, this process cannot continue indefinitely, giving rise to the \(\mathcal{S}-\)_graded minimal free resolution of \(\Bbbk[\mathcal{S}]\)_:
\[0\to\Bbbk[\mathbf{x}]^{\beta_{p}}\xrightarrow{\varphi_{p}}\cdots\xrightarrow {\varphi_{2}}\Bbbk[\mathbf{x}]^{\beta_{1}}\xrightarrow{\varphi_{1}}\Bbbk[ \mathbf{x}]\xrightarrow{\varphi_{0}}\Bbbk[\mathcal{S}]\to 0.\]
For \(\mathbf{b}\in\mathcal{S}\), we write \(\beta_{i,\mathbf{b}}\) for the number of minimal generators of \(\ker\varphi_{i}\) of \(\mathcal{S}-\)degree \(\mathbf{b}\). Of course, \(\beta_{i,\mathbf{b}}\) may be \(0\). Here it is convenient to recall that \(\beta_{i,\mathbf{b}}=\dim_{\Bbbk}\operatorname{Tor}_{i}^{\Bbbk[\mathbf{x}]} (\Bbbk,\Bbbk[\mathcal{S}])_{\mathbf{b}}\) (see, e.g. [13, Lemma 1.32]) is an invariant of \(\Bbbk[\mathcal{S}]\) for every \(i>0\) and \(\mathbf{b}\in\mathcal{S}\). The integer number \(\beta_{i,\mathbf{b}}\) is called the \(i-\)_th Betti number of \(\Bbbk[\mathcal{S}]\) in degree \(\mathbf{b}\)_ and \(\beta_{i}=\sum_{\mathbf{b}\in\mathcal{S}}\beta_{i,\mathbf{b}}\) is called the \(i-\)_th (total) Betti number of \(\Bbbk[\mathcal{S}]\)_. Clearly, \(\Bbbk[\mathbf{x}]^{\beta_{i}}=\bigoplus_{\mathbf{b}\in\mathcal{S}}\Bbbk[ \mathbf{x}]^{\beta_{i,\mathbf{b}}}\), for every \(i=1,\ldots,p\).
Notice that there are finitely many nonzero Betti numbers. The elements \(\mathbf{b}\in\mathcal{S}\) such that \(\beta_{1,\mathbf{b}}\neq 0\) are called Betti elements in the literature, and the set of Betti elements of \(\mathcal{S}\) is usually denoted by \(\operatorname{Betti}(\mathcal{S})\) (see [9] for more details).
The maximum \(i\) such that \(\beta_{i}\neq 0\) is called the _projective dimension of \(\Bbbk[\mathcal{S}]\)_, denoted by \(\operatorname{pd}_{\Bbbk[\mathbf{x}]}(\Bbbk[S])\). By the Auslander-Buchsbaum formula (see, e.g. [2, Theorem 1.3.3]), one has
\[\operatorname{depth}(\Bbbk[S])=e-\operatorname{pd}_{\Bbbk[\mathbf{x}]}( \Bbbk[S]). \tag{2.1}\]
Recall that when \(\operatorname{depth}(\Bbbk[\mathcal{S}])=d\) (equivalently, \(\operatorname{pd}_{\Bbbk[\mathbf{x}]}(\Bbbk[S])=\operatorname{codim}( \Bbbk[S])=e-d\)), then \(\Bbbk[\mathcal{S}]\) is Cohen-Macaulay. We extend this terminology to \(\mathcal{S}\), by saying that \(\mathcal{S}\) is _Cohen-Macaulay_ when \(\Bbbk[\mathcal{S}]\) is.
## 3. Apery sets and depth
The _Apery set_ of an element \(\mathbf{b}\in\mathcal{S}\) is defined as
\[\operatorname{Ap}(\mathcal{S},\mathbf{b}):=\{\mathbf{a}\in\mathcal{S}\ \mid\ \mathbf{a}-\mathbf{b}\notin\mathcal{S}\}.\]
Since \(\mathcal{S}\subset\mathbb{N}^{d}\), for \(\mathbf{b}\neq\mathbf{0}\) we have \(\mathbf{0}\in\operatorname{Ap}(\mathcal{S},\mathbf{b})\). For a finite subset \(\mathcal{B}\) of \(\mathcal{S}\), we define
\[\operatorname{Ap}(\mathcal{S},\mathcal{B}):=\{\mathbf{a}\in\mathcal{S}\ ;\ \mathbf{a}- \mathbf{b}\notin S,\ \text{for all}\ \mathbf{b}\in\mathcal{B}\}=\bigcap_{\mathbf{b}\in\mathcal{B}} \operatorname{Ap}(\mathcal{S},\mathbf{b}).\]
It is known that \(\operatorname{Ap}(\mathcal{S},\mathcal{B})\) is finite if and only if \(\operatorname{pos}(\mathcal{A})=\operatorname{pos}(\mathcal{B})\) (see, e.g. [1, Theorem 2.6]). In particular, \(\operatorname{Ap}(\mathcal{S},E)=\cap_{i=1}^{d}\operatorname{Ap}(\mathcal{S}, \mathbf{a}_{i})\) is a finite set.
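Both membership in \(\mathcal{S}\) and the finite set \(\operatorname{Ap}(\mathcal{S},E)\) lend themselves to brute force in small cases. The following Python sketch is ours and purely illustrative: the generating set `A` is a toy example (not one taken from this paper), membership is decided exactly by peeling off generators, and the box used to enumerate \(\operatorname{Ap}(\mathcal{S},E)\) is a truncation we assume large enough for this particular semigroup.

```python
from functools import lru_cache
from itertools import product

# Toy generating set A of a simplicial affine semigroup in N^2; the first
# two vectors play the role of the extremal rays E.
A = [(2, 0), (0, 2), (3, 1), (1, 3)]
E = A[:2]

@lru_cache(maxsize=None)
def in_S(b):
    """b in S iff b = 0 or b - a in S for some generator a (componentwise >= 0)."""
    if all(x == 0 for x in b):
        return True
    return any(all(x >= 0 for x in c) and in_S(c)
               for c in (tuple(x - y for x, y in zip(b, a)) for a in A))

def in_apery(b, B):
    """b in Ap(S, B): b in S and b - a not in S for every a in B."""
    return in_S(b) and all(
        not in_S(tuple(x - y for x, y in zip(b, a))) for a in B)

# Enumerate Ap(S, E) inside a box; the bound 12 suffices for this toy example.
ap_E = [b for b in product(range(13), repeat=2) if in_apery(b, E)]
print(sorted(ap_E))
```

For this toy semigroup the run returns \([(0,0),(1,3),(3,1)]\).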
Given \(\delta\subseteq\{1,\ldots,d\}\) and a monomial order \(\prec\) on \(\Bbbk[\mathbf{x}]\), set
\[Q:=\{\mathbf{x}^{\mathbf{u}}\in\Bbbk[\{x_{i}\}_{i\notin\delta}]\ \mid\ \mathbf{x}^{\mathbf{u}}\notin\operatorname{in}_{\prec}(I_{\mathcal{A}}+\langle\{x_{i}\}_{i\in\delta}\rangle)\}.\]
The following result is a generalization of [14, Theorem 3.3], which can also be deduced from [1, Theorem 2.1].
**Proposition 3.1**.: _With the above notation, the map_
\[Q\longrightarrow\bigcap_{i\in\delta}\operatorname{Ap}(\mathcal{S},\mathbf{a} _{i});\quad\mathbf{x}^{\mathbf{u}}\longmapsto\sum_{i\notin\delta}u_{i} \mathbf{a}_{i}\]
_is a bijection._
Proof.: Let us prove that the map is a well-defined bijection. If \(\mathbf{x}^{\mathbf{u}}\in Q\), then \(\mathbf{q}=\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}\in\bigcap_{i\in\delta} \operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\); otherwise, there exists \(j\in\delta\) such that \(\mathbf{q}-\mathbf{a}_{j}=\sum_{i=1}^{e}v_{i}\mathbf{a}_{i}\in\mathcal{S}\). So, \(\mathbf{x}^{\mathbf{u}}-x_{j}\mathbf{x}^{\mathbf{v}}\in I_{\mathcal{A}}\) and, consequently, \(\mathbf{x}^{\mathbf{u}}\in\operatorname{in}_{\prec}(I_{\mathcal{A}}+\langle\{x_{i}\}_{i\in\delta}\rangle)\), a contradiction. Moreover, if there exists \(\mathbf{x}^{\mathbf{w}}\in Q\) with \(\mathbf{q}=\sum_{i\not\in\delta}w_{i}\mathbf{a}_{i}\), then \(\mathbf{x}^{\mathbf{u}}-\mathbf{x}^{\mathbf{w}}\in I_{\mathcal{A}}\). So, either \(\mathbf{x}^{\mathbf{u}}\) or \(\mathbf{x}^{\mathbf{w}}\) lies in \(\operatorname{in}_{\prec}(I_{\mathcal{A}}+\langle\{x_{i}\}_{i\in\delta}\rangle)\), which is not possible by hypothesis. Thus, the map is injective. Finally, if \(\mathbf{q}\in\bigcap_{i\in\delta}\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\), then \(\mathbf{q}=\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}\) for some \(v_{i}\in\mathbb{N},\ i\not\in\delta\). Now, if \(\mathbf{x}^{\mathbf{u}}\) is the remainder of the division of \(\mathbf{x}^{\mathbf{v}}\) by \(I_{\mathcal{A}}+\langle\{x_{i}\}_{i\in\delta}\rangle\), then \(\mathbf{x}^{\mathbf{u}}\in Q\) with \(\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}=\mathbf{q}\).
**Notation 3.2**.: Let \(\preceq_{\mathcal{S}}\) be the partial order on \(\mathcal{S}\) given by \(\mathbf{a}\preceq_{\mathcal{S}}\mathbf{a}^{\prime}\) if and only if \(\mathbf{a}^{\prime}-\mathbf{a}\in\mathcal{S}\). Notice that \(\mathbf{0}\in\mathbb{N}^{d}\) is the only minimal element of \(S\) for \(\preceq_{\mathcal{S}}\). Moreover, if \(\mathbf{a}^{\prime}\in\operatorname{Ap}(\mathcal{S},\mathcal{B})\) and \(\mathbf{a}\in\mathcal{S}\) is such that \(\mathbf{a}\preceq_{\mathcal{S}}\mathbf{a}^{\prime}\), then \(\mathbf{a}\in\operatorname{Ap}(\mathcal{S},\mathcal{B})\).
**Corollary 3.3**.: _With the above notation, \(\mathbf{x}^{\mathbf{u}}\in Q\) divides \(\mathbf{x}^{\mathbf{v}}\in Q\) if and only if \(\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}\in\bigcap_{i\in\delta}\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) and \(\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}\preceq_{\mathcal{S}}\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}\); in particular, \(\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}\in\bigcap_{i\in\delta}\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\)._
Proof.: If \(\mathbf{x}^{\mathbf{u}}\in Q\) divides \(\mathbf{x}^{\mathbf{v}}\in Q\), then \(\mathbf{x}^{\mathbf{v}}=\mathbf{x}^{\mathbf{w}}\mathbf{x}^{\mathbf{u}}\) for some \(\mathbf{x}^{\mathbf{w}}\in\Bbbk[\{x_{i}\}_{i\not\in\delta}]\). If \(\mathbf{x}^{\mathbf{w}}\in\operatorname{in}_{\prec}(I_{\mathcal{A}}+\langle\{x_{i}\}_{i\in\delta}\rangle)\), then \(\mathbf{x}^{\mathbf{v}}\not\in Q\), in contradiction with the hypothesis. So \(\mathbf{x}^{\mathbf{w}}\in Q\) and, by Proposition 3.1, we have that \(\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}\in\bigcap_{i\in\delta}\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) and \(\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}-\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}=\sum_{i\not\in\delta}w_{i}\mathbf{a}_{i}\in\bigcap_{i\in\delta}\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\subset\mathcal{S}\), that is, \(\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}\preceq_{\mathcal{S}}\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}\).
Conversely, if \(\sum_{i\not\in\delta}v_{i}a_{i}\in\bigcap_{i\in\delta}\operatorname{Ap}( \mathcal{S},\mathbf{a}_{i})\) and \(\sum_{i\not\in\delta}u_{i}a_{i}\preceq_{\mathcal{S}}\sum_{i\not\in\delta}v_{i}a _{i}\), then \(\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}-\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}\)\(=\sum_{i=1}^{e}w_{i}\mathbf{a}_{i}\in\mathcal{S}\). If \(\sum_{i=1}^{e}w_{i}\mathbf{a}_{i}\not\in\bigcap_{i\in\delta}\operatorname{Ap}( \mathcal{S},\mathbf{a}_{i})\), then there exists \(j\in\delta\) such that \(\sum_{i=1}^{e}w_{i}\mathbf{a}_{i}-\mathbf{a}_{j}\in\mathcal{S}\) and consequently, \(\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}-\mathbf{a}_{j}=\sum_{i\not\in\delta}u _{i}\mathbf{a}_{i}+\sum_{i=1}^{e}w_{i}\mathbf{a}_{i}-\mathbf{a}_{j}\in\mathcal{S}\), that is, \(\sum_{i\not\in\delta}v_{i}\mathbf{a}_{i}\not\in\bigcap_{i\in\delta} \operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\), in contradiction with the hypothesis. Analogously, we have that \(\sum_{i\not\in\delta}u_{i}\mathbf{a}_{i}\in\bigcap_{i\in\delta}\operatorname{Ap} (\mathcal{S},\mathbf{a}_{i})\). Therefore, by Proposition 3.1, \(\mathbf{x}^{\mathbf{u}}\in Q\) divides \(\mathbf{x}^{\mathbf{v}}\in Q\).
The following characterization of \(\Bbbk[\mathcal{S}]\) to have depth one is a consequence of [8, Theorem 6 and Proposition 16].
**Proposition 3.4**.: _The ring \(\Bbbk[\mathcal{S}]\) has depth one if and only if \(\operatorname{Ap}(\mathcal{S},\mathbf{b})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\) for some (equivalently all) \(\mathbf{b}\in\mathcal{S}\)._
Note that, by Corollary 3.3 and Proposition 3.4, \(\operatorname{depth}(\Bbbk[\mathcal{S}])=1\) if and only if the corresponding set \(Q\) has a maximal element for the partial order given by divisibility of monomials of \(\Bbbk[\mathbf{x}]\).
The case of \(\Bbbk[\mathcal{S}]\) having (maximal) depth \(d\), that is, \(\mathcal{S}\) is Cohen-Macaulay, is also characterized in terms of the Apery sets.
**Proposition 3.5**.: _[_15_, Corollary 1.6]__. The semigroup \(\mathcal{S}\) is Cohen-Macaulay if and only if for all \(\mathbf{a},\mathbf{b}\in\operatorname{Ap}(\mathcal{S},E)\) such that \(\mathbf{b}-\mathbf{a}\in\sum_{i=1}^{d}\mathbb{Z}\mathbf{a}_{i}\) we have \(\mathbf{a}=\mathbf{b}\)._
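On the toy semigroup of the previous sketch this criterion is a one-liner: there \(\mathbb{Z}\mathbf{a}_{1}+\mathbb{Z}\mathbf{a}_{2}=(2\mathbb{Z})^{2}\), so the lattice test reduces to a parity check (a simplification valid only for that example; the helper names are ours).

```python
from itertools import combinations

# Ap(S, E) as computed by the previous sketch for A = [(2,0),(0,2),(3,1),(1,3)].
ap_E = [(0, 0), (1, 3), (3, 1)]
in_lattice = lambda v: all(x % 2 == 0 for x in v)   # membership in (2Z)^2

cm = all(not in_lattice(tuple(x - y for x, y in zip(b1, b2)))
         for b1, b2 in combinations(ap_E, 2))
print("Cohen-Macaulay:", cm)   # False: (3,1) - (1,3) = (2,-2) lies in the lattice
```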
Let us show other connections of Apery sets with the depth of the semigroup ring that are valid beyond extreme cases of depth.
**Proposition 3.6**.: _Let \(e\geq 3\) and \(i\neq j\). The monomial \(x_{j}\) is a zero-divisor of \(\Bbbk[\mathbf{x}]/(I_{\mathcal{A}}+\langle x_{i}\rangle)\) if and only if there exists \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) such that \(\mathbf{a}_{j}+\mathbf{b}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\). In this case, \(\operatorname{depth}(\Bbbk[\mathcal{S}])>1\)._
Proof.: By [6, Proposition 1.10], the indeterminate \(x_{j}\) is a zero-divisor of \(\Bbbk[\mathbf{x}]/(I_{\mathcal{A}}+\langle x_{i}\rangle)\) if and only if there exists \(\mathbf{x}^{\mathbf{u}}\not\in I_{\mathcal{A}}+\langle x_{i}\rangle\) such that \(x_{j}\mathbf{x}^{\mathbf{u}}\in I_{\mathcal{A}}+\langle x_{i}\rangle\). Clearly, \(\mathbf{x}^{\mathbf{u}}\not\in I_{\mathcal{A}}+\langle x_{i}\rangle\) if and only if \(\mathbf{b}=\sum_{k=1}^{e}u_{k}\mathbf{a}_{k}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\). Moreover, \(\mathbf{b}+\mathbf{a}_{j}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) if and only if there exists \(\mathbf{b}^{\prime}\in\mathcal{S}\) such that \(\mathbf{b}+\mathbf{a}_{j}=\mathbf{b}^{\prime}+\mathbf{a}_{i}\), that is, if and only if \(x_{j}\mathbf{x}^{\mathbf{u}}\in I_{\mathcal{A}}+\langle x_{i}\rangle\).
Observe that if \((x_{i},x_{j})\) is a regular sequence on \(\Bbbk[\mathbf{x}]/I_{\mathcal{A}}\) then \(\mathbf{a}_{j}+\mathbf{b}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) for every \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\); in particular, \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) does not have a maximal element with respect to \(\preceq_{\mathcal{S}}\), as expected by Proposition 3.4.
**Corollary 3.7**.: _Let \(d\geq 2\) and \(1\leq i<j\leq d\). The following statements are equivalent._
1. \((x_{i},x_{j})\) _is a regular sequence on_ \(\Bbbk[\mathbf{x}]/I_{\mathcal{A}}\)_._
2. _For_ \(\mathbf{b}_{1},\mathbf{b}_{2}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i}) \cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\)_, if_ \(\mathbf{b}_{1}-\mathbf{b}_{2}\in\mathbb{Z}\mathbf{a}_{i}+\mathbb{Z}\mathbf{a} _{j}\)_, then_ \(\mathbf{b}_{1}=\mathbf{b}_{2}\)_._
Proof.: Suppose that \((x_{i},x_{j})\) is a regular sequence on \(\Bbbk[\mathbf{x}]/I_{\mathcal{A}}\) and let \(\mathbf{b}_{1},\mathbf{b}_{2}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i} )\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) such that \(\mathbf{b}_{1}=\mathbf{b}_{2}+z_{1}\mathbf{a}_{i}+z_{2}\mathbf{a}_{j}\), for some \(z_{1},z_{2}\in\mathbb{Z}\). Clearly, \(z_{1}z_{2}\leq 0\); so, without loss of generality, suppose \(z_{1}\leq 0\) and \(z_{2}\geq 0\), so that \(\mathbf{b}_{1}+(-z_{1})\mathbf{a}_{i}=\mathbf{b}_{2}+z_{2}\mathbf{a}_{j}\). Now, since, by Proposition 3.6, \(\mathbf{b}_{1}+u\mathbf{a}_{i}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) and \(\mathbf{b}_{2}+v\mathbf{a}_{j}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) for every \(u,v\in\mathbb{N}\), we conclude that \(z_{1}=z_{2}=0\).
Conversely, suppose that (2) holds and let us see that \((x_{i},x_{j})\) is a regular sequence on \(\Bbbk[\mathbf{x}]/I_{\mathcal{A}}\). Since \(x_{i}\) is a nonzero-divisor of \(\Bbbk[\mathbf{x}]/I_{\mathcal{A}}\), it suffices to prove that \(x_{j}\) is a nonzero-divisor of \(\Bbbk[\mathbf{x}]/(I_{\mathcal{A}}+\langle x_{i}\rangle)\). Let \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\); if \(\mathbf{b}+\mathbf{a}_{j}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\), then there exists \(\mathbf{b}^{\prime}\in\mathcal{S}\) such that \(\mathbf{b}+\mathbf{a}_{j}=\mathbf{b}^{\prime}+\mathbf{a}_{i}\). Let \(u,v,w\in\mathbb{N}\) be the smallest non-negative integers such that \(\mathbf{c}=\mathbf{b}-u\mathbf{a}_{j}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) and \(\mathbf{c}^{\prime}=\mathbf{b}^{\prime}-v\mathbf{a}_{i}-w\mathbf{a}_{j}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\). Clearly, \(\mathbf{c}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) and \(\mathbf{c}-\mathbf{c}^{\prime}=(v+1)\mathbf{a}_{i}+(w-(u+1))\mathbf{a}_{j}\). So, by hypothesis, \(\mathbf{c}=\mathbf{c}^{\prime}\) and consequently \((v+1)\mathbf{a}_{i}=((u+1)-w)\mathbf{a}_{j}\) which is not possible because \(\mathbf{a}_{i},\mathbf{a}_{j}\in E\) and elements of \(E\) are supposed to be \(\mathbb{Q}-\)linearly independent.
Notice that for \(d=2\) the above result is nothing but Proposition 3.5.
We end this section with a characterization of the existence of a maximal element in certain Apery sets that will be very useful later on.
**Proposition 3.8**.: _Let \(E^{\prime}\subset E\). The following statements are equivalent._
1. \(\operatorname{Ap}(\mathcal{S},E^{\prime})\) _has a maximal element with respect to_ \(\preceq_{\mathcal{S}}\)_._
2. _There exists_ \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) _such that_ \(\mathbf{b}+\mathbf{a}\notin\operatorname{Ap}(\mathcal{S},E^{\prime})\) _for every_ \(\mathbf{a}\in E\setminus E^{\prime}\)_._
Proof.: The statement (1) clearly implies (2). Conversely, let \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) such that \(\mathbf{b}+\mathbf{a}\notin\operatorname{Ap}(\mathcal{S},E^{\prime})\) for every \(\mathbf{a}\in E\setminus E^{\prime}\). In particular, \(\mathbf{b}+\mathbf{c}\notin\operatorname{Ap}(\mathcal{S},E^{\prime})\) for every \(\mathbf{c}\in\mathcal{S}\setminus\operatorname{Ap}(\mathcal{S},E)\). Indeed, if \(\mathbf{c}\in\mathcal{S}\setminus\operatorname{Ap}(\mathcal{S},E)\), then \(\mathbf{c}-\mathbf{a}\in\mathcal{S}\) for some \(\mathbf{a}\in E\), that is, \(\mathbf{c}=\mathbf{a}+\mathbf{c}^{\prime}\) for some \(\mathbf{a}\in E\) and \(\mathbf{c}^{\prime}\in\mathcal{S}\). Now, on the one hand, if \(\mathbf{a}\in E\setminus E^{\prime}\), then \(\mathbf{b}+\mathbf{c}=\mathbf{b}+\mathbf{a}+\mathbf{c}^{\prime}\not\in \operatorname{Ap}(\mathcal{S},E^{\prime})\), otherwise, \(\mathbf{b}+\mathbf{a}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) by Corollary 3.3; and, on the other hand, if \(\mathbf{a}\in E^{\prime}\), then \(\mathbf{b}+\mathbf{c}-\mathbf{a}=\mathbf{b}+\mathbf{c}^{\prime}\in\mathcal{S}\) and, consequently, \(\mathbf{b}+\mathbf{c}\not\in\operatorname{Ap}(\mathcal{S},E^{\prime})\).
So, if \(\mathbf{b}+\mathbf{c}\notin\operatorname{Ap}(\mathcal{S},E^{\prime})\) for all \(\mathbf{c}\in\operatorname{Ap}(\mathcal{S},E)\), we are done. Otherwise, \(\mathbf{b}+\mathbf{c}_{1}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) for some \(\mathbf{c}_{1}\in\operatorname{Ap}(\mathcal{S},E)\). Since \(\mathbf{b}+\mathbf{c}_{1}+\mathbf{a}\notin\operatorname{Ap}(\mathcal{S},E^{ \prime})\) for every \(\mathbf{a}\in E\setminus E^{\prime}\), we may repeat the same argument with \(\mathbf{b}+\mathbf{c}_{1}\) instead of \(\mathbf{b}\). So either \(\mathbf{b}+\mathbf{c}_{1}\) is maximal or there exists \(\mathbf{c}_{2}\in\operatorname{Ap}(\mathcal{S},E)\) such that \(\mathbf{c}_{1}\preceq_{\mathcal{S}}\mathbf{c}_{2}\), \(\mathbf{b}+\mathbf{c}_{2}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) and \(\mathbf{b}+\mathbf{a}+\mathbf{c}_{2}\notin\operatorname{Ap}(\mathcal{S},E^{ \prime})\) for every \(\mathbf{a}\in E\setminus E^{\prime}\). Since \(\operatorname{pos}(\mathcal{A})=\operatorname{pos}(E)\), then \(\operatorname{Ap}(\mathcal{S},E)\) is finite (see, e.g., [1, Theorem 2.6]) and this process necessarily stops. Hence \(\operatorname{Ap}(\mathcal{S},E^{\prime})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\).
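Criterion (2) is directly searchable. A minimal Python sketch, reusing the helpers `in_S` and `in_apery` from the toy example in this section (the function name and the box bound are our choices): finding a witness inside the box is conclusive, while failing to find one is not, since \(\operatorname{Ap}(\mathcal{S},E^{\prime})\) may be infinite.

```python
from itertools import product

def witness_for_max(E_prime, E_rest, box=12):
    """Search a box for b in Ap(S, E') with b + a not in Ap(S, E') for every
    a in E \\ E'; by Proposition 3.8 such a b certifies that Ap(S, E') has a
    maximal element with respect to the order of the semigroup."""
    d = len(E_prime[0])
    for b in product(range(box + 1), repeat=d):
        if in_apery(b, E_prime) and all(
                not in_apery(tuple(x + y for x, y in zip(b, a)), E_prime)
                for a in E_rest):
            return b
    return None

# Toy semigroup again: Ap(S, (2,0)) has a maximal element, witnessed by (3,1).
print(witness_for_max([(2, 0)], [(0, 2)]))
```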
## 4. Betti numbers and depth
Let us start by recalling the combinatorial characterization of the Betti numbers of \(\Bbbk[\mathcal{S}]\), which will be very useful later on. For \(\mathbf{b}\in\mathcal{S}\) consider the simplicial complex
\[\Delta_{\mathbf{b}}=\left\{F\subseteq\mathcal{A}\ \mid\ \mathbf{b}-\sum_{\mathbf{a}\in F} \mathbf{a}\in\mathcal{S}\right\}.\]
The following result is [13, Theorem 9.2].
**Proposition 4.1**.: _The Betti number \(\beta_{i+1,\mathbf{b}}\) of \(\Bbbk[\mathcal{S}]\) equals the dimension over \(\Bbbk\) of the \(i-\)th reduced homology group \(\widetilde{H}_{i}(\Delta_{\mathbf{b}};\Bbbk)\), for every \(i\geq 0\) and \(\mathbf{b}\in\mathcal{S}\)._
Thus,
\[\operatorname{depth}(\Bbbk[\mathcal{S}]) =e-\max\{i\mid\beta_{i,\mathbf{b}}\neq 0,\text{ for some }\mathbf{b}\in\mathcal{S}\}\] \[=e-\max\{i\mid\dim(\widetilde{H}_{i-1}(\Delta_{\mathbf{b}}; \Bbbk))\neq 0,\text{ for some }\mathbf{b}\in\mathcal{S}\}.\]
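Proposition 4.1 reduces the computation of \(\beta_{i+1,\mathbf{b}}\) to linear algebra. A sketch of this reduction, under the simplifying assumptions that ranks over \(\mathbb{Q}\) suffice (as they do in characteristic zero) and that the brute-force membership test `in_S` from the Section 3 sketch is available; the function names are ours.

```python
import numpy as np
from itertools import combinations

def delta_complex(b, gens):
    """Faces of Delta_b: index sets F of generators with b - sum(F) in S.
    Assumes b itself lies in S."""
    faces = []
    for r in range(len(gens) + 1):
        for F in combinations(range(len(gens)), r):
            rest = tuple(x - sum(gens[i][k] for i in F) for k, x in enumerate(b))
            if all(x >= 0 for x in rest) and in_S(rest):
                faces.append(F)
    return faces

def reduced_homology_dims(faces):
    """dim_Q of the reduced homology groups, via ranks of the boundary
    matrices of the augmented chain complex (the empty face has dimension -1)."""
    if not faces:
        return {}
    by_dim = {}
    for F in faces:
        by_dim.setdefault(len(F) - 1, []).append(F)
    top = max(by_dim)
    rank = {}
    for p in range(0, top + 1):            # boundary map C_p -> C_{p-1}
        rows = {G: i for i, G in enumerate(by_dim[p - 1])}
        cols = by_dim.get(p, [])
        M = np.zeros((len(rows), len(cols)))
        for j, F in enumerate(cols):
            for s, v in enumerate(F):
                M[rows[tuple(u for u in F if u != v)], j] = (-1.0) ** s
        rank[p] = int(np.linalg.matrix_rank(M)) if cols else 0
    return {p: len(by_dim[p]) - rank.get(p, 0) - rank.get(p + 1, 0)
            for p in range(0, top + 1)}

# By Proposition 4.1, beta_{i+1, b} = reduced_homology_dims(delta_complex(b, A)).get(i, 0).
```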
Consider now the simplicial complex
\[T_{\mathbf{b}}=\left\{F\subseteq E\ \mid\ \mathbf{b}-\sum_{\mathbf{a}\in F} \mathbf{a}\in\mathcal{S}\right\}.\]
Let \(D(j)=\{\mathbf{b}\in\mathcal{S}\ \mid\ \widetilde{H}_{j}(T_{\mathbf{b}})\neq 0\}\) and
\[C_{i}=\left\{\mathbf{b}\in\mathcal{S}\ \mid\ \mathbf{b}-\sum_{\mathbf{a}\in F }\mathbf{a}\in D(j),\text{ for some }j\geq-1\text{ and }F\subseteq\mathcal{A}\setminus E\text{ with }\#F=i-j\right\}.\]
The following result is a reformulation of [3, Proposition 3.3] and provides a necessary condition for \(\Bbbk[\mathcal{S}]\) to have a nonzero \((i+1)-\)th Betti number in degree \(\mathbf{b}\).
**Proposition 4.2**.: _If \(\beta_{i+1,\mathbf{b}}\neq 0\), then \(\mathbf{b}\in C_{i}\)._
Notice that, if \(C_{k}=\varnothing\), then \(\operatorname{pd}_{\Bbbk[\mathbf{x}]}(\Bbbk[S])\leq k\) and, consequently, \(\operatorname{depth}(\Bbbk[S])\geq e-k\).
Let us now characterize the elements in \(D(0)\) in terms of Apery sets.
**Lemma 4.3**.: _Let \(\mathbf{b}\in\mathcal{S}\). Then \(\mathbf{b}\in D(0)\) if and only if there exists \(E^{\prime}\subset E\) such that \(\mathbf{b}\not\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) and \(\mathbf{b}-\sum_{\mathbf{a}\in E\setminus E^{\prime}}\mathbf{a}\in \operatorname{Ap}(\mathcal{S},E^{\prime})\)._
Proof.: Since \(D(0)=\{\mathbf{b}\in\mathcal{S}\ \mid\ \widetilde{H}_{0}(T_{\mathbf{b}})\neq 0\}\) and the dimension of \(\widetilde{H}_{0}(T_{\mathbf{b}})\) as a \(\Bbbk\)-vector space is one less than the number of connected components of \(T_{\mathbf{b}}\), we have that \(\mathbf{b}\in D(0)\) precisely when \(T_{\mathbf{b}}\) is not connected. Let \(E_{1},\ldots,E_{k}\) be the sets of vertices of the connected components of \(T_{\mathbf{b}}\). Then \(k\geq 2\) and
\[\mathbf{b}=\mathbf{b}_{1}+\sum_{\mathbf{a}\in E_{1}}\mathbf{a}=\cdots= \mathbf{b}_{k}+\sum_{\mathbf{a}\in E_{k}}\mathbf{a},\]
with \(\mathbf{b}_{j}\in\operatorname{Ap}(\mathcal{S},E_{i})\) for each \(j\neq i\) and \(i=1,\ldots,k\). Thus, taking \(E^{\prime}=E\setminus E_{i}\) for some \(i\in\{1,\ldots,k\}\) we get the direct implication.
Conversely, if there exists a subset \(E^{\prime}\subset E\) such that \(\mathbf{b}\not\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) and \(\mathbf{b}-\sum_{\mathbf{a}\in E\setminus E^{\prime}}\mathbf{a}\in \operatorname{Ap}(\mathcal{S},E^{\prime})\), then \(T_{\mathbf{b}}\) has at least two connected components and we are done.
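In view of the proof, membership of \(\mathbf{b}\) in \(D(0)\) is a connectivity check on at most \(d\) vertices; a minimal sketch (again reusing `in_S` from the Section 3 sketch; the helper names are ours):

```python
def t_complex_edges(b, E_gens):
    """Vertices and edges of T_b (higher-dimensional faces do not affect
    connectivity, so they are not needed here)."""
    def face_ok(F):
        rest = tuple(x - sum(E_gens[i][k] for i in F) for k, x in enumerate(b))
        return all(x >= 0 for x in rest) and in_S(rest)
    verts = [i for i in range(len(E_gens)) if face_ok((i,))]
    edges = [(i, j) for i in verts for j in verts if i < j and face_ok((i, j))]
    return verts, edges

def n_components(verts, edges):
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(v) for v in verts})

# b lies in D(0) exactly when T_b has two or more connected components.
```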
The Betti degrees appearing in the leftmost syzygy module of the \(\mathcal{S}-\)graded minimal free resolution of \(\Bbbk[\mathcal{S}]\) (that is, the elements \(\mathbf{b}\in\mathcal{S}\) such that \(\beta_{e-\operatorname{depth}(\Bbbk[\mathcal{S}]),\mathbf{b}}\neq 0\)) are combinatorially described in the following result.
**Proposition 4.4**.: _Let \(q=\operatorname{depth}(\Bbbk[\mathcal{S}])\). If \(\beta_{e-q,\mathbf{b}}\neq 0\), then \(\mathbf{b}-\sum_{\mathbf{a}\in\mathcal{A}\setminus E}\mathbf{a}\in D(d-q-1)\). Moreover, if \(\operatorname{depth}(\Bbbk[\mathcal{S}])=d-1\), then there exists a subset \(E^{\prime}\subset E\) such that_
\[\mathbf{b}-\sum_{\mathbf{a}\in\mathcal{A}\setminus E^{\prime}}\mathbf{a}\in \operatorname{Ap}(\mathcal{S},E^{\prime})\quad\text{and}\quad\mathbf{b}-\sum_{ \mathbf{a}\in\mathcal{A}\setminus E}\mathbf{a}\notin\operatorname{Ap}( \mathcal{S},E^{\prime}).\]
Proof.: By Proposition 4.2, \(\mathbf{b}=\mathbf{b}^{\prime}+\sum_{\mathbf{a}\in F}\mathbf{a}\), where \(\mathbf{b}^{\prime}\in D(j)\) for some \(j\geq-1\) and \(F\subseteq\{d+1,\ldots,e\}\) with \(\#F=e-q-1-j\leq e-d\); in particular, \(d-q-1\leq j\). Since \(\operatorname{depth}(\Bbbk[\mathcal{S}])=q\), by [3, Theorem 4.1], \(D(j)=\varnothing\) for \(j\geq d-q\). Therefore, \(j=d-q-1\) and \(F=\mathcal{A}\setminus E\).
Finally, if \(\operatorname{depth}(\Bbbk[\mathcal{S}])=d-1\), by Lemma 4.3, there exists \(E^{\prime}\subset E\) such that \(\mathbf{b}^{\prime}\notin\operatorname{Ap}(\mathcal{S},E^{\prime})\) and \(\mathbf{b}^{\prime}-\sum_{\mathbf{a}\in E\setminus E^{\prime}}\mathbf{a}\in \operatorname{Ap}(\mathcal{S},E^{\prime})\).
The following result follows easily from the definition of \(D(j)\).
**Corollary 4.5**.: _Let \(q=\operatorname{depth}(\Bbbk[\mathcal{S}])\). If \(\beta_{e-q,\mathbf{b}}\neq 0\), then \(\widetilde{H}_{d-q-1}(T_{\mathbf{b}-\sum_{\mathbf{a}\in\mathcal{A}\setminus E}\mathbf{a}})\neq 0\). In particular, \(D(d-q-1)\neq\varnothing\)._
As the following example shows, the converse of the above results is not true.
**Example 4.6**.: Let \(\mathcal{A}=\{\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}, \mathbf{a}_{5},\mathbf{a}_{6}\}\subset\mathbb{N}^{3}\) be such that \(\mathbf{a}_{i}\) is the \(i-\)th column of the following matrix:
\[\left(\begin{array}{cccccc}5&4&1&8&7&3\\ 3&1&5&5&4&4\\ 1&7&2&6&5&2\end{array}\right).\]
Using Singular [4], one can easily check that \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\) and that \(\beta_{e-2}=\beta_{4}=6\). Moreover, one can compute the set \(B\) of elements \(\mathbf{b}\in\mathcal{S}\), such that \(\beta_{4,\mathbf{b}}\neq 0\), namely,
\[B=\{\mathbf{b}_{1}=(79,80,63),\mathbf{b}_{2}=(89,87,66),\mathbf{b} _{3}=(82,72,62),\] \[\mathbf{b}_{4}=(91,78,69),\mathbf{b}_{5}=(97,77,72),\mathbf{b}_{6} =(106,72,80)\}.\]
Let \(I_{13}=\langle x_{1},x_{3},x_{2}x_{5}x_{5}^{5},x_{5}^{3}x_{6}^{5},x_{4}^{3}x_ {5}^{2},x_{2}x_{4}^{2}x_{6}^{6},x_{2}^{2}x_{6}^{11},x_{4}^{5}x_{6},x_{6}^{16},x _{4}^{2}x_{6}^{11},x_{5}^{8},x_{2}x_{5}^{7}x_{6}^{4},x_{4}^{11}\rangle\) be the initial ideal of \(I_{\mathcal{A}}+\langle x_{1},x_{3}\rangle\) with respect to the \(\mathcal{A}-\)graded reverse lexicographical ordering \(\prec\) such that \(x_{3}\prec x_{2}\prec x_{1}\prec x_{6}\prec x_{5}\prec x_{4}\).
Observe that \(x_{4}^{2}x_{5}^{7}x_{6}^{4}\not\in I_{13}\) and \(x_{4}^{2}x_{5}^{7}x_{6}^{4}x_{i}\in I_{13}\) for every \(i\in\{1,\ldots,6\}\), so Corollary 3.3 implies that \(\mathbf{c}=2\mathbf{a}_{4}+7\mathbf{a}_{5}+4\mathbf{a}_{6}=(77,54,55)\in\max_{\preceq_{\mathcal{S}}}\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{1},\mathbf{a}_{3}\})\); in particular, \(\mathbf{c}+\mathbf{a}_{2}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{1})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{3})\). Moreover, using the GAP ([7]) package numericalsgps ([5]), one can check that \(\mathbf{c}+\mathbf{a}_{2}\) has two factorizations, \((0,0,1,10,0,0)\) and \((0,1,0,2,7,4)\), so \(\mathbf{c}+\mathbf{a}_{2}\in D(0)\), as expected by Lemma 5.1. However, \(\mathbf{b}=\mathbf{c}+\mathbf{a}_{2}+\sum_{\ell=4}^{6}\mathbf{a}_{\ell}=(99,68,75)\not\in B\), that is, \(\beta_{4,\mathbf{b}}=0\).
In spite of this, one can check that there exists \(\mathbf{c}_{i}\in\max_{\preceq S}\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{ 1},\mathbf{a}_{2}\})\) such that \(\mathbf{b}_{i}=\mathbf{c}_{i}+\mathbf{a}_{3}+\sum_{j=4}^{6}\mathbf{a}_{j}\), for each \(i\in\{1,\ldots,6\}\). Concretely, in this case, \(\mathbf{c}_{1}=(60,62,48),\mathbf{c}_{2}=(70,69,51),\mathbf{c}_{3}=(63,54,47), \mathbf{c}_{4}=(72,60,54),\ \mathbf{c}_{5}=(78,59,57)\) and \(\mathbf{c}_{6}=(87,54,65)\).
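The factorization count invoked in this example is a bounded enumeration, so it can also be reproduced without GAP; the following sketch (a depth-first search we wrote for illustration, with the generator columns of the matrix above) recovers the two factorizations of \(\mathbf{c}+\mathbf{a}_{2}=(81,55,62)\).

```python
def factorizations(b, gens):
    """All u in N^e with sum_i u_i * gens[i] == b (componentwise), by
    depth-first search with pruning on negative coordinates."""
    e, d = len(gens), len(b)
    out = []
    def rec(i, rest, u):
        if i == e:
            if all(x == 0 for x in rest):
                out.append(tuple(u))
            return
        g = gens[i]
        m = min(rest[k] // g[k] for k in range(d) if g[k] > 0)
        for c in range(m + 1):
            rec(i + 1, tuple(x - c * y for x, y in zip(rest, g)), u + [c])
    rec(0, b, [])
    return out

# Columns of the matrix in Example 4.6; c + a_2 = (81, 55, 62).
A46 = [(5, 3, 1), (4, 1, 7), (1, 5, 2), (8, 5, 6), (7, 4, 5), (3, 4, 2)]
print(factorizations((81, 55, 62), A46))
# expected, per the two factorizations quoted above:
# [(0, 0, 1, 10, 0, 0), (0, 1, 0, 2, 7, 4)]
```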
## 5. Depth two in three-dimensional case
Let \(d=3\). As before, \(\mathcal{A}=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{e}\}\) and now \(E=\{\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\}\). In this case, the semigroup ring \(\Bbbk[\mathcal{S}]\) of the semigroup \(\mathcal{S}\) generated by \(\mathcal{A}\) has positive depth less than or equal to three. As mentioned in Section 3, the extreme cases, namely \(\operatorname{depth}(\Bbbk[\mathcal{S}])=1\) and \(\operatorname{depth}(\Bbbk[\mathcal{S}])=3\), are already characterized in terms of Apery sets. Thus, in this section, we focus our attention on the case of depth two.
The following is Lemma 4.3 for \(d=3\).
**Lemma 5.1**.: _One has that \(\mathbf{b}\in D(0)\) if and only if \(\mathbf{b}\notin\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{ Ap}(\mathcal{S},\mathbf{a}_{j})\) and \(\mathbf{b}-\mathbf{a}_{k}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap \operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\), for some \(\{i,j,k\}=\{1,2,3\}\)._
The following is a necessary and sufficient condition for \(\Bbbk[\mathcal{S}]\) to have depth two, when \(d=3\).
**Theorem 5.2**.: _The ring \(\Bbbk[\mathcal{S}]\) has depth two if and only if \(\operatorname{Ap}(\mathcal{S},\mathbf{b})\) does not have a maximal element for some (equivalently all) \(\mathbf{b}\in\mathcal{S}\), and \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S}, \mathbf{a}_{j})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\), for some \(1\leq i<j\leq 3\)._
Proof.: If \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\), then, by Proposition 3.4, \(\operatorname{Ap}(\mathcal{S},\mathbf{b})\) does not have a maximal element for some (equivalently all) \(\mathbf{b}\in\mathcal{S}\). Moreover, by [3, Theorem 4.1], there exists \(\mathbf{b}\in D(0)\). So, by Lemma 5.1, there exists a permutation \(\{i,j,k\}=\{1,2,3\}\) such that \(\mathbf{b}\notin\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname {Ap}(\mathcal{S},\mathbf{a}_{j})\) and \(\mathbf{b}-\mathbf{a}_{k}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i}) \cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\). Now, Proposition 3.8 implies that \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\).
Conversely, if \(\mathbf{c}\in\max_{\preceq_{\mathcal{S}}}\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\), then \(\mathbf{b}=\mathbf{c}+\mathbf{a}_{k}\notin\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\). By Lemma 5.1, \(\mathbf{b}\in D(0)\). Thus, \(\operatorname{depth}(\Bbbk[\mathcal{S}])\leq 2\), by [3, Theorem 4.1]. Since, by Proposition 3.4, \(\operatorname{depth}(\Bbbk[\mathcal{S}])>1\), we conclude that \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\).
The above result is not true for every choice \(1\leq i<j\leq 3\).
**Example 5.3**.: Let \(\mathcal{A}=\{\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}, \mathbf{a}_{5},\mathbf{a}_{6}\}\subset\mathbb{N}^{3}\) be such that \(\mathbf{a}_{i}\) is the \(i-\)th column of the following matrix:
\[\left(\begin{array}{cccccc}2&0&0&9&3&7\\ 0&2&0&7&9&3\\ 0&0&2&3&7&5\end{array}\right).\]
Using Singular [4], one can easily check that \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\). Let \(1\leq i<j\leq 3\), considering Proposition 3.1, let us denote by \(I_{ij}\) the initial ideal of \(I_{\mathcal{A}}+\langle x_{i},x_{j}\rangle\) with respect to the \(\mathcal{A}-\)graded reverse lexicographical ordering \(\prec\) such that \(x_{3}\prec x_{2}\prec x_{1}\prec x_{6}\prec x_{5}\prec x_{4}\). In this case, we have
\[I_{12}=\langle x_{1},x_{2},x_{3}x_{4}\rangle+\langle x_{4},x_{5},x_{6}\rangle ^{2},\quad I_{13}=\langle x_{1},x_{3}\rangle+\langle x_{4},x_{5},x_{6}\rangle ^{2}\]
and
\[I_{23}=\langle x_{2},x_{3},x_{1}^{2}x_{5}\rangle+\langle x_{4},x_{5},x_{6} \rangle^{2}.\]
Observe that \(x_{4}\not\in I_{12}\) and \(x_{4}x_{i}\in I_{12}\) for every \(i\in\{1,\dots,6\}\), so, by Corollary 3.3, we obtain \(\mathbf{a}_{4}\in\max_{\preceq_{\mathcal{S}}}\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{1},\mathbf{a}_{2}\})\). Analogously, \(x_{1}x_{5}\not\in I_{23}\) and \(x_{1}x_{5}x_{i}\in I_{23}\) for every \(i\in\{1,\dots,6\}\), which implies that \(\mathbf{a}_{1}+\mathbf{a}_{5}=(5,9,7)\in\max_{\preceq_{\mathcal{S}}}\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{2},\mathbf{a}_{3}\})\). However, since \(x_{2}\) does not belong to the support of any of the generators of \(I_{13}\), by Corollary 3.3, \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{1})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{3})\) does not have any maximal element.
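These Groebner computations can be cross-checked without Groebner bases at all: with the helpers `in_S` and `in_apery` from the Section 3 sketch (rebinding their global generator list; no box truncation is needed here, since membership in \(\mathcal{S}\) is decided exactly by the recursion), one verifies that \(\mathbf{a}_{4}\) is a witness in the sense of Proposition 3.8 for \(E^{\prime}=\{\mathbf{a}_{1},\mathbf{a}_{2}\}\).

```python
A53 = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (9, 7, 3), (3, 9, 7), (7, 3, 5)]
A = A53                  # rebind the generator list read by in_S
in_S.cache_clear()       # cached answers refer to the previous semigroup

b = A53[3]               # a_4 = (9, 7, 3)
E12 = [A53[0], A53[1]]
shifted = tuple(x + y for x, y in zip(b, A53[2]))   # a_4 + a_3
print(in_apery(b, E12) and not in_apery(shifted, E12))   # True
```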
The following result is Proposition 4.4 for \(d=3\).
**Proposition 5.4**.: _Let \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\). If \(\beta_{e-2,\mathbf{b}}\neq 0\), then there exist a permutation \(\{i,j,k\}=\{1,2,3\}\) and \(\mathbf{c}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap }(\mathcal{S},\mathbf{a}_{j})\) such that \(\mathbf{c}+\mathbf{a}_{k}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i}) \cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) and_
\[\mathbf{b}=\mathbf{c}+\mathbf{a}_{k}+\sum_{\ell=4}^{e}\mathbf{a}_{\ell}.\]
The following example shows that the subscripts \(i,j\) are not fixed for all Betti degrees in Proposition 5.4, in general.
**Example 5.5**.: Let \(\mathcal{A}=\{\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}, \mathbf{a}_{5},\mathbf{a}_{6}\}\subset\mathbb{N}^{3}\) be such that \(\mathbf{a}_{i}\) is the \(i-\)th column of the following matrix:
\[\left(\begin{array}{cccccc}2&0&0&11&5&9\\ 0&2&0&9&9&5\\ 0&0&2&5&9&11\end{array}\right).\]
Using Singular [4], one can easily check that \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\) and \(\beta_{4}=2\). In this case, \(\beta_{4,\mathbf{b}}\neq 0\) if and only if \(\mathbf{b}\in\{\mathbf{b}_{1}=(34,32,36),\mathbf{b}_{2}=(36,32,34)\}\). Let \(\mathbf{c}_{1}=\mathbf{b}_{1}-\mathbf{a}_{2}-\sum_{\ell=4}^{6}\mathbf{a}_{ \ell}=\mathbf{a}_{2}+\mathbf{a}_{6}=(9,7,11)\) and \(\mathbf{c}_{2}=\mathbf{b}_{2}-\mathbf{a}_{1}-\sum_{\ell=4}^{6}\mathbf{a}_{ \ell}=2\mathbf{a}_{1}+\mathbf{a}_{5}=(9,9,9)\). Observe that \(\mathbf{c}_{1}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap \operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) if and only if \(\{i,j\}=\{1,3\}\) and that \(\mathbf{c}_{2}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap \operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) if and only if \(\{i,j\}=\{2,3\}\).
Observe that \(\mathbf{c}_{1}\) and \(\mathbf{c}_{2}\) are maximal elements of \(\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{1},\mathbf{a}_{3}\})\) and \(\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{2},\mathbf{a}_{3}\})\), respectively. For this reason, _we wonder if \(\mathbf{c}\) in Proposition 5.4 can always be selected to be a maximal element_.
The last result of this section complements Corollary 3.7, in such a way that we can conclude that \((x_{i},x_{j}),(x_{i},x_{k})\) or \((x_{i},x_{j}-x_{k}),\ \{i,j,k\}=\{1,2,3\},\) is a regular sequence in \(\Bbbk[\mathcal{S}]\) when \(d=3\) and \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\).
**Proposition 5.6**.: _Let \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\) and \(\{i,j,k\}=\{1,2,3\}\). If \(x_{j}\) and \(x_{k}\) are zero-divisors of \(\Bbbk[\mathbf{x}]/(I+\langle x_{i}\rangle)\), then \(x_{j}-x_{k}\) is a nonzero-divisor of \(\Bbbk[\mathbf{x}]/(I+\langle x_{i}\rangle)\)._
Proof.: Assume to the contrary that \(x_{j}-x_{k}\) is a zero-divisor of \(\Bbbk[\mathbf{x}]/(I_{\mathcal{A}}+\langle x_{i}\rangle)\). Equivalently, since \(I_{\mathcal{A}}\) is a prime ideal, by [6, Proposition 1.10], there exists \(\mathbf{x}^{\mathbf{u}}\not\in I_{\mathcal{A}}+\langle x_{i}\rangle\) such that \(x_{j}\mathbf{x}^{\mathbf{u}}\in I_{\mathcal{A}}+\langle x_{i}\rangle\) and \(x_{k}\mathbf{x}^{\mathbf{u}}\in I_{\mathcal{A}}+\langle x_{i}\rangle\). So, if \(\mathbf{b}=\sum_{l=1}^{e}u_{l}\mathbf{a}_{l}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\), we have that \(\mathbf{b}+\mathbf{a}_{j}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) and \(\mathbf{b}+\mathbf{a}_{k}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\). Thus, \(\mathbf{b}+\mathbf{c}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) for every \(\mathbf{c}\in\mathcal{S}\setminus\operatorname{Ap}(\mathcal{S},E)\).
If \(\mathbf{b}+\mathbf{c}\notin\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) for all \(\mathbf{c}\in\operatorname{Ap}(\mathcal{S},E)\), then \(\mathbf{b}\) is a maximal element of \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\). Otherwise, \(\mathbf{b}+\mathbf{c}_{1}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) for some \(\mathbf{c}_{1}\in\operatorname{Ap}(\mathcal{S},E)\). Since \(\mathbf{b}+\mathbf{c}_{1}+\mathbf{a}_{j}\notin\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) and \(\mathbf{b}+\mathbf{c}_{1}+\mathbf{a}_{k}\notin\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\), we may repeat the same argument with \(\mathbf{b}+\mathbf{c}_{1}\) instead of \(\mathbf{b}\). So, either \(\mathbf{b}+\mathbf{c}_{1}\) is maximal or there exists \(\mathbf{c}_{2}\in\operatorname{Ap}(\mathcal{S},E)\) such that \(\mathbf{c}_{1}\preceq_{\mathcal{S}}\mathbf{c}_{2}\), \(\mathbf{b}+\mathbf{c}_{2}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) and \(\mathbf{b}+\mathbf{c}_{2}+\mathbf{a}_{j},\ \mathbf{b}+\mathbf{c}_{2}+\mathbf{a}_{k}\notin\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\). As \(\operatorname{Ap}(\mathcal{S},E)\) is finite, this process stops. Therefore, \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) has a maximal element, in contradiction with Proposition 3.4, since \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\).
## 6. Depth two in four-dimensional case
Let \(d=4\) and, according to our notation, \(E=\{\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\}\). Let us characterize the property that \(\Bbbk[\mathcal{S}]\) has depth two in this case. To do this, we resort directly to the Koszul homology techniques on which the combinatorial constructions used in the previous sections are based.
We begin by establishing the notation and briefly recalling the notion of Koszul complex.
Let \(\overline{\mathbf{t}}\) be the sequence \((\mathbf{t}^{\mathbf{a}_{1}},\ldots,\mathbf{t}^{\mathbf{a}_{d}})\) of elements of \(\Bbbk[\mathcal{S}]\). Let \(K_{0}=\Bbbk[\mathcal{S}]\) and \(K_{p}=\bigoplus\Bbbk[\mathcal{S}]\mathbf{e}_{i_{1}\ldots i_{p}}\) be the free \(\Bbbk[\mathcal{S}]-\)module of rank \(\binom{d}{p}\) with basis \(\{\mathbf{e}_{i_{1}\ldots i_{p}}\ ;\ 1\leq i_{1}<\cdots<i_{p}\leq d\}\) for each \(1\leq p\leq d\). Set
\[\phi_{p}:K_{p}\longrightarrow K_{p-1};\ \mathbf{e}_{i_{1}\ldots i_{p}}\mapsto \sum_{j=1}^{p}(-1)^{j-1}\mathbf{t}^{\mathbf{a}_{i_{j}}}\mathbf{e}_{i_{1}\ldots \widehat{i_{j}}\ldots i_{p}},\ p=2,\ldots,d,\]
and \(\phi_{1}:K_{1}=\bigoplus_{i=1}^{d}\Bbbk[\mathcal{S}]\mathbf{e}_{i}\to K_{0}=\Bbbk[\mathcal{S}];\mathbf{e}_{i}\mapsto\mathbf{t}^{\mathbf{a}_{i}}\). One can check that \(\phi_{p-1}\circ\phi_{p}=0\), for every \(p\in\{2,\ldots,d\}\). Thus, we have that
\[K_{\bullet}(\overline{\mathbf{t}}):0\to K_{d}\xrightarrow{\phi_{d}}K_{d-1}\longrightarrow\cdots\longrightarrow K_{1}\xrightarrow{\phi_{1}}K_{0}\to 0\]
is a chain complex of \(\Bbbk[\mathcal{S}]-\)modules. This complex is called the Koszul complex associated to \(\overline{\mathbf{t}}\).
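For instance, for \(d=2\) this is simply the complex

\[K_{\bullet}(\overline{\mathbf{t}})\colon 0\to\Bbbk[\mathcal{S}]\mathbf{e}_{12}\xrightarrow{\phi_{2}}\Bbbk[\mathcal{S}]\mathbf{e}_{1}\oplus\Bbbk[\mathcal{S}]\mathbf{e}_{2}\xrightarrow{\phi_{1}}\Bbbk[\mathcal{S}]\to 0,\]

with \(\phi_{2}(\mathbf{e}_{12})=\mathbf{t}^{\mathbf{a}_{1}}\mathbf{e}_{2}-\mathbf{t}^{\mathbf{a}_{2}}\mathbf{e}_{1}\) and \(\phi_{1}(\mathbf{e}_{i})=\mathbf{t}^{\mathbf{a}_{i}}\), so that \(\phi_{1}(\phi_{2}(\mathbf{e}_{12}))=\mathbf{t}^{\mathbf{a}_{1}}\mathbf{t}^{\mathbf{a}_{2}}-\mathbf{t}^{\mathbf{a}_{2}}\mathbf{t}^{\mathbf{a}_{1}}=0\), as expected; this worked case is ours, spelled out directly from the definition above.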
The Koszul complex has homology groups
\[H_{p}(K_{\bullet}(\overline{\mathbf{t}}),\Bbbk[\mathcal{S}]):=\frac{\ker\phi_{ p}}{\operatorname{Im}\ \phi_{p+1}},\quad p=0,\ldots,d,\]
and \(H_{p}(K_{\bullet}(\overline{\mathbf{t}}),\Bbbk[\mathcal{S}])=0\) for every \(p>d\).
The following result is an immediate consequence of [11, 16.8 and 16.6].
**Proposition 6.1**.: _With the above notation, \(\operatorname{depth}(\Bbbk[\mathcal{S}])=d-\max\{p\mid H_{p}(K_{\bullet}(\overline{\mathbf{t}}),\Bbbk[\mathcal{S}])\neq 0\}\)._
Before characterizing the case where the depth of \(\Bbbk[\mathcal{S}]\) is two for \(d=4\), we need to prove a couple of technical lemmas valid for all \(d\geq 4\).
**Lemma 6.2**.: _Let \(d\geq 4\) and \(\{i<j<k\}\subseteq\{1,\ldots,d\}\). There exists \(\mathbf{f}=f_{ij}\mathbf{e}_{ij}-f_{ik}\mathbf{e}_{ik}+f_{jk}\mathbf{e}_{jk}\in\ker\phi_{2}\setminus\operatorname{Im}\ \phi_{3},\) for some \(f_{ij},f_{ik},f_{jk}\in\Bbbk[\mathcal{S}]\) if and only if for every \(i_{4}\in\{1,\ldots,d\}\setminus\{i,j,k\}\) there exist a permutation \(\{i_{1},i_{2},i_{3}\}=\{i,j,k\}\) and \(\mathbf{a}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i_{3}})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i_{4}})\) such that \(\mathbf{t}^{\mathbf{a}}\) appears with nonzero coefficient in \(f_{i_{1}i_{2}}\) and both \(\mathbf{a}+\mathbf{a}_{i_{1}}-\mathbf{a}_{i_{3}}\) and \(\mathbf{a}+\mathbf{a}_{i_{2}}-\mathbf{a}_{i_{3}}\) belong to \(\mathcal{S}\)._
Proof.: Let \(f_{jk}=\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{\mathbf{a}}^{jk}\mathbf{t}^{ \mathbf{a}}\). If \(f_{jk}=0\), then
\[0=\phi_{2}(\mathbf{f}) =f_{ij}(\mathbf{t}^{\mathbf{a}_{j}}\mathbf{e}_{i}-\mathbf{t}^{ \mathbf{a}_{i}}\mathbf{e}_{j})-f_{ik}(\mathbf{t}^{\mathbf{a}_{k}}\mathbf{e}_{i} -\mathbf{t}^{\mathbf{a}_{i}}\mathbf{e}_{k})+f_{jk}(\mathbf{t}^{\mathbf{a}_{j}} \mathbf{e}_{k}-\mathbf{t}^{\mathbf{a}_{k}}\mathbf{e}_{j})\] \[=(f_{ij}\mathbf{t}^{\mathbf{a}_{j}}-f_{ik}\mathbf{t}^{\mathbf{a}_{ k}})\mathbf{e}_{i}-(f_{jk}\mathbf{t}^{\mathbf{a}_{k}}+f_{ij}\mathbf{t}^{ \mathbf{a}_{i}})\mathbf{e}_{j}+(f_{ik}\mathbf{t}^{\mathbf{a}_{i}}+f_{jk} \mathbf{t}^{\mathbf{a}_{j}})\mathbf{e}_{k}\]
implies \(\mathbf{f}=0\), a contradiction. Hence, for each \(\mathbf{a}\in\mathcal{S}\) such that \(\lambda_{\mathbf{a}}^{jk}\neq 0\), there exist \(\mathbf{c}_{\mathbf{a}},\mathbf{c}_{\mathbf{a}}^{\prime}\in\mathcal{S}\) such that \(\mathbf{a}+\mathbf{a}_{j}=\mathbf{c}_{\mathbf{a}}+\mathbf{a}_{i}\) and \(\mathbf{a}+\mathbf{a}_{k}=\mathbf{c}_{\mathbf{a}}^{\prime}+\mathbf{a}_{i}\); in particular, both \(\mathbf{a}+\mathbf{a}_{j}-\mathbf{a}_{i}\) and \(\mathbf{a}+\mathbf{a}_{k}-\mathbf{a}_{i}\) belong to \(\mathcal{S}\). Now, if \(\mathbf{a}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\), then \(\mathbf{c}_{\mathbf{a}}-\mathbf{a}_{j}=\mathbf{c}_{\mathbf{a}}^{\prime}-\mathbf{a}_{k}=\mathbf{a}-\mathbf{a}_{i}\in\mathcal{S}\) and, consequently,
\[\mathbf{g}_{\mathbf{a}}:=\mathbf{t}^{\mathbf{c}_{\mathbf{a}}^{\prime}}\mathbf{e}_{ij}+\mathbf{t}^{\mathbf{c}_{\mathbf{a}}}\mathbf{e}_{ik}-\mathbf{t}^{\mathbf{a}}\mathbf{e}_{jk}=\phi_{3}(\mathbf{t}^{\mathbf{a}-\mathbf{a}_{i}}\mathbf{e}_{ijk})\in\operatorname{Im}\ \phi_{3}\subset\ker\phi_{2}.\]
Therefore, if \(\mathbf{a}\not\in\mathrm{Ap}(\mathcal{S},\mathbf{a}_{i})\), for every \(\mathbf{a}\in\mathcal{S}\) with \(\lambda_{\mathbf{a}}^{jk}\neq 0\), then
\[\mathbf{g}:=\mathbf{f}-\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{\mathbf{a}}^{jk }\mathbf{g}_{\mathbf{a}}=\left(f_{ij}-\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{ \mathbf{a}}^{jk}\mathbf{t}^{\mathbf{c}_{\mathbf{a}}^{\prime}}\right)\mathbf{e} _{ij}-\left(f_{ik}-\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{\mathbf{a}}^{jk} \mathbf{t}^{\mathbf{c}_{\mathbf{a}}}\right)\mathbf{e}_{ik}\in\ker\phi_{2}.\]
However, \(\phi_{2}(\mathbf{g})=0\) implies \(\mathbf{g}=0\), that is, \(\mathbf{f}=\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{\mathbf{a}}^{jk}\mathbf{g}_{\mathbf{a}}\in\operatorname{Im}\ \phi_{3}\) which is a contradiction. So, there exists \(\mathbf{b}\in\mathcal{S}\) with \(\lambda_{\mathbf{b}}^{jk}\neq 0\) such that \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\).
Let \(\mathbf{h}=\mathbf{f}-\sum_{\mathbf{a}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})}\lambda_{\mathbf{a}}^{jk}\mathbf{g}_{\mathbf{a}}\). By the previous arguments, \(\mathbf{h}=h_{ij}\mathbf{e}_{ij}-h_{ik}\mathbf{e}_{ik}+h_{jk}\mathbf{e}_{jk}\in\ker\phi_{2}\setminus\operatorname{Im}\ \phi_{3}\), with \(h_{jk}=\sum_{\mathbf{a}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})}\lambda_{\mathbf{a}}^{jk}\mathbf{t}^{\mathbf{a}}\neq 0\). Let \(l\in\{1,\ldots,d\}\setminus\{i,j,k\}\). If \(\mathbf{a}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{l})\) for every \(\mathbf{a}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) with \(\lambda_{\mathbf{a}}^{jk}\neq 0\), then \(h_{jk}=\mathbf{t}^{\mathbf{a}_{l}}\tilde{h}_{jk}\) and both \(h_{ij}\) and \(h_{ik}\) are divisible by \(\mathbf{t}^{\mathbf{a}_{l}}\), so we may replace \(\mathbf{h}\) by \(\mathbf{h}/\mathbf{t}^{\mathbf{a}_{l}}\). Thus, without loss of generality, we suppose that there exists \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) with \(\lambda_{\mathbf{b}}^{jk}\neq 0\) such that, at least, one of \(\mathbf{b},\mathbf{c}_{\mathbf{b}}\) or \(\mathbf{c}_{\mathbf{b}}^{\prime}\) belongs to \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{l})\). So, we distinguish three cases:
* If \(\mathbf{b}\in\mathrm{Ap}(\mathcal{S},\mathbf{a}_{l})\), then \(\mathbf{b}\in\mathrm{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\mathrm{Ap}(\mathcal{S}, \mathbf{a}_{l})\). We already know that \(\mathbf{b}+\mathbf{a}_{k}-\mathbf{a}_{i}\) and \(\mathbf{b}+\mathbf{a}_{j}-\mathbf{a}_{i}\) belong to \(\mathcal{S}\).
* If \(\mathbf{c}_{\mathbf{b}}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{l})\), then \(\mathbf{c}_{\mathbf{b}}-\mathbf{a}_{j}=\mathbf{b}-\mathbf{a}_{i}\not\in\mathcal{S}\). Thus, \(\mathbf{c}_{\mathbf{b}}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{l})\). Moreover, \(\mathbf{c}_{\mathbf{b}}+\mathbf{a}_{i}-\mathbf{a}_{j}=\mathbf{b}\in\mathcal{S}\) and, since \(\phi_{2}(\mathbf{h})=0\), there exists a monomial \(\mathbf{t}^{\mathbf{d}}\) of \(f_{ik}\) such that \(\mathbf{c}_{\mathbf{b}}+\mathbf{a}_{k}=\mathbf{d}+\mathbf{a}_{j}\), that is, \(\mathbf{c}_{\mathbf{b}}+\mathbf{a}_{k}-\mathbf{a}_{j}\in\mathcal{S}\).
* If \(\mathbf{c}_{\mathbf{b}}^{\prime}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{l})\), then \(\mathbf{c}_{\mathbf{b}}^{\prime}-\mathbf{a}_{k}=\mathbf{b}-\mathbf{a}_{i}\not\in\mathcal{S}\). Thus, \(\mathbf{c}_{\mathbf{b}}^{\prime}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{k})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{l})\). Moreover, \(\mathbf{c}_{\mathbf{b}}^{\prime}+\mathbf{a}_{i}-\mathbf{a}_{k}=\mathbf{b}\in\mathcal{S}\) and, since \(\phi_{2}(\mathbf{h})=0\), there exists a monomial \(\mathbf{t}^{\mathbf{d}}\) of \(f_{jk}\) such that \(\mathbf{c}_{\mathbf{b}}^{\prime}+\mathbf{a}_{i}=\mathbf{d}+\mathbf{a}_{j}\), that is, \(\mathbf{c}_{\mathbf{b}}^{\prime}+\mathbf{a}_{i}-\mathbf{a}_{j}\in\mathcal{S}\).
Conversely, let \(\mathbf{f}=\mathbf{t}^{\mathbf{a}+\mathbf{a}_{k}-\mathbf{a}_{i}}\mathbf{e}_{ij}+ \mathbf{t}^{\mathbf{a}+\mathbf{a}_{j}-\mathbf{a}_{i}}\mathbf{e}_{ik}-\mathbf{t}^{ \mathbf{a}}\mathbf{e}_{jk}\). Clearly, \(\mathbf{f}\in\ker\phi_{2}\), and \(\mathbf{f}\not\in\mathrm{Im}\ \phi_{3}\) because \(\mathbf{a}-\mathbf{a}_{i}\not\in\mathcal{S}\).
**Lemma 6.3**.: _Let \(d\geq 4\) and \(\{i<j<k<l\}\subseteq\{1,\ldots,d\}\). There exists \(\mathbf{f}=f_{ik}\mathbf{e}_{ik}+f_{il}\mathbf{e}_{il}+f_{jk}\mathbf{e}_{jk}+f_{jl}\mathbf{e}_{jl}\in\ker\phi_{2}\setminus\operatorname{Im}\ \phi_{3}\) for some \(f_{ik},f_{il},f_{jk},f_{jl}\in\Bbbk[\mathcal{S}]\) if and only if one of the following conditions holds:_

1. _there exist \(i_{1},i_{2},i_{3},i_{4}\) with \(\{i_{1},i_{2}\}=\{i,j\}\) and \(\{i_{3},i_{4}\}=\{k,l\}\), and \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i_{1}})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i_{4}})\) such that \(\mathbf{b}+\mathbf{a}_{i_{2}}-\mathbf{a}_{i_{1}}\), \(\mathbf{b}+\mathbf{a}_{i_{3}}-\mathbf{a}_{i_{4}}\) and \(\mathbf{b}+\mathbf{a}_{i_{2}}+\mathbf{a}_{i_{3}}-(\mathbf{a}_{i_{1}}+\mathbf{a}_{i_{4}})\) belong to \(\mathcal{S}\);_
2. _there exist \(\{i_{1}<i_{2}<i_{3}\}\subset\{i,j,k,l\}\) and \(\mathbf{g}=g_{i_{1}i_{2}}\mathbf{e}_{i_{1}i_{2}}+g_{i_{1}i_{3}}\mathbf{e}_{i_{1}i_{3}}+g_{i_{2}i_{3}}\mathbf{e}_{i_{2}i_{3}}\in\ker\phi_{2}\setminus\operatorname{Im}\ \phi_{3}\)._

Proof.: Let \(f_{jk}=\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{\mathbf{a}}^{jk}\mathbf{t}^{\mathbf{a}}\). If \(f_{jk}=0\), then

\[0=\phi_{2}(\mathbf{f})=(f_{ik}\mathbf{t}^{\mathbf{a}_{k}}+f_{il}\mathbf{t}^{\mathbf{a}_{l}})\mathbf{e}_{i}+f_{jl}\mathbf{t}^{\mathbf{a}_{l}}\mathbf{e}_{j}-f_{ik}\mathbf{t}^{\mathbf{a}_{i}}\mathbf{e}_{k}-(f_{il}\mathbf{t}^{\mathbf{a}_{i}}+f_{jl}\mathbf{t}^{\mathbf{a}_{j}})\mathbf{e}_{l}\]
implies \(\mathbf{f}=0\), a contradiction. Hence, for each \(\mathbf{a}\in\mathcal{S}\) such that \(\lambda_{\mathbf{a}}^{jk}\neq 0\), there exist \(\mathbf{c}_{\mathbf{a}},\mathbf{c}_{\mathbf{a}}^{\prime}\in\mathcal{S}\) such that \(\mathbf{a}+\mathbf{a}_{j}=\mathbf{c}_{\mathbf{a}}+\mathbf{a}_{i}\) and \(\mathbf{a}+\mathbf{a}_{k}=\mathbf{c}_{\mathbf{a}}^{\prime}+\mathbf{a}_{l}\); in particular, both \(\mathbf{a}+\mathbf{a}_{j}-\mathbf{a}_{i}\) and \(\mathbf{a}+\mathbf{a}_{k}-\mathbf{a}_{l}\) belong to \(\mathcal{S}\). Moreover, there exists \(\mathbf{c}_{\mathbf{a}}^{\prime\prime}\in\mathcal{S}\) such that \(\mathbf{c}_{\mathbf{a}}+\mathbf{a}_{k}=\mathbf{c}_{\mathbf{a}}^{\prime\prime}+\mathbf{a}_{l}\). So, \(\mathbf{a}+\mathbf{a}_{j}+\mathbf{a}_{k}=\mathbf{c}_{\mathbf{a}}+\mathbf{a}_{i}+\mathbf{a}_{k}=\mathbf{c}_{\mathbf{a}}^{\prime\prime}+\mathbf{a}_{i}+\mathbf{a}_{l}\), that is, \(\mathbf{a}+\mathbf{a}_{j}+\mathbf{a}_{k}-(\mathbf{a}_{i}+\mathbf{a}_{l})\) belongs to \(\mathcal{S}\). Now, if \(\mathbf{a}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{l})\), then we get (1). Otherwise, without loss of generality, we suppose \(\mathbf{a}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\). Then
\[\mathbf{g}_{\mathbf{a}}:=\mathbf{t}^{\mathbf{a}-\mathbf{a}_{i}+\mathbf{a}_{k}}\mathbf{e}_{ij}-\mathbf{t}^{\mathbf{c}_{\mathbf{a}}}\mathbf{e}_{ik}+\mathbf{t}^{\mathbf{a}}\mathbf{e}_{jk}=\phi_{3}(\mathbf{t}^{\mathbf{a}-\mathbf{a}_{i}}\mathbf{e}_{ijk})\in\operatorname{Im}\ \phi_{3}\subset\ker\phi_{2}.\]
Therefore, if \(\mathbf{a}\not\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\), for every \(\mathbf{a}\in\mathcal{S}\) with \(\lambda_{\mathbf{a}}^{jk}\neq 0\), then
\[\mathbf{g}:=\mathbf{f}-\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{\mathbf{a}}^{jk} \mathbf{g}_{\mathbf{a}}=-\left(\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{ \mathbf{a}}^{jk}\mathbf{t}^{\mathbf{a}-\mathbf{a}_{i}+\mathbf{a}_{k}}\right) \mathbf{e}_{ij}+\left(f_{ik}+\sum_{\mathbf{a}\in\mathcal{S}}\lambda_{\mathbf{a }}^{jk}\mathbf{t}^{\mathbf{c}_{\mathbf{a}}}\right)\mathbf{e}_{ik}+f_{il} \mathbf{e}_{il}+f_{jl}\mathbf{e}_{jl}\in\ker\phi_{2}\backslash\operatorname{Im }\ \phi_{3}.\]
Finally, since \(\phi_{2}(\mathbf{g})=0\), we conclude that the coefficient of \(\mathbf{e}_{ik}\) is zero and, consequently, that (2) holds.
Conversely, we treat the two cases separately. On the one hand, if (1) holds, then
\[\mathbf{f}=\mathbf{t}^{\mathbf{b}+\mathbf{a}_{i_{2}}-\mathbf{a}_{i_{1}}} \mathbf{e}_{i_{1}i_{3}}+\mathbf{t}^{\mathbf{b}+\mathbf{a}_{i_{3}}-\mathbf{a}_{ i_{4}}}\mathbf{e}_{i_{2}i_{4}}-\mathbf{t}^{\mathbf{b}}\mathbf{e}_{i_{2}i_{3}}- \mathbf{t}^{\mathbf{b}+\mathbf{a}_{i_{2}}+\mathbf{a}_{i_{3}}-(\mathbf{a}_{i_{ 1}}+\mathbf{a}_{i_{4}})}\mathbf{e}_{i_{1}i_{4}}\in\ker\phi_{2};\]
moreover, \(\mathbf{f}\not\in\operatorname{Im}\ \phi_{3}\) because \(\mathbf{b}-\mathbf{a}_{i_{1}}\not\in\mathcal{S}\) and \(\mathbf{b}-\mathbf{a}_{i_{4}}\not\in\mathcal{S}\), and therefore the third addend cannot come from any generator of \(\operatorname{Im}\ \phi_{3}\). On the other hand, if (2) holds, then arranging indexes if necessary, we have that
\[\mathbf{f}=\mathbf{g}+\phi_{3}(g_{i_{1}i_{2}}\mathbf{e}_{i_{1}i_{2}i_{4}})=g_{ i_{1}i_{3}}\mathbf{e}_{i_{1}i_{3}}+g_{i_{2}i_{3}}\mathbf{e}_{i_{2}i_{3}}+ \mathbf{t}^{\mathbf{a}_{i_{1}}}g_{i_{1}i_{2}}\mathbf{e}_{i_{2}i_{4}}-\mathbf{ t}^{\mathbf{a}_{i_{2}}}g_{i_{1}i_{2}}\mathbf{e}_{i_{1}i_{4}}\in\ker\phi_{2} \setminus\operatorname{Im}\ \phi_{3},\]
with \(i_{4}\in\{i,j,k,l\}\setminus\{i_{1},i_{2},i_{3}\}\).
We are now in a position to state and prove our characterization of depth two for \(d=4\).
**Theorem 6.4**.: _If \(d=4\) then \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\) if and only if \(\operatorname{Ap}(\mathcal{S},\mathbf{a})\) does not have a maximal element for some (every) \(\mathbf{a}\in\mathcal{S}\) and there exist a permutation \(\{i,j,k,l\}=\{1,2,3,4\}\) and \(\mathbf{b}\in\mathcal{S}\) such that one of the following conditions holds:_
1. \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap} (\mathcal{S},\mathbf{a}_{j})\) _such that_ \(\mathbf{b}+\mathbf{a}_{k}-\mathbf{a}_{i}\) _and_ \(\mathbf{b}+\mathbf{a}_{l}-\mathbf{a}_{i}\) _belong to_ \(S\)_._
2. \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap} (\mathcal{S},\mathbf{a}_{j})\) _such that_ \(\mathbf{b}+\mathbf{a}_{k}-\mathbf{a}_{i},\mathbf{b}+\mathbf{a}_{l}-\mathbf{a}_{j}\) _and_ \(\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}-(\mathbf{a}_{i}+\mathbf{a}_{j})\) _belong to_ \(S\)_._
_In both cases, \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\)._
Proof.: If \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\), by Proposition 3.4, \(\operatorname{Ap}(\mathcal{S},\mathbf{a})\) does not have a maximal element for some (every) \(\mathbf{a}\in\mathcal{S}\). Moreover, by Proposition 6.1, \(H_{2}(K_{\bullet}(\overline{\mathbf{t}}),\Bbbk[\mathcal{S}])\neq 0\). Let \(\mathbf{f}=\sum_{1\leq i<j\leq 4}f_{ij}\mathbf{e}_{ij}\in\ker\phi_{2}\setminus\operatorname{Im}\ \phi_{3}\), where \(f_{ij}\in\Bbbk[\mathcal{S}],\ 1\leq i<j\leq 4\). We distinguish two cases:
1. If there exists \(h_{ij}^{k}\in\Bbbk[\mathcal{S}],\ 1\leq i<j\leq 4\) and \(k\in\{1,2\}\), such that \(\mathbf{f}\) can written in the form \[(\mathbf{t}^{a_{3}}h_{12}^{1}+\mathbf{t}^{a_{4}}h_{12}^{2})\mathbf{e}_{12} +(\mathbf{t}^{a_{4}}h_{13}^{1}-\mathbf{t}^{a_{2}}h_{13}^{2})\mathbf{e}_{13}-( \mathbf{t}^{a_{2}}h_{14}^{1}+\mathbf{t}^{a_{3}}h_{14}^{2})\mathbf{e}_{14}+\] \[+(\mathbf{t}^{a_{1}}h_{23}^{1}+\mathbf{t}^{a_{4}}h_{23}^{2}) \mathbf{e}_{23}+(\mathbf{t}^{a_{1}}h_{24}^{1}-\mathbf{t}^{a_{3}}h_{24}^{2}) \mathbf{e}_{24}+(\mathbf{t}^{a_{1}}h_{34}^{1}+\mathbf{t}^{a_{2}}h_{34}^{2}) \mathbf{e}_{34},\] then \[\mathbf{g} =\mathbf{f}-\phi_{3}(h_{12}^{1}\mathbf{e}_{123}+h_{12}^{2}\mathbf{e }_{124}+h_{34}^{1}\mathbf{e}_{134}+h_{34}^{2}\mathbf{e}_{234})\] \[=(\mathbf{t}^{a_{4}}h_{13}^{1}-\mathbf{t}^{a_{2}}h_{13}^{2}) \mathbf{e}_{13}-(\mathbf{t}^{a_{2}}h_{14}^{1}+\mathbf{t}^{a_{3}}h_{14}^{2}
2. If there exists a permutation \(\{i,j,k,l\}=\{1,2,3,4\}\) such that \(\pm f_{kl}=\sum_{\mathbf{b}\in\mathcal{S}}\lambda_{\mathbf{b}}^{kl}\mathbf{t}^{\mathbf{b}}\) cannot be written in the form \(\mathbf{t}^{\mathbf{a}_{i}}g_{j}\pm\mathbf{t}^{\mathbf{a}_{j}}g_{i}\), then there exists \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) with \(\lambda_{\mathbf{b}}^{kl}\neq 0\). For simplicity, rearranging indices if necessary, we suppose \(i=1,j=2,k=3\) and \(l=4\). Therefore, \(\mathbf{f}=\sum_{1\leq i<j\leq 4}f_{ij}\mathbf{e}_{ij}\) and there exists a monomial \(\mathbf{t}^{\mathbf{b}}\) of \(f_{34}\) such that \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{1})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{2})\). Now, since \[0=\phi_{2}(\mathbf{f})=f_{12}(\mathbf{t}^{\mathbf{a}_{2}}\mathbf{e}_{1}-\mathbf{t}^{\mathbf{a}_{1}}\mathbf{e}_{2})+f_{13}(\mathbf{t}^{\mathbf{a}_{3}}\mathbf{e}_{1}-\mathbf{t}^{\mathbf{a}_{1}}\mathbf{e}_{3})+f_{14}(\mathbf{t}^{\mathbf{a}_{4}}\mathbf{e}_{1}-\mathbf{t}^{\mathbf{a}_{1}}\mathbf{e}_{4})\\ +f_{23}(\mathbf{t}^{\mathbf{a}_{3}}\mathbf{e}_{2}-\mathbf{t}^{\mathbf{a}_{2}}\mathbf{e}_{3})+f_{24}(\mathbf{t}^{\mathbf{a}_{4}}\mathbf{e}_{2}-\mathbf{t}^{\mathbf{a}_{2}}\mathbf{e}_{4})+f_{34}(\mathbf{t}^{\mathbf{a}_{4}}\mathbf{e}_{3}-\mathbf{t}^{\mathbf{a}_{3}}\mathbf{e}_{4}),\] in particular the coefficients \(-f_{13}\mathbf{t}^{\mathbf{a}_{1}}-f_{23}\mathbf{t}^{\mathbf{a}_{2}}+f_{34}\mathbf{t}^{\mathbf{a}_{4}}\) and \(-f_{14}\mathbf{t}^{\mathbf{a}_{1}}-f_{24}\mathbf{t}^{\mathbf{a}_{2}}-f_{34}\mathbf{t}^{\mathbf{a}_{3}}\) of \(\mathbf{e}_{3}\) and \(\mathbf{e}_{4}\), respectively, are zero. Therefore, there exist \(\mathbf{c},\mathbf{c}^{\prime}\in\mathcal{S}\) such that \(\mathbf{b}+\mathbf{a}_{4}=\mathbf{c}+\mathbf{a}_{i}\) and \(\mathbf{b}+\mathbf{a}_{3}=\mathbf{c}^{\prime}+\mathbf{a}_{j}\) with \(i,j\in\{1,2\}\). If \(i=j\), then \(\mathbf{b}-\mathbf{a}_{i}=\mathbf{c}^{\prime}-\mathbf{a}_{3}=\mathbf{c}-\mathbf{a}_{4}\), so \(\mathbf{f}-\phi_{3}(\mathbf{t}^{\mathbf{b}-\mathbf{a}_{i}}\mathbf{e}_{i34})=\mathbf{f}-\mathbf{t}^{\mathbf{b}}\mathbf{e}_{34}+\mathbf{t}^{\mathbf{c}^{\prime}}\mathbf{e}_{i4}-\mathbf{t}^{\mathbf{c}}\mathbf{e}_{i3}\in\ker\phi_{2}\setminus\operatorname{Im}\ \phi_{3}\). Thus, we may suppose \(i\neq j\), say \(i=1\) and \(j=2\), so that \(\mathbf{b}+\mathbf{a}_{4}=\mathbf{c}+\mathbf{a}_{1}\) and \(\mathbf{b}+\mathbf{a}_{3}=\mathbf{c}^{\prime}+\mathbf{a}_{2}\), that is, \(\mathbf{b}+\mathbf{a}_{4}-\mathbf{a}_{1}\) and \(\mathbf{b}+\mathbf{a}_{3}-\mathbf{a}_{2}\) belong to \(\mathcal{S}\). Finally, since the coefficient \(f_{12}\mathbf{t}^{\mathbf{a}_{2}}+f_{13}\mathbf{t}^{\mathbf{a}_{3}}+f_{14}\mathbf{t}^{\mathbf{a}_{4}}\) of \(\mathbf{e}_{1}\) in \(\phi_{2}(\mathbf{f})\) must be zero, there exists \(\mathbf{c}^{\prime\prime}\in\mathcal{S}\) such that 1. \(\mathbf{c}+\mathbf{a}_{3}=\mathbf{c}^{\prime\prime}+\mathbf{a}_{2}\). So, \(\mathbf{b}+\mathbf{a}_{3}+\mathbf{a}_{4}=\mathbf{c}^{\prime\prime}+\mathbf{a}_{1}+\mathbf{a}_{2}\) which implies \(\mathbf{b}+\mathbf{a}_{3}+\mathbf{a}_{4}-(\mathbf{a}_{1}+\mathbf{a}_{2})\in\mathcal{S}\) or 2. \(\mathbf{c}+\mathbf{a}_{3}=\mathbf{c}^{\prime\prime}+\mathbf{a}_{4}\). So, \(\mathbf{b}+\mathbf{a}_{3}+\mathbf{a}_{4}=\mathbf{c}^{\prime\prime}+\mathbf{a}_{4}+\mathbf{a}_{1}\) which implies \(\mathbf{c}^{\prime}+\mathbf{a}_{2}=\mathbf{b}+\mathbf{a}_{3}=\mathbf{c}^{\prime\prime}+\mathbf{a}_{1}\).
Moreover, since the coefficient \(f_{24}\mathbf{t}^{\mathbf{a}_{4}}-f_{12}\mathbf{t}^{\mathbf{a}_{1}}+f_{23}\mathbf{t}^{\mathbf{a}_{3}}\) of \(\mathbf{e}_{2}\) in \(\phi_{2}(\mathbf{f})\) must be zero, there exists \(\mathbf{c}^{\prime\prime\prime}\in\mathcal{S}\) such that \(\mathbf{c}+\mathbf{a}_{3}=\mathbf{c}^{\prime\prime\prime}+\mathbf{a}_{4}\). So \(\mathbf{b}+\mathbf{a}_{3}+\mathbf{a}_{4}=\mathbf{c}^{\prime\prime\prime}+\mathbf{a}_{4}+\mathbf{a}_{1}\) which implies \(\mathbf{c}^{\prime}+\mathbf{a}_{2}=\mathbf{b}+\mathbf{a}_{3}=\mathbf{c}^{\prime\prime\prime}+\mathbf{a}_{1}\). Therefore, \(\mathbf{b}-\mathbf{a}_{1}=\mathbf{c}^{\prime\prime\prime}-\mathbf{a}_{3}=\mathbf{c}-\mathbf{a}_{4}\), that is, \(\mathbf{f}-\phi_{3}(\mathbf{t}^{\mathbf{b}-\mathbf{a}_{1}}\mathbf{e}_{134})=\mathbf{f}-\mathbf{t}^{\mathbf{b}}\mathbf{e}_{34}+\mathbf{t}^{\mathbf{c}^{\prime\prime\prime}}\mathbf{e}_{14}-\mathbf{t}^{\mathbf{c}}\mathbf{e}_{13}\in\ker\phi_{2}\setminus\operatorname{Im}\,\,\phi_{3}\). Thus, we may suppose that \(\mathbf{c}+\mathbf{a}_{3}=\mathbf{c}^{\prime\prime\prime}+\mathbf{a}_{2}\) and, consequently, \(\mathbf{b}+\mathbf{a}_{3}+\mathbf{a}_{4}=\mathbf{c}^{\prime\prime\prime}+\mathbf{a}_{1}+\mathbf{a}_{2}\) which implies \(\mathbf{b}+\mathbf{a}_{3}+\mathbf{a}_{4}-(\mathbf{a}_{1}+\mathbf{a}_{2})\in\mathcal{S}\).
Conversely, by Lemmas 6.3 and 6.2, \(\ker\phi_{2}\setminus\operatorname{Im}\,\,\phi_{3}\neq\varnothing\). So, \(1\leq\operatorname{depth}(\Bbbk[\mathcal{S}])\leq 2\), and since, by Proposition 3.4, \(\operatorname{depth}(\Bbbk[\mathcal{S}])\neq 1\), we are done.
Finally, by Proposition 3.8 we conclude that \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S}, \mathbf{a}_{j})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\).
Unlike the case \(d=3\), the existence of a maximal element is not sufficient to guarantee depth two as the following example shows.
**Example 6.5**.: Let \(\mathcal{S}\subset\mathbb{N}^{4}\) be the affine semigroup generated by the columns \(\mathbf{a}_{1},\dots,\mathbf{a}_{7}\) of the following matrix
\[A=\left(\begin{array}{cccccc}2&0&0&0&5&7&5\\ 0&2&0&0&5&5&7\\ 0&0&2&0&7&5&7\\ 0&0&0&2&7&7&5\end{array}\right).\]
Since \(\mathbf{a}_{5}+\mathbf{a}_{1}=\mathbf{a}_{3}+\mathbf{a}_{6}\) and \(\mathbf{a}_{5}+\mathbf{a}_{2}=\mathbf{a}_{4}+\mathbf{a}_{7}\), \(\operatorname{Ap}(\mathcal{S},\mathbf{a}_{3})\cap\operatorname{Ap}( \mathcal{S},\mathbf{a}_{4})\) has a maximal element by Proposition 3.8. Moreover \(\Bbbk[\mathcal{S}]\) is not Cohen-Macaulay, by Proposition 3.5. One can easily check that \(x_{3},x_{4},x_{1}+x_{2}\) is a regular sequence on \(\Bbbk[\mathcal{S}]\), which implies \(\operatorname{depth}(\Bbbk[\mathcal{S}])=3\). This shows that the last condition on \(\mathbf{b}\) in the second statement of Theorem 6.4 is necessary.
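For readers who wish to reproduce the two relations invoked above, the following minimal Python sketch (ours, not part of the Singular computations mentioned in the example) checks them directly:

```python
import numpy as np

# Generators of S: columns of the matrix A from Example 6.5
A = np.array([[2, 0, 0, 0, 5, 7, 5],
              [0, 2, 0, 0, 5, 5, 7],
              [0, 0, 2, 0, 7, 5, 7],
              [0, 0, 0, 2, 7, 7, 5]])
a = {i + 1: A[:, i] for i in range(7)}  # 1-indexed generators a_1, ..., a_7

# The two relations used to invoke Proposition 3.8
assert np.array_equal(a[5] + a[1], a[3] + a[6])  # a_5 + a_1 = a_3 + a_6
assert np.array_equal(a[5] + a[2], a[4] + a[7])  # a_5 + a_2 = a_4 + a_7
```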
Conditions (1) and (2) in Theorem 6.4 have a clear combinatorial meaning. For a better understanding of the following result, it is convenient to take into account the notation and the results established at the beginning of Section 4.
**Proposition 6.6**.: _Conditions (1) and (2) in Theorem 6.4 hold if and only if there exists \(\mathbf{c}\in\mathcal{S}\) such that \(T_{\mathbf{c}}\) is one of the following configurations:_
1. _The hollow triangle with vertices_ \(i,k\) _and_ \(l\)_._
2. _The hollow triangle with vertices_ \(i,k\) _and_ \(l\) _and the edge_ \(\{i,j\}\)_._
3. _A hollow tetrahedron_ \(i,j,k\) _and_ \(l\) _without, at least, the faces_ \(\{i,k,l\}\) _and_ \(\{j,k,l\}\)_._
4. _The square with edges_ \(\{i,j\},\{j,k\},\{k,l\}\) _and_ \(\{i,l\}\)_, and the edge (diagonal)_ \(\{i,k\}\)_._
5. _The square with edges_ \(\{i,j\},\{j,k\},\{k,l\}\) _and_ \(\{i,l\}\)_._
Proof.: Suppose that there exists a permutation \(\{i,j,k,l\}=\{1,2,3,4\}\) and \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) such that \(\mathbf{b}+\mathbf{a}_{k}-\mathbf{a}_{i}\) and \(\mathbf{b}+\mathbf{a}_{l}-\mathbf{a}_{i}\) belong to \(\mathcal{S}\). In this case, if \(\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\), then \(T_{\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}}\) is the hollow triangle with vertices \(i,k\) and \(l\); otherwise either \(T_{\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}}\) is the hollow triangle with vertices \(i,k\) and \(l\) together with the edge \(\{i,j\}\) or \(T_{\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}}\) contains two hollow triangles; namely, the one with vertices \(i,k\) and \(l\) and the one with vertices \(j,k\) and \(l\). Suppose now that there exists a permutation \(\{i,j,k,l\}=\{1,2,3,4\}\) and \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\) such that \(\mathbf{b}+\mathbf{a}_{k}-\mathbf{a}_{i},\mathbf{b}+\mathbf{a}_{l}-\mathbf{a}_{j}\) and \(\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}-(\mathbf{a}_{i}+\mathbf{a}_{j})\) belong to \(\mathcal{S}\). In this case, if \(\mathbf{b}+\mathbf{a}_{l}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\) and \(\mathbf{b}+\mathbf{a}_{k}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\), then \(T_{\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}}\) is a square without interior; otherwise, we arrive at one of the configurations of the previous cases.
In cases (a)-(d), we have, among other conditions, that \(\mathbf{b}=\mathbf{c}-\mathbf{a}_{k}-\mathbf{a}_{l}\in\mathcal{S}\), that both \(\mathbf{b}+\mathbf{a}_{k}-\mathbf{a}_{i}=\mathbf{c}-\mathbf{a}_{i}-\mathbf{a}_{l}\) and \(\mathbf{b}+\mathbf{a}_{l}-\mathbf{a}_{i}=\mathbf{c}-\mathbf{a}_{i}-\mathbf{a}_{k}\) belong to \(\mathcal{S}\), and that \(\mathbf{b}-\mathbf{a}_{i}=\mathbf{c}-\mathbf{a}_{i}-\mathbf{a}_{k}-\mathbf{a}_{l}\not\in\mathcal{S}\) and \(\mathbf{b}-\mathbf{a}_{j}=\mathbf{c}-\mathbf{a}_{j}-\mathbf{a}_{k}-\mathbf{a}_{l}\not\in\mathcal{S}\), that is, \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\mathbf{a}_{i})\cap\operatorname{Ap}(\mathcal{S},\mathbf{a}_{j})\); so, condition (1) in Theorem 6.4 holds. And, in case (e), we have the previous conditions plus \(\mathbf{b}+\mathbf{a}_{k}+\mathbf{a}_{l}-\mathbf{a}_{i}-\mathbf{a}_{j}=\mathbf{c}-\mathbf{a}_{i}-\mathbf{a}_{j}\in\mathcal{S}\); so, condition (2) in Theorem 6.4 holds.
Observe that cases (a)-(b) imply that there exists \(\mathbf{c}\in\mathcal{S}\) such that \(\widetilde{H}_{1}(T_{\mathbf{c}})\neq 0\), that is, \(\mathbf{c}\in D(1)\). However, conditions (a)-(e) in Proposition 6.6 may not be replaced by \(D(1)\neq\varnothing\), because the following configuration for \(T_{\mathbf{c}}\) with \(\mathbf{c}\in D(1)\) does not appear in Proposition 6.6.
Taking into account our results about depth two, we dare to propose the following optimistic conjecture.
**Conjecture 6.7**.: If \(\operatorname{depth}(\Bbbk[\mathcal{S}])=2\), then there exists \(E^{\prime}\subseteq E\) of cardinality \(2\) such that \(\operatorname{Ap}(\mathcal{S},E^{\prime})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\).
The conjecture holds for \(d\leq 4\). Indeed, for \(d\leq 2\) it holds because \(\operatorname{Ap}(\mathcal{S},E)\) is a finite set (see, e.g., [14, Proposition 3.2] or [1, Theorem 2.6]); for \(d=3\) it follows from Theorem 5.2, and for \(d=4\) from Theorem 6.4.
We emphasize that we cannot replace \(2\) by \(3\) in Conjecture 6.7 as the following example shows.
Figure 1. Simplicial complex with four vertices consisting of one triangle and one hollow triangle.
**Example 6.8**.: Let \(\mathcal{S}\subset\mathbb{N}^{4}\) be the affine semigroup generated by the columns \(\mathbf{a}_{1},\ldots,\mathbf{a}_{6}\) of the following matrix
\[A=\left(\begin{array}{cccccc}2&0&0&0&5&7\\ 0&2&0&0&5&7\\ 0&0&2&0&7&5\\ 0&0&0&2&7&5\end{array}\right).\]
Using Singular [4], one can easily check that \(\operatorname{depth}(\Bbbk[\mathcal{S}])=3\) and, with the help of Proposition 3.1, that \(\operatorname{Ap}(\mathcal{S},\{i,j,k\})\) does not have maximal elements for every \(1\leq i<j<k\leq 4\).
In spite of the above example, it may happen that \(\Bbbk[\mathcal{S}]\) has depth three while \(\operatorname{Ap}(\mathcal{S},E^{\prime})\) has a maximal element for some \(E^{\prime}\subset E\) of cardinality three.
**Example 6.9**.: Let \(\mathcal{S}\subset\mathbb{N}^{4}\) be the affine semigroup generated by the columns \(\mathbf{a}_{1},\ldots,\mathbf{a}_{8}\) of the following matrix
\[A=\left(\begin{array}{cccccc}2&0&0&0&3&4&2&5\\ 0&3&0&0&3&1&3&0\\ 0&0&2&0&3&2&1&7\\ 0&0&0&3&3&5&7&1\end{array}\right).\]
Using Singular [4], one can easily check that \(\operatorname{depth}(\Bbbk[\mathcal{S}])=3\).
Let \(\mathbf{b}=2\mathbf{a}_{6}+\mathbf{a}_{7}+5\mathbf{a}_{8}\), using the GAP ([7]) package numericalsgps ([5]), one can check that \(\mathbf{b}-\mathbf{a}_{i}\not\in\mathcal{S}\) for \(i\in\{1,3,4\}\), that is, \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},\{1,3,4\})\), and that \(\mathbf{b}+\mathbf{a}_{i}\not\in\operatorname{Ap}(\mathcal{S},\{1,3,4\})\) for every \(i\in\{1,\ldots,8\}\). So, \(\mathbf{b}\) is a maximal element of \(\operatorname{Ap}(\mathcal{S},\{1,3,4\})\) with respect to \(\preceq_{\mathcal{S}}\).
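The membership tests delegated to GAP above can also be phrased as the feasibility of a small integer program (does \(\mathbf{v}=A\mathbf{x}\) admit a solution \(\mathbf{x}\in\mathbb{N}^{8}\)?). The sketch below is our own illustration using scipy, not the numericalsgps computation of the example:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Generators of S: columns of the matrix A from Example 6.9
A = np.array([[2, 0, 0, 0, 3, 4, 2, 5],
              [0, 3, 0, 0, 3, 1, 3, 0],
              [0, 0, 2, 0, 3, 2, 1, 7],
              [0, 0, 0, 3, 3, 5, 7, 1]])

def in_semigroup(v):
    """Feasibility ILP: is v = A @ x solvable with x a non-negative integer vector?"""
    n = A.shape[1]
    res = milp(c=np.zeros(n),                        # pure feasibility problem
               constraints=LinearConstraint(A, v, v),
               integrality=np.ones(n),
               bounds=Bounds(0, np.inf))
    return res.success

b = 2 * A[:, 5] + A[:, 6] + 5 * A[:, 7]              # b = 2a_6 + a_7 + 5a_8
print([in_semigroup(b - A[:, i - 1]) for i in (1, 3, 4)])  # expected: [False]*3
```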
The following result gives a necessary and sufficient condition for the situation illustrated in the above example to occur.
**Proposition 6.10**.: _Let \(d=4\). If \(\operatorname{depth}(\Bbbk[\mathcal{S}])=3\), then \(\operatorname{Ap}(\mathcal{S},E^{\prime})\) has a maximal element with respect to \(\preceq_{\mathcal{S}}\) for some \(E^{\prime}\subset E\) of cardinality three if and only if there exists \(\mathbf{b}\in\mathcal{S}\) such that \(T_{\mathbf{b}}\) is not connected and has, at least, an isolated vertex._
Proof.: If \(\operatorname{depth}(\Bbbk[\mathcal{S}])=3\), then, by [3, Theorem 4.1], there exists \(\mathbf{b}\in D(0)\), that is, there exists \(\mathbf{b}\in\mathcal{S}\) such that \(T_{\mathbf{b}}\) is disconnected.
Suppose that the simplicial complex \(T_{\mathbf{b}}\) does not have isolated vertices for every \(\mathbf{b}\in D(0)\). We claim that \(\mathbf{c}+u\mathbf{a}_{l}\in\operatorname{Ap}(\mathcal{S},E\setminus\{\mathbf{a}_{l}\})\) for every \(u\in\mathbb{N},\mathbf{c}\in\operatorname{Ap}(\mathcal{S},E)\) and \(1\leq l\leq 4\). On the contrary, let us suppose that there exist \(u\in\mathbb{N},\mathbf{b}\in\operatorname{Ap}(\mathcal{S},E)\) and \(1\leq l\leq 4\) such that \(\mathbf{b}+u\,\mathbf{a}_{l}\not\in\operatorname{Ap}(\mathcal{S},E\setminus\{\mathbf{a}_{l}\})\); without loss of generality, we also assume that \(u\) is the smallest with this property. Since \(\mathbf{b}+u\,\mathbf{a}_{l}\not\in\operatorname{Ap}(\mathcal{S},E\setminus\{\mathbf{a}_{l}\})\), there exists \(1\leq i\leq 4\) such that \(\mathbf{b}+u\,\mathbf{a}_{l}-\mathbf{a}_{i}\in\mathcal{S}\) and, by the minimality of \(u\), \(\mathbf{b}+u\mathbf{a}_{l}-\mathbf{a}_{j}-\mathbf{a}_{l}\not\in\mathcal{S}\) for every \(1\leq j\leq 4\). So, if \(\mathbf{c}=\mathbf{b}+u\mathbf{a}_{l}\), we have that \(\mathbf{a}_{i}\) and \(\mathbf{a}_{l}\) are vertices of \(T_{\mathbf{c}}\) and that \(\mathbf{a}_{l}\) is isolated. Therefore, \(\mathbf{c}\in D(0)\) and \(T_{\mathbf{c}}\) has at least one isolated vertex, in contradiction with the hypothesis. Now, given \(E^{\prime}=E\setminus\{\mathbf{a}_{l}\}\) and \(\mathbf{b}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\), let \(v\in\mathbb{N}\) be the smallest such that \(\mathbf{c}=\mathbf{b}-v\mathbf{a}_{l}\in\mathcal{S}\). Since \(\mathbf{c}\in\operatorname{Ap}(\mathcal{S},E)\), by our previous claim, we have that \(\mathbf{c}+u\mathbf{a}_{l}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\) for every \(u\in\mathbb{N}\). In particular, if \(u>v\), then \(\mathbf{b}\preceq_{\mathcal{S}}\mathbf{b}+(u-v)\mathbf{a}_{l}\in\operatorname{Ap}(\mathcal{S},E^{\prime})\). Thus, \(\operatorname{Ap}(\mathcal{S},E^{\prime})\) does not have maximal elements.
Conversely, if there exists \(\mathbf{b}\in D(0)\) such that \(T_{\mathbf{b}}\) has, at least, an isolated vertex, say \(\mathbf{a}_{l}\), then we have that \(\mathbf{b}\not\in\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{i},\mathbf{a}_{j},\mathbf{a}_{k}\})\) and \(\mathbf{b}-\mathbf{a}_{l}\in\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{i},\mathbf{a}_{j},\mathbf{a}_{k}\})\) for some permutation \(\{i,j,k,l\}=\{1,2,3,4\}\); thus, by Proposition 3.8, we conclude that \(\operatorname{Ap}(\mathcal{S},\{\mathbf{a}_{i},\mathbf{a}_{j},\mathbf{a}_{k}\})\) has a maximal element, for some \(1\leq i<j<k\leq 4\).
**Acknowledgments.** This research began during a visit by the first author to the Departamento de Matemáticas of the Universidad de Extremadura (Badajoz, SPAIN). Both authors would like to thank the Departamento de Matemáticas for its hospitality and support.
2309.08293 | A theoretical approach to the complex chemical evolution of phosphorus
in the interstellar medium | The study of phosphorus chemistry in the interstellar medium has become a
topic of growing interest in astrobiology, because it is plausible that a wide
range of P-bearing molecules were introduced in the early Earth by the impact
of asteroids and comets on its surface, enriching prebiotic chemistry. Thanks
to extensive searches in recent years, it has become clear that P mainly
appears in the form of PO and PN in molecular clouds and star-forming regions.
Interestingly, PO is systematically more abundant than PN by factors typically
of $\sim1.4-3$, independently of the physical properties of the observed
source. In order to unveil the formation routes of PO and PN, in this work we
introduce a mathematical model for the time evolution of the chemistry of P in
an interstellar molecular cloud and analyze its associated chemical network as
a complex dynamical system. By making reasonable assumptions, we reduce the
network to obtain explicit mathematical expressions that describe the abundance
evolution of P-bearing species and study the dependences of the abundance of PO
and PN on the system's kinetic parameters with much faster computation times
than available numerical methods. As a result, our model reveals that the
formation of PO and PN is governed by just a few critical reactions, and fully
explains the relationship between PO and PN abundances throughout the evolution
of molecular clouds. Finally, the application of Bayesian methods constrains
the real values of the most influential reaction rate coefficients making use
of available observational data. | Marina Fernández-Ruz, Izaskun Jiménez-Serra, Jacobo Aguirre | 2023-09-15T10:16:26Z | http://arxiv.org/abs/2309.08293v1 | # A theoretical approach to the complex chemical evolution of phosphorus in the interstellar medium
###### Abstract
The study of phosphorus chemistry in the interstellar medium has become a topic of growing interest in astrobiology, because it is plausible that a wide range of P-bearing molecules were introduced in the early Earth by the impact of asteroids and comets on its surface, enriching prebiotic chemistry. Thanks to extensive searches in recent years, it has become clear that P mainly appears in the form of PO and PN in molecular clouds and star-forming regions. Interestingly, PO is systematically more abundant than PN by factors typically of \(\sim 1.4-3\), independently of the physical properties of the observed source. In order to unveil the formation routes of PO and PN, in this work we introduce a mathematical model for the time evolution of the chemistry of P in an interstellar molecular cloud and analyze its associated chemical network as a complex dynamical system. By making reasonable assumptions, we reduce the network to obtain explicit mathematical expressions that describe the abundance evolution of P-bearing species and study the dependences of the abundance of PO and PN on the system's kinetic parameters with much faster computation times than available numerical methods. As a result, our model reveals that the formation of PO and PN is governed by just a few critical reactions, and fully explains the relationship between PO and PN abundances throughout the evolution of molecular clouds. Finally, the application of Bayesian methods constrains the real values of the most influential reaction rate coefficients making use of available observational data.
Interstellar medium (847), Interstellar molecules (849), Astrochemistry (75), Astrobiology (74), Chemical reaction network models (2237), Bayesian statistics (1900) +
Footnote †: journal: APJ
## 1 Introduction
Phosphorus (P) is an essential element for life, being the fifth most abundant element in unicellular organisms, and the sixth in multicellular organisms (Macia-Barber, 2020). P is present in phosphate groups, which can be found in several biomolecules including the informational polymers ribonucleic acid (RNA) and deoxyribonucleic acid (DNA), the phospholipids of the cell membrane, and the energetic molecules adenosine triphosphate (ATP) and guanosine triphosphate (GTP). Therefore, P must have played a key role in the early Earth prebiotic chemistry that, around 4 billion years ago, led to the origin of life on our planet.
During the past decade it has been proposed that a significant part of the P reservoir on the early Earth surface might be of extraterrestrial origin (Lefloch et al., 2016; Rivilla et al., 2016, 2020; Bergner et al., 2020, 2022). Indeed, key volatile species such as PO have recently been detected in the comet 67P/Churyumov-Gerasimenko (Altwegg et al., 2016; Rivilla et al., 2020), supporting the hypothesis that comets and asteroids that fell abundantly onto our planet during the Late Heavy Bombardment period, enriched prebiotic chemistry with essential ingredients for the formation of the precursors of the building blocks of life, including P. In consequence, the astrobiological importance of P chemistry in the interstellar medium (ISM) relies on the fact that the chemical richness and complexity present in those comets was inherited from the chemistry occurred in the parental molecular cloud
where our solar system was formed (Altwegg et al., 2016; Rivilla et al., 2020; Bergner et al., 2022).
Interestingly, P is more scarce at cosmic scales than other essential elements for life, such as H, C, O and N, something that has been named as 'the phosphorus enigma' (Macia-Barber, 2020). In fact, the number and complexity of the P-bearing molecules detected in space (in both the interstellar and the circumstellar medium) are still very limited: PO (Tenenbaum et al., 2007; Rivilla et al., 2016), PN (Ziurys, 1987; Fontani et al., 2016), CP (Guelin et al., 1990), HCP (Agundez et al., 2007), CCP (Halfen et al., 2008), PH\({}_{3}\)(Agundez et al., 2008, 2014), NCCP (Agundez et al., 2014), and PO\({}^{+}\)(Rivilla et al., 2022).
In recent years, PO and PN have attracted special attention among the astrochemistry community because they are the only P-bearing species that have been detected in molecular clouds and star forming regions (see e.g. Ziurys, 1987; Fontani et al., 2016; Rivilla et al., 2016; Lefloch et al., 2016; Rivilla et al., 2018, 2020; Bernal et al., 2021; Bergner et al., 2019, 2022). All these observational works reveal that PO is systematically more abundant than PN with abundance ratios of [PO]/[PN] \(\sim 1.4-3\), independently of the observed source. These ratios are, however, not easy to reproduce by existing astrochemical models since they predict [PO]/[PN] ratios \(<1\) for a wide range of physical conditions (Jimenez-Serra et al., 2018; Chantzos et al., 2020; Sil et al., 2021). This implies that, despite the abundant information available from astronomical observations, the formation routes of PO and PN remain unclear. The inconsistency between the models and the observations may be due to several reasons: (i) the incompleteness of the chemical network of P; (ii) the large uncertainties in the reaction rate coefficients that models have to deal with; and (iii) the unknown yields of surface reactions on grains, which determine the main reservoir of solid P and the form in which P is made available in the gas phase.
For point (i), Jimenez-Serra et al. (2018) proposed that the reaction P+OH \(\rightarrow\) PO+H, missing in astrochemical models, could be an efficient mechanism of formation of PO. This has been recently confirmed by Garcia de la Concepcion et al. (2021), who have performed quantum-chemical and kinetic calculations of this reaction proving that it is indeed one of the main formation routes of PO in magnetohydrodynamic shocks with shock speeds \(\geq\)40 km s\({}^{-1}\).
For points (ii) and (iii), astrochemical codes have to deal with the already mentioned uncertainty of the reaction rate coefficients needed to solve the system of ordinary differential equations (ODEs) associated with a big set of reactions. The majority of the reaction rates are tabulated in databases such as UMIST (McElroy et al., 2013) and KIDA (KInetic Database for Astrochemistry; Wakelam et al., 2012), but many of these values have not been validated with theoretical or experimental methods. In addition, little is known about the main reservoir of P on dust grains, although it is suspected that it resides in a semi-refractory form (Bergner et al., 2022).
In this work, we introduce a mathematical model of the evolution of P chemistry in the ISM to cast light on the formation routes of PO and PN. We analyze the dependences of the PO and PN abundances on the reaction rate coefficients and the influence of the main reservoir of P on grains. By making appropriate assumptions, we reduce the number of reactions of the complex chemical network of phosphorus to the key reactions that are then analyzed. The simplicity of our approach allows us to obtain explicit mathematical expressions that reproduce the abundance evolution with time of the P-bearing species involved in our reduced P-network. Such complete analytical solution of the system permits much faster computation times than currently available numerical methods from astrochemical codes. Taking advantage of this method, we analyze in detail the dependence of the model on the parameter space, (i) showing the main pathways by which PO and PN are formed and/or destroyed to provide a general explanation to why [PO]/[PN] values are systematically \(<\)1 in models but \(>\)1 in real data; and (ii) identifying the most critical reaction rate coefficients involved in the process. Finally, the application of Bayesian statistics is used in the refinement of the calculation of such coefficients with the aid of observational data available in the literature.
## 2 A model for the chemical evolution of phosphorus in the interstellar medium
The theoretical model involves 17 chemical species and is composed of a reduced set of 14 chemical reactions (see Table 1) that are assumed to take place in the ISM. To generate this set of chemical reactions, we have taken the chemical network built by Jimenez-Serra et al. (2018) and recently augmented by Garcia de la Concepcion et al. (2021). This chemical network considers all reactions with P-bearing species present in the UMIST database (McElroy et al., 2013), which includes the original network for phosphorus of Millar (1991), plus additional reactions extracted from Charnley and Millar (1994), Anicich (1993) and Agundez et al. (2007) (see Jimenez-Serra et al., 2018, for details).
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \(j\) & Reaction & \(\alpha\) & \(\beta\) & \(\gamma\) & \(k_{j}\) (\(T\)=10 K) & \(k_{j}\) (\(T\)=100 K) & \(k_{j}\) (\(T\)=300 K) & References \\ & & (cm\({}^{3}\) s\({}^{-1}\)) & & & (cm\({}^{3}\) s\({}^{-1}\)) & (cm\({}^{3}\) s\({}^{-1}\)) & (cm\({}^{3}\) s\({}^{-1}\)) & \\ \hline
1 & N+PO \(\rightarrow\) P+NO & \(2.55\times 10^{-12}\) & 0 & 0 & \(2.55\times 10^{-12}\) & \(2.55\times 10^{-12}\) & \(2.55\times 10^{-12}\) & 1 \\
2 & N+PO \(\rightarrow\) PN+O & \(3.00\times 10^{-11}\) & -0.6 & 0 & \(2.31\times 10^{-10}\) & \(5.80\times 10^{-11}\) & \(3.00\times 10^{-11}\) & 1 \\
3 & O+PH\({}_{2}\)\(\rightarrow\) PO+H\({}_{2}\) & \(4.00\times 10^{-11}\) & 0 & 0 & \(4.00\times 10^{-11}\) & \(4.00\times 10^{-11}\) & \(4.00\times 10^{-11}\) & 1 \\
4 & O+PH \(\rightarrow\) PO+H & \(1.00\times 10^{-10}\) & 0 & 0 & \(1.00\times 10^{-10}\) & \(1.00\times 10^{-10}\) & \(1.00\times 10^{-10}\) & 1 \\
5 & P+O\({}_{2}\)\(\rightarrow\) PO+O & \(3.99\times 10^{-12}\) & 0.89 & 814 & \(8.61\times 10^{-49}\) & \(4.38\times 10^{-16}\) & \(2.65\times 10^{-13}\) & 4 \\
6 & P+OH \(\rightarrow\) PO+H & \(2.28\times 10^{-10}\) & 0.16 & 0.37 & \(1.28\times 10^{-10}\) & \(1.91\times 10^{-10}\) & \(2.28\times 10^{-10}\) & 2 \\
7 & N+PH \(\rightarrow\) PN+H & \(8.80\times 10^{-11}\) & -0.18 & 1.01 & \(1.47\times 10^{-10}\) & \(1.06\times 10^{-10}\) & \(8.77\times 10^{-11}\) & 5 \\
8 & N+CP \(\rightarrow\) PN+C & \(8.80\times 10^{-11}\) & 0.42 & 0 & \(2.11\times 10^{-11}\) & \(5.55\times 10^{-11}\) & \(8.80\times 10^{-11}\) & 1\({}^{a}\) \\
9 & P+CN \(\rightarrow\) PN+C & \(8.80\times 10^{-11}\) & 0.42 & 0 & \(2.11\times 10^{-11}\) & \(5.55\times 10^{-11}\) & \(8.80\times 10^{-11}\) & 1\({}^{a}\) \\
10 & H+PH \(\rightarrow\) P+H\({}_{2}\) & \(1.50\times 10^{-10}\) & 0 & 416 & \(1.29\times 10^{-28}\) & \(2.34\times 10^{-12}\) & \(3.75\times 10^{-11}\) & 3 \\
11 & O+CP \(\rightarrow\) P+CO & \(4.00\times 10^{-11}\) & 0 & 0 & \(4.00\times 10^{-11}\) & \(4.00\times 10^{-11}\) & \(4.00\times 10^{-11}\) & 1 \\
12 & H+PH\({}_{2}\)\(\rightarrow\) PH+H\({}_{2}\) & \(6.20\times 10^{-11}\) & 0 & 318 & \(9.59\times 10^{-25}\) & \(2.58\times 10^{-12}\) & \(2.15\times 10^{-11}\) & 3 \\
13 & H+PH\({}_{3}\)\(\rightarrow\) PH\({}_{2}\)+H\({}_{2}\) & \(4.50\times 10^{-11}\) & 0 & 735 & \(5.40\times 10^{-43}\) & \(2.89\times 10^{-14}\) & \(3.88\times 10^{-12}\) & 3 \\
14 & C+PH \(\rightarrow\) CP+H & \(7.50\times 10^{-11}\) & 0 & 0 & \(7.50\times 10^{-11}\) & \(7.50\times 10^{-11}\) & \(7.50\times 10^{-11}\) & 1 \\ \hline \end{tabular} Note. –\({}^{a}\) No bibliography was available for those reactions, so we used the parameters associated with the analogous Nitrogen (N) reaction N+CN \(\rightarrow\) N\({}_{2}\)+C, as previous works suggest that the chemical similarity between N and P could lead to a similar chemical behavior (Agúndez et al., 2007).
\end{table}
Table 1: Set of chemical reactions, kinetic parameters \(\alpha\), \(\beta\) and \(\gamma\) of the modified Arrhenius equation, reaction rate coefficients \(k_{j}\) (for temperatures 10 K, 100 K and 300 K) and bibliographical sources.
Figure 1: Complex networks associated with the set of reactions selected from the chemistry of phosphorus in the ISM for our model. (a) Chemical network representing the 17 species and 14 chemical reactions analyzed in this work. Nodes represent chemical species involved in the system, and are classified as follows: abundant (blue nodes), scarce (green) and non-interacting (white) according to the criterion explained in the text. Directed links (arrows) go from the reactants to the products of a reaction and undirected links (dashed lines) connect the reactants of a reaction. (b) Sub-network of the total network plotted in (a) that sketches the theoretically solved system. We have included the indices \(i\) used in Equations (3-4) to number the P-bearing species in the minimal system.
The chemical network also includes the newly calculated rate constants for the reactions P+OH \(\rightarrow\) PO+H (Garcia de la Concepcion et al., 2021), P+O\({}_{2}\)\(\rightarrow\) PO+O (Garcia de la Concepcion et al. 2023, submitted) and N+PH \(\rightarrow\) PN+H (Gomes et al., 2023). In this work, we focus on neutral-neutral gas-phase reactions because, unlike ion-neutral reactions (see e.g. Thorne et al., 1984), the neutral-neutral ones have not been measured in the laboratory and therefore they are subject to large uncertainties (most reaction rates are best guesses due to the difficulties in performing laboratory experiments with these species; see e.g. Millar et al., 1987). In addition, these reactions are expected to dominate the chemistry of P-bearing molecules in the regions where PN and PO have been found since the ionization fraction of the gas is low (Jimenez-Serra et al., 2018). Indeed, ion-neutral and dissociative recombination reactions are known to be minor contributors to the formation of PO and PN in molecular clouds and star-forming regions from previous theoretical studies (Millar et al., 1987; Charnley and Millar, 1994), unless an extremely high UV radiation field or cosmic-ray ionization rate is present (Jimenez-Serra et al., 2018; Rivilla et al., 2022). However, here we only focus on deeply embedded star-forming regions, where most detections of PO and PN have been reported. In regions where photochemistry is relevant, the chemical network could not be reduced as we do here.
The selected set of reactions is finally represented as a complex network (see Figure 1(a)), where nodes are the 17 chemical species involved, directed links (arrows) go from the reactants to the products of the same reaction, and undirected links (dashed lines) connect both reactants of a reaction.
The chemistry is modeled according to the law of mass action (Chang and Overby, 2017). Consequently, given a set of reactions of the form A+B \(\rightarrow\) C+D, the rate of change with time of the abundance of each chemical species \(i\) is given by
\[\frac{d[X_{i}]}{dt}=\sum_{l,m}k_{lm}^{i}n_{\rm H}[X_{l}][X_{m}]-[X_{i}]\sum_{n} k_{ni}n_{\rm H}[X_{n}]\,, \tag{1}\]
where \([X_{i}]\) is the abundance of species \(i\) relative to the abundance of H, and \(n_{\rm H}\) is the H number density. The first sum contains the formation terms and the second sum contains the destruction terms of species \(i\). \(k_{lm}^{i}\) are the reaction rate coefficients of the reactions between the reactants \(X_{l}\) and \(X_{m}\) that produce species \(i\) (i.e. \(X_{l}+X_{m}\to X_{i}+X_{n}\)), while \(k_{ni}\) is the reaction rate coefficient of all the reactions in which species \(X_{i}\) is a reactant (i.e. \(X_{n}+X_{i}\rightarrow\) products). If we apply Equation (1) to the 17 chemical species in the network, we obtain the associated system of ODEs explicitly shown in Appendix A whose solution describes the evolution with time of the abundances of all molecules.
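As a minimal illustration of how Equation (1) is assembled in practice, the right-hand side for every species can be built directly from a list of two-body reactions. The sketch below is our own generic illustration (the reaction-tuple format and the function name are ours, not the code used for the computations in this paper):

```python
def mass_action_rhs(abund, reactions, n_H):
    """d[X_i]/dt of Eq. (1) for a list of two-body reactions A + B -> C + D.

    abund: dict mapping species name -> abundance relative to H
    reactions: list of tuples (A, B, C, D, k) with k in cm^3 s^-1
    """
    dxdt = {s: 0.0 for s in abund}
    for A, B, C, D, k in reactions:
        flux = k * n_H * abund[A] * abund[B]  # reaction flux
        dxdt[A] -= flux                       # destruction terms
        dxdt[B] -= flux
        dxdt[C] += flux                       # formation terms
        dxdt[D] += flux
    return dxdt
```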
For neutral-neutral gas-phase reactions, the most common form to parameterize the dependence of the reaction rate coefficient on temperature is given by the modified Arrhenius equation,
\begin{table}
\begin{tabular}{c c c c} \hline \hline Species & Initial abundance & Type & References \\ \hline C & \(2.69\times 10^{-4}\) & A & 1 \\ CN & \(5.92\times 10^{-10}\) & S & \(2^{a}\) \\ CO & - & NI \({}^{e}\) & N/A \\ CP & \(1.00\times 10^{-13}\) & S & N/A\({}^{b}\) \\ H & 1 & A & N/A\({}^{c}\) \\ H\({}_{2}\) & - & NI \({}^{e}\) & N/A \\ N & \(6.76\times 10^{-5}\) & A & 1 \\ NO & - & NI \({}^{e}\) & N/A \\ O & \(4.90\times 10^{-4}\) & A & 1 \\ O\({}_{2}\) & \(6.04\times 10^{-7}\) & A & 3,4 \({}^{a}\) \\ OH & \(1.00\times 10^{-7}\) & A & 5 \\ P & \((1-f_{\rm P})\times 2.57\times 10^{-9}\) & S\({}^{d}\) & 1 \\ PH & \((f_{\rm P}/3)\times 2.57\times 10^{-9}\) & S\({}^{d}\) & 1 \\ PH\({}_{2}\) & \((f_{\rm P}/3)\times 2.57\times 10^{-9}\) & S\({}^{d}\) & 1 \\ PH\({}_{3}\) & \((f_{\rm P}/3)\times 2.57\times 10^{-9}\) & S\({}^{d}\) & 1 \\ PN & 0 & S & N/A \({}^{c}\) \\ PO & 0 & S & N/A \({}^{c}\) \\ \hline \end{tabular} Note. – \({}^{a}\) In cases where the source provided two values or we considered two sources, we used the geometric mean.
\({}^{b}\) Up to date, CP has not been detected in the ISM, but it has been detected in a circumstellar shell envelope by Guelin et al. (1990). Thus, in our model we consider that CP is present but we fix its initial value to \(10^{-13}\) so it is sufficiently below the detection limit (\(\sim 10^{-12}\)).
\({}^{c}\) The value is set to one (for H) and zero (for PO and PN) following the model’s rules.
\({}^{d}\) The initial abundances of atomic P, PH, PH\({}_{2}\) and PH\({}_{3}\) with respect to H are expressed in terms of the P-hydrogen fraction \(f_{\rm P}\).
\({}^{e}\) The initial abundances of non-interacting (NI) species are not needed to solve numerically or theoretically the rest of the system.
\end{table}
Table 2: Initial abundances with respect to H of the species involved in the phosphorus chemistry network in the ISM studied in this work.
\[k(T)=\alpha\left(\frac{T}{300}\right)^{\beta}\exp\left(-\frac{\gamma}{T}\right)\,, \tag{2}\]
where \(\alpha\), \(\beta\) and \(\gamma\) are the kinetic parameters and T is the temperature. The kinetic parameters \(\alpha\), \(\beta\) and \(\gamma\) of the 14 reaction rate coefficients have been obtained from the sources specified in Table 1 and have been used to obtain the rate coefficients \(k_{j}\) of each reaction \(j\).
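As a quick sanity check of the tabulated values, a direct transcription of Equation (2) reproduces, for instance, the rate of reaction 2 at \(T\)=10 K (a minimal sketch of ours):

```python
import numpy as np

def k_rate(T, alpha, beta, gamma):
    """Modified Arrhenius rate coefficient of Eq. (2), in cm^3 s^-1."""
    return alpha * (T / 300.0) ** beta * np.exp(-gamma / T)

# Reaction 2 (N+PO -> PN+O) with the Table 1 parameters
print(k_rate(10.0, alpha=3.00e-11, beta=-0.6, gamma=0.0))  # ~2.31e-10, as in Table 1
```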
Our model takes into account the initial abundances of the chemical species involved in the chemical network. We have assumed solar abundances for the atomic species (extracted from Asplund et al., 2009; Jimenez-Serra et al., 2018), while for the molecules O\({}_{2}\), CN, and OH, we use the abundances measured toward molecular clouds (see Table 2 and references therein). For CP, since it has not been detected in the ISM, we just assume an abundance below the typical detection limit of \(\sim\)10\({}^{-12}\). All initial conditions can be found in Table 2 along with their bibliographic sources.
The chemical species are classified according to their initial abundance in one of the following groups: abundant (A), scarce (S) or non-interacting species (NI). Throughout the paper we assume that _abundant_ species are those whose initial abundance is greater than 10\({}^{-8}\) with respect to H, and are represented as blue nodes in the chemical network plotted in Figure 1. _Scarce_ (S) chemical species are those whose initial abundance is below 10\({}^{-8}\), and are represented as green nodes. Finally, there are three species that are not reactants of any reaction and therefore their abundances do not appear in the right side of any equation in the system of ODEs introduced in Appendix A. Independently of their initial abundance, they have been called _non-interacting_ (NI) species, and are represented as white nodes.
While the model describes explicitly the gas-phase chemistry, the grain-surface chemistry is also implicitly included as follows. It has been argued that the observed scarcity of P in the gas phase in molecular clouds is because most of the atomic P freezes out onto dust grains (Ziurys, 1987; Turner & Bally, 1987; Aota & Aikawa, 2012; Lefloch et al., 2016). In accordance with this assumption, the sum of the initial abundances of atomic P, PH, PH\({}_{2}\) and PH\({}_{3}\) in the model has been depleted by a factor of 100 with respect to the cosmic abundance of P, proceeding as in previous works (see e.g. Aota & Aikawa, 2012; Lefloch et al., 2016; Jimenez-Serra et al., 2018). In addition, it is believed that molecules PH, PH\({}_{2}\) and PH\({}_{3}\) are formed on the dust grain surfaces through hydrogenation of atomic P (Charnley & Millar, 1994) before being released to the gas phase, but the actual yields of the surface reactions transforming P into PH, PH\({}_{2}\) and PH\({}_{3}\) are unknown. In order to account for this uncertainty in our simulations, we define the P-hydrogenation fraction \(f_{\rm P}\) as the fraction of P that is initially in the form of PH, PH\({}_{2}\) and PH\({}_{3}\). The initial abundances of P, PH, PH\({}_{2}\) and PH\({}_{3}\) depend on \(f_{\rm P}\) as described in Table 2. As an example, in our model \(f_{\rm P}=0\) means that all initial P is in the form of atomic P, while \(f_{\rm P}=1\) means that all initial P is hydrogenated and equally distributed between PH, PH\({}_{2}\) and PH\({}_{3}\).
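In practice, the \(f_{\rm P}\)-dependent initial conditions of Table 2 amount to a simple bookkeeping rule; as a sketch (ours, with the depleted P budget of \(2.57\times 10^{-9}\) taken from Table 2):

```python
P_TOT = 2.57e-9  # depleted gas-phase P budget relative to H (Table 2)

def initial_P_abundances(f_P):
    """Split the P budget between atomic P and its hydrogenated forms."""
    return {"P":   (1.0 - f_P) * P_TOT,
            "PH":  f_P / 3.0 * P_TOT,
            "PH2": f_P / 3.0 * P_TOT,
            "PH3": f_P / 3.0 * P_TOT}

print(initial_P_abundances(0.5))  # 50% atomic P, 50% split among PH, PH2, PH3
```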
## 3 Analysis of the System
### Theoretical Solution
The set of ODEs that describes the chemical evolution of phosphorus in the interstellar medium can be mathematically solved under certain approximations that transform the system of 17 nonlinear ODEs into a minimal linear system of 7 ODEs corresponding to the P-bearing species P, PH, PH\({}_{2}\), PH\({}_{3}\), CP, PO and PN. Note that, for the sake of clarity, we will refer throughout the paper to the system composed of the 17 nonlinear ODEs (Equations (A1-A17) in Appendix A) as the _total system_, and to the mathematically solvable system made of 7 linear ODEs (set of Equations (B18) in Appendix B) as the _minimal system_. To obtain the minimal system and be able to solve it mathematically, we must assume that (i) the abundant species are constant for all times (i.e. \(d[X]/dt=0\) for \(X=\) C, H, N, O, O\({}_{2}\), OH), (ii) the CN abundance is constant, as its rate of change, \(d[\text{CN}]/dt=-k_{9}n_{\rm H}[\text{P}][\text{CN}]\), is extremely small because both P and CN are scarce species, and (iii) the term \(k_{1}[\text{N}][\text{PO}]\) in Equation (A12) is negligible because its value is several orders of magnitude lower than the dominant terms and, in consequence, the same applies to the arrow from PO to P in Figure 1(b) (see Appendices B and C for a thorough analysis of the suitability of these assumptions).
Furthermore, as the non-interacting species NO, H\({}_{2}\) and CO do not influence the evolution of the rest of the molecules and we are only interested in the evolution of the P-bearing species, we can neglect their kinetic equations and finally obtain an independent set of seven ODEs for the minimal system, where every differential equation is linear and of the type
\[\frac{d[X_{i}]}{dt}=\sum_{j\neq i}k_{j}n_{\rm H}[Y_{j}][X_{j}]-[X_{i}]\sum_{j} k_{j}n_{\rm H}[Y_{j}]\,, \tag{3}\]
where \(X\) stand for the P-bearing species, \(Y\) for the non P-bearing species, and \(i\) numbers the species according to Figure 1(b). The right-hand first sum and second sum are the formation and the destruction terms of P-bearing species \(i\), respectively. Note that non P-bearing species \(Y_{j}\) belong to the abundant type (A) and therefore verify \([Y_{j}]=[Y_{j}]_{0}\) for all times, while P-bearing species
\(X_{j}\) belong to the scarce type and verify \([X_{j}]_{0}\ll[Y_{j}]_{0}\) (as mentioned above, CN is also scarce but was treated differently).
While obtaining the explicit solution of a linear ODE system with seven equations is in general unfeasible, in this case we can do it by solving the equations sequentially, as the matrix of coefficients associated with the system is triangular. This property has a graphic counterpart in the fact that the sub-network (of the total chemical network) shown in Figure 1(b) composed of the 7 P-bearing species and the links connecting them does not have any cycles, that is, if we start a walk in any of those nodes, there are no paths to go back to the original node by following the directed links of the network.
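This acyclicity is easy to verify mechanically. The sketch below (ours, using networkx) encodes the "reactant \(\rightarrow\) product" links among the P-bearing species, with the PO \(\rightarrow\) P arrow of reaction 1 removed according to assumption (iii):

```python
import networkx as nx

# Directed reactant -> product links among P-bearing species (Table 1 numbering)
edges = [("PO", "PN"),                    # reaction 2
         ("PH2", "PO"), ("PH", "PO"),     # reactions 3, 4
         ("P", "PO"),                     # reactions 5, 6
         ("PH", "PN"), ("CP", "PN"),      # reactions 7, 8
         ("P", "PN"),                     # reaction 9
         ("PH", "P"), ("CP", "P"),        # reactions 10, 11
         ("PH2", "PH"), ("PH3", "PH2"),   # reactions 12, 13
         ("PH", "CP")]                    # reaction 14

G = nx.DiGraph(edges)
print(nx.is_directed_acyclic_graph(G))  # True: no cycles, triangular system
print(list(nx.topological_sort(G)))     # an order in which to solve sequentially
```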
Making the mentioned assumptions and following the steps described above, we obtain a general explicit expression for the time-evolution of the abundances \([X_{i}]\) of each P-bearing species \(i\),
\[[X_{i}](t)=\left[\sum_{j=1}^{i-1}\frac{C_{ij}}{r_{i}-r_{j}}\,e^{-r_{j}\,t} \right]+C_{ii}\,e^{-r_{i}\,t}\,, \tag{4}\]
where \(C\) and \(r\) are constants that depend on the reaction rate coefficients and the initial abundances and whose expressions are given in Appendix B. Note that the constant \(r_{i}\) represents the decay rate of the consumption of species \(i\) due to its own interaction with other species. A clarifying example: CP (\(i=4\)) is consumed in reactions 8 and 11, interacting with N and O respectively (highlighted with dashed lines in Figure 1(a)). Its associated decay rate is then \(r_{4}=k_{8}n_{\rm H}[{\rm N}]_{0}+k_{11}n_{\rm H}[{\rm O}]_{0}\,\).
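Plugging in the Table 1 rates at \(T\)=10 K and the Table 2 abundances gives a feel for the magnitude of these decay rates (a back-of-the-envelope sketch of ours):

```python
n_H = 1.0e4                      # H number density, cm^-3
k8, k11 = 2.11e-11, 4.00e-11     # Table 1 rates at T = 10 K, cm^3 s^-1
N0, O0 = 6.76e-5, 4.90e-4        # initial N and O abundances (Table 2)

r4 = n_H * (k8 * N0 + k11 * O0)  # decay rate of CP, in s^-1
print(r4)                        # ~2.1e-10 s^-1
print(1.0 / (r4 * 3.156e7))      # e-folding time of CP: roughly 150 yr
```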
Appendix B shows the complete mathematical derivation of the solutions of the minimal system introduced in Equation (4) and described above. Furthermore, in Appendix C we assess the rightness and caveats of assuming constant the abundance of the species classified as abundant, a necessary premise to obtain the theoretical solution. In particular, we provide a theoretical calculation where we show that, for chemical reactions of the type \({\rm A}+{\rm B}\rightarrow{\rm C}+{\rm D}\), the error in the calculation of the evolution of A and B when assuming that the abundance of B is constant for the times analyzed in this work (\(t\leq 10^{5}\) yrs) becomes negligible when the initial conditions verify \(B_{0}>>A_{0}\). Finally, in Appendix D we analyze theoretically the ratio [PO]/[PN] for the first stage of the chemical evolution of the system, in order to cast light on the [PO]/[PN] disagreement between models and observational data.
### Numerical Solution
Throughout this work, we will model three typical astrophysical scenarios where reactions can take place: a molecular cloud during the cold collapse phase (at \(T\)=10 K) and a star-forming region affected by shocks with average gas temperatures of \(T\)=100 K and \(T\)=300 K. For all of them, simulations are performed for time-scales of \(10^{5}\) yrs (see e.g. Fontani et al., 2016; Jimenez-Serra et al., 2018) and assuming that the cloud density is constant with time, with the H number density \(n_{\rm H}=10^{4}\) cm\({}^{-3}\). Note that our aim is not to reproduce the astrochemical modeling done in previous works (where typically multiple evolutionary phases/stages are considered; see e.g. Aota and Aikawa, 2012; Lefloch et al., 2016; Jimenez-Serra et al., 2018), but to analyze in detail the dependences of the [PO]/[PN] abundance ratio on the assumed reaction rate coefficients, and to understand why this ratio is systematically \(<1\) in models but \(>1\) in observational data.
In Figure 2(a-c) the evolution curves of P, PH, PH\({}_{2}\), PH\({}_{3}\), CP, PO and PN abundances are plotted for \(T\)=10 K, \(T\)=100 K, and \(T\)=300 K, with a P-hydrogenation fraction \(f_{\rm P}=0.5\) (i.e. 50% of the initial P locked into atomic P and 50% equally distributed between PH, PH\({}_{2}\) and PH\({}_{3}\)). In Figure 2(d-f) the ratio [PO]/[PN] is represented under the same conditions. The abundances have been calculated applying numerical methods (the total system with 17 equations -Appendix A- has been solved applying a fourth-order Runge-Kutta algorithm with a constant time step of 0.1 yr) and through the theoretically obtained expressions for the minimal system introduced in Equation (4). To ensure that the chemical system studied here is not significantly affected by the lack of ion-neutral and dissociative recombination reactions, in Figure 2(d-f) we also show the [PO]/[PN] ratio obtained with the astrochemical code UCLCHEM (Holdship et al., 2017) using the same physical conditions and initial abundances of Table 2. From Figure 2(d-f), it is clear that the [PO]/[PN] ratios derived using UCLCHEM are in perfect agreement with the ones derived using our model for time-scales\(\leq\)1000 yrs for \(T\)=10 and 100 K, and for time-scales\(\leq\)100 yrs for \(T\)=300 K. For time-scales larger than these, our model predictions deviate from the UCLCHEM's results. However, note that the evolutionary trends are preserved and thus, these discrepancies do not qualitatively affect our conclusions.
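For readers who wish to reproduce the qualitative behavior of Figure 2 without a full astrochemical code, the sketch below (our own illustration, not the authors' integrator) evolves the P-bearing species at \(T\)=10 K with an off-the-shelf stiff solver, holding the abundant species and CN fixed as in assumptions (i)-(ii):

```python
import numpy as np
from scipy.integrate import solve_ivp

YR, n_H = 3.156e7, 1.0e4         # seconds per year; H number density in cm^-3

# (A, B, C, D, k at T=10 K) for the 14 reactions of Table 1
reactions = [
    ("N", "PO", "P", "NO", 2.55e-12), ("N", "PO", "PN", "O", 2.31e-10),
    ("O", "PH2", "PO", "H2", 4.00e-11), ("O", "PH", "PO", "H", 1.00e-10),
    ("P", "O2", "PO", "O", 8.61e-49), ("P", "OH", "PO", "H", 1.28e-10),
    ("N", "PH", "PN", "H", 1.47e-10), ("N", "CP", "PN", "C", 2.11e-11),
    ("P", "CN", "PN", "C", 2.11e-11), ("H", "PH", "P", "H2", 1.29e-28),
    ("O", "CP", "P", "CO", 4.00e-11), ("H", "PH2", "PH", "H2", 9.59e-25),
    ("H", "PH3", "PH2", "H2", 5.40e-43), ("C", "PH", "CP", "H", 7.50e-11)]
frozen = {"C": 2.69e-4, "CN": 5.92e-10, "H": 1.0, "N": 6.76e-5,
          "O": 4.90e-4, "O2": 6.04e-7, "OH": 1.00e-7}   # Table 2, held fixed
species = ["P", "PH", "PH2", "PH3", "CP", "PO", "PN"]   # evolving species
f_P, P_tot = 0.5, 2.57e-9
x0 = [(1 - f_P) * P_tot] + 3 * [f_P / 3 * P_tot] + [1e-13, 0.0, 0.0]

def rhs(t, x):
    ab = dict(zip(species, x)); ab.update(frozen)
    dx = dict.fromkeys(species, 0.0)
    for A, B, C, D, k in reactions:
        flux = k * n_H * ab[A] * ab[B]
        for s in (A, B):
            if s in dx: dx[s] -= flux   # destruction
        for s in (C, D):
            if s in dx: dx[s] += flux   # formation
    return [dx[s] for s in species]

sol = solve_ivp(rhs, [0.0, 1e5 * YR], x0, method="LSODA", rtol=1e-8, atol=1e-30)
PO, PN = sol.y[species.index("PO"), -1], sol.y[species.index("PN"), -1]
print(f"[PO]/[PN] at 1e5 yr (T=10 K): {PO / PN:.3g}")
```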
For \(T\)=10 K, PH and PH\({}_{2}\) are initially transformed into PO, PN and CP as a result of reactions 3 (O+PH\({}_{2}\)\(\rightarrow\) PO+H\({}_{2}\)), 4 (O+PH \(\rightarrow\) PO+H), 7 (N+PH \(\rightarrow\) PN+H) and 14 (C+PH \(\rightarrow\) CP+H). Although the initial abundances of PO and PN are zero, and CP very scarce, after a few hundreds of years all three have reached detectable abundances of about \(10^{-11}\)-\(10^{-10}\) (note that fixing the initial CP abundance to zero would yield almost indistinguishable results).
Figure 2: Evolution of the abundances relative to H of the P-bearing molecules (a-c) and the ratio [PO]/[PN] (d-f) for \(T\)=10 K, 100 K, and 300 K respectively. P-hydrogenation fraction \(f_{\rm P}=0.5\) (i.e. 50% of initial P locked into atomic P and 50% equally distributed between PH, PH\({}_{2}\) and PH\({}_{3}\)) in all cases. Results were obtained solving the total model numerically (blue lines) and through the theoretical solution of the minimal system (dashed red lines). Note that the numerical and theoretical approaches yield identical results to the naked eye for all species, times and temperatures. A dashed vertical line remarks the typical cloud age, \(t=10^{4}-10^{5}\) yrs, and a dashed horizontal line is marked at [PO]/[PN]=1 in (d-f). For comparison, the curve of [PO]/[PN] obtained from UCLCHEM chemical code (Holdship et al., 2017) has been plotted in (d-f) (dash-dotted green lines).
In a second stage of the evolution, PH and PH\({}_{2}\) get depleted (but not PH\({}_{3}\)) at \(t\sim 10^{3}\) yrs and consequently reactions 3, 4, 7 and 14 become negligible, resulting in a strong decay of CP and PO. PN gets strongly reinforced from there on as PO has become abundant enough to enhance reaction 2 (N+PO \(\rightarrow\) PN+O), a reaction that was negligible in the first stage of the evolution of the system. This transformation of PO into PN beyond \(t\sim 10^{3}\) yrs reinforces the decrease in the [PO]/[PN] ratio, which drops from its initial value \(\sim 7\) to \(\sim 0.02\) at around \(t\sim 10^{3}\) yrs. Note that at this temperature, reaction 6 (P+OH \(\rightarrow\) PO+H) is not strong enough to prevent PO from being consumed, but it strongly slows down its decrease and that of the ratio [PO]/[PN] for long times.
For higher temperatures such as \(T\)=100 K and \(T\)=300 K, the route PH\({}_{3}\rightarrow\) PH\({}_{2}\rightarrow\) PH \(\rightarrow\) P is activated since the rate coefficients of chain reactions 13 (H+PH\({}_{3}\rightarrow\) PH\({}_{2}\)+H\({}_{2}\)), 12 (H+PH\({}_{2}\rightarrow\) PH+H\({}_{2}\)) and 10 (H+PH \(\rightarrow\) P+H\({}_{2}\)) have a strong positive dependence on temperature (they are endothermic; see Table 1). This phenomenon results in a constant growth of P and a fast depletion of PH, PH\({}_{2}\) and PH\({}_{3}\) (at \(T\)=100 K these species get depleted in the first \(10^{3}\) yrs of evolution, and at \(T\)=300 K the process is even faster). The growth of P reinforces reaction 6 (P+OH \(\rightarrow\) PO+H), enhancing the formation of PO, but the fast depletion of PH and PH\({}_{2}\) affects PO negatively because reactions 3 and 4 need PH and PH\({}_{2}\) to create PO. The combination of both effects makes PO to reach lower maximum abundances than for \(T\)=10 K, and consequently reaction 2 transforms PO into PN at a lower rate and hinders PO from reaching very low values. For this reason, PN becomes more abundant than PO (i.e. [PO]/[PN]\(<1\)) later than for \(T\)=10 K.
Interestingly, the numerical solutions of the total system and the theoretical solutions of the minimal system are indistinguishable to the naked eye in Figure 2 for all chemical species, temperatures, and at all times, proving the suitability of the simplifications assumed to obtain the theoretical expressions in Equation (4). We compared the final abundances of the P-bearing species calculated via numerical methods with the same quantities obtained from our theoretical solution (Equation (4)). We report average relative errors of \(\sim 0.3\%\) for \(T\)=10 K, \(\sim 1\%\) for \(T\)=100 K, and \(\sim 2\%\) for \(T\)=300 K. In summary, the mathematical solutions of the minimal system provide a highly accurate description of the evolution of the abundances of the P-bearing chemical species. Also, let us remark that the calculation of the final abundance (i.e. at time \(t=10^{5}\) yrs) of a species with Equation (4) in a typical laptop computer is on average more than \(10^{5}\) times faster than the numerical solution of the total system.
Finally, the minimal and total systems naturally yield that, at typical cloud ages (between \(t=10^{4}\) and \(t=10^{5}\) yrs) the abundance of PN is clearly larger than the abundance of PO, as predicted by other models. In the analysis developed above, we have identified potential sources that could be contributing to the [PO]/[PN] disagreement between observations and models: models provide final ratios [PO]/[PN]\(<\)1 for all temperatures because, for large times (i) the PO formation routes are not significant anymore (since they depend on PH and PH\({}_{2}\), which get depleted rapidly), and (ii) the transformation of PO into PN governs the system. We will address in detail the possible sources of the [PO]/[PN] disagreement between observational data and models in the Discussion.
### The Role of Grain-surface Chemistry
The chemical evolution of the P-bearing species is also affected by the grain-surface reactions taking place in the physico-chemical environment where the system evolves. To evaluate this effect, we now focus on the dependence of the system on the hydrogenation fraction of P (\(f_{\rm P}\)), that represents the fraction of P that is hydrogenated on dust grains via grain-surface reactions before being released to the gas phase. As introduced in Section 2 and Table 2, the initial abundance of P is given by \((1-f_{\rm P})\times 2.57\times 10^{-9}\). For simplicity we assume that PH, PH\({}_{2}\) and PH\({}_{3}\) have equal initial abundances of \((f_{\rm P}/3)\times 2.57\times 10^{-9}\). Unbalancing the initial abundances of PH, PH\({}_{2}\) and PH\({}_{3}\) would only have a noticeable effect for low temperatures (see Appendix E), but note that PO and PN have so far been reported only in star-forming regions affected by shocks (Cernicharo et al., 2006; Bernal et al., 2021; Zeng et al., 2018; Rivilla et al., 2022; Lefloch et al., 2016), where the temperature is around 100 K or larger.
Figure 3 shows the time-evolution of the abundances of PO, PN and their ratio for \(T\)=10 K, 100 K and 300 K, for different values of \(f_{\rm P}\) from 0 to 1 (the dependence of the initial abundances on \(f_{\rm P}\) is shown in Table 2). We can see that PO at \(T\)=10 K presents a complex dependence on \(f_{\rm P}\) because, as we explained in Section 3.2, PO abundance does not decay to zero due to reaction 6 (P+OH \(\rightarrow\) PO+H), which keeps its abundance above a certain limit. Since reaction 6 has P as a reactant, it is straightforward to see that this PO abundance limit for large times depends negatively on \(f_{\rm P}\).
On the contrary, the abundances of PO and PN (and thus the ratio [PO]/[PN]) do not depend on \(f_{\rm P}\) at the end of the cloud's evolution for \(T\)=100 K and \(T\)=300 K:
K: \(f_{\rm P}\) determines the initial abundance of PO and PN, but eventually the curves converge. This means that it is irrelevant whether the source of P is atomic P or the set of PH, PH\({}_{2}\) and PH\({}_{3}\) (or a combination of them), as the same amount of P will be finally transformed into PO and PN. Furthermore, and as long as \(f_{\rm P}\) is not zero, for \(T\)=100 K and \(T\)=300 K the ratio [PO]/[PN] can be considered independent of \(f_{\rm P}\) for all times. This proves that [PO]/[PN] is a more robust quantity for all times than [PO] and [PN] separately in order to compare observed data with numerical predictions obtained from existing models.
### Sensitivity of the Abundances of PO and PN on the Reaction Rate Coefficients
Many rate coefficients associated with the different chemical reactions that take place in astrophysical environments are either totally unknown or very uncertain (McElroy et al., 2013; Wakelam et al., 2012; Wakelam et al., 2015). The reactions involved in our model are
Figure 3: Evolution of the abundance of PO, PN and their ratio [PO]/[PN] for (a) \(T\)=10 K, (b) \(T\)=100 K and (c) \(T\)=300 K and for different values of the P-hydrogenation fraction \(f_{\rm P}\), the fraction of P that has been transformed into PH, PH\({}_{2}\) and PH\({}_{3}\) via grain-surface reactions before being released to the gas phase.
The reactions involved in our model are not an exception, as the error associated with most of the reaction rate coefficients is at least 2-fold (Wakelam et al., 2012). P is naturally highly reactive, and so treating it experimentally to obtain the kinetic parameters becomes especially challenging. It is also possible to apply theoretical quantum chemical methods for this purpose, but they are computationally very expensive and therefore it is not possible to apply them to all the reactions. Regarding our network, only reactions 5, 6 and 7 in Table 1 have been calculated through these methods (Garcia de la Concepcion et al., 2021; Garcia de la Concepcion, 2023; Gomes et al., 2023).
In this section we investigate how the uncertainty associated with each chemical reaction affects the final abundances of PO and PN. In particular, we aim to identify which rate coefficients \(k_{i}\) should be preferentially constrained via precise theoretical quantum calculations or measured experimentally in the laboratory, since increasing their certainty would yield better predictions of PO and PN abundances in current and future numerical modelling. We thus make use of the theoretical solution of the minimal system, whose extremely fast calculation permits us to explore the parameter space in a way that would be impossible to tackle numerically. Benefiting from this fact, we calculate with Equation (4) the abundance of PO, PN and their ratio for all the combinations of 3 different values of each \(k_{i}\) (\(k_{i}/10\), \(k_{i}\) and \(10k_{i}\), \(k_{i}\) obtained from the source provided in Table 1) for all 14 reaction rate coefficients of the total system. In this way, we obtain sets of \(3^{14}\) data for [PO], [PN] and [PO]/[PN], and do so for three different times: \(t=10^{3}\), \(t=10^{4}\) and \(t=10^{5}\) yrs.
Figure 4: Dependence of the formation of PO, PN and their ratio with the reaction rate coefficients \(k_{i}\) of the 14 chemical reactions of the system. We plot the Pearson correlation \(r\) of the abundance of PO (blue bars), PN (red bars) and their ratio (green bars) with all 14 reaction rate coefficients for (a) \(T\)=10 K (b) \(T\)=100 K and (c) \(T\)=300 K, considering times \(t=10^{3}\) (light colors), \(t=10^{4}\) (vivid colors) and \(t=10^{5}\) yrs (dark colors). Correlations \(|r|>0.05\) are supported by p-values \(<0.01\). The correlations were calculated with sets consisting of all the combinations of 3 values of each of the 14 \(k_{i}\) (bars), and also for a finer grid composed of 11 values of each of the 5 most influential \(k_{i}\) (black diamonds). In all cases \(k_{i}\) ranges from \(k_{i}/10\) to \(10k_{i}\), being the values logarithmically distributed, and P-hydrogenation fraction \(f_{\rm P}=0.5\).
Figure 4 shows the Pearson correlation coefficient \(r\) between these sets of values of [PO] (blue bars), [PN] (red bars) and [PO]/[PN] (green bars) and each reaction rate coefficient \(k_{i}\) (calculated in a log-log scale), for \(T\)=10 K, 100 K and 300 K. Bar colors go from light to dark according to the evolution times.
Furthermore, to check that the correlations plotted in Figure 4 are sufficiently precise in spite of the fact that we used only 3 values for each \(k_{i}\) to save computer time, we also plot (in black diamonds) the Pearson coefficients calculated for a much finer grid of 11 different values for the 5 most influential \(k_{i}\) (from \(k_{i}\)/10 to \(10k_{i}\), including \(k_{i}\), in a logarithmically uniform distribution), giving rise to \(11^{5}\) different data for each set of [PO], [PN] and their ratio. The similarity between the correlations calculated with 3 and 11 different values is clear.
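The scan itself is simple to express. In our sketch below, `run_model` is a hypothetical stand-in for any routine returning ([PO], [PN]) at a given time, e.g. an implementation of Equation (4) or the integration sketch of Section 3.2, and the loop is restricted to reactions 1, 2, 4, 6 and 10 (the five most influential ones; see Section 4) so that it stays cheap even with a numerical solver:

```python
import itertools
import numpy as np
from scipy.stats import pearsonr

# Table 1 rates at T = 100 K, in reaction order 1..14
base_k = np.array([2.55e-12, 5.80e-11, 4.00e-11, 1.00e-10, 4.38e-16,
                   1.91e-10, 1.06e-10, 5.55e-11, 5.55e-11, 2.34e-12,
                   4.00e-11, 2.58e-12, 2.89e-14, 3.88e-12])
factors = (0.1, 1.0, 10.0)     # k_i/10, k_i, 10 k_i
idx = [0, 1, 3, 5, 9]          # reactions 1, 2, 4, 6 and 10, 0-indexed

log_k, log_po = [], []
for combo in itertools.product(factors, repeat=len(idx)):   # 3^5 = 243 runs
    k_set = base_k.copy()
    k_set[idx] = base_k[idx] * np.array(combo)
    po, pn = run_model(k_set, t_yr=1e4)   # run_model: hypothetical model call
    log_k.append(np.log10(k_set[idx]))
    log_po.append(np.log10(po))

log_k, log_po = np.array(log_k), np.array(log_po)
for col, j in enumerate(idx):             # one bar of Figure 4 per k_i
    r, p = pearsonr(log_k[:, col], log_po)
    print(f"k_{j + 1}: r = {r:+.2f} (p = {p:.1e})")
```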
The results plotted in Figure 4 show that, for \(T\)=10 K, the abundances of PO, PN and their ratio have a strong dependence on the reaction rate coefficients \(k_{2}\) and \(k_{6}\), while \(k_{1}\), \(k_{3}\), \(k_{4}\) and \(k_{14}\) also play a significant role. The system's dependence on the reaction rate coefficients is very similar for \(T\)=100 K and \(T\)=300 K; in comparison to \(T\)=10 K, the abundances weaken their correlation with \(k_{14}\), while \(k_{10}\) becomes relevant because reaction 10 and its associated reactions (reactions 12 and 13) have a non-zero activation barrier (\(\gamma\)) that makes them highly dependent on temperature (see Table 1 and Equation (2)).
Analyzing Figure 4 in more detail, we find that, for all temperatures and times, the rate coefficient of reaction 2 (N+PO \(\rightarrow\) PN+O) has a strong negative correlation with PO, and a strong positive correlation with PN (and thus the correlation with [PO]/[PN] is negative), as expected. The influence of reaction 6 (P+OH \(\rightarrow\) PO+H) on the system, on the contrary, is more intricate because the correlations between its rate coefficient \(k_{6}\) and the abundances strongly vary with time. It happens that \(k_{6}\) is strongly positively correlated with the abundance of PN for all temperatures even though PN does not appear in reaction 6, and this correlation grows with time. This behavior relies on the fact that, as time goes on, PN is mostly obtained from PO through reaction 2, explaining why the correlation is higher at longer times. In a similar way, for all temperatures and short times (\(t=10^{3}\) and \(t=10^{4}\) yrs), the abundance of PO is positively correlated with \(k_{6}\), as expected. However, for large temperatures and times the abundance of PO negatively correlates with \(k_{6}\), and this apparently paradoxical effect can be explained as follows: increasing \(k_{6}\) for large T accelerates the production of PO and the consumption of P, being beneficial for the growth of PO at short times, but, at the end of the cloud's evolution time (\(t\sim 10^{5}\) yrs), less P will be available to form PO and the system will not be able to counteract the negative effect of reaction 2 consuming PO. Consequently, enhancing a reaction that has PO as a product can lead to negative effects on it under certain physico-chemical conditions. With this example, we remark on the complexity of analyzing astrochemical models: even for our simple case (made of 14 chemical reactions), including formation routes for a certain chemical species may not result in an increase of that species' abundance.
## 4 Improving the certainty of the reaction rate coefficients through Bayesian statistics
Bayesian statistics relies on the combination of available real data and previous knowledge of the parameters of the system under study, and has gained significant popularity in the past decades. It has been applied to many different fields, including chemical kinetics (Hsu et al., 2009; Galagali and Marzouk, 2015; Cohen and Vlachos, 2021), and has recently been shown to unveil relevant information about the model parameters and their associated uncertainties in the context of an astrochemical system (Holdship et al., 2018; Heyl et al., 2020).
Following this methodology, in this section we apply Bayesian statistics to the evolution of ISM phosphorus with the aim of improving our knowledge of the reaction rate coefficients. The real data used correspond to observations of PO and PN abundances in the star forming region Orion-KL, the Giant Molecular Cloud G+0.693-0.03 located in the Galactic Center and the star-forming region L1157 (see Table 3). The chemistry observed in these three sources is dominated by shocks (Cernicharo et al., 2006; Bernal et al., 2021; Zeng et al., 2018; Rivilla et al., 2022; Lefloch et al., 2016). We have selected these sources because they are the only ones for which the PO and PN abundances have been reported with their associated uncertainties. Note that we are assuming that our model for \(T\)=100 K describes the physico-chemical environment of these sources (i.e. gas affected by shocks), and consequently all the calculations in this section have been carried out considering \(T\)=100 K. For this reason, we chose to apply Bayesian inference to the five most influential reaction rate coefficients in PO and PN at \(T\)=100 K, \(k_{m}\), where \(m=\{1,2,4,6,10\}\), as obtained in Section 3.4. The P-hydrogenation fraction \(f_{\rm P}\) has been established to be \(f_{\rm P}=0.5\), as in Sections 3.2 and 3.4, and we have fixed \(t=10^{4}\) yrs for all calculations according to the estimated age of the shocked regions Orion-KL (see e.g. Cernicharo et al., 2006), G+0.693-0.03 (Requena-Torres et al., 2006) and L1157 (Gueth et al., 1996; Podio et al., 2016).
Bayes' rule yields the posterior probability distributions (PPDs) of the parameters of the model (in this case the reaction rate coefficients) associated with any set of calculated abundances, and it is described as follows:
\[P(\mathbf{k}_{j}|\mathbf{x})=\frac{P(\mathbf{x}|\mathbf{k}_{j})P(\mathbf{k}_{j} )}{\sum_{j}P(\mathbf{x}|\mathbf{k}_{j})P(\mathbf{k}_{j})}\propto P(\mathbf{x}| \mathbf{k}_{j})P(\mathbf{k}_{j})\,, \tag{5}\]
where \(\mathbf{x}\) is the real data and \(\mathbf{k}_{j}\) represents every set of values of the reaction rate coefficients. Our target is to obtain \(P(\mathbf{k}_{j}|\mathbf{x})\), the posterior probability of each set, as it represents the certainty of the reaction rate coefficients after considering all the available data and any previous knowledge that we might have of the values of the parameters and their uncertainties. The denominator is the sum of the probabilities of all the sets, that is, a normalization constant. In summary, we need to
Figure 5: Bayesian inference applied to the most important reaction rate coefficients of the model and the [PO]/[PN] ratio. (a-e) Prior probability distributions (thin black lines) and posterior probability distributions (PPDs, wide blue lines) obtained with Bayesian inference of the 5 most relevant reaction rate coefficients of our model for \(T\)=100 K and \(t=10^{4}\) yrs, according to observations of star-forming regions from Table 3. The center of the prior distribution is the value provided by KIDA and shown in Table 1 (dashed red lines). (f) Distribution of the [PO]/[PN] abundance ratio obtained from sampling the PPD’s of the reaction rate coefficients. The median (black line) along with its 1\(\sigma\) confidence interval (dashed black lines) are shown, as well as the original abundances obtained in the model with KIDA values of the rate coefficients (dotted red line), and the real abundances from clouds Orion-KL, G+0.693-0.03 and L1157 (green, orange and magenta lines, respectively). P-hydrogenation fraction is \(f_{\rm P}\)=0.5 in all calculations.
| \(q\) | Source | [PO] | [PN] | [PO]/[PN] | Reference |
| --- | --- | --- | --- | --- | --- |
| 1 | Orion-KL | \((1.6\pm 0.1)\times 10^{-10}\) | \((6.1\pm 0.6)\times 10^{-11}\) | \(2.6\pm 0.4\) | Bernal et al. (2021) |
| 2 | G+0.693-0.03 | \((5.9\pm 2.2)\times 10^{-11}\) | \((4.1\pm 0.2)\times 10^{-11}\) | \(1.4\pm 0.6\) | Rivilla et al. (2018) |
| 3 | L1157 | \((2.5\pm 0.4)\times 10^{-9}\) | \((9.0\pm 1.0)\times 10^{-10}\) | \(2.8\pm 0.5\) | Lefloch et al. (2016) |

Table 3: Observational data of PO and PN abundances used for Bayesian inference of the model's reaction rate coefficients.
calculate \(P(\mathbf{k}_{j})\) and \(P(\mathbf{x}|\mathbf{k}_{j})\) for every \(\mathbf{k}_{j}\) to obtain the PPDs associated with the reaction rate coefficients.
\(P(\mathbf{k}_{j})\) is the prior probability of a given set of values of the reaction rate coefficients \(\mathbf{k}_{j}\). To calculate it, Bayesian methodology requires the definition of a prior probability distribution \(P(k_{m})\) for each of the five reaction rate coefficients \(k_{m}\). Following the information in KIDA (Wakelam et al., 2012), we established a discrete 55-value log-normal distribution centered on the \(k_{m}\) provided in Table 1, over a range of values \(k_{m,l}\) with \(l=\{1,...,55\}\) from \(k_{m,1}=k_{m}/100\) to \(k_{m,55}=100k_{m}\), and with a standard deviation chosen so that \(4k_{m}\) and \(k_{m}/4\) fall inside the 68.2 % confidence interval (\(1\sigma\)). The choice of 55 points for each rate coefficient \(k_{m}\) ensures a sufficiently exhaustive analysis with moderate computational costs. Appendix F analyzes the same system considering a log-uniform prior probability distribution.
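As a concrete illustration of this prior, the following sketch builds the discrete 55-point log-normal distribution. The choice \(\sigma=\log_{10}4\) dex is one simple assumption that places \(4k_{m}\) and \(k_{m}/4\) exactly at the \(1\sigma\) bounds, consistent with the requirement stated above.

```python
import numpy as np

def lognormal_prior(k_kida, n=55, span=100.0, sigma_dex=np.log10(4.0)):
    """Discrete log-normal prior over n log-spaced values in [k/span, span*k].

    sigma_dex = log10(4) puts 4*k and k/4 exactly at the 1-sigma bounds,
    one simple way to satisfy the 'inside the 68.2% interval' requirement.
    """
    grid = np.logspace(np.log10(k_kida / span), np.log10(span * k_kida), n)
    logp = -0.5 * ((np.log10(grid) - np.log10(k_kida)) / sigma_dex) ** 2
    p = np.exp(logp)
    return grid, p / p.sum()               # normalized discrete prior P(k_m)
```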
\(\mathbf{k}_{j}=\{k_{1},k_{2},k_{4},k_{6},k_{10}\}\) describes every set of 5 values of the reaction rate coefficients \(k_{m}\) chosen in the range \(k_{m,l}\), where \(j\) therefore stands for the \(55^{5}\) different combinations of \(l=1,...,55\) values for each of the 5 reaction rate coefficients. In consequence, since the value of each rate coefficient is independent, the joint probability of the set \(\mathbf{k}_{j}\) is
\[P(\mathbf{k}_{j})=\prod_{m}^{5}P(k_{m,l})\,. \tag{6}\]
\(P(\mathbf{x}|\mathbf{k}_{j})\) is the likelihood of \(\mathbf{k}_{j}\), i.e. the probability of obtaining from the model the observed values of PO and PN given a specific set \(\mathbf{k}_{j}\), and it is obtained as
\[P(\mathbf{x}|\mathbf{k}_{j})=\prod_{q}^{3}\prod_{r}^{2}\exp\left(-\frac{1}{2} \left(\frac{x_{r,q}-[X_{r,j}]}{\sigma_{r,q}}\right)^{2}\right)\,, \tag{7}\]
where \(q=\{1,2,3\}\) refers to each observational source (see Table 3) and \(r=\){PO,PN}, in such a way that \(x_{r,q}\) represents the observed abundance of species \(r\) in source \(q\) (along with its standard deviation \(\sigma_{r,q}\)) and \([X_{r,j}]\) is the model's predicted final abundance of species \(r\) for a given set \(\mathbf{k}_{j}\). Calculating \([X_{r,j}]\) for all the \(55^{5}\) combinations of reaction rate coefficients was computationally accessible once again because of the theoretical solution of the system introduced in Equation (4). As we could explore the totality of the parameter space, we avoided the usual necessity of a much more complex Markov Chain Monte Carlo (MCMC) sampling method (Holdship et al., 2018).
Once we have calculated \(P(\mathbf{k}_{j}|\mathbf{x})\) for all \(\mathbf{k}_{j}\) following Equations (5-7), we can finally obtain the PPD of a given \(k_{m}\). To do so, we need to sum up all the posterior probabilities \(P(\mathbf{k}_{j}|\mathbf{x})\) for which the set \(\mathbf{k}_{j}\) contains the fixed \(k_{m}\) at its value \(k_{m,l}\), as follows
\[P(k_{m}=k_{m,l}|\mathbf{x})=\sum_{j/k_{m}=k_{m,l}}P(\mathbf{k}_{j}|\mathbf{x})\,. \tag{8}\]
Note that the sum contains \(55^{4}\) elements for which the values of the fixed \(k_{m}\) are equal to \(k_{m,l}\) and the values of the other \(k_{m}\) take all their 55 possible values. Then, the PPD of \(k_{m}\) is given by the results of Equation (8) for the 55 values of \(k_{m,l}\). The prior probability distributions \(P(k_{m})\) and the PPDs for each \(k_{m}\) are plotted in Figure 5(a-e). If we compare the peak of each posterior probability distribution with the original value provided by KIDA (red dashed vertical line), we can make estimates of the reaction rate coefficients according to the observations of PO and PN that we considered.
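Equations (5)-(8) amount to a brute-force enumeration of the grid. A minimal sketch follows; `model` is a hypothetical callable returning the (PO, PN) abundances for a given coefficient set (in the paper's case, Equation (4)), and the pure-Python loop is best run on a coarser grid than the full \(55^{5}\) used in the paper.

```python
import numpy as np
from itertools import product

def posterior_marginals(grids, priors, model, observations):
    """Eqs (5)-(8). grids: list of value arrays per k_m; priors: list of
    matching probability arrays; model(k_vec) -> (PO, PN);
    observations: list of (x_PO, s_PO, x_PN, s_PN) per source."""
    n = [len(g) for g in grids]
    post = np.zeros(n)
    for idx in product(*[range(m) for m in n]):
        k = np.array([grids[d][i] for d, i in enumerate(idx)])
        po, pn = model(k)
        loglike = 0.0
        for (x_po, s_po, x_pn, s_pn) in observations:   # Eq (7)
            loglike += -0.5 * ((x_po - po) / s_po) ** 2
            loglike += -0.5 * ((x_pn - pn) / s_pn) ** 2
        prior = np.prod([priors[d][i] for d, i in enumerate(idx)])  # Eq (6)
        post[idx] = np.exp(loglike) * prior             # Eq (5), unnormalized
    post /= post.sum()
    # Eq (8): marginal PPD of each k_m by summing over the other axes
    axes = tuple(range(len(n)))
    return [post.sum(axis=tuple(a for a in axes if a != d)) for d in axes]
```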
Figure 5(a-e) shows that the PPDs of \(k_{2}\) and \(k_{6}\) present much higher and sharper peaks than the distributions of the other reaction rate coefficients, in agreement with what we already obtained in Section 3.4: the abundances of PO and PN depend critically on the values of \(k_{2}\) and \(k_{6}\) at this time and temperature. On the one hand, \(k_{2}\) presents a peak at a value that is \(\sim 0.04\) times the value in KIDA. This reveals that the available value of \(k_{2}\) might be a large overestimation of the real one. On the other hand, the \(k_{6}\) PPD is centered very close to its available value, in agreement with the fact that \(k_{6}\) was calculated with precise theoretical methods (Garcia de la Concepcion et al., 2021). The rest of the rate coefficients (\(k_{1}\), \(k_{4}\) and \(k_{10}\)) do not have a strong impact on PO and PN abundances, as their PPDs in Figure 5(a,c,e) are similar to the prior distributions assigned to them. However, if we calculate the posterior probability distributions of the system making use of a log-uniform prior instead of a log-normal prior, that is, if we use priors devoid of information, the results plotted in Appendix F confirm that the observational data do not provide any relevant information about \(k_{4}\), but allow us to constrain the values of \(k_{1}\) and \(k_{10}\) so that \(k_{1}<3.9\times 10^{-11}\) cm\({}^{3}\) s\({}^{-1}\) and \(k_{10}>3.0\times 10^{-13}\) cm\({}^{3}\) s\({}^{-1}\). As shown in Appendix F, \(k_{6}\) is constrained to the value calculated by Garcia de la Concepcion et al. (2021) even when using a log-uniform prior.
Finally, we sampled the posterior probability distributions for the abundance of PO, PN and their ratio at \(t=10^{4}\) yrs and \(T=\)100 K, and plotted the latter in Figure 5(f). To do so, we used the theoretical solution of the system (given by Equation (4)). We found that the median PO abundance is around one order of magnitude larger, and the median PN abundance around four times smaller, than the original numerical values (see Table 4), and the median of [PO]/[PN] is almost two orders of magnitude larger than the original
numerical value (see the dotted red line in Figure 5(f) and Table 4), in agreement with observations.
## 5 Discussion
In this work, we have developed a thorough theoretical and numerical study of the dynamical system associated with the chemical evolution of phosphorus in an interstellar molecular cloud. A wide variety of techniques and algorithms have been developed for network reduction in chemical models (Tupper, 2002; Lehmann, 2004; Markosyan et al., 2014; Peerenboom et al., 2015; Ayliaran et al., 2019), some of them focusing precisely on astronomical systems (Hollenbach et al., 2008; Heyl et al., 2020). Here we present a different approach. By making suitable assumptions, we have focused on a complex and limited network of phosphorus with 14 chemical reactions and 17 chemical species. This system can be further reduced so that it only analyzes the evolution of the P-bearing species PO, PN, CP, P, PH, PH\({}_{2}\) and PH\({}_{3}\), becoming a solvable system made of 7 linear ODEs. The grain-surface chemistry is taken into account in the model through the parameter \(f_{\rm P}\), the fraction of P that has been transformed into PH, PH\({}_{2}\) and PH\({}_{3}\) before being released to the gas phase.
Most studies in recent literature that model the evolution of the chemistry of phosphorus in the interstellar medium and in star-forming regions are grounded in the use of complex software describing the specific physicochemical conditions of the target astronomical source. These computer programs consider some thousands of chemical reactions with uncertain rate constants. In contrast, we have focused exclusively on the most relevant reactions regarding the phosphorus chemistry in order to approach the system from a theoretical perspective and benefit from a much deeper knowledge of its complex dynamics. In particular, the selected reactions are neutral-neutral reactions, which in general lack accurate laboratory measurements and which are dominant in the interstellar regions where P-bearing species are detected (as e.g. in regions dominated by shocks). Furthermore, the explicit mathematical expressions obtained for the chemical evolution of the relevant species allowed us to develop a thorough analysis of the phenomenology with computation times that were up to five orders of magnitude faster than the numerical methods needed to solve the total system, and obviously millions of times faster than with the use of any complex astrochemical software.
We have detected several target reactions whose rate coefficients should be determined accurately in future calculations or experiments in order to minimize the uncertainty in the astrochemistry of phosphorus. The evolution of the P chemical network is sensitive to a reduced set of key reactions at low temperatures (driving the conversion of PO into PN by N+PO \(\rightarrow\) PN+O -reaction 2- or the conversion of PH and PH\({}_{2}\) into PO and PN through O+PH\({}_{2}\)\(\rightarrow\) PO+H\({}_{2}\) -reaction 3-, O+PH \(\rightarrow\) PO+H -reaction 4- and N+PH \(\rightarrow\) PN+H -reaction 7-), while for high temperatures the chemistry becomes more complex and involves these and other interactions, such as the intensive destruction of PH in H+PH \(\rightarrow\) P+H\({}_{2}\) -reaction 10-. Furthermore, Bayesian methods applied to the model at \(T\)=100 K and the use of real data regarding 3 different sources (Orion-KL, G+0.693-0.03 and L1157) yield that the reaction rate coefficient \(k_{2}\) might be especially overestimated, according to our results by a factor of \(\sim\)25.
Unveiling the formation dynamics of PO and PN over time helped us to identify possible sources of the [PO]/[PN] disagreement between observational data and models. Observational data yield [PO]/[PN] \(\sim 1.4-3\) in star forming regions (Ziurys, 1987; Fontani et al., 2016; Rivilla et al., 2016; Lefloch et al., 2016; Rivilla et al., 2018, 2020; Bernal et al., 2021; Bergner et al., 2019, 2022), with the exception of [PO]/[PN]\(=0.6\pm 0.5\) recently detected toward Ser SMM1, which is subject to large uncertainties (Wurmser and Bergner, 2022). In contrast, numerical models typically yield [PO]/[PN]\(<\)1 (Jimenez-Serra et al., 2018; Chantzos et al., 2020; Sil et al., 2021). Our simulations show that [PO]/[PN] grows with the temperature of the cloud, but they still yield values of [PO]/[PN] \(<<1\) for all the scenarios analyzed with the parameters present in Table 1 and the initial conditions in Table 2 for times between \(t=10^{4}\) and \(10^{5}\) yrs because,
| Data source | [PO] | [PN] | [PO]/[PN] |
| --- | --- | --- | --- |
| Original | \(1.12\times 10^{-11}\) | \(1.67\times 10^{-10}\) | 0.067 |
| Bayesian inferred | \(1.4\times 10^{-10}\) | \(4.2\times 10^{-11}\) | 3.3 |

Table 4: Model's outputs before and after applying Bayesian inference to the reaction rate coefficients. Note. – The first line shows the abundances of PO, PN and their ratio calculated applying the original (that is, without Bayesian inference) values of the reaction rate coefficients compiled in Table 1. The second line shows the median of the distributions of the abundances of PO, PN and their ratio calculated from sampling the posterior probability distributions of the reaction rate coefficients \(k_{m}\) (where \(m=\{1,2,4,6,10\}\)), and the original values for the rest. In both cases the abundances have been calculated with the theoretical model (Equation (4)) applying \(T\)=100 K, P-hydrogenation fraction \(f_{\rm P}\)=0.5 and \(t=10^{4}\) yrs.
at the final stages of the evolution, the formation routes of PO become negligible while reaction 2 (N+PO \(\rightarrow\) PN+O) governs the system. Therefore, we argue that current astrochemical models are unable to yield realistic [PO]/[PN] values because (i) certain reaction rate coefficients, mainly \(k_{2}\), are estimated very inaccurately, and (ii) models might lack important destruction routes for PN (e.g. in KIDA (Wakelam et al., 2012) only N+PN \(\rightarrow\) P+N\({}_{2}\) is present and its kinetic parameters \(\alpha=10^{-18}\) cm\({}^{3}\) s\({}^{-1}\) and \(\beta=\gamma=0\) make it negligible in molecular clouds). We stress that we have not considered either ion-neutral reactions or photochemistry in this work. However, note that these are valid assumptions given that most regions where PO and PN have been detected present a chemistry dominated by shocks and not by photochemistry. Indeed, when compared to the astrochemical code UCLCHEM, our model reproduces well the evolution of the [PO]/[PN] ratio with time for the same physical conditions and initial abundances (see Figure 2(d-f)).
Interestingly, [PO]/[PN] does not depend on the P-hydrogenation fraction \(f_{\rm P}\) at any time of the evolution at high temperatures (\(T\)=100 K and 300 K), as long as a small amount of P is converted into PH\({}_{3}\), that is, for \(f_{\rm P}>0\). This reinforces the preference for using [PO]/[PN], rather than [PO] and [PN] separately, when comparing observed data with numerical predictions obtained from existing models.
In spite of the already mentioned predominance of PN over PO at the final stages of the cloud evolution, our environment shows a natural prevalence of PO over PN at the very beginning of the evolution of the system (even taking into account that [PO]\({}_{0}\)=[PN]\({}_{0}\)=0). The theoretical solution of the system allows for a calculation of the limit of [PO]/[PN] at early times, which gives
\[\lim_{t\to 0}\frac{[PO]}{[PN]}\approx\frac{(k_{3}+k_{4})[{\rm O}]_{0}}{k_{7}[{ \rm N}]_{0}}\,, \tag{9}\]
(see Appendix D for the mathematical proof). Note that this expression does not depend on the P-hydrogenation fraction \(f_{\rm P}\) (as long as \(f_{\rm P}>0.01\) to ensure the existence of sufficient initial PH and PH\({}_{2}\), a condition required to perform the approximations in Appendix D) or on the initial abundances, with the exception of N and O. If we evaluate Equation (9), we obtain that [PO]/[PN] ranges from 7 to 12 depending on the temperature, which agrees with numerical results at early times for all \(T\) and \(f_{\rm P}\) with an average error of 0.5%. Although this result is limited by the model's caveats, we can extract some general conclusions: at early times, only reactions O+PH\({}_{2}\)\(\rightarrow\) PO+H\({}_{2}\), O+PH \(\rightarrow\) PO+H and N+PH \(\rightarrow\) PN+H -reactions 3, 4 and 7 respectively- are relevant, and PO formation seems to be much more enhanced than PN formation because (i) the cosmic abundance of O is one order of magnitude higher than the cosmic abundance of N; and (ii) the sum of the reaction rate coefficients \(k_{3}\) and \(k_{4}\) is similar to \(k_{7}\) at all \(T\). However, the ratio decreases when at longer times reaction N+PO \(\rightarrow\) PN+O -reaction 2- becomes noticeable and reinforces PN, which eventually overcomes PO, leading to the [PO]/[PN] values under 1 typically obtained from models at \(t\sim 10^{4}-10^{5}\) yrs. In fact, the theoretical solution for PO and PN (Equation (4)) yields that [PO]\(\rightarrow\) 0 and [PN]\(\rightarrow\)\(C_{77}>0\) when \(t\rightarrow\infty\), which means that [PN] will sooner or later overcome [PO], leading to [PO]/[PN]\(<1\) for any value of the reaction rate coefficients and initial conditions. However, let us remark that, when the corrected values of the reaction rate coefficients obtained by Bayesian inference are included (in particular when \(k_{2}\) decreases), the crossing-time between PO and PN grows and PO remains more abundant than PN for meaningful evolution times (i.e. \(t\sim 10^{4}\) yrs or more), solving in this way the [PO]/[PN] disagreement between models and real data.
Finally, we believe that the analysis of astrochemical systems with the tools of network theory is a promising line of research that has not been sufficiently developed yet. While the goal of the seminal studies linking both disciplines was the topological description of astrochemical networks associated with very diverse astrophysical environments (Sole & Munteanu, 2004; Jolley & Douglas, 2010), more recently the geometry of grain surface reaction networks was analyzed to reduce the computational expense of performing Bayesian inference (Heyl et al., 2020), and the emergence of interstellar molecular complexity was successfully explained with a model based on interacting complex networks (Garcia-Sanchez et al., 2022). We are confident that our multidisciplinary approach will attract the attention of both the astrochemistry and complexity theory communities in the next years, as representing astrochemical systems as complex networks in permanent evolution provides a profound understanding of the formation and destruction of the chemical species involved. Larger or more complex chemical networks than the phosphorus network might preclude the calculation of an explicit theoretical solution and its concomitant drastic decrease in computer time, but with more computational power the methodology here introduced could still be used to describe the formation of a wide variety of chemical precursors of organic macromolecules in space, a key question to unveil the origin and early evolution of life on Earth.
The authors acknowledge insightful comments on the manuscript from S. Viti, technical advice on Bayesian Inference from M. Castro, and fruitful conversations with A. Aguirre-Tamaral, J. Garcia de la Concepcion, R. Guantes, S. Manrubia, A. Megias, V.M. Rivilla and M. Ruiz-Bermejo. J.A. and M.F.-R. received support from grant No. PID2021-122936NB-I00, J.A. and I.J.-S. from grant No. PID2019-105552RB-C41 and J.A., M.F.-R. and I.J.-S. from grant No. MDM-2017-0737 Unidad de Excelencia "Maria de Maeztu"-Centro de Astrobiologia (CSIC-INTA), funded by the Spanish Ministry of Science and Innovation/State Agency of Research MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe".
## Appendix A Set of Ordinary Differential Equations that Describes the Chemical Evolution of Phosphorus in the Interstellar Medium
The dynamical system under study is a set of 14 reactions which involve 17 chemical species. Applying the law of mass action to the reactions, we obtain a system of 17 ordinary differential equations (ODEs) which accounts for the evolution with time of each chemical species abundance. The system is:
\[\frac{1}{n_{\rm H}}\frac{d[{\rm C}]}{dt} = k_{8}[{\rm N}][{\rm CP}]+k_{9}[{\rm P}][{\rm CN}]-k_{14}[{\rm C}][{ \rm PH}]\] (A1) \[\frac{1}{n_{\rm H}}\frac{d[{\rm CN}]}{dt} = -k_{9}[{\rm P}][{\rm CN}]\] (A2) \[\frac{1}{n_{\rm H}}\frac{d[{\rm CO}]}{dt} = k_{11}[{\rm O}][{\rm CP}]\] (A3) \[\frac{1}{n_{\rm H}}\frac{d[{\rm CP}]}{dt} = -k_{8}[{\rm N}][{\rm CP}]-k_{11}[{\rm O}][{\rm CP}]+k_{14}[{\rm C }][{\rm PH}]\] (A4) \[\frac{1}{n_{\rm H}}\frac{d[{\rm H}]}{dt} = k_{4}[{\rm O}][{\rm PH}]+k_{6}[{\rm P}][{\rm OH}]+k_{7}[{\rm N} ][{\rm PH}]-k_{10}[{\rm H}][{\rm PH}]-k_{12}[{\rm H}][{\rm PH}_{2}]-k_{13}[{ \rm H}][{\rm PH}_{3}]+k_{14}[{\rm C}][{\rm PH}]\] (A5) \[\frac{1}{n_{\rm H}}\frac{d[{\rm H}_{2}]}{dt} = k_{3}[{\rm O}][{\rm PH}_{2}]+k_{10}[{\rm H}][{\rm PH}]+k_{12}[{ \rm H}][{\rm PH}_{2}]+k_{13}[{\rm H}][{\rm PH}_{3}]\] (A6) \[\frac{1}{n_{\rm H}}\frac{d[{\rm N}]}{dt} = -k_{1}[{\rm N}][{\rm PO}]-k_{2}[{\rm N}][{\rm PO}]-k_{7}[{\rm N} ][{\rm PH}]-k_{8}[{\rm CP}][{\rm N}]\] (A7) \[\frac{1}{n_{\rm H}}\frac{d[{\rm NO}]}{dt} = k_{1}[{\rm N}][{\rm PO}]\] (A8) \[\frac{1}{n_{\rm H}}\frac{d[{\rm O}]}{dt} = k_{2}[{\rm N}][{\rm PO}]-k_{3}[{\rm O}][{\rm PH}_{2}]-k_{4}[{\rm O }][{\rm PH}]+k_{5}[{\rm P}][{\rm O}_{2}]-k_{11}[{\rm O}][{\rm CP}]\] (A9) \[\frac{1}{n_{\rm H}}\frac{d[{\rm O}_{2}]}{dt} = -k_{5}[{\rm P}][{\rm O}_{2}]\] (A10) \[\frac{1}{n_{\rm H}}\frac{d[{\rm OH}]}{dt} = -k_{6}[{\rm P}][{\rm OH}]\] (A11) \[\frac{1}{n_{\rm H}}\frac{d[{\rm P}]}{dt} = k_{1}[{\rm N}][{\rm PO}]-k_{5}[{\rm P}][{\rm O}_{2}]-k_{6}[{\rm P }][{\rm OH}]-k_{9}[{\rm P}][{\rm CN}]+k_{10}[{\rm H}][{\rm PH}]+k_{11}[{\rm O}][{ \rm CP}]\] (A12) \[\frac{1}{n_{\rm H}}\frac{d[{\rm PH}]}{dt} = -k_{4}[{\rm O}][{\rm PH}]-k_{7}[{\rm N}][{\rm PH}]-k_{10}[{\rm H}][{ \rm PH}]+k_{12}[{\rm H}][{\rm PH}_{2}]-k_{14}[{\rm C}][{\rm PH}]\] (A13)
\[\frac{1}{n_{\rm H}}\frac{d[{\rm PH}_{2}]}{dt} = -k_{3}[{\rm O}][{\rm PH}_{2}]-k_{12}[{\rm H}][{\rm PH}_{2}]+k_{13}[{ \rm H}][{\rm PH}_{3}]\] (A14) \[\frac{1}{n_{\rm H}}\frac{d[{\rm PH}_{3}]}{dt} = -k_{13}[{\rm H}][{\rm PH}_{3}]\] (A15) \[\frac{1}{n_{\rm H}}\frac{d[{\rm PN}]}{dt} = k_{2}[{\rm N}][{\rm PO}]+k_{7}[{\rm N}][{\rm PH}]+k_{8}[{\rm N}] [{\rm CP}]+k_{9}[{\rm P}][{\rm CN}]\] (A16) \[\frac{1}{n_{\rm H}}\frac{d[{\rm PO}]}{dt} = -k_{1}[{\rm N}][{\rm PO}]-k_{2}[{\rm N}][{\rm PO}]+k_{3}[{\rm O}][ {\rm PH}_{2}]+k_{4}[{\rm O}][{\rm PH}]+k_{5}[{\rm O}_{2}][{\rm P}]+k_{6}[{\rm OH }][{\rm P}]\] (A17)
Note that this system of ODEs is a nonlinear system of the form \(d\mathbf{X}/dt=\mathbf{F}(\mathbf{X})\), where \(\mathbf{X}\) is the vector of the abundances of all chemical species.
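For reference, a generic mass-action integrator for a system like Equations (A1)-(A17) can be written compactly by storing each reaction as (reactants, products, k). The sketch below is illustrative only: the two rate coefficients shown, the initial abundances and \(n_{\rm H}\) are placeholders, not the Table 1 and Table 2 values.

```python
import numpy as np
from scipy.integrate import solve_ivp

species = ['C', 'CN', 'CO', 'CP', 'H', 'H2', 'N', 'NO', 'O', 'O2',
           'OH', 'P', 'PH', 'PH2', 'PH3', 'PN', 'PO']
idx = {s: i for i, s in enumerate(species)}
reactions = [
    (('N', 'PO'), ('NO', 'P'), 1.0e-10),   # reaction 1 (placeholder k)
    (('N', 'PO'), ('PN', 'O'), 1.0e-10),   # reaction 2 (placeholder k)
    # ... the remaining 12 reactions of Table 1 go here ...
]

def rhs(t, x, n_H):
    """Law of mass action: d[X]/dt = n_H * sum(+/- k * [A] * [B]) (Eqs A1-A17)."""
    dx = np.zeros_like(x)
    for reactants, products, k in reactions:
        rate = k * n_H * np.prod([x[idx[s]] for s in reactants])
        for s in reactants:
            dx[idx[s]] -= rate
        for s in products:
            dx[idx[s]] += rate
    return dx

x0 = np.zeros(len(species))                 # fill with the Table 2 abundances
# integrate up to 1e5 yrs (~3.15e12 s) with a stiff-capable solver
sol = solve_ivp(rhs, (0.0, 1e5 * 3.15e7), x0, args=(1e4,),
                method='LSODA', rtol=1e-8, atol=1e-30)
```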
## Appendix B Analytical Solution of the System
The _total_ system presented in Equations (A1-A17) is far too complex to be fully solved theoretically and must be treated numerically. However, in this Appendix we show that through pertinent approximations it can be linearized and simplified to obtain a _minimal_ system that allows us to obtain explicit equations that fit very precisely the numerical evolution of the P-bearing species abundances.
Interestingly, all of the kinetic Equations (A1-A17) are composed of a sum of terms which are in turn composed of a product of an _abundant_ species abundance and a _scarce_ species abundance, according to the classification explained in Section 2. The only exception is the term \(k_{9}[{\rm P}][{\rm CN}]\), in which P and CN are both _scarce_ species.
In this work, we assume that the _abundant_ species abundances are constant (i.e. \(d[X_{i}]/dt=0\) for C, H, N, O, O\({}_{2}\) and OH). Appendix C is devoted to describing the applicability and caveats of this assumption. In addition, we assume that the CN abundance is constant, as its rate of change, \(d[{\rm CN}]/dt=-k_{9}n_{\rm H}[{\rm P}][{\rm CN}]\), is extremely small because both P and CN are scarce species, and we neglect the term \(k_{1}[{\rm N}][{\rm PO}]\) in Equation (A12) because its value is several orders of magnitude lower than the dominant terms.
The assumptions presented above convert the system described by Equations (A1-A17) into a linear system consisting of 10 ODEs. However, the non-interacting species NO, H\({}_{2}\) and CO do not influence the evolution of the rest of the molecules (note that their abundances do not appear in the right-hand terms of the ODEs, a direct consequence of the fact that these species are not reactants of any reaction). Since we are only interested in the evolution of the P-bearing species, we can then neglect the kinetic equations for NO, H\({}_{2}\) and CO and obtain that the total system described in Equations (A1-A17) can be finally reduced to a system of 7 ODEs composed of the equations of the rate of change of the abundances of PH\({}_{3}\), PH\({}_{2}\), PH, CP, P, PO and PN as
\[\frac{d\mathbf{X}_{\rm P}}{dt}=\mathbf{F}_{\rm P}(\mathbf{X}_{\rm P})=\mathbf{A}\mathbf{X}_{\rm P }\,,\] (B18)
where, if we number the species as in Table 5, the vector containing the P-bearing species abundances is then \(\mathbf{X}_{\rm P}\)={[PH\({}_{3}\)], [PH\({}_{2}\)], [PH], [CP], [P], [PO] and [PN]}, and consistently the matrix of coefficients \(\mathbf{\mathrm{A}}\) becomes
\[\mathbf{\mathrm{A}}=\left(\begin{array}{cccccc}-r_{1}&0&0&0&0&0&0\\ k_{13}n_{\rm H}[{\rm H}]&-r_{2}&0&0&0&0&0\\ 0&k_{12}n_{\rm H}[{\rm H}]&-r_{3}&0&0&0&0\\ 0&0&k_{14}n_{\rm H}[{\rm C}]&-r_{4}&0&0&0\\ 0&0&k_{10}n_{\rm H}[{\rm H}]&k_{11}n_{\rm H}[{\rm O}]&-r_{5}&0&0\\ 0&k_{3}n_{\rm H}[{\rm O}]&k_{4}n_{\rm H}[{\rm O}]&0&k_{5}n_{\rm H}[{\rm O}_{2}] +k_{6}n_{\rm H}[{\rm OH}]&-r_{6}&0\\ 0&0&k_{7}n_{\rm H}[{\rm N}]&k_{8}n_{\rm H}[{\rm N}]&k_{9}n_{\rm H}[{\rm CN}]&k_{ 2}n_{\rm H}[{\rm N}]&0\end{array}\right)\,,\]
where the constants \(r_{i}\) are defined at the end of this Section. Note that throughout the paper we have used the name _minimal system_ for the one defined by Equation (B18).
Solving this simplified system of ODEs is equivalent to obtaining the eigenvalues and eigenvectors of \(\mathbf{\mathrm{A}}\). Since \(\mathbf{\mathrm{A}}\) is a matrix of size 7\(\times\)7, it would not be possible to solve its associated system of ODEs mathematically if it were not for the fact that \(\mathbf{\mathrm{A}}\) is a triangular matrix. This configuration permits the resolution of each ODE sequentially. The calculations along with the solutions are presented below. Note that for simplicity the expressions obtained in this section for the different \([X_{i}](t)\) will be a function of the time \(t\) and two sets of constants, \(C_{ij}\) and \(r_{i}\). The dependence
of such constants on the initial conditions of the system (i.e. \([X_{i}](0)\)) and the system parameters (i.e. the reaction rate coefficients \(k_{i}\)) is listed at the end of this Appendix.
**Calculation of [PH\({}_{3}\)]:** The differential equation associated with PH\({}_{3}\) is
\[\frac{1}{n_{\rm H}}\frac{d[{\rm PH}_{3}]}{dt}=-k_{13}[{\rm H}][{\rm PH}_{3}]\,.\] (B19)
Assuming \([{\rm H}]=[{\rm H}]_{0}\) (constant for all \(t\)), the only variable in Equation (B19) is [PH\({}_{3}\)], making it analytically solvable. We obtain
\[[{\rm PH}_{3}](t)=C_{11}\,e^{-r_{1}\,t}\,.\] (B20)
**Calculation of [PH\({}_{2}\)]:** The differential equation associated with PH\({}_{2}\) is
\[\frac{1}{n_{\rm H}}\frac{d[{\rm PH}_{2}]}{dt}=-k_{3}[{\rm O}][{\rm PH}_{2}]-k_ {12}[{\rm H}][{\rm PH}_{2}]+k_{13}[{\rm H}][{\rm PH}_{3}]\,.\] (B21)
Assuming \([{\rm O}]=[{\rm O}]_{0}\) and \([{\rm H}]=[{\rm H}]_{0}\) for all times, and making use of the expression for [PH\({}_{3}\)] in Equation (B20), we obtain
\[[{\rm PH}_{2}](t)=\frac{C_{21}}{r_{2}-r_{1}}\,e^{-r_{1}\,t}+C_{22}\,e^{-r_{2} \,t}\,.\] (B22)
**Calculation of [PH]:** The differential equation associated with PH is
\[\frac{1}{n_{\rm H}}\frac{d[{\rm PH}]}{dt}=-k_{4}[{\rm O}][{\rm PH}]-k_{7}[{\rm N }][{\rm PH}]-k_{10}[{\rm H}][{\rm PH}]+k_{12}[{\rm H}][{\rm PH}_{2}]-k_{14}[{ \rm C}][{\rm PH}]\,.\] (B23)
Assuming \([{\rm O}]=[{\rm O}]_{0}\), \([{\rm N}]=[{\rm N}]_{0}\), \([{\rm H}]=[{\rm H}]_{0}\) and \([{\rm C}]=[{\rm C}]_{0}\) for all times, and making use of the expression for [PH\({}_{2}\)] in Equation (B22), we obtain
\[[{\rm PH}](t)=\frac{C_{31}}{r_{3}-r_{1}}\,e^{-r_{1}\,t}+\frac{C_{32}}{r_{3}-r_{2}}\,e^{-r_{2}\,t}+C_{33}\,e^{-r_{3}\,t}\,.\] (B24)
**Calculation of [CP]:** The differential equation associated with CP is
\[\frac{1}{n_{\rm H}}\frac{d[{\rm CP}]}{dt}=-k_{8}[{\rm N}][{\rm CP}]-k_{11}[{ \rm O}][{\rm CP}]+k_{14}[{\rm C}][{\rm PH}]\,.\] (B25)
Assuming \([{\rm N}]=[{\rm N}]_{0}\), \([{\rm O}]=[{\rm O}]_{0}\) and \([{\rm C}]=[{\rm C}]_{0}\) for all times, and making use of the expression for [PH] in Equation (B24), we obtain
\[[{\rm CP}](t)=\frac{C_{41}}{r_{4}-r_{1}}\,e^{-r_{1}\,t}+\frac{C_{42}}{r_{4}-r _{2}}\,e^{-r_{2}\,t}+\frac{C_{43}}{r_{4}-r_{3}}\,e^{-r_{3}\,t}+C_{44}\,e^{-r_{4 }\,t}\,.\] (B26)
**Calculation of [P]:** The differential equation associated with P is
\[\frac{1}{n_{\rm H}}\frac{d[{\rm P}]}{dt}=k_{1}[{\rm N}][{\rm PO}]-k_{5}[{\rm P }][{\rm O}_{2}]-k_{6}[{\rm P}][{\rm OH}]-k_{9}[{\rm P}][{\rm CN}]+k_{10}[{\rm H }][{\rm PH}]+k_{11}[{\rm O}][{\rm CP}]\,.\] (B27)
| Chemical species | PH\({}_{3}\) | PH\({}_{2}\) | PH | CP | P | PO | PN |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Index \(i\) | 1 | 2 | 3 | 4 | 5 | 6 | 7 |

Table 5: P-bearing chemical species included in the minimal system (the one theoretically solved in this work). The indices \(i\) determine the order in which the species abundances appear in vector \(\boldsymbol{X}_{\rm P}\).
We assume \([\text{N}]=[\text{N}]_{0}\), \([\text{O}_{2}]=[\text{O}_{2}]_{0}\), \([\text{OH}]=[\text{OH}]_{0}\), \([\text{CN}]=[\text{CN}]_{0}\), \([\text{H}]=[\text{H}]_{0}\) and \([\text{O}]=[\text{O}]_{0}\) for all times. In addition, as mentioned above, we neglect the first term \(k_{1}[\text{N}][\text{PO}]\) because its value is between \(10\) and \(10^{3}\) times lower than the dominant terms \(k_{10}[\text{H}][\text{PH}]\) and \(k_{11}[\text{O}][\text{CP}]\). From all this, and making use of the expression for [PH] in Equation (B24) and the expression for [CP] in Equation (B26), we obtain
\[[\text{P}](t)=\frac{C_{51}}{r_{5}-r_{1}}\,e^{-r_{1}\,t}+\frac{C_{52}}{r_{5}-r_ {2}}\,e^{-r_{2}\,t}+\frac{C_{53}}{r_{5}-r_{3}}\,e^{-r_{3}\,t}+\frac{C_{54}}{r_ {5}-r_{4}}\,e^{-r_{4}\,t}+C_{55}\,e^{-r_{5}\,t}\,.\] (B28)
**Calculation of [PO]:** The differential equation associated with PO is
\[\frac{1}{n_{\text{H}}}\frac{d[\text{PO}]}{dt}=-k_{1}[\text{N}][\text{PO}]-k_{ 2}[\text{N}][\text{PO}]+k_{3}[\text{O}][\text{PH}_{2}]+k_{4}[\text{O}][\text{ PH}]+k_{5}[\text{O}_{2}][\text{P}]+k_{6}[\text{OH}][\text{P}]\,.\] (B29)
Assuming \([\text{N}]=[\text{N}]_{0}\), \([\text{O}]=[\text{O}]_{0}\), \([\text{O}_{2}]=[\text{O}_{2}]_{0}\) and \([\text{OH}]=[\text{OH}]_{0}\) for all times, and making use of the expression for [PH\({}_{2}\)] in Equation (B22), the expression for [PH] in Equation (B24) and the expression for [P] in Equation (B28), we obtain
\[[\text{PO}](t)=\frac{C_{61}}{r_{6}-r_{1}}\,e^{-r_{1}\,t}+\frac{C_{62}}{r_{6}-r _{2}}\,e^{-r_{2}\,t}+\frac{C_{63}}{r_{6}-r_{3}}\,e^{-r_{3}\,t}+\frac{C_{64}}{ r_{6}-r_{4}}\,e^{-r_{4}\,t}+\frac{C_{65}}{r_{6}-r_{5}}\,e^{-r_{5}\,t}+C_{66}e^{-r_{ 6}\,t}\,.\] (B30)
**Calculation of [PN]:** The differential equation associated with PN is
\[\frac{1}{n_{\text{H}}}\frac{d[\text{PN}]}{dt}=k_{2}[\text{N}][\text{PO}]+k_{7 }[\text{N}][\text{PH}]+k_{8}[\text{N}][\text{CP}]+k_{9}[\text{P}][\text{CN}]\,.\] (B31)
Assuming \([\text{N}]=[\text{N}]_{0}\) and \([\text{CN}]=[\text{CN}]_{0}\) for all times, and making use of the expression for [PH] in Equation (B24), the expression for [CP] in Equation (B26), the expression for [P] in Equation (B28) and the expression for [PO] in Equation (B30), we obtain
\[[\text{PN}](t)=\frac{C_{71}}{r_{7}-r_{1}}\,e^{-r_{1}\,t}+\frac{C_{72}}{r_{7}-r _{2}}\,e^{-r_{2}\,t}+\frac{C_{73}}{r_{7}-r_{3}}\,e^{-r_{3}\,t}+\frac{C_{74}}{r_ {7}-r_{4}}\,e^{-r_{4}\,t}+\frac{C_{75}}{r_{7}-r_{5}}\,e^{-r_{5}\,t}+\frac{C_{76} }{r_{7}-r_{6}}\,e^{-r_{6}\,t}+C_{77}\,e^{-r_{7}\,t}\,.\] (B32)
As can be seen above, the solutions for the evolution with time of the abundances of the P-bearing molecules follow a common functional structure (consisting of a sum of exponentials). For this reason, they can be expressed in a more general way as
\[[X_{i}](t)=\left[\sum_{j=1}^{i-1}\frac{C_{ij}}{r_{i}-r_{j}}\,e^{-r_{j}\,t} \right]+C_{ii}\,e^{-r_{i}\,t}\,,\] (B33)
where \([X_{i}](t)\) are the abundances of species \(i\) (index according to Table 5). Constants \(C_{ij}\) and \(r_{i}\) depend on the initial conditions of the system (i.e. the initial abundances \([X_{i}](0)\)) and the system parameters (i.e. the reaction rate coefficients \(k_{i}\)) as follows:
\[r_{1} = k_{13}n_{\text{H}}[\text{H}]_{0}\] \[r_{2} = k_{3}n_{\text{H}}[\text{O}]_{0}+k_{12}n_{\text{H}}[\text{H}]_{0}\] \[r_{3} = k_{4}n_{\text{H}}[\text{O}]_{0}+k_{7}n_{\text{H}}[\text{N}]_{0}+k_ {10}n_{\text{H}}[\text{H}]_{0}+k_{14}n_{\text{H}}[\text{C}]_{0}\] \[r_{4} = k_{8}n_{\text{H}}[\text{N}]_{0}+k_{11}n_{\text{H}}[\text{O}]_{0}\] \[r_{5} = k_{5}n_{\text{H}}[\text{O}_{2}]_{0}+k_{6}n_{\text{H}}[\text{OH}]_{ 0}+k_{9}n_{\text{H}}[\text{CN}]_{0}\] \[r_{6} = k_{1}n_{\text{H}}[\text{N}]_{0}+k_{2}n_{\text{H}}[\text{N}]_{0}\] \[r_{7} = 0\] \[C_{11} = [\text{PH}_{3}]_{0}\] \[C_{21} = k_{13}n_{\text{H}}[\text{H}]_{0}C_{11}\] \[C_{22} = [\text{PH}_{2}]_{0}-\frac{C_{21}}{r_{2}-r_{1}}\] \[C_{31} = k_{12}n_{\text{H}}[\text{H}]_{0}\frac{C_{21}}{r_{2}-r_{1}}\]
\[C_{32} = k_{12}n_{\rm H}[{\rm H}]_{0}C_{22}\]
\[C_{33} = [{\rm PH}]_{0}-\frac{C_{31}}{r_{3}-r_{1}}-\frac{C_{32}}{r_{3}-r_{2}}\]
\[C_{41} = \frac{k_{14}n_{\rm H}[{\rm C}]_{0}C_{31}}{r_{3}-r_{1}}\]
\[C_{42} = \frac{k_{14}n_{\rm H}[{\rm C}]_{0}C_{32}}{r_{3}-r_{2}}\]
\[C_{43} = k_{14}n_{\rm H}[{\rm C}]_{0}C_{33}\]
\[C_{44} = [{\rm CP}]_{0}-\frac{C_{41}}{r_{4}-r_{1}}-\frac{C_{42}}{r_{4}-r_{2}}-\frac{C_{43}}{r_{4}-r_{3}}\]
\[C_{51} = k_{10}n_{\rm H}[{\rm H}]_{0}\frac{C_{31}}{r_{3}-r_{1}}+k_{11}n_{\rm H}[{\rm O}]_{0}\frac{C_{41}}{r_{4}-r_{1}}\]
\[C_{52} = k_{10}n_{\rm H}[{\rm H}]_{0}\frac{C_{32}}{r_{3}-r_{2}}+k_{11}n_{\rm H}[{\rm O}]_{0}\frac{C_{42}}{r_{4}-r_{2}}\]
\[C_{53} = k_{10}n_{\rm H}[{\rm H}]_{0}C_{33}+k_{11}n_{\rm H}[{\rm O}]_{0}\frac{C_{43}}{r_{4}-r_{3}}\]
\[C_{54} = k_{11}n_{\rm H}[{\rm O}]_{0}C_{44}\]
\[C_{55} = [{\rm P}]_{0}-\left(\frac{C_{51}}{r_{5}-r_{1}}+\frac{C_{52}}{r_{5}-r_{2}}+\frac{C_{53}}{r_{5}-r_{3}}+\frac{C_{54}}{r_{5}-r_{4}}\right)\]
\[C_{61} = k_{3}n_{\rm H}[{\rm O}]_{0}\frac{C_{21}}{r_{2}-r_{1}}+k_{4}n_{\rm H}[{\rm O}]_{0}\frac{C_{31}}{r_{3}-r_{1}}+\left(k_{5}[{\rm O}_{2}]_{0}+k_{6}[{\rm OH}]_{0}\right)n_{\rm H}\frac{C_{51}}{r_{5}-r_{1}}\]
\[C_{62} = k_{3}n_{\rm H}[{\rm O}]_{0}C_{22}+k_{4}n_{\rm H}[{\rm O}]_{0}\frac{C_{32}}{r_{3}-r_{2}}+\left(k_{5}[{\rm O}_{2}]_{0}+k_{6}[{\rm OH}]_{0}\right)n_{\rm H}\frac{C_{52}}{r_{5}-r_{2}}\]
\[C_{63} = k_{4}n_{\rm H}[{\rm O}]_{0}C_{33}+\left(k_{5}[{\rm O}_{2}]_{0}+k_{6}[{\rm OH}]_{0}\right)n_{\rm H}\frac{C_{53}}{r_{5}-r_{3}}\]
\[C_{64} = \left(k_{5}[{\rm O}_{2}]_{0}+k_{6}[{\rm OH}]_{0}\right)n_{\rm H}\frac{C_{54}}{r_{5}-r_{4}}\]
\[C_{65} = \left(k_{5}[{\rm O}_{2}]_{0}+k_{6}[{\rm OH}]_{0}\right)n_{\rm H}C_{55}\]
\[C_{66} = [{\rm PO}]_{0}-\left(\frac{C_{61}}{r_{6}-r_{1}}+\frac{C_{62}}{r_{6}-r_{2}}+\frac{C_{63}}{r_{6}-r_{3}}+\frac{C_{64}}{r_{6}-r_{4}}+\frac{C_{65}}{r_{6}-r_{5}}\right)\]
\[C_{71} = k_{2}n_{\rm H}[{\rm N}]_{0}\frac{C_{61}}{r_{6}-r_{1}}+k_{8}n_{\rm H}[{\rm N}]_{0}\frac{C_{41}}{r_{4}-r_{1}}+k_{7}n_{\rm H}[{\rm N}]_{0}\frac{C_{31}}{r_{3}-r_{1}}+k_{9}n_{\rm H}[{\rm CN}]_{0}\frac{C_{51}}{r_{5}-r_{1}}\]
\[C_{72} = k_{2}n_{\rm H}[{\rm N}]_{0}\frac{C_{62}}{r_{6}-r_{2}}+k_{8}n_{\rm H}[{\rm N}]_{0}\frac{C_{42}}{r_{4}-r_{2}}+k_{7}n_{\rm H}[{\rm N}]_{0}\frac{C_{32}}{r_{3}-r_{2}}+k_{9}n_{\rm H}[{\rm CN}]_{0}\frac{C_{52}}{r_{5}-r_{2}}\]
\[C_{73} = k_{2}n_{\rm H}[{\rm N}]_{0}\frac{C_{63}}{r_{6}-r_{3}}+k_{8}n_{\rm H}[{\rm N}]_{0}\frac{C_{43}}{r_{4}-r_{3}}+k_{7}n_{\rm H}[{\rm N}]_{0}C_{33}+k_{9}n_{\rm H}[{\rm CN}]_{0}\frac{C_{53}}{r_{5}-r_{3}}\]
\[C_{74} = k_{2}n_{\rm H}[{\rm N}]_{0}\frac{C_{64}}{r_{6}-r_{4}}+k_{8}n_{\rm H}[{\rm N}]_{0}C_{44}+k_{9}n_{\rm H}[{\rm CN}]_{0}\frac{C_{54}}{r_{5}-r_{4}}\]
\[C_{75} = k_{2}n_{\rm H}[{\rm N}]_{0}\frac{C_{65}}{r_{6}-r_{5}}+k_{9}n_{\rm H}[{\rm CN}]_{0}C_{55}\]
\[C_{76} = k_{2}n_{\rm H}[{\rm N}]_{0}C_{66}\]
\[C_{77} = [{\rm PN}]_{0}-\left(\frac{C_{71}}{r_{7}-r_{1}}+\frac{C_{72}}{r_{7}-r_{2}}+\frac{C_{73}}{r_{7}-r_{3}}+\frac{C_{74}}{r_{7}-r_{4}}+\frac{C_{75}}{r_{7}-r_{5}}+\frac{C_{76}}{r_{7}-r_{6}}\right).\]
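Numerically, the whole cascade above is equivalent to diagonalizing the triangular matrix \(\mathbf{A}\). A sketch, assuming distinct diagonal entries \(-r_{i}\) (the generic case), is

```python
import numpy as np

def closed_form_solution(A, x0, times):
    """Solve dX/dt = A X for the triangular matrix of Appendix B.

    For a lower-triangular A with distinct diagonal entries -r_i, the
    eigendecomposition reproduces exactly the sum-of-exponentials form of
    Equation (B33), at the cost of one small 7x7 factorization.
    """
    lam, V = np.linalg.eig(A)        # lam_i = -r_i for a triangular matrix
    c = np.linalg.solve(V, x0)       # expansion of x0 in the eigenbasis
    times = np.asarray(times, dtype=float)
    # columns of the result are X(t) at the requested times
    return (V @ (c[:, None] * np.exp(lam[:, None] * times[None, :]))).real
```

Evaluating this closed form for millions of \((k_{i},t)\) combinations is what makes the grid scans of Section 3.4 and the Bayesian enumeration of Section 4 tractable.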
## Appendix C Calculation of the error derived from assuming that the abundant species are constant in the theoretical solution of the system
In our theoretical calculation of the abundance evolution of the P-bearing species (see Appendix B), we assumed that the abundances of those species classified as _abundant_ are constant (i.e. \(d[X_{i}]/dt=0\) for C, H, N, O, O\({}_{2}\) and OH).
This assumption leads to a set of linearized equations and an approximate solution that is almost indistinguishable for all times and temperatures from the solution obtained with numerical methods. In this section we provide a theoretical calculation to explain this striking similarity. In particular, we will show that the error behind this approximation is negligible for the case of one single reaction of the form
\[\mathrm{A}+\mathrm{B}\to\mathrm{C}+\mathrm{D}\,,\]
where \(\mathrm{A}\) is a _scarce_ species and \(\mathrm{B}\) is an _abundant_ species. We do so because the reactants of our set of reactions (see Table 1 and Figure 1) are always composed of one scarce species and one abundant species (except for reaction 9 which we treated differently).
For simplicity, the abundances of species \(\mathrm{A}\) and \(\mathrm{B}\) will be called \(A\) and \(B\). According to the law of mass action, their destruction rate is given by the _nonlinear_ system \(S\)
\[\frac{dA}{dt}=\frac{dB}{dt}=-kAB\,,\quad A(0)=A_{0},\quad B(0)=B_{0}\,.\] (C34)
Our aim is to prove that, when \(B_{0}>>A_{0}\), the exact solution of the system \(S\) is very similar to the solution of the _linear_ system \(S^{\prime}\) in which the abundance of species \(\mathrm{B}\) is constant, that is:
\[\frac{dA^{\prime}}{dt}=-kA^{\prime}B_{0},\quad A^{\prime}(0)=A_{0}\,.\] (C35)
The exact solution of the nonlinear system \(S\) is
\[A(t) = \frac{B_{0}-A_{0}}{\frac{B_{0}e^{(B_{0}-A_{0})kt}}{A_{0}}-1}\,,\] (C36) \[B(t) = B_{0}-A_{0}+\frac{B_{0}-A_{0}}{\frac{B_{0}e^{(B_{0}-A_{0})kt}}{ A_{0}}-1}\,,\] (C37)
and the exact solution of the linear solution \(S^{\prime}\) is
\[A^{\prime}(t) = A_{0}e^{-B_{0}kt}\,,\] (C38) \[B^{\prime}(t) = B_{0}\,.\] (C39)
We are interested in the error incurred when we approximate the solution of \(S\) by the solution of \(S^{\prime}\) for the abundances of both species \(\mathrm{A}\) and \(\mathrm{B}\). Thus we focus on the relative errors of \(A(t)\) and \(B(t)\) defined as
\[\left|\frac{\Delta A}{A}\right| = \left|\frac{A(t)-A^{\prime}(t)}{A(t)}\right|\,,\] (C40) \[\left|\frac{\Delta B}{B}\right| = \left|\frac{B(t)-B^{\prime}(t)}{B(t)}\right|\,.\] (C41)
To encode the condition \(B_{0}\gg A_{0}\), we define \(\varepsilon=\frac{A_{0}}{B_{0}}\ll 1\). We need to evaluate the relative errors of \(A(t)\) and \(B(t)\) incurred when we approximate the solution of the nonlinear system \(S\) by the solution of the linear \(S^{\prime}\), in terms of the parameter \(k\) and the initial condition \(B_{0}\), for small \(\varepsilon\). To do so, we calculate the first order approximation of their Taylor series and obtain
\[\left|\frac{\Delta A}{A}\right| = (-1+e^{-B_{0}kt}+B_{0}kt)\varepsilon+o(\varepsilon^{2})\,,\] (C42) \[\left|\frac{\Delta B}{B}\right| = (1-e^{-B_{0}kt})\varepsilon+o(\varepsilon^{2})\,.\] (C43)
We conclude that the solution of the nonlinear system \(S\) is well approximated by the solution of the linear system \(S^{\prime}\) (where we assume that the abundances of the species labeled as abundant are constant), as long as the time is not too large and \(\varepsilon\) is sufficiently small, that is, when the initial condition of the abundant species \(B_{0}\) is sufficiently larger than the initial condition of the scarce species \(A_{0}\).
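The comparison is easy to verify numerically; in the sketch below the values of \(k\), \(A_{0}\) and \(B_{0}\) are arbitrary placeholders chosen only to satisfy \(\varepsilon\ll 1\).

```python
import numpy as np

def exact_A(t, k, A0, B0):
    """Equation (C36): exact solution of the nonlinear system S."""
    d = B0 - A0
    return d / ((B0 / A0) * np.exp(d * k * t) - 1.0)

def linear_A(t, k, A0, B0):
    """Equation (C38): solution of the linearized system S'."""
    return A0 * np.exp(-B0 * k * t)

t = np.logspace(0, 12, 200)              # seconds, up to ~3e4 yrs
k, A0, B0 = 1e-10 * 1e4, 1e-9, 1e-4      # placeholder k*n_H and abundances
rel_err = np.abs((exact_A(t, k, A0, B0) - linear_A(t, k, A0, B0))
                 / exact_A(t, k, A0, B0))
print(rel_err.max())                     # stays small while eps = A0/B0 << 1
```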
We evaluated these errors for our model's data (which comprise all the reaction rate coefficients \(k_{i}\) and the initial abundances of the reactants compiled in Tables 1 and 2, the H number density \(n_{\rm H}\) included as \(k=k_{i}n_{\rm H}\), and \(T\)=10, 100 and 300 K). We found that the relative errors of A and B verify \(|\Delta A/A|<0.003\) and \(|\Delta B/B|<0.01\), even for their maximum values at \(t=10^{5}\) yrs. In summary, the calculations developed here for a chemical reaction of the same type as the reactions studied in this work strongly support the main assumption of our theoretical analysis: supposing that the abundances of the abundant species are constant in the total system does not have a significant impact on the evolution curves of both abundant and scarce species. Finally, note that the second-order or bimolecular reactions that can be treated as first-order reactions (because they can be linearized in the way presented here) are called _pseudo-first-order reactions_ (Chang & Overby, 2017).
## Appendix D Theoretical analysis of the ratio [PO]/[PN] at early times in the interstellar medium
The reason for the prevalence of the abundance of PO over PN in many astrophysical environments is still unclear. Our model yields values of [PO]/[PN] \(<<1\) for all the scenarios analyzed with the parameters present in Table 1 and the initial conditions in Table 2 for \(t=10^{5}\) yrs, but at early times PO is systematically more abundant than PN. Here we demonstrate that the abundance of PO at short times is mainly due to the action of reactions 3 (O+PH\({}_{2}\to\) PO+H\({}_{2}\)) and 4 (O+PH \(\to\) PO+H), while the abundance of PN is mainly a result of reaction 7 (N+PH \(\to\) PN+H).
Considering the ratio [PO]/[PN], and applying L'Hopital's rule in the limit \(t\to 0\) because both abundances are zero in the limit, we obtain
\[\lim_{t\to 0}\frac{[PO]}{[PN]}=\lim_{t\to 0}\frac{\frac{d[PO]}{dt}}{\frac{d[PN]}{dt}}\,.\] (D44)
We substitute Equations (A16) and (A17), which yields
\[\lim_{t\to 0}\frac{[PO]}{[PN]}=\lim_{t\to 0}\frac{-k_{1}n_{\rm H}[{\rm N}][{\rm PO }]-k_{2}n_{\rm H}[{\rm N}][{\rm PO}]+k_{3}n_{\rm H}[{\rm O}][{\rm PH}_{2}]+k _{4}n_{\rm H}[{\rm O}][{\rm PH}]+k_{5}n_{\rm H}[{\rm O}_{2}][{\rm P}]+k_{6}n_ {\rm H}[{\rm OH}][{\rm P}]}{k_{2}n_{\rm H}[{\rm N}][{\rm PO}]+k_{7}n_{\rm H}[{ \rm N}][{\rm PH}]+k_{8}n_{\rm H}[{\rm N}][{\rm CP}]+k_{9}n_{\rm H}[{\rm P}][{ \rm CN}]}\,.\] (D45)
Considering the initial abundances, and noting that \([{\rm PO}]_{0}=[{\rm PN}]_{0}=0\) so that the terms containing [PO] vanish in the limit, the expression becomes
\[\lim_{t\to 0}\frac{[PO]}{[PN]}\approx\frac{k_{3}[{\rm O}]_{0}[{\rm PH}_{2}]_{0} +k_{4}[{\rm O}]_{0}[{\rm PH}]_{0}+k_{5}[{\rm O}_{2}]_{0}[{\rm P}]_{0}+k_{6}[{ \rm OH}]_{0}[{\rm P}]_{0}}{k_{7}[{\rm N}]_{0}[{\rm PH}]_{0}+k_{8}[{\rm N}]_{0}[{ \rm CP}]_{0}+k_{9}[{\rm P}]_{0}[{\rm CN}]_{0}}\,.\] (D46)
Here we recall that the abundances of PH and PH\({}_{2}\) depend on the parameter \(f_{\rm P}\), since we defined it as the fraction of the total P that is in the form of PH, PH\({}_{2}\) and PH\({}_{3}\) (while the rest remains in the form of atomic P). In both the numerator and the denominator the terms with PH or PH\({}_{2}\) are dominant even for a very small fraction of P in the form of PH and PH\({}_{2}\). More precisely, for \(f_{\rm P}\geq 0.01\), these terms are more than 2 orders of magnitude higher than the other terms for all temperatures. Therefore, if we assume \(f_{\rm P}\geq 0.01\) we can neglect the non-dominant terms, obtaining
\[\lim_{t\to 0}\frac{[PO]}{[PN]}\approx\frac{k_{3}[{\rm O}]_{0}[{\rm PH}_{2}]_{0} +k_{4}[{\rm O}]_{0}[{\rm PH}]_{0}}{k_{7}[{\rm N}]_{0}[{\rm PH}]_{0}}\,.\] (D47)
Although [PH]\({}_{0}\) and [PH\({}_{2}\)]\({}_{0}\) depend on the P-hydrogenation fraction \(f_{\rm P}\), note that [PH]\({}_{0}=[{\rm PH}_{2}]_{0}\) (see Table 2), and therefore
\[\lim_{t\to 0}\frac{[PO]}{[PN]}\approx\frac{(k_{3}+k_{4})[{\rm O}]_{0}}{k_{7}[{ \rm N}]_{0}}\,.\] (D48)
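The dominant-term approximation is easy to check numerically. In the sketch below, the rate coefficients and initial abundances are placeholders standing in for the Table 1 and Table 2 values (not reproduced here); substituting the real numbers recovers the [PO]/[PN] \(\approx\) 7-12 range quoted in the main text.

```python
import numpy as np

# Placeholder rate coefficients (cm^3 s^-1) and initial abundances
k = dict(k3=4e-11, k4=1e-10, k5=1e-12, k6=2e-10, k7=8e-11, k8=1e-10, k9=1e-10)
x0 = dict(O=3e-4, N=6e-5, O2=1e-7, OH=1e-7, P=1e-9, PH=4e-10, PH2=4e-10,
          CP=0.0, CN=1e-8)

# Full early-time ratio, Equation (D46)
num = (k['k3'] * x0['O'] * x0['PH2'] + k['k4'] * x0['O'] * x0['PH']
       + k['k5'] * x0['O2'] * x0['P'] + k['k6'] * x0['OH'] * x0['P'])
den = (k['k7'] * x0['N'] * x0['PH'] + k['k8'] * x0['N'] * x0['CP']
       + k['k9'] * x0['P'] * x0['CN'])
full = num / den

# Dominant-term approximation, Equation (D48), valid for f_P >= 0.01
approx = (k['k3'] + k['k4']) * x0['O'] / (k['k7'] * x0['N'])
print(f"D46: {full:.2f}   D48: {approx:.2f}")   # the two values nearly coincide
```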
## Appendix E Analysis of the chemical evolution of the system for unequal initial abundances of PH, PH\({}_{2}\) and PH\({}_{3}\)
Throughout the simulations we have used the same initial abundance for PH, PH\({}_{2}\), and PH\({}_{3}\) for simplicity (equal to \((f_{\rm P}/3)\times 2.57\times 10^{-9}\), see Table 2). Here, we study how the chemical evolution of the system changes if these initial quantities are unbalanced. Two different distributions of initial abundances are used: (i) only one PH\({}_{x}\) has a non-zero abundance, and (ii) two different PH\({}_{x}\) species have the same initial abundance and the other is absent. Note that case (i) is the most biased possible distribution of initial abundances, and therefore the difference obtained between calculations done for this distribution and the one used in the rest of the paper (i.e. the relative error shown in Table 6) should be an upper bound of any other potential distribution.
PH, PH\({}_{2}\), and PH\({}_{3}\) are related through the chemical route PH\({}_{3}\)\(\rightarrow\) PH\({}_{2}\)\(\rightarrow\) PH \(\rightarrow\) P, composed of the very endothermic reaction 13 (H+PH\({}_{3}\)\(\rightarrow\) PH\({}_{2}\)+H\({}_{2}\)), reaction 12 (H+PH\({}_{2}\)\(\rightarrow\) PH+H\({}_{2}\)) and reaction 10 (H+PH \(\rightarrow\) P+H\({}_{2}\)). In consequence, as we can see in Table 6, for low temperatures the system moderately depends on the inequality of the initial abundances of PH, PH\({}_{2}\), and PH\({}_{3}\). On the contrary, for medium and large temperatures the chemical route PH\({}_{3}\)\(\rightarrow\) PH\({}_{2}\)\(\rightarrow\) PH \(\rightarrow\) P is so efficient that for sufficiently large times the system is in practice independent of the imbalance in the initial abundances of PH, PH\({}_{2}\), and PH\({}_{3}\).
## Appendix F Bayesian inference for the reaction rate coefficients with a log-uniform prior distribution
In Section 4 we used log-normal prior distributions of the reaction rate coefficients \(k_{i}\), according to KIDA guidelines. In order to assess the influence of the prior distribution choice on each \(k_{i}\), we calculate here the posterior probability distributions obtained from a log-uniform prior, that is, a prior devoid of information. The rest of the parameters are as in Section 4 and Figure 5. Figure 6 confirms that observational data do not provide relevant information about \(k_{4}\), but shows more clearly than in Figure 5 that \(k_{1}\) has an upper bound (\(k_{1}<3.9\times 10^{-11}\)cm\({}^{3}\) s\({}^{-1}\)) and \(k_{10}\) a lower bound (\(k_{10}>3.0\times 10^{-13}\)cm\({}^{3}\) s\({}^{-1}\)).
Figure 6: Bayesian inference applied to the most important reaction rate coefficients of the model, when log-uniform prior probability distributions are used. (a-e) Prior probability distributions (thin black lines) and posterior probability distributions (PPDs, wide blue lines) obtained with Bayesian inference of the 5 most relevant reaction rate coefficients of our model for \(T\)=100 K and \(t=10^{4}\) yrs, according to observations of star-forming regions from Table 3. The values of the reaction rate coefficients provided by KIDA and summarized in Table 1 are plotted (dashed red lines). P-hydrogenation fraction is \(f_{\rm P}\)=0.5 in all calculations. |
2309.06527 | Machine Translation Models Stand Strong in the Face of Adversarial
Attacks | Adversarial attacks expose vulnerabilities of deep learning models by
introducing minor perturbations to the input, which lead to substantial
alterations in the output. Our research focuses on the impact of such
adversarial attacks on sequence-to-sequence (seq2seq) models, specifically
machine translation models. We introduce algorithms that incorporate basic text
perturbation heuristics and more advanced strategies, such as the
gradient-based attack, which utilizes a differentiable approximation of the
inherently non-differentiable translation metric. Through our investigation, we
provide evidence that machine translation models display robustness against the
best-performing known adversarial attacks, as the degree of
perturbation in the output is directly proportional to the perturbation in the
input. However, among underdogs, our attacks outperform alternatives, providing
the best relative performance. Another strong candidate is an attack based on
mixing of individual characters. | Pavel Burnyshev, Elizaveta Kostenok, Alexey Zaytsev | 2023-09-10T11:22:59Z | http://arxiv.org/abs/2309.06527v1 | # Machine Translation Models Stand Strong in the Face of Adversarial Attacks
###### Abstract
Adversarial attacks expose vulnerabilities of deep learning models by introducing minor perturbations to the input, which lead to substantial alterations in the output. Our research focuses on the impact of such adversarial attacks on sequence-to-sequence (seq2seq) models, specifically machine translation models. We introduce algorithms that incorporate basic text perturbation heuristics and more advanced strategies, such as the gradient-based attack, which utilizes a differentiable approximation of the inherently non-differentiable translation metric. Through our investigation, we provide evidence that machine translation models display robustness against the best-performing known adversarial attacks, as the degree of perturbation in the output is directly proportional to the perturbation in the input. However, among underdogs, our attacks outperform alternatives, providing the best relative performance. Another strong candidate is an attack based on mixing of individual characters.
Keywords: Adversarial attack · Robustness · Neural machine translation
## 1 Introduction
Modern neural machine translation models generate high-quality translations, and they are widely used in real-world applications as part of automatic translation systems. For this reason, the robustness and reliability of such models become crucial factors.
Adversarial attacks, as detailed in [3, 21], encompass a broad range of techniques aimed at exposing and probing the vulnerabilities of these models. These attacks introduce slight perturbations to the input data, which can, in turn, lead to significant misinterpretations or errors in the output. The aim is to understand the model's weak points and stability under these "attacks".
The core concept of an adversarial attack does not depend on the nature of the data: an attacker tries to significantly change the model output by modifying the input object. Nevertheless, constructing adversaries for NLP models is complicated due to the discrete structure of text data [1, 16, 24]. As we cannot directly use derivatives of the loss function, we compute differentiable approximations of metrics [26] and derivatives of the adversarial loss with respect
to non-discrete token embeddings. We can use this idea to generate adversarial examples from the embedding space [9]. The work [8] goes further in this direction by proposing the use of a generative model to construct the adversarial attack.
However, one can spot a common point in a significant part of these articles: they mostly pay attention to models whose output consists of a single number. Nowadays, many use cases for natural language processing models focus on sequence2sequence problems, where both the input and output of a model are sequences. One particular example of such a problem is classic machine translation. The input, in this case, is a sequence in one language, and the output is a sequence in another language. Our research can help not only investigate the vulnerabilities of these models to adversarial perturbations but also provide new insights into the possibility of detecting anomalies and estimating uncertainty for these models.
Our main contributions to adversarial attacks on machine translation models are:
* We propose new techniques to construct adversaries for the machine translation task. The first algorithm replaces input tokens based on the gradient of the target function with respect to the model's embeddings. Another approach exploits approximations of non-differentiable metrics.
* We conduct a fair comparison of different attacks based on a diverse set of metrics for a machine translation problem.
* Our experiments demonstrate that modern machine translation models are only slightly vulnerable to adversarial inputs. They do degrade on adversarial examples carefully crafted via a range of techniques, but the effect is far less pronounced than the drastic performance drops observed for computer vision and NLP classification models [12].
* The biggest vulnerability comes from attacks that work at the character level, suggesting that in this case the adversarial examples fall outside the domain of the data used for training.
## 2 Related work
Various types of adversarial attacks on machine translation models have revealed their sensitivity to disrupted inputs [2, 20]. The first family of attack strategies finds the most loss-increasing perturbations of the source sentence using a gradient in the embedding space. The HotFlip attack [7] vectorizes simple char-level operations such as replacement, deletion, and insertion and uses directional derivatives to select the change of the input sample. The targeted attack [15] uses gradient projections in the latent space to make perturbations. It preserves the similarity between the initial and adversarial translations by inserting a target keyword into the adversarial output. The AdvGen algorithm [4] works at the word level and crafts adversarial examples based on the similarity between the loss gradient and the distance between the initial word and adversarial candidates.
The second group of attacks exploits differentiable estimations of standard NLP metrics to control text perturbations. The authors of [26] propose such an approximation for BLEU, and the authors of [10] use a deep learning model to estimate the Levenshtein distance between sentences. Relying on metrics makes the selection of perturbations in the discrete space more natural.
The third type of attack can successfully fool a machine translation model by imitating typos or letter omission. The authors of [1] add synthetic noise to attacked sentences, including replacement of letters and changes to their order. In addition to swapped characters, distorted inputs can contain emojis and profanity [18].
Several approaches can produce high-quality adversarial examples but require more complicated training and generation processes. The GAN-based framework of [24] operates at the sentence level, and its training process is adapted to the discrete data structure. The authors of [27] propose a reinforcement learning paradigm to generate meaning-preserving examples.
There are several ways to evaluate adversarial attacks on NLP data. The Attack Success Rate measures the proportion of successful attacks, where an attack is deemed successful if it at least halves the BLEU score of the adversarial translation compared to the initial translation [5]. The authors of [12] propose an evaluation framework for attacks on seq2seq models that focuses on the semantic equivalence of the pre- and post-perturbation input.
In this study, we compare the principal attack types: gradient-based, synthetic, and metric-approximation attacks. Our modifications to existing methods both preserve the semantics and grammatical correctness of the adversaries and alter the attacked translation.
## 3 Methods
### General description of a Machine Translation Model
The backbone of the majority of modern research and production translation models is the Transformer model [19]. It consists of encoder and decoder parts, each of which sequentially applies a multi-head attention mechanism that forces the latent representations of tokens to interact with each other. The encoder of the model maps the input sentence \(X=\{x_{1},x_{2}\ldots x_{n}\}\) into a latent representation \(Z=\{z_{1},z_{2}\ldots z_{k}\}\). The decoder likewise translates it into an output embedding representation. The decoder output goes into the classification head, which chooses the next token \(y_{j}\); the process repeats until the model generates a special end token. The choice of the next output token \(y_{j}\) depends on the input text \(X\), the hidden representations \(Z\), and the already generated text \(y_{<j}\):
\[p_{\theta}(Y|X)=\prod_{j=1}^{m}p_{\theta}(y_{j}|y_{<j},X,Z),\]
where \(\theta\) are the model parameters and \(Y=\{y_{1},\ldots,y_{m}\}\) is the output sequence. The loss function can be defined as \(J(\theta,X,Y)=\frac{1}{m}\sum_{j=1}^{m}-\log p_{\theta}(y_{j}|y_{<j},X,Z)\).
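For illustration, a minimal PyTorch sketch of this loss for one sentence pair is given below; it is our own illustration rather than released code, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def translation_loss(logits: torch.Tensor, target_ids: torch.Tensor) -> torch.Tensor:
    """Average negative log-likelihood of the reference tokens.

    logits: (m, |V|) decoder outputs under teacher forcing; target_ids: (m,).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # log p(y_j | y_<j, X, Z) for each reference token y_j
    token_log_probs = log_probs.gather(1, target_ids.unsqueeze(1)).squeeze(1)
    return -token_log_probs.mean()
```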
### Gradient Machine Translation attack
The proposed gradient attack algorithm has white-box access to the model's parameters \(\theta\), the adversarial loss \(\mathcal{L}_{adv}\) we want to minimize, and an input sequence of tokens \(X\) that corresponds to a text. We suppose that for the set of tokens we have a dictionary of embeddings \(\mathcal{V}\). The model works at the token level, and the number of tokens in the alphabet is \(|\mathcal{V}|\).
The core idea of the attack is inspired by the HotFlip attack [7]: we iteratively replace input tokens according to the gradient of the adversarial loss, calculated with respect to the model's input embeddings \(\mathbf{e}\). The new token's embedding should minimize the first-order Taylor approximation of the adversarial loss:
\[\operatorname*{arg\,min}_{\mathbf{e}_{i}^{\prime}\in\mathcal{V}}\left[ \mathbf{e}_{i}^{\prime}-\mathbf{e}_{i}\right]^{\top}\nabla_{\mathbf{e}_{i}} \mathcal{L}_{adv}.\]
Here, \(\nabla_{\mathbf{e}_{i}}\mathcal{L}_{adv}\) denotes the gradient with respect to the embedding of the token at position \(i\). The subtracted term does not depend on the substitute embedding, so the optimization problem reduces to
\[\operatorname*{arg\,min}_{\mathbf{e}_{i}^{\prime}\in\mathcal{V}}\left[ \mathbf{e}_{i}^{\prime}\right]^{\top}\nabla_{\mathbf{e}_{i}}\mathcal{L}_{adv}.\]
To select the replacement token, we try all possible positions \(i\) and compare the respective differences in loss.
The overall approach is illustrated in Figure 1.
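The sketch below illustrates one replacement step in PyTorch. It is a minimal illustration of the procedure above, where `adv_loss_fn` (mapping input embeddings to the adversarial loss), the embedding matrix `E`, and `banned_positions` are assumed to be provided.

```python
import torch

def gradient_replacement_step(adv_loss_fn, emb, E, banned_positions=()):
    """Pick the (position, token) pair minimizing the first-order loss estimate."""
    emb = emb.detach().requires_grad_(True)   # (seq_len, dim) input embeddings
    adv_loss_fn(emb).backward()
    grad = emb.grad                           # gradient of L_adv w.r.t. each e_i

    # First-order score of substituting token i by e': [e']^T grad_i
    # (the constant term -e_i^T grad_i is dropped, as in the text)
    scores = E @ grad.T                       # (|V|, seq_len)
    best_scores, best_tokens = scores.min(dim=0)
    for p in banned_positions:                # e.g., positions already replaced
        best_scores[p] = float("inf")
    pos = int(best_scores.argmin())           # position with the largest predicted drop
    return pos, int(best_tokens[pos])
```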
It is essential to preserve the semantics and grammar of the initial text; otherwise, attack discriminators [25] would always detect the attack. Following [6], we therefore use several constraints in our experiments (a minimal candidate-filtering sketch is given after the list). They aim to preserve the initial meaning of the sentence and prevent the attacker from turning it into a meaningless string of characters:
Figure 1: Gradient Attack on Machine Translation models
1. The cosine distance between the new and the replaced embedding must not be smaller than a threshold.
2. The attacker can replace each token position only once.
3. The vocabulary can be split into two parts: tokens that always occur at the beginning of a word, and tokens that occur at the second and later positions. We discourage the algorithm from replacing tokens from one part with tokens from the other.
4. We disallow replacement of tokens denoting punctuation, the first and last tokens of the sentence, and stop words.
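The following sketch shows how constraints 1-4 can be checked for a candidate replacement; the token-set names (`word_start_ids`, `protected_ids`) and the default threshold are illustrative assumptions, not values from our experiments.

```python
import torch.nn.functional as F

def replacement_allowed(pos, cand, orig_ids, E, used_positions,
                        word_start_ids, protected_ids, min_cos_dist=0.3):
    """Return True if token `cand` may replace the token at position `pos`."""
    if pos in used_positions:                                            # constraint 2
        return False
    if pos in (0, len(orig_ids) - 1) or orig_ids[pos] in protected_ids:  # constraint 4
        return False
    # Constraint 3: word-initial tokens may only be replaced by word-initial tokens
    if (orig_ids[pos] in word_start_ids) != (cand in word_start_ids):
        return False
    # Constraint 1: cosine distance between embeddings must reach the threshold
    cos_sim = F.cosine_similarity(E[orig_ids[pos]], E[cand], dim=0)
    return (1.0 - cos_sim) >= min_cos_dist
```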
### BLEUER attack
The gradient attack described above does not provide any guarantee on the change of the primary translation metrics (BLEU, METEOR, etc.). Instead of optimizing an adversarial loss that does not directly depend on a text metric, we can incorporate a differentiable approximation of the metric itself. An illustration of the approach for the BLEU score is presented in Figure 2.
Before applying the attack, an adversary needs to train extra layers to predict BLEU. We translate a subset of the text corpus using the initial translation model and compute the BLEU scores for the resulting pairs of sequences. Those scores are used as targets when training additional layers on top of the encoder part of the model. During training, we minimize the MSE loss between the predicted and the actual BLEU score:
\[J=\text{MSE}(f(z),\text{BLEU}(Y_{orig},Y_{trans})),\]
where \(z\) is the encoder output, \(f\) denotes the additional layers, \(Y_{orig}\) is the expected translation from the data corpus, and \(Y_{trans}\) is the model's translation. The attack algorithm executes the following steps (sketched in code after the list):
1. Get encoder outputs \(z\);
Figure 2: BLEUER Attack, based on predicting initial BLEU score
2. Calculate the prediction of the BLEU score \(f(z)\) using the differentiable layers and the loss value \(J\);
3. Calculate gradients of the loss and update the encoder outputs \(z\) so that the approximate BLEU score decreases: \[z:=z+\varepsilon\cdot\nabla_{z}\,\text{MSE}(f(z),1).\] The updated encoder output is fed to the decoder part of the model, which generates the adversarial translation.
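The three steps can be summarized in the following sketch (a minimal illustration; `encoder`, `decoder_generate`, the trained head `f`, the step size `eps`, and `n_steps` are assumed to be provided):

```python
import torch
import torch.nn.functional as F

def bleuer_attack(encoder, decoder_generate, f, input_ids, eps=0.1, n_steps=10):
    z = encoder(input_ids).detach()              # step 1: get encoder outputs z
    for _ in range(n_steps):
        z.requires_grad_(True)
        pred_bleu = f(z)                         # step 2: differentiable BLEU estimate
        loss = F.mse_loss(pred_bleu, torch.ones_like(pred_bleu))
        (grad,) = torch.autograd.grad(loss, z)   # gradient of MSE(f(z), 1) w.r.t. z
        z = (z + eps * grad).detach()            # step 3: push predicted BLEU down
    return decoder_generate(z)                   # decode the updated encoder output
```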
### MBART attack
The proposed gradient attack and BLEUER attack can be combined into an attack which we call the MBART attack. Guided by the gradients obtained after predicting the BLEU score from the encoder outputs, we iteratively replace input tokens. After several iterations we obtain an adversarial input \(X_{adv}\) and use the model to produce the adversarial translation \(Y_{adv}\).
### Synthetic attacks
We also propose an extremely naive synthetic attack as an alternative way to attack machine translation. The synthetic attack [1] simulates errors that can occur during everyday usage of translation systems: keyboard typos and accidental omission, addition, or swapping of characters. Additionally, we tried some less common sentence perturbations, such as randomly swapping a subset of words or randomly swapping a subset of characters within one word. As the main hyperparameter of such an attack, we use the portion of perturbed words or characters in the sentence.
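A toy version of such a char-level perturbation is sketched below (the operation mix and alphabet are arbitrary illustrative choices):

```python
import random

def synthetic_attack(sentence, p=0.2, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Apply a typo-style perturbation to roughly a fraction p of the words."""
    words = sentence.split()
    for i, w in enumerate(words):
        if len(w) < 2 or random.random() > p:
            continue
        j = random.randrange(len(w) - 1)
        op = random.choice(["swap", "drop", "insert"])
        if op == "swap":                      # swap two adjacent characters
            w = w[:j] + w[j + 1] + w[j] + w[j + 2:]
        elif op == "drop":                    # accidental character omission
            w = w[:j] + w[j + 1:]
        else:                                 # accidental character addition
            w = w[:j] + random.choice(alphabet) + w[j:]
        words[i] = w
    return " ".join(words)
```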
## 4 Experiments
We perform our experiments with the Marian and MBART Transformer models. For these models, we compare the three approaches described above (gradient, BLEUER, and synthetic attacks), since each represents a principal type of attack method. We also compare existing and novel attack algorithms. We pay special attention to balancing the trade-off between preserving the original sentence and altering the attacked translation: the attack should not be easily recognized by adversarial detectors, so the text should retain its logical and semantic coherence and grammatical structure. We use a wide range of automatic linguistic metrics to evaluate the attack approaches from this point of view. The code of our experiments will be made available in a public online repository upon acceptance.
### Metrics
Machine translation attacks aim to decrease the quality of translation metrics. We used six metrics in our experiments. **BLEU** is the best-known metric for evaluating the similarity of two sentences and correlates highly with the human notion of text similarity. **chrF** is calculated as an F-score over character \(n\)-grams [13]. **METEOR** is another \(n\)-gram metric, calculated as an F-score over unigrams. **WER** counts the number of basic text operations (adding, deleting, and swapping characters) needed to transform one text into another. **Paraphrase similarity** is built upon a pre-trained Sentence-Transformer model [14], which maps texts into 768-dimensional vectors; the cosine similarity between such vectors correlates well with human judgments of text similarity. **BertScore** [22] leverages vectors obtained from pre-trained models and has been found to correspond with human judgment of sentence-level meaning; it calculates precision, recall, and F1 measures over the tokens of the assessed sentences.
### Baselines
In addition to the proposed methods, we evaluate both naive and state-of-the-art approaches.
In particular, we consider our variant of the **Gradient** attack itself and its modification, **Gradient attack and ML constraint**. For the latter attack, we utilize constraints on how much the initial sentence can be changed. These constraints, described in the methods section above, aim at keeping the meaning and structure of the attacked sentence similar to the initial one. The attack uses Marian [11] Transformers pre-trained on English-Russian corpora.
We consider two types of attacks that use an approximation of the target metric during the attack: the **BLEUER** and **MBART** attacks. For these attacks, we train an additional head that takes the encoder outputs as input and predicts BLEU or BertScore, respectively. As these heads are differentiable, we incorporate their scores into the loss function to maximize the difference between the initial translation \(Y\) and the translation of the attacked sentence \(Y_{\text{attacked}}\), while minimizing the difference between the initial sentence \(X\) and its adversarial perturbation \(X_{\text{attacked}}\). For training BLEUER and MBART, we use the validation data of the **wmt-14** dataset.
To make sure that all main types of attacks are considered, we also evaluate methods from the literature. The **Prefix attack** inserts tokens at the beginning of the sentence, selecting tokens that act as a prompt. **SWLS** is the attack from [23]; it leverages a bidirectional translation model and looks for perturbations that maximize the difference between the adversarial sequence \(X_{\text{attacked}}\) and its back-translation.
The last two methods attack at the level of individual characters. **Char swap** randomly swaps characters to make the attack stronger. **Char + grad swap** is a version of our gradient attack at the character level.
### Attack examples
First, we visually examined the results of the conducted attacks by comparing examples of adversarially perturbed sentences. While the results are sometimes imperfect, in general we observe the desired effect. Examples of such sentences are provided in Table 1.
### Experiment Setup
For the gradient attack (Sec. 3.2) and the BLEUER attack (Sec. 3.3) we used Marian [11] Transformers pre-trained on English-Russian text corpora. For the MBART attack (Sec. 3.4) we used MBart-50 [17]. For BLEUER, we additionally trained layers approximating the actual BLEU metric, using the validation data of the **wmt-14** dataset.
### Main results
There is an important factor to consider when evaluating machine translation adversarial attacks: perturbations should preserve the lexicon and grammatical structure of the initial sentence. The authors of [12] proposed a new notion, _meaning-preserving_ perturbations, which underlines the importance of the correct
\begin{table}
\begin{tabular}{l l l} \hline
Attack & Sentence type & Sentence \\ \hline
Gradient & Orig. sentence & Cars get many more miles to the gallon. \\
 & Attacked sentence & Cars get many more miles to the common. \\
 & Orig. translation & Automobil preezikaviot bollyie mily na dollin galloin. \\
 & Translation & Manipy preekali gez miogo milly do galloin. \\
 & Attacked translation & Manipy preekali gezadao bollyie mily do rommona. \\ \hline
BLEUER & Orig. sentence & Cars get many more miles to the gallon. \\
 & Attacked sentence & Cars get many more miles to the gall on. \\
 & Orig. translation & Automobil preezikaviot bollyie mily na dollin galloin. \\
 & Translation & Automobili preezikaviot gorazdo bollyie mily do galloin. \\
 & Attacked translation & Automobili pollyie mily do galloin. \\ \hline
Synthetic & Orig. sentence & Cars get many more miles to the gallon. \\
 & Attacked sentence & arCs get myna embryo iselm to het glinoa. \\
 & Orig. translation & Automobili preezikaviot bollyie mily na dollin galloin. \\
 & Translation & Manipy preekali gez miogo milly do galloin. \\
 & Attacked translation & arCs eggt myna embryo iselm to het glinoa. \\ \hline
\end{tabular}
\end{table}
Table 1: Attack samples for the Machine Translation task for different types of attacks
assessment of an attack. We take care to balance the perturbation of initial sentences against that of translations. The key to holding such a balance is computing two similarities: between the source sentence \(X\) and its perturbed version \(X_{\text{attacked}}\), and between the initial translation \(Y\) and the translation of the attacked sentence \(Y_{\text{attacked}}\). If the distortion of the initial sentence roughly coincides with the distortion of the translation, we cannot speak of a successful attack: the model honestly processes a distorted sentence. An ideal attack would only slightly change the input similarity metric but significantly decrease the similarity between translations.
For all attacks, we vary the strength of the modifications introduced into the initial sentence by adjusting hyperparameters. For each attack setting, the results form a Pareto frontier, which helps us analyze the attack's impact. Numbers near the dots indicate the hyperparameters of the attack: for the gradient and BLEUER attacks, each dot is labeled with the threshold on the minimum cosine distance between the vectors of original and substitute tokens; for synthetic attacks, each dot is labeled with the maximum number of basic transformations.
Pareto frontiers for the full set of considered attacks are presented in Figure 3. In general, the considered attacks could not achieve high attack success rates, supporting the evidence that modern translation models are robust thanks to their architectural features, the computational expense of training, and the colossal size of the datasets. The top-performing attack is based on swaps at the character level; both of its modifications show a significant improvement over the others, jointly providing the desired Pareto frontier.
Figure 3: Pareto frontiers of the chrF metric for the considered attack methods. We aim at the lower right corner, with a large change of the translated sentence but a small change of the sentence to translate
Figure 4: Pareto frontiers for the BLEU, chrF, METEOR, WER, paraphrase similarity, and BertScore metrics for different attacks. Better attacks should aim for the lower right corner, with high similarity between the input sequences before (\(x\)) and after (\(x_{\text{attacked}}\)) the attack and low similarity between the translated sequences before (\(y\)) and after (\(y_{\text{attacked}}\))
### Performance with respect to different metrics
We provide Pareto frontiers for six automatic text metrics and three types of attacks: the gradient attack, BLEUER, and the synthetic attack. Experimental results are presented in Figure 4. The most successful attack lands as low and as far to the right as possible, showing minimal distance between the original and adversarial sentences and maximal distance between the original and adversarial translations. It is evident from the plots that most dots correspond to similar distortion of the input and translation sequences. We cannot ignore that the dots of the most straightforward method, the synthetic attack, on average lie lower than those of the more complicated approaches. This is especially noticeable for the chrF metric, due to the character-level nature of that attack: simple character operations break the structure of tokens, heavily damaging deep machine translation models, which usually work at the token level. A numerical summary is given in Table 2. It also supports the evidence that the synthetic attack achieves superior metrics compared to embedding-based approaches that leverage the gradients of a model.
## 5 Acknowledgements
The research was supported by the Russian Science Foundation grant 20-71-10135.
## 6 Conclusion
Adversarial attacks face limitations in the NLP domain. For the machine translation task in particular, both creating adversarial sequences and evaluating attacks become non-trivial. Most of the existing approaches achieve a high attack success rate but still suffer from degraded semantics and loss of lexical and grammatical correctness. In our investigation, we focus on how to make attacks more meaningful and valuable for analyzing the vulnerabilities of translation models. We tried to control translation metrics directly by using differentiable approximations.
The primary outcome of our MT experiments is that we still did not find a method guaranteeing that the translation would be changed more strongly than the
\begin{table}
\begin{tabular}{l l l l} \hline
Metric & BLEUER & Gradient & Synthetic \\ \hline
BLEU \(\uparrow\) & 0.08 & 0.11 & **0.20** \\
chrF \(\uparrow\) & 0.09 & 0.13 & **0.25** \\
METEOR \(\uparrow\) & 0.14 & 0.24 & **0.27** \\
WER \(\downarrow\) & -0.02 & -0.04 & **-0.07** \\
Paraphrase similarity \(\uparrow\) & -0.01 & 0.00 & **0.06** \\
BertScore \(\uparrow\) & **0.00** & -0.01 & -0.01 \\ \hline
\end{tabular}
\end{table}
Table 2: Numerical comparison of the best attack settings based on the differences between the initial similarity metric and the similarity of translations
source sentence. We compared a range of metrics between the initial and corrupted sentences and between the initial and attacked translations. We introduced many additional rules and constraints that force the attack algorithm not to collapse the initial sentence and to fully preserve its semantic meaning, but they did not significantly change the situation.
|
2309.09614 | Gradpaint: Gradient-Guided Inpainting with Diffusion Models | Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved
remarkable results in conditional and unconditional image generation. The
pre-trained models can be adapted without further training to different
downstream tasks, by guiding their iterative denoising process at inference
time to satisfy additional constraints. For the specific task of image
inpainting, the current guiding mechanism relies on copying-and-pasting the
known regions from the input image at each denoising step. However, diffusion
models are strongly conditioned by the initial random noise, and therefore
struggle to harmonize predictions inside the inpainting mask with the real
parts of the input image, often producing results with unnatural artifacts.
Our method, dubbed GradPaint, steers the generation towards a globally
coherent image. At each step in the denoising process, we leverage the model's
"denoised image estimation" by calculating a custom loss measuring its
coherence with the masked input image. Our guiding mechanism uses the gradient
obtained from backpropagating this loss through the diffusion model itself.
GradPaint generalizes well to diffusion models trained on various datasets,
improving upon current state-of-the-art supervised and unsupervised methods. | Asya Grechka, Guillaume Couairon, Matthieu Cord | 2023-09-18T09:36:24Z | http://arxiv.org/abs/2309.09614v1 | # GradPaint: Gradient-Guided Inpainting with Diffusion Models
###### Abstract
Denoising Diffusion Probabilistic Models (DDPMs) have recently achieved remarkable results in conditional and unconditional image generation. The pre-trained models can be adapted without further training to different downstream tasks, by guiding their iterative denoising process at inference time to satisfy additional constraints. For the specific task of image inpainting, the current guiding mechanism relies on copying-and-pasting the known regions from the input image at each denoising step. However, diffusion models are strongly conditioned by the initial random noise, and therefore struggle to harmonize predictions inside the inpainting mask with the real parts of the input image, often producing results with unnatural artifacts.
Our method, dubbed GradPaint, steers the generation towards a globally coherent image. At each step in the denoising process, we leverage the model's "denoised image estimation" by calculating a custom loss measuring its coherence with the masked input image. Our guiding mechanism uses the gradient obtained from backpropagating this loss through the diffusion model itself. GradPaint generalizes well to diffusion models trained on various datasets, improving upon current state-of-the-art supervised and unsupervised methods. Our code will be made available upon publication.
## 1 Introduction
Inpainting consists in generating a missing part of a given image, given a binary mask indicating where the generation should take place. It is a fundamental task in computer vision, with obvious applications to image editing, image restoration, object removal, and so on. Currently, state-of-the-art methods are generally based on Generative Adversarial Networks (GANs) [37, 45] and consist in explicitly training a model to reconstruct an image using self-generated masks. Although these methods often achieve reasonable results on standard metrics, visual results tend to exhibit obvious, unrealistic artifacts. Moreover, training these models comes with the training instabilities inherent to GANs as well as limitations on the diversity of the dataset distribution.
Denoising diffusion probabilistic models (DDPMs) have recently gained massive attention, achieving high-resolution, photo-realistic and diverse image generation [29, 11, 31, 1, 6, 27]. In terms of image generation, these models are on par with or better than GANs, even for constrained datasets like faces [31], and largely surpass them for diverse datasets like ImageNet [27, 31]. Furthermore, recent models trained on large-scale datasets [11, 29, 27, 8, 31] have given rise to high-quality and flexible text-conditioned image generation, allowing users to generate astonishingly imaginative or artistic high-resolution images [35]. It is thus highly enticing to use these pretrained models directly for downstream tasks, rather than re-training a new model from scratch. Here, we focus on the particular downstream task of inpainting.
There has been limited work on using pre-trained diffusion models for this task. The typical approach [23, 26, 27] is to guide the generative model by replacing values of the intermediate noise map with noised pixels of the input image outside the inpainting mask, in the hope that the denoising process inside the inpainting mask will progressively be biased towards image parts that blend naturally with the known surrounding context. However, this strategy often produces unsatisfying results, which we believe is due to the diffusion model being strongly conditioned on the initial noise map [17], and therefore having difficulties harmonizing the generation when the initial random latent map is too mismatched with the input image.
In this paper, we propose a new strategy for guiding pre-trained diffusion models to better perform inpainting tasks. Our method, dubbed GradPaint, optimizes the diffusion process by better harmonizing the generated content inside the inpainting mask. This guides the generation at every single step of the denoising process towards a more harmonized final image. Our method aims to minimize or even eliminate the artifacts and inconsistencies that generally persist in images around the masked regions. We propose a training-free algorithm, which is advantageous because (i) there is no need to train an inpainting-specialized model whenever a new model becomes available, and (ii) training-based methods must choose a mask distribution to train on, to which training-free methods are agnostic. We perform an extensive evaluation on various datasets, including CelebA-HQ [22], FFHQ [15], ImageNet [4], Places2 [47], and COCO [20].
Our main contributions can be summed up as:
* We propose a novel training-free modification of the denoising schedule of diffusion models for the specific task of inpainting. We improve this inpainting mechanism with the explicit goal of harmonizing the generated parts with the surrounding context. Specifically, we use a custom _alignment loss_ and leverage the intrinsic nature of diffusion models, backpropagating through the model to compute a gradient that optimizes our loss.
* We show that our method generalizes well to a variety of datasets and pre-trained models, including latent diffusion models. We show that our method improves upon baseline methods and is even on par with equivalent models trained specifically for the task of inpainting.
## 2 Related Work
### Inpainting
Historically, inpainting was aimed at recovering small corruption errors in images and was addressed with matching or "borrowing" local color and texture around the masked region [28, 41]. Evaluation consisted in calculating a distance metric with respect to the unmasked image. More recently, generative models have become capable of synthesizing realistic and diverse images, allowing the use of much larger masks when inpainting images. Generative models thus have more freedom to "imagine" a wide range of possibilities much different from the reference image, which is satisfactory (and oftentimes desired) so long as the resulting output looks realistic.
In recent years, inpainting has been primarily addressed with training deep encoder-decoder convolutional networks from scratch, often using a GAN[9] loss to encourage plausibility. Most recent work consists in improving the typical convolutional architecture in the encoder and/or decoder to better leverage structural or textural information from the surrounding regions [37, 13, 43, 14, 42, 48, 21, 25, 46]. [18] proposes a progressive inpainting scheme which iteratively fills in the mask by using surrounding information in the deep feature space. [40, 19] propose a framework to locate and leverage semantic information.
In another line of work similar to ours, image completion is performed with the help of existing priors not specifically trained for the task. [39] trains a randomly initialized convolutional network to generate the input image, stopping training before overfitting occurs. [30, 45, 2] utilize powerful pre-trained decoders like StyleGAN2 [16] and only train encoders to map the input image into the latent space of the decoder, which can produce more realistic results if the input image fits the distribution of the pre-trained decoder well.
### Diffusion models
Diffusion models are becoming state-of-the-art methods for generation tasks on many modalities, such as images, videos, speech, and text. Their excellent scaling behavior makes them a model of choice for training on large and diverse data, compared to GANs, which still suffer from mode collapse and training instabilities. They can also be conditioned on various input data: for the specific task of inpainting, the input image and mask can be given as additional input to train a conditional diffusion model specialized on the inpainting task, as done in [33].
However, due to the computational cost of training generative models, it is appealing to find adaptation algorithms for downstream tasks that avoid fine-tuning, especially for inpainting, which bears many similarities with unconditional generation. [31, 27, 36] propose to adapt pre-trained diffusion models to inpainting by injecting a guiding mechanism into the generative process, a strategy which we build upon in this paper. [23] also proposes to take advantage of pre-trained diffusion models with cycles of noising and denoising operations, which we found computationally very expensive. Finally, in a parallel line of work most similar to ours, [3] similarly proposes to guide the generation using the gradient of a "manifold constraint", but they neither use a custom loss nor apply the optimization to the entirety of the intermediate noise maps.
## 3 GradPaint Method
### Background
Denoising diffusion probabilistic models [12] is a class of generative models trained with the following image denoising objective:
\[\mathcal{L}=\mathbb{E}_{\mathbf{x}_{0},t,\epsilon}\|\epsilon-\epsilon_{\theta}( \mathbf{x}_{t},t)\|_{2}^{2}, \tag{1}\]
where \(\epsilon_{\theta}\) is a noise estimator network trained to predict the noise \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) mixed with an input image \(\mathbf{x}_{0}\) in the following way: \(\mathbf{x}_{t}=\sqrt{\alpha_{t}}\mathbf{x}_{0}+\sqrt{1-\alpha_{t}}\epsilon\). This training is performed for different values of the mixing coefficient \(\alpha_{t}\), monotonically decreasing from \(\alpha_{0}=1\) (no noise) to \(\alpha_{T}\simeq 0\) (almost pure noise) for a large integer \(T\).
At inference time, a new sample from the training distribution can be obtained by starting from random Gaussian noise \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), and iteratively refining it with the noise estimator network with the following equations, called _DDPM sampling equations_[12]:
\[\hat{\mathbf{x}}_{0} =\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}-\sqrt{1-\alpha_{t}}\cdot \epsilon_{\theta}(\mathbf{x}_{t},t)\right), \tag{2}\] \[\mathbf{x}_{t-1} =\frac{(\alpha_{t-1}-\alpha_{t})\sqrt{\alpha_{t-1}}}{\alpha_{t-1}(1-\alpha_{t})}\hat{\mathbf{x}}_{0}+\frac{(1-\alpha_{t-1})\sqrt{\alpha_{t}}}{(1-\alpha_{t})\sqrt{\alpha_{t-1}}}\mathbf{x}_{t}+\sigma_{t}\mathbf{z},\]
where \(t\) goes from \(T\) to \(0\), \(\sigma_{t}\) is a variance parameter, and \(z\sim\mathcal{N}(\mathbf{0},\mathbf{\mathrm{I}})\).
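For concreteness, one denoising step of Equation 2 can be sketched as follows. This is a minimal PyTorch illustration, not released code; `eps_model` stands for the pretrained noise estimator and `alphas` is assumed to be a 1-D tensor holding the cumulative \(\alpha_t\) schedule.

```python
import torch

@torch.no_grad()
def ddpm_step(eps_model, x_t, t, alphas, sigma_t):
    """One step x_t -> x_{t-1} of Eq. 2; also returns the estimate x0_hat."""
    a_t, a_prev = alphas[t], alphas[t - 1]
    x0_hat = (x_t - (1 - a_t).sqrt() * eps_model(x_t, t)) / a_t.sqrt()
    c0 = (a_prev - a_t) * a_prev.sqrt() / (a_prev * (1 - a_t))    # weight of x0_hat
    ct = (1 - a_prev) * a_t.sqrt() / ((1 - a_t) * a_prev.sqrt())  # weight of x_t
    x_prev = c0 * x0_hat + ct * x_t + sigma_t * torch.randn_like(x_t)
    return x0_hat, x_prev
```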
This iterative refinement can be "guided" to impose constraints on the generated sample \(\mathbf{x}_{0}\). In the case of inpainting, the aim is for the generated image to exactly match the input image outside a given inpainting region. The variable \(\hat{\mathbf{x}}_{0}\), available at each timestep, represents the model's current estimation of what the denoised image will look like at the end. For instance, [27] applies a maskwise correction on \(\hat{\mathbf{x}}_{0}\) at each timestep:
\[\hat{\mathbf{x}}_{0}^{\prime}=M\odot\hat{\mathbf{x}}_{0}+(1-M)\odot I, \tag{3}\]
where \(I\) is the input image and \(M\) is a binary image mask equal to 1 in the image regions that must be inpainted, 0 otherwise. The update rule for \(\mathbf{x}_{t-1}\) is then adapted to use \(\hat{\mathbf{x}}_{0}^{\prime}\) instead of \(\hat{\mathbf{x}}_{0}\) in Equation 2. This correction progressively biases the diffusion model to exactly match \(I\) outside the inpainting mask \(M\). In the remaining of the paper, we refer to this method as _combine-image_ since it combines the images \(\hat{\mathbf{x}}_{0}\) and \(I\) before interpolating with \(\mathbf{x}_{t}\).
Alternatively, [36, 31, 23] propose to directly correct \(\mathbf{x}_{t-1}\) by replacing regions outside \(M\) with the noised regions of the input image \(I\):
\[\mathbf{x}_{t-1}^{\prime}=M\odot\mathbf{x}_{t-1}+(1-M)\odot(\sqrt{\alpha_{t-1}}I+ \sqrt{1-\alpha_{t-1}}\epsilon), \tag{4}\]
where \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{\mathrm{I}})\) is resampled at each step. This \(\mathbf{x}_{t-1}^{\prime}\) is then used as input for the next denoising step instead of \(\mathbf{x}_{t-1}\). We will refer to this method as _combine-noisy_ since it combines \(\mathbf{x}_{t-1}\) inside the mask with ground truth (noised) pixel values outside the mask.
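The two guiding mechanisms then differ only in where the known pixels are injected, as in the following sketch (an illustration reusing `ddpm_step` from the previous sketch; \(I\) is the input image and \(M\) the inpainting mask, with 1 marking the region to inpaint):

```python
import torch

def combine_image_step(eps_model, x_t, t, alphas, sigma_t, I, M):
    x0_hat, _ = ddpm_step(eps_model, x_t, t, alphas, sigma_t=0.0)
    x0_comb = M * x0_hat + (1 - M) * I                      # Eq. 3
    a_t, a_prev = alphas[t], alphas[t - 1]
    c0 = (a_prev - a_t) * a_prev.sqrt() / (a_prev * (1 - a_t))
    ct = (1 - a_prev) * a_t.sqrt() / ((1 - a_t) * a_prev.sqrt())
    return c0 * x0_comb + ct * x_t + sigma_t * torch.randn_like(x_t)

def combine_noisy_step(eps_model, x_t, t, alphas, sigma_t, I, M):
    _, x_prev = ddpm_step(eps_model, x_t, t, alphas, sigma_t)
    # Eq. 4: overwrite known regions with freshly noised input pixels
    noised_I = alphas[t - 1].sqrt() * I + (1 - alphas[t - 1]).sqrt() * torch.randn_like(I)
    return M * x_prev + (1 - M) * noised_I
```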
### GradPaint framework
Our strategy builds upon the _combine-image_ zero-shot inpainting method presented in §3.1. Our key observation is that the most aesthetically pleasing inpainting results are obtained when the collage \(M\odot\hat{x}_{0}+(1-M)\odot I\) is coherent right from the beginning of the generation process. When this is not the case, there is a mismatch between the model's estimation in the inpainting region and the known regions of the input image \(I\); this mismatch is generally present from the beginning and is not fully corrected during the denoising process.
To enforce harmonization between the inpainted region and the known regions of the input image, we introduce the _GradPaint update_. An overview of our method is presented in Fig. 1. At each denoising step, the variable \(\mathbf{x}_{t}\) is updated so that (i) \(\hat{x}_{0}\) closely matches the known regions of \(I\) outside the mask; and (ii) the collage \(M\odot\hat{x}_{0}+(1-M)\odot I\) does not present any discontinuity due to the copy-paste operation. This update consists in a one-step gradient descent update based on two loss terms corresponding to the two objectives above.
Given a binary mask \(M\in\mathbb{R}^{n\times n}\) and \(\odot\) denoting the element-wise product, we define our losses as follows:
**Masked MSE loss.** The first loss term is a mean squared error term outside the inpainting mask \((1-M)\), taking as reference known regions of the input image:
\[\mathcal{L}_{mse}(I_{1},I_{2},M)=\frac{1}{n^{2}}\|I_{1}\odot(1-M)-I_{2}\odot(1 -M)\|_{2}^{2}. \tag{5}\]
**Alignment loss.** The "alignment loss" \(al(I,M)\) measures the smoothness of image \(I\) on the boundaries of the inpainting mask \(M\). It is defined as follows:
\[al(I,M)=\frac{1}{n^{2}}\|D_{x}I\odot D_{x}(1-M)+D_{y}I\odot D_{y}(1-M)\|_{2}^{2}, \tag{6}\]
where \(D_{x}\) and \(D_{y}\) are the normalized image gradients:
Figure 1: GradPaint method overview. We propose to modify one step of the DDPM denoising process with a gradient descent update on \(x_{t}\) to better match the masked input image, in turn producing a better matched noise map \(x_{t-1}\) for the next step. This improvement in the DDPM noise prediction thus allows for better fitting intermediate noise map predictions \(x_{t}\) earlier in the DDPM denoising process, which ultimately produces a successful final inpainted image \(x_{0}\).
\[\begin{bmatrix}D_{x}I\\ D_{y}I\end{bmatrix}_{(i,j)}=\begin{cases}\frac{\nabla I_{(i,j)}}{\|\nabla I_{(i,j)}\|_{2}},&\text{if }\|\nabla I_{(i,j)}\|_{2}>0\\ \begin{bmatrix}0&0\end{bmatrix}^{T},&\text{otherwise}\end{cases} \tag{7}\]
with \(\nabla I=[\partial_{x}I\ \ \partial_{y}I]^{T}\) the vector of gradients of \(I\) in the \(x\) and \(y\) directions. Minimizing this loss encourages the smoothest possible transition in the image \(I\) along the directions where \(M\) changes values. Since \(al(I,M)\) is defined for a single color channel, we define the total alignment loss \(\mathcal{L}_{al}\) as the average over the three color channels of a regular RGB image.
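Both loss terms can be sketched in PyTorch as follows (a minimal illustration; images and masks are assumed to be (B, C, H, W) tensors with \(M\) broadcastable over channels, and forward differences stand in for the image gradients):

```python
import torch
import torch.nn.functional as F

def masked_mse(x0_hat, I, M):
    """Eq. 5: MSE restricted to the known region (M = 1 inside the mask)."""
    return (((x0_hat - I) * (1 - M)) ** 2).mean()

def normalized_grads(I, eps=1e-8):
    """Eq. 7: per-pixel gradient direction, zero where the gradient vanishes."""
    dx = F.pad(I[..., :, 1:] - I[..., :, :-1], (0, 1))        # horizontal differences
    dy = F.pad(I[..., 1:, :] - I[..., :-1, :], (0, 0, 0, 1))  # vertical differences
    norm = (dx ** 2 + dy ** 2).sqrt()
    scale = torch.where(norm > eps, 1.0 / norm.clamp_min(eps), torch.zeros_like(norm))
    return dx * scale, dy * scale

def alignment_loss(I, M):
    """Eq. 6, averaged over color channels."""
    dx, dy = normalized_grads(I)
    mx, my = normalized_grads(1.0 - M)
    return ((dx * mx + dy * my) ** 2).mean()
```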
**GradPaint Update.** Our total loss is defined as:
\[\mathcal{L}=\mathcal{L}_{mse}+\lambda_{al}\mathcal{L}_{al}, \tag{8}\]
with \(\lambda_{al}\) being a hyperparameter controlling the relative strength of the alignment loss compared to the MSE loss.
At each step in the denoising process, we compute \(\mathbf{x}_{t-1}\) as a function of \(\mathbf{x}_{t}\) as in the _combine-image_ method. In between each step, we update the variable \(\mathbf{x}_{t-1}\) with the normalized gradient of our total loss:
\[\mathbf{x}^{\prime}_{t-1}=\mathbf{x}_{t-1}-\alpha\frac{\nabla_{\mathbf{x}_{t}}\mathcal{L }(x_{0},\hat{x}_{0},M)}{\|\nabla_{\mathbf{x}_{t}}\mathcal{L}(x_{0},\hat{x}_{0},M) \|_{2}}, \tag{9}\]
with \(\alpha\) being a fixed learning rate.
Backpropagating through the diffusion model itself down to the variable \(\mathbf{x}_{t}\) is a crucial element of our method. Since \(\mathbf{x}_{t}\) is updated to produce a better estimation \(\hat{\mathbf{x}}_{0}\) when processed by the diffusion model, this property also transfers to \(\mathbf{x}_{t-1}\), which is, at each step, very close to \(\mathbf{x}_{t}\).
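Putting the pieces together, one guided step with the GradPaint update can be sketched as below. This is a minimal illustration mirroring the _combine-image_ step but keeping the computation graph, so the loss (Eq. 8, e.g., built from `masked_mse` and `alignment_loss` above) can be backpropagated through the diffusion model to \(\mathbf{x}_{t}\); the default learning rate follows §4.

```python
import torch

def gradpaint_step(eps_model, x_t, t, alphas, sigma_t, I, M, loss_fn, lr=0.005):
    x_t = x_t.detach().requires_grad_(True)
    a_t, a_prev = alphas[t], alphas[t - 1]

    # Denoised estimate (Eq. 2), keeping the graph through eps_model
    x0_hat = (x_t - (1 - a_t).sqrt() * eps_model(x_t, t)) / a_t.sqrt()

    # combine-image correction (Eq. 3) and DDPM transition to x_{t-1}
    x0_comb = M * x0_hat + (1 - M) * I
    c0 = (a_prev - a_t) * a_prev.sqrt() / (a_prev * (1 - a_t))
    ct = (1 - a_prev) * a_t.sqrt() / ((1 - a_t) * a_prev.sqrt())
    x_prev = c0 * x0_comb + ct * x_t + sigma_t * torch.randn_like(x_t)

    # GradPaint update (Eq. 9): one normalized gradient-descent step on x_{t-1}
    loss = loss_fn(I, x0_hat, M)
    (grad,) = torch.autograd.grad(loss, x_t)
    return (x_prev - lr * grad / grad.norm()).detach()
```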
### Visualizations
**Harmonization.** The effect of the GradPaint update is illustrated in Fig. 2, which shows the intermediate DDPM predictions for \(\hat{\mathbf{x}}_{0}\) and \(\hat{\mathbf{x}}^{\prime}_{0}\) at various timesteps. We compare GradPaint with the _combine-noisy_ and _combine-image_ methods presented in §3.1, where all three methods share the same DDPM model, parameters, and initial noise maps. These baseline approaches require more steps to integrate the information from the input image, at which point it is often "too late" to construct a harmonized image: misalignment between the generation and the input image can no longer be corrected. In contrast, for GradPaint, the optimization step on \(\mathbf{x}_{t}\) quickly pushes the merged image \(\hat{\mathbf{x}}^{\prime}_{0}\) to harmonize well with the masked input image \(\mathbf{x}_{0}\), producing an inpainting result without alignment artifacts.
**Gradient visualization.** The two components of our loss have different effects on \(\nabla\mathbf{x}_{t}\), as we can see in Fig. 3(a). While the gradient of the masked MSE loss remains active throughout the denoising process, the gradient of the alignment loss becomes negligible about halfway through, thereafter concentrating in only a few local points of \(\mathbf{x}_{t}\). The gradient of the alignment loss has a concentrated effect on the borders of the mask but also affects the entire noise map \(\mathbf{x}_{t}\) globally, while the masked MSE loss has a much stronger effect in the unmasked region. The alignment loss encourages smoother and more gradual transitions in the final generation, as can be seen with the background in Fig. 3(b).
## 4 Evaluation Protocol
### Pre-trained models and implementation details
We detail our setup for image-space diffusion models as well as latent-space diffusion models. We provide a detailed list of the assets used in our work (datasets, code, and models) in Appendix A.
Figure 2: DDPM predictions at different stages (indicated in \(\%\)) of the denoising process. We compare two baselines (a) and (b) with GradPaint (the last two rows). GradPaint better harmonizes regions inside and outside the inpainting mask right from the beginning of the denoising process.
Experiments on image-space diffusion modelsWe primarily use diffusion models from guided diffusion [7], which operate on images of size \(256\times 256\). We use their unconditional models pre-trained on FFHQ, CelebaHQ, and Places2, as well as their class-conditional model trained on ImageNet.
We use a default of 100 DDPM sampling steps; the loss is computed with \(\lambda_{al}=400\) during the first 45 steps of decoding (and disabled afterwards, following our observations in Fig. 3(a)). The gradient update uses a fixed learning rate of \(0.005\).
Extension to latent diffusion modelsWe also experiment with latent diffusion models [32]. We have observed that the latent spaces that we use have much less structure compared to real images, and that our alignment loss, whose role is to enforce smoothness on real images, cannot fulfill this role in latent spaces. Therefore, for all experiments with latent diffusion models, we only experiment with the masked MSE loss, which naturally extends to latent spaces by considering the encoded input image as reference in our MSE loss.
Latent diffusion models also operate on \(256\times 256\) images, but images are edited in a latent space with spatial dimensions of \(64\times 64\). We use pre-trained unconditional latent diffusion models on CelebAHQ and FFHQ. We use the class-conditional latent diffusion model pre-trained on ImageNet. Finally, for text-conditional models, we use Stable Diffusion pre-trained on the public LAION-5B dataset [34].
We use a default number of 100 steps for DDPM sampling; the loss is computed with \(\lambda_{mse}=1\). The gradient is updated with a fixed learning rate of \(0.005\).
DatasetsWe evaluate our algorithm on five datasets: FFHQ, CelebaHQ, ImageNet, Places2 and COCO.
Given an image, the aim is to perform inpainting inside a random mask generated with the mask generator from [37]. We mainly evaluate on the difficult and more realistic _thick_ masks (additional results on _thin_ and _medium_ masks are provided in the appendix). We create 5000 masked images for all experiments.
For both image-space and latent diffusion models, we evaluate the FFHQ pre-trained model on a subset of CelebAHQ images. Inversely, we evaluate the CelebAHQ pre-trained model on a subset of FFHQ images. We evaluate the ImageNet pre-trained models on a subset of the ImageNet validation set and use the class label as conditioning.
For the image-space diffusion model pre-trained on Places2, we use a subset of the Places2 validation set for evaluation. For the Stable Diffusion model pre-trained on LAION-5B, we use a subset of the COCO validation set and use the captions as conditioning text information for the diffusion model.
### Metrics
For a set of images inpainted with a given method, we compute two core metrics that encapsulate the challenges of inpainting: the LPIPS distance [44] between the inpainted image and the (unmasked) input image which measures the extent to which we correctly recover the masked regions, and the FID score [10] which measures the realism of output images. The primary requirement is that inpainted images should look as natural as possible, hence having the smallest possible FID score. For LPIPS distances, an inpainting result closer to the reference image is generally better, although realistic images further away from the reference image can also be satisfactory, especially for large masks.
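In practice, both metrics are available in open-source packages; the sketch below relies on the `lpips` and `clean-fid` packages, which is an assumption on our side (the exact metric implementation is not specified here), and the folder paths are illustrative.

```python
import torch
import lpips                      # pip install lpips
from cleanfid import fid          # pip install clean-fid

lpips_fn = lpips.LPIPS(net="alex")  # perceptual distance network

def mean_lpips(inpainted, reference):
    """Average LPIPS between inpainted results and unmasked inputs.

    Both tensors: (N, 3, H, W), scaled to [-1, 1].
    """
    with torch.no_grad():
        return lpips_fn(inpainted, reference).mean().item()

# FID between a folder of inpainted images and a folder of real images
fid_score = fid.compute_fid("outputs/inpainted", "data/real")
```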
Figure 3: Effect of separate components of our loss on the intermediate predictions of the DDPM model and their corresponding gradients. Noise maps are initialized identically.
### Baselines
We compute the best and worst possible LPIPS and FID scores with two trivial measures: the _COPY_ oracle, which simply copies the (unmasked) input image, giving an LPIPS score of \(0\) and a lower bound on possible FID scores; and the _GREYFILL_ measure, which simply fills the region to be inpainted with uniform grey. We also add a _Latent COPY_ oracle for latent diffusion models, which consists in simply auto-encoding the input image. Without gradient-based optimization, our method is equivalent to the _combine-image_ baseline for image inpainting, which we evaluate in our experiments along with its _combine-noisy_ variant. Apart from these closely related methods, we compare against the following state-of-the-art inpainting methods: LaMa [37], a GAN-based method trained for inpainting; Palette [33], also trained for inpainting but with diffusion models; RePaint [24], another training-free inpainting algorithm that is much more computationally expensive; and finally MCG [3], a parallel line of work to ours which is similarly training-free but uses a different optimization scheme.
## 5 Quantitative Evaluation
Image-space Diffusion ModelsQuantitative results on the FFHQ and CelebA datasets for image-space diffusion models are shown in Tab. 1, where GradPaint is compared against the available competing methods (FFHQ-pretrained checkpoints are not available for Palette and LaMa) as well as the _combine_ baselines. We present results for the _thick_ mask setting, as this is the most interesting setting for practical applications; results for other mask sizes are presented in the appendix. The benefit of our gradient update is visible when comparing to _combine-image_ (same as ours without gradient updates): on FFHQ, the FID score is reduced from 7.30 to 5.65, a significant improvement given that the minimum obtainable FID score is 4.29 on 5000 images (_COPY_ oracle). Results on both datasets show similar gains. When comparing with competing methods on FFHQ, GradPaint obtains the state-of-the-art FID score, outperforming methods specialized in inpainting (Palette, LaMa) as well as the training-free algorithms RePaint and MCG based on the same diffusion model as ours. LaMa obtains slightly better LPIPS scores but requires inpainting-specific training (compared to simply using a pre-trained generative model). Moreover, LaMa, unlike all other methods, has access at train time to the mask distribution that we use for testing.
We validate the different components of our method on the ImageNet [5] dataset with the guided diffusion model, as summarized in Tab. 2. This more difficult dataset was chosen to better analyze our different components and to validate our method on class-conditioned diffusion models, where the generation could be biased by the class. Our full method, and the alignment loss in particular, improves both the reconstruction and the realism of generated images.
Latent Diffusion ModelsResults for latent diffusion models are presented in Tab. 3. The latent space allows for very good image reconstruction (small LPIPS scores), so it is not a real limitation, and GradPaint (latent) is still able to outperform competing methods (FID 5.97 on FFHQ _thick_ masks). Overall, we observe large and consistent gains over the reference inpainting methods on the ImageNet, COCO, and FFHQ datasets, for both FID and LPIPS.
## 6 Qualitative Evaluation

Image-space Diffusion ModelsOur method produces globally and locally harmonized images, without the heavy computation cost of [24] or the inpainting-specific supervised training of [37]. Note that we selected images where the _thick_ masks hide key parts of the input image, to better appreciate the differences between methods.
Fig. 6 shows qualitative results on ImageNet-trained guided diffusion model for different components of our method. We note that the baseline _combine-image_ is biased by the class-conditioning of the model without taking into account the context, like for the "red wolf" class. Adding gradient update as well as the alignment loss produces generations harmonized with the surrounding context.
Latent Diffusion ModelsFig. 7 shows visual examples of our method using Stable Diffusion on COCO. Our method produces realistic and harmonized results compared to the baseline method. We provide further results on ImageNet in the Appendix E.
## 7 Impact of mask distribution
A training-free method is particularly advantageous because it is agnostic to any pre-defined mask distribution, contrary to training-based methods. We illustrate this by comparing our method to [37] on masks outside of their pre-defined training distribution. Specifically, we create masks where each pixel has an 80% chance of being masked, masking considerable portions of the image. As we can see in Fig. 8, [37] produces low-quality results while our method produces realistic images. This is confirmed quantitatively in Tab. 4.
Figure 4: Inpainting results on select images from ImageNet. The _combine-image_ baseline produces unharmonized results and struggles to take the context into account. Our method produces high-quality results at a fraction of the time of RePaint[23].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Dataset & \multicolumn{2}{c|}{ImageNet} & \multicolumn{2}{c|}{COCO} & \multicolumn{2}{c|}{FFHQ} \\ \hline FID & LPIPS & FID & LPIPS & FID & LPIPS \\ \hline
**COPY** (**10.2**) & 22.72 & **0.0** & **7.29** & **0.0** & **4.29** & **0.0** \\ \hline
**Lat. COPY** (**6.0**) & 12.00 & **0.034** & **7.71** & **0.041** & **4.98** & **0.018** \\ \hline
**GREY**FIL & 34.51 & 0.269 & 29.97 & 0.264 & 77.43 & 0.257 \\ \hline _combine-image_ & 17.17 & 0.195 & 11.12 & 0.241 & 8.73 & 0.132 \\ \hline _combine-image_ & 17.37 & 0.207 & 12.68 & 0.257 & 6.832 & 0.127 \\ \hline
**Grad**Paint (ours) & **14.62** & **0.163** & **9.43** & **0.216** & **5.97** & **0.111** \\ \hline \end{tabular}
\end{table}
Table 3: Evaluation of pre-trained latent diffusion models with _thick_ masks. For all values, lower is better. The COPY oracle measures the metrics on the ground-truth images, and the Latent COPY oracle does the same for autoencoded ground-truth images. As we can see, our modification for latent diffusion models yields significant improvements on all datasets.
Figure 5: In-the-wild images for models trained on Places2. Note that _combine-image_, _combine-noisy_ and GradPaint all use the same noise map for initialization. Note that LaMa was specifically trained using similar masks, contrary to our method.
## 8 Conclusion
We have presented GradPaint, a training-free algorithm that guides the generative process of diffusion models to better perform inpainting on real images. GradPaint improves upon baselines by better harmonizing the generated content inside the inpainting mask with the known regions of the input image, via a gradient descent update computed from a dedicated harmonization loss. Extensive qualitative and quantitative experiments demonstrate the superiority of our method, which is able to outperform methods trained specifically for inpainting.
It is important to note that many open-source diffusion models are trained with large amounts of web-scraped data, thus inheriting their biases. Applying our method onto these models could potentially reinforce harmful cultural biases. We believe open-sourcing editing algorithms in a research context contributes to a better understanding of these biases and will aid the community to mitigate them in the future.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline & \multicolumn{2}{c|}{CelebaHQ} & \multicolumn{2}{c|}{Places2} \\ \hline & FID\(\downarrow\) & LPIPS\(\downarrow\) & FID\(\downarrow\) & LPIPS\(\downarrow\) \\ \hline
**COPY (oracle)** & 4.29 & **0.0** & **6.47** & **0.0** \\ \hline GREYFILL & 403.23 & 1.06 & 282.01 & 1.09 \\ \hline LaMa & 74.47 & 0.517 & 52.63 & 0.320 \\ \hline GradPaint & **44.87** & **0.170** & **27.17** & **0.277** \\ \hline \end{tabular}
\end{table}
Table 4: Quantitative results comparing our training-free method to a training-based method (LaMa) on CelebaHQ and Places2, using masks outside of LaMa’s training distribution.
Figure 8: Uncurated results of our method compared to LaMa on out-of-distribution masks. We show images from (a) FFHQ and (b) Places2. LaMa fares poorly on masks outside of the training distribution. Best viewed zoomed and in color.
Figure 6: Qualitative results for selected images of the ImageNet dataset. The baseline _combine-image_ produces images with visible artifacts. Our gradient update using only the masked MSE loss reduces the “copy-paste” effect, while the alignment loss produces better-aligned transitions. |
2309.14123 | Harnessing Supervised Learning for Adaptive Beamforming in Multibeam
Satellite Systems | In today's ever-connected world, the demand for fast and widespread
connectivity is insatiable, making multibeam satellite systems an indispensable
pillar of modern telecommunications infrastructure. However, the evolving
communication landscape necessitates a high degree of adaptability. This
adaptability is particularly crucial for beamforming, as it enables the
adjustment of peak throughput and beamwidth to meet fluctuating traffic demands
by varying the beamwidth, side lobe level (SLL), and effective isotropic
radiated power (EIRP). This paper introduces an innovative approach rooted in
supervised learning to efficiently derive the requisite beamforming matrix,
aligning it with system requirements. Significantly reducing computation time,
this method is uniquely tailored for real-time adaptation, enhancing the
agility and responsiveness of satellite multibeam systems. Exploiting the power
of supervised learning, this research enables multibeam satellites to respond
quickly and intelligently to changing communication needs, ultimately ensuring
uninterrupted and optimized connectivity in a dynamic world. | Flor Ortiz, Juan A. Vasquez-Peralvo, Jorge Querol, Eva Lagunas, Jorge L. Gonzalez Rios, Luis Garces, Victor Monzon-Baeza, Symeon Chatzinotas | 2023-09-25T13:23:22Z | http://arxiv.org/abs/2309.14123v1 | # Harnessing Supervised Learning for Adaptive Beamforming in Multibeam Satellite Systems
###### Abstract
In today's ever-connected world, the demand for fast and widespread connectivity is insatiable, making multibeam satellite systems an indispensable pillar of modern telecommunications infrastructure. However, the evolving communication landscape necessitates a high degree of adaptability. This adaptability is particularly crucial for beamforming, as it enables the adjustment of peak throughput and beamwidth to meet fluctuating traffic demands by varying the beamwidth, side lobe level (SLL), and effective isotropic radiated power (EIRP). This paper introduces an innovative approach rooted in supervised learning to efficiently derive the requisite beamforming matrix, aligning it with system requirements. Significantly reducing computation time, this method is uniquely tailored for real-time adaptation, enhancing the agility and responsiveness of satellite multibeam systems. Exploiting the power of supervised learning, this research enables multibeam satellites to respond quickly and intelligently to changing communication needs, ultimately ensuring uninterrupted and optimized connectivity in a dynamic world.
beamforming, machine learning, multibeam satellite
## I Introduction
In an era defined by the insatiable demand for high-speed, ubiquitous connectivity, multibeam satellite systems have emerged as a crucial cornerstone of modern telecommunications infrastructure [1, 2]. These systems, characterized by their ability to serve multiple users and regions with a single satellite simultaneously, offer the promise of global connectivity, bridging the digital divide, and supporting a myriad of applications ranging from broadband internet access to disaster response and remote sensing [3].
Beamforming, in essence, is the dynamic force behind the uninterrupted flow of data that connects our digital lives. Central to the performance and efficiency of multibeam satellite systems is the concept of on-board adaptive beamforming. Adaptive beamforming enables satellites to dynamically focus their transmission beams towards specific user terminals or regions, optimizing signal quality, reducing interference, and conserving precious satellite resources [4, 5]. Traditionally, adaptive beamforming algorithms have relied on fixed, pre-engineered solutions that may not fully adapt to the ever-changing conditions of the satellite environment [6].
The paper referenced in [4] presents an innovative approach for beam pattern synthesis tailored to the needs of geostationary satellite communication systems. This synthesis technique enables the generation of beams characterized by a flexible beamwidth variation ranging from 0.45\({}^{\circ}\) to 1.5\({}^{\circ}\), with independent control over these parameters in the two principal plane cuts. The output of this advanced beam pattern synthesizer is a meticulously crafted matrix of weights imbued with beamforming coefficients precisely tailored to the desired beam characteristics. Notably, the study's results highlight the algorithm's exceptional efficacy, facilitated by the use of a surrogate optimizer. This optimizer adeptly computes the weight matrix, ultimately synthesizing beams with only minor deviations from the input data. These findings underscore the method's potential to revolutionize beam pattern synthesis for geostationary (GEO) satellite communication systems, offering a robust and responsive solution to meet the complex demands of modern telecommunications.
This paper explores a paradigm shift in the field of adaptive beamforming for multibeam satellite systems. Instead of relying solely on static beamforming techniques, we propose supervised learning for adaptive beamforming (SLAB), a novel approach that harnesses the power of supervised machine learning to adaptively optimize beamforming parameters in real-time. The core of this adaptability lies within the realm of beamforming, a pivotal technology that empowers satellites to adjust their peak throughput, beamwidth, side lobe level (SLL), nulling, and effective isotropic radiated power (EIRP) to accommodate the ever-fluctuating traffic demands of our connected world [4, 7].
The main idea behind SLAB is to equip multibeam satellite systems with the ability to autonomously learn and adapt their beamforming strategies based on historical data and ongoing environmental conditions [7, 8]. By doing so, supervised learning for adaptive beamforming promises to revolutionize the field, enabling satellites to operate more efficiently, reduce latency, enhance data rates, and extend the reach of their services [9].
We present the theoretical underpinnings of supervised learning for on-board adaptive beamforming, provide a comprehensive overview of the supervised learning techniques employed, and detail the implementation of this innovative approach within multibeam satellite systems. We also offer an extensive performance evaluation, comparing our approach to traditional beamforming methods under various operational scenarios and illustrating the substantial benefits it provides.
## II System Model and Problem Definition
In the context of designing a multibeam satellite antenna system, several key factors need to be considered, including antenna array dimensioning, beamwidth, SLL, nulling, and EIRP control.
### _Antenna Array Dimensioning_
This work considers a direct-radiating antenna array (DRA). The number of antenna elements in the DRA is determined by the required gain, beam solid angle, satellite altitude, position, and coverage area. For our study, we assume that the satellite is positioned in the geostationary orbit. Assuming a symmetric and rectangular array comprising \(N\times N\) radiating elements, the size of each dimension can be calculated as:
\[N=\frac{\mathrm{asinc}\left(\frac{1}{\sqrt{2}}\right)\lambda_{0}}{\eta\,\theta_{-3\mathrm{dB}}\,d}, \tag{1}\]
where \(d\) represents the inter-element spacing, \(\lambda_{0}\) is the operating wavelength, \(\theta_{-3dB}\) is the beamwidth, and \(\eta\) is the antenna efficiency. For our scenario, we employ open-ended waveguide antennas as the radiating elements, with an inter-element separation of 7/8\(\lambda_{0}\) chosen to maintain mutual coupling below -30 dB for a central frequency \(f_{0}\) = 19 GHz [4]. The estimated efficiency is approximately 90% to account for potential factors affecting overall performance. Thus, the proposed DRA has a total of \(144\times 144\) antenna elements.
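As a numerical illustration of Eq. (1), the sketch below evaluates \(N\) in Python, computing the inverse sinc by root finding. The beamwidth passed in is an assumed value chosen to reproduce the 144-element dimension; it is not a figure quoted in the text.

```python
import numpy as np
from scipy.optimize import brentq

C0 = 299_792_458.0  # speed of light [m/s]

def asinc(y):
    """Inverse of sinc(x) = sin(x)/x on (0, pi), solved numerically."""
    return brentq(lambda x: np.sin(x) / x - y, 1e-9, np.pi - 1e-9)

def elements_per_dimension(theta_3db_deg, f0=19e9, d_over_lambda=7 / 8, eta=0.90):
    """Evaluate N from Eq. (1); theta is the -3 dB beamwidth in degrees."""
    lam0 = C0 / f0
    d = d_over_lambda * lam0            # inter-element spacing from the text
    theta = np.deg2rad(theta_3db_deg)
    return asinc(1.0 / np.sqrt(2)) * lam0 / (eta * theta * d)

# Assumed beamwidth of ~0.703 deg; this reproduces the 144 elements per
# dimension of the 144 x 144 DRA described above.
print(elements_per_dimension(0.703))    # ~144.0
```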
Individually controlling all the radiating elements in this array would be impractical due to the high number of required radio-frequency (RF) chains (one per antenna). To mitigate this, we partition the array into \(4\times 4\) antenna sub-arrays, which results in a new unit cell dimension and an inter-element spacing of 3.5\(\lambda_{0}\)[4]. Consequently, the number of RF chains reduces to \(36\times 36\), each connected to one of the sub-arrays or unit cells (referred to as elements in the rest of the paper). This configuration of the beamforming matrix is reflected in the radiation pattern of each beam, yielding a beamwidth \(\theta_{t}^{b}\), an \(EIRP_{t}^{b}\), and a specific SLL for the \(b\)-th beam (see Fig. 1).
### _Beamwidth, SLL, Nulling, and EIRP Control_
To optimize the antenna system, several parameters require control:
* **Scanning Angles:** Steering the beam is accomplished by modifying the complex component of the weight matrix. This can be achieved through progressive phase shifts, FFT-based methods [10, 11], or codebook-based beamforming [12]. We calculate incremental phase shifts using \[\Theta_{\mathrm{mn}}=k(md_{x}\sin(\theta_{0})\cos(\phi_{0})+nd_{y}\sin(\theta_{0})\sin(\phi_{0})),\] (2) where \(k=\frac{2\pi}{\lambda_{0}}\) is the wave number, \(m\) and \(n\) are the positions of the elements in the \(x\) and \(y\)-axis, respectively, \(d_{x}\) and \(d_{y}\) are the corresponding periods, and \(\theta_{0}\) and \(\phi_{0}\) are the scanning angles [12] (a numerical sketch of this computation follows the list below).
* **Beamwidth and SLL Control:** Beamwidth and SLL are controlled using tapering techniques. Chebyshev amplitude tapering is chosen due to its effectiveness in narrowing the beam [13].
* **Nulling Control:** Nulling is achieved by modifying the antenna's progressive phase shift. \(W_{\theta_{0}\phi_{0}}\) represents the weight matrix with the progressive phase shift to the steering direction, and \(W_{\mathrm{null}}\) represents the weight matrix with the progressive phase shift towards the desired nulling angle.
* **EIRP Control:** EIRP control depends on beamwidth, scanning angle, and power radiated by each antenna element. Compensating for scanning losses and achieving the desired EIRP may require adjusting the power per element, especially in scenarios involving subarrays.
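As referenced in the scanning-angles item above, a minimal sketch of Eq. (2) follows. The steering angles used here are illustrative assumptions; the 19 GHz carrier and 3.5\(\lambda_{0}\) element spacing come from Section II-A.

```python
import numpy as np

def progressive_phase(M, N, d_x, d_y, lam0, theta0_deg, phi0_deg):
    """Progressive phase shift Theta_mn of Eq. (2) on an M x N grid."""
    k = 2.0 * np.pi / lam0
    th, ph = np.deg2rad(theta0_deg), np.deg2rad(phi0_deg)
    m = np.arange(M)[:, None]           # element index along x
    n = np.arange(N)[None, :]           # element index along y
    return k * (m * d_x * np.sin(th) * np.cos(ph)
                + n * d_y * np.sin(th) * np.sin(ph))

lam0 = 299_792_458.0 / 19e9             # wavelength at f0 = 19 GHz
d = 3.5 * lam0                          # sub-array spacing from Section II-A
Theta = progressive_phase(36, 36, d, d, lam0, theta0_deg=5.0, phi0_deg=30.0)
steering = np.exp(-1j * Theta)          # complex part of the weight matrix
```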
### _Beamforming Cost Function_
To encapsulate these considerations, we define a beamforming cost function with the objective of minimizing \(Z_{1}+Z_{2}+Z_{3}\). This cost function comprises three sub-objectives, each addressing different aspects of beamforming optimization.
The first sub-objective quantifies the error between the desired and computed beamwidths, in both azimuth (\(\theta^{b}_{-3\mathrm{dB,Az,o}}\) and \(\theta^{b}_{-3\mathrm{dB,Az,c}}\)) and elevation (\(\theta^{b}_{-3\mathrm{dB,El,o}}\) and \(\theta^{b}_{-3\mathrm{dB,El,c}}\)), for each beam.

The second sub-objective assesses the error between the minimum SLL requirements, in both azimuth (\(\mathrm{SLL}^{b}_{\mathrm{Az,o}}\)) and elevation (\(\mathrm{SLL}^{b}_{\mathrm{El,o}}\)), and the SLL achieved by the beamforming matrix (\(\mathrm{SLL}^{b}_{\mathrm{Az,c}}\) and \(\mathrm{SLL}^{b}_{\mathrm{El,c}}\)).

Lastly, the third sub-objective calculates the error between the desired EIRP (\(\mathrm{EIRP}^{b}_{\mathrm{o}}\)) and the computed EIRP (\(\mathrm{EIRP}^{b}_{\mathrm{c}}\)) for each beam.
These terms are weighted by factors \(k_{1}\), \(k_{2}\), and \(k_{3}\), allowing for fine-tuned adjustments to their importance within the optimization process.
The optimization problem can be succinctly expressed as follows:
\[\min_{W_{p\times q}^{B}}\left(Z_{1}(W_{p\times q}^{B})+Z_{2}(W_{p\times q}^{B})+Z_{3}(W_{p\times q}^{B})\right), \tag{3}\]

where:

\[\begin{cases}Z_{1}=\left(\frac{\left|\theta^{b}_{-3\mathrm{dB,Az,c}}(W_{p\times q}^{B})-\theta^{b}_{-3\mathrm{dB,Az,o}}\right|}{\theta^{b}_{-3\mathrm{dB,Az,o}}}+\frac{\left|\theta^{b}_{-3\mathrm{dB,El,c}}(W_{p\times q}^{B})-\theta^{b}_{-3\mathrm{dB,El,o}}\right|}{\theta^{b}_{-3\mathrm{dB,El,o}}}\right)k_{1}\\ Z_{2}=\left(\frac{\left|\mathrm{SLL}^{b}_{\mathrm{Az,c}}(W_{p\times q}^{B})-\mathrm{SLL}^{b}_{\mathrm{Az,o}}\right|}{\mathrm{SLL}^{b}_{\mathrm{Az,o}}}+\frac{\left|\mathrm{SLL}^{b}_{\mathrm{El,c}}(W_{p\times q}^{B})-\mathrm{SLL}^{b}_{\mathrm{El,o}}\right|}{\mathrm{SLL}^{b}_{\mathrm{El,o}}}\right)k_{2}\\ Z_{3}=\left(\frac{\mathrm{EIRP}^{b}_{\mathrm{c}}(W_{p\times q}^{B})-\mathrm{EIRP}^{b}_{\mathrm{o}}}{\mathrm{EIRP}^{b}_{\mathrm{o}}}\right)k_{3}.\end{cases}\]
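As a concrete transcription of Eq. (3), the sketch below evaluates the per-beam cost in Python. The dictionary keys are illustrative names, and dividing the SLL terms by the magnitude of the desired SLL is our reading of the formula, since SLLs are negative in dB.

```python
def beamforming_cost(computed, desired, k1=1.0, k2=1.0, k3=1.0):
    """Cost Z1 + Z2 + Z3 of Eq. (3) for one beam.

    `computed` / `desired` map 'bw_az', 'bw_el' (beamwidths), 'sll_az',
    'sll_el' (side lobe levels) and 'eirp' to per-beam scalar values.
    """
    z1 = k1 * (abs(computed['bw_az'] - desired['bw_az']) / desired['bw_az']
               + abs(computed['bw_el'] - desired['bw_el']) / desired['bw_el'])
    # SLL denominators taken in magnitude (assumption): SLLs are negative dB.
    z2 = k2 * (abs(computed['sll_az'] - desired['sll_az']) / abs(desired['sll_az'])
               + abs(computed['sll_el'] - desired['sll_el']) / abs(desired['sll_el']))
    # Z3 is signed in Eq. (3): exceeding the desired EIRP is penalized,
    # falling short contributes negatively.
    z3 = k3 * (computed['eirp'] - desired['eirp']) / desired['eirp']
    return z1 + z2 + z3
```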
## III Supervised Learning for Adaptive Beamforming
In this section, we present our classification approach for selecting the optimal beamforming matrix using neural networks. We aim to employ a neural network-based classifier to predict the most suitable beamforming matrix for a given set of input parameters.
### _Beamforming Matrix Clustering_
The core challenge we confront is the vast search space of potential beamforming matrices. As elucidated in Section II, our beamforming matrix consists of 36x36 elements, totaling 1296 individual components. Each element can either be activated or remain inactive, resulting in an astronomically large number of possible combinations. Effectively navigating this expansive solution space necessitates a method that can distill and comprehend the intricate relationships between various parameters, such as azimuth and elevation beamwidths (\(\theta_{3dB,ele}\) and \(\theta_{3dB,azi}\)), SLL in elevation and azimuth (\(SLL_{el}\) and \(SLL_{az}\)), desired EIRP, and pointing coordinates in elevation and azimuth.
To tackle this complexity, we employ a clustering-based approach, specifically leveraging the K-means algorithm. This algorithm plays a pivotal role in grouping and categorizing influential variables within our system, ultimately shedding light on their combined impact on the design of the beamforming array. The variables considered for clustering include \(\theta_{3dB,ele}\) (elevation beamwidth), \(\theta_{3dB,azi}\) (azimuth beamwidth), \(SLL_{el}\) (SLL in elevation), \(SLL_{az}\) (SLL in azimuth), \(EIRP\), elevation, and azimuth (as illustrated in Figure 2).
The K-means algorithm aims to minimize the within-cluster sum of squares, which can be formulated as follows:
\[\min_{S}\sum_{i=1}^{K}\sum_{\mathbf{x}\in S_{i}}||\mathbf{x}-\boldsymbol{ \mu}_{i}||^{2}, \tag{4}\]
where \(K\) represents the number of clusters, \(S_{i}\) is the set of data points assigned to cluster \(i\), \(\mathbf{x}\) represents a data point, \(\boldsymbol{\mu}_{i}\) is the centroid of cluster \(i\).
Utilizing clustering, we identify similar sets of input parameters and assign them to corresponding clusters. Each cluster is associated with a specific pre-stacked beamforming matrix, effectively encapsulating a subset of the vast dataset. This approach converts the problem into a multi-class classification scenario, where each class corresponds to a distinct pre-defined matrix within our extensive set of beamforming options.
We use \(K\) clusters as representative classes, each aligned with a specific set of input data points and their corresponding beamforming matrices. This approach forms the foundation of our supervised learning framework, facilitating the intelligent adaptation of beamforming in response to dynamic communication requirements.
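The clustering step can be sketched with scikit-learn as follows. The synthetic feature ranges are assumptions standing in for the actual dataset; only the 0.45°–1.5° beamwidth span is taken from the text.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Feature order: [bw_el, bw_az, sll_el, sll_az, eirp, elevation, azimuth]
# (ranges are assumed for illustration; beamwidths in deg, SLL in dB).
X = rng.uniform(low=[0.45, 0.45, -30.0, -30.0, 50.0, -8.0, -8.0],
                high=[1.5, 1.5, -15.0, -15.0, 70.0, 8.0, 8.0],
                size=(5000, 7))

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_            # cluster (class) index of each sample
# Each cluster index maps to one pre-stored beamforming matrix, so
# beam_matrices[labels[i]] would serve the requirement vector X[i].
```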
### _Classification Approach to select the best Beamforming Matrix_
Our classification model is based on a feedforward neural network. Let \(\mathbf{X}\) represent the input feature vector, which comprises parameters such as azimuth and elevation beamwidths (\(\theta_{3dB,ele}\) and \(\theta_{3dB,azi}\)), Side Lobe Level in elevation and azimuth (\(SLL_{el}\) and \(SLL_{az}\)), desired \(EIRP\), \(elevation\), and \(azimuth\). The output layer of the neural network consists of \(K\) neurons, each corresponding to one of the \(K\) pre-defined clusters of beamforming matrices.
The neural network's architecture can be summarized as follows (See Fig. 3):
1. **Input Layer:** The input layer consists of neurons representing the input features \(\mathbf{X}\).
2. **Hidden Layers:** We incorporate one or more hidden layers to capture complex relationships within the data. Each hidden layer contains a varying number of neurons. The choice of the number of hidden layers and neurons per layer can be determined through experimentation.
3. **Output Layer:** The output layer comprises \(K\) neurons, where \(K\) represents the number of clusters determined by the K-means algorithm in the "Beamforming Matrix Clustering" approach. Each output neuron corresponds to one of the \(K\) pre-defined clusters.

Fig. 1: Proposed DRA: \(36\times 36\) sub-arrays (elements), where each element can be active or not. Each element weight is defined per beam and determines the beamwidth \(\theta_{t}^{b}\), the \(EIRP_{t}^{b}\), and a specific SLL of the \(b\)-th beam.
The neural network training involves supervised learning, where we use labeled data to teach the model to predict the appropriate beamforming matrix cluster. The loss function used for training can be a categorical cross-entropy loss, defined as [14, 15]:

\[L(\mathbf{X},y)=-\sum_{i=1}^{K}y_{i}\log(p_{i}), \tag{5}\]

where \(y_{i}\) is the ground-truth label for class \(i\) (one-hot encoded) and \(p_{i}\) is the predicted probability of the input belonging to class \(i\).

Fig. 2: Analysis of key input variables that affect beamforming design. Each color represents one cluster.

Fig. 3: Artificial neural network for the classification approach to select the best beamforming matrix.
For a dataset with \(S\) samples in total, the categorical cross-entropy loss is given by [16]:

\[L(\mathbf{X},y)=-\sum_{j=1}^{S}\sum_{i=1}^{K}y_{ji}\log(p_{ji}), \tag{6}\]

where \(y_{ji}\) and \(p_{ji}\) are, respectively, the label and predicted probability of class \(i\) for sample \(j\).
The neural network is trained to minimize this loss function using optimization techniques such as stochastic gradient descent (SGD) or Adam optimization.
Once the neural network is trained, it can be used for inference to select the most appropriate beamforming matrix for a given set of input parameters. The neural network predicts the probability distribution over the \(K\) clusters when new input data is provided. The cluster with the highest predicted probability is selected as the best beamforming matrix.
Algorithm 1 shows a simplified pseudocode for the classification approach using neural networks to select the best beamforming matrix.
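A minimal PyTorch sketch of this train-and-select loop is given below. The layer sizes and learning rate are assumptions (the text leaves the architecture to experimentation), and `beam_matrices` stands for the pre-stored set of matrices.

```python
import torch
import torch.nn as nn

K = 20                                  # clusters = output classes
model = nn.Sequential(                  # layer sizes are assumptions
    nn.Linear(7, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, K),
)
# Categorical cross-entropy as in Eqs. (5)-(6); PyTorch averages over the batch.
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, y):
    """One optimizer step on a batch of (features, cluster-label) pairs."""
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

def select_matrix(features, beam_matrices):
    """Inference: return the pre-stored matrix of the most likely cluster."""
    with torch.no_grad():
        logits = model(torch.as_tensor(features, dtype=torch.float32))
    return beam_matrices[int(logits.argmax())]
```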
## IV Numerical Results
To facilitate the mapping of input data to the most suitable beamforming matrix, we chose to use 20 clusters as representative classes. Each class corresponds to a pre-defined matrix within the set of 20 matrices.
Categorizing the input data into these distinct classes establishes a clear mapping between each data point and the corresponding pre-defined matrix. This mapping enables us to effectively assign and utilize the appropriate beamforming matrix based on the input variables' characteristics during the beamforming process.
We evaluate the performance of our model using key metrics, including loss and accuracy. Figures 4 and 5 display the training and validation results, showcasing the model's effectiveness. We achieved a training and validation loss of less than 0.03, indicating that the model effectively learned to classify input data into the predefined classes. Our model achieved an accuracy greater than \(97\%\) in both training and validation, affirming its ability to accurately predict the appropriate beamforming matrix.
Figure 6 shows the ROC curve that illustrates the trade-off between sensitivity (true positive rate) and specificity (true negative rate) for different classification thresholds. Although our classification problem involves 20 classes, the ROC curve is generated by considering each class against the rest, offering insights into the model's discrimination ability. The ROC curve demonstrates the model's precision, with an accuracy exceeding \(94\%\) for each class.
In comparison with the algorithm in [4], our approach offers significant advantages in terms of execution time. While the algorithm in [4] requires at least 10 minutes to obtain a beamforming matrix for each beam, our deep neural network (DNN) model, once trained, takes only about 3 seconds. Figure 7 provides a comprehensive time comparison for a system with 10 beams, highlighting the efficiency of our approach.

Fig. 4: Loss performance during training for the training and validation data

Fig. 5: Accuracy performance during training for the training and validation data

Fig. 6: Confusion Matrix for approach 2
Figures 8 and 9 depict the radiation patterns obtained through our model's predictions for azimuth and elevation. These patterns align with the desired specifications, meeting the minimum SLL requirements and exhibiting beamwidth differences of only 0.03 degrees. Additionally, the gain differences are less than 0.5 dB, highlighting the model's ability to optimize system performance.
## V Conclusion
Our model exhibited remarkable performance. Comparing our approach to the algorithm presented in [4], we significantly reduced execution time, making real-time beamforming feasible.
Moreover, our model met and improved system requirements, ensuring minimum side lobe levels and precise beamwidth control. For future work, we plan to explore the scalability of our approach to handle a larger number of beams and classes effectively. Additionally, we aim to enhance the model's adaptability to dynamic communication environments, allowing it to respond dynamically to changing requirements. Integrating reinforcement learning techniques may further optimize beamforming decisions for evolving satellite communication needs [17].
## Acknowledgment
This work was supported by the European Space Agency (ESA) funded under Contract No. 4000134522/21/NL/FGL named "Satellite Signal Processing Techniques using a Commercial Off-The-Shelf AI Chipset (SPAICE)". Please note that the views of the authors of this paper do not necessarily reflect the views of the ESA. Furthermore, this work was partially supported by the Luxembourg National Research Fund (FNR) under the project SmartSpace (C21/IS/16193290).
|
2309.04330 | Solutions to the stochastic heat equation with polynomially growing
multiplicative noise do not explode in the critical regime | We investigate the finite time explosion of the stochastic heat equation
$\frac{\partial u}{\partial t} = \Delta u(t,x) + \sigma(u(t,x))\dot{W}(t,x)$ in
the critical setting where $\sigma$ grows like $\sigma(u) \approx C(1 +
|u|^\gamma)$ and $\gamma = \frac{3}{2}$. Mueller previously identified
$\gamma=\frac{3}{2}$ as the critical growth rate for explosion and proved that
solutions cannot explode in finite time if $\gamma< \frac{3}{2}$ and solutions
will explode with positive probability if $\gamma>\frac{3}{2}$. This paper
proves that explosion does not occur in the critical $\gamma=\frac{3}{2}$
setting. | Michael Salins | 2023-09-08T13:53:38Z | http://arxiv.org/abs/2309.04330v1 | Solutions to the stochastic heat equation with polynomially growing multiplicative noise do not explode in the critical regime
###### Abstract
We investigate the finite time explosion of the stochastic heat equation \(\frac{\partial u}{\partial t}=\Delta u(t,x)+\sigma(u(t,x))\dot{W}(t,x)\) in the critical setting where \(\sigma\) grows like \(\sigma(u)\approx C(1+|u|^{\gamma})\) and \(\gamma=\frac{3}{2}\). Mueller previously identified \(\gamma=\frac{3}{2}\) as the critical growth rate for explosion and proved that solutions cannot explode in finite time if \(\gamma<\frac{3}{2}\) and solutions will explode with positive probability if \(\gamma>\frac{3}{2}\). This paper proves that explosion does not occur in the critical \(\gamma=\frac{3}{2}\) setting.
## 1 Introduction
We investigate whether solutions to the stochastic heat equation explode in finite time. The equation is
\[\begin{cases}\frac{\partial u}{\partial t}(t,x)=\Delta u(t,x)+\sigma(u(t,x)) \dot{W}(t,x),&x\in[-\pi,\pi],t>0\\ u(t,-\pi)=u(t,\pi),&t>0\\ u(0,x)=u_{0}(x)\text{ bounded and periodic}.\end{cases} \tag{1.1}\]
where \(\sigma\) is locally Lipschitz continuous and satisfies the critical superlinear growth restriction that there exists \(C>0\) such that for all \(u\in\mathbb{R}\)
\[|\sigma(u)|\leq C(1+|u|^{\frac{3}{2}}) \tag{1.2}\]
The spatial domain is \(D=[-\pi,\pi]\) and we impose periodic boundary conditions. The stochastic noise \(\dot{W}\) is spacetime white noise and the initial data \(u_{0}(x)\) is a bounded, continuous, periodic function.
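Before turning to the analysis, a minimal explicit Euler–Maruyama discretization of (1.1) may help fix ideas. The choice \(\sigma(u)=1+|u|^{3/2}\), the step sizes, and the \(\sqrt{\Delta t/\Delta x}\) scaling of the discrete space-time white noise are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
J, T, dt = 256, 1.0, 1e-5               # grid points, horizon, time step
dx = 2.0 * np.pi / J                    # periodic grid on [-pi, pi]
u = np.ones(J)                          # bounded periodic initial data

def sigma(u):
    return 1.0 + np.abs(u) ** 1.5       # satisfies the growth bound (1.2)

for _ in range(int(T / dt)):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    dW = rng.standard_normal(J) * np.sqrt(dt / dx)  # white-noise increment
    u = u + dt * lap + sigma(u) * dW
```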
In [17, 20, 21, 22], Mueller and Sowers proved that the polynomial growth rate of \(|u|^{\frac{3}{2}}\) is critical in the sense that if \(\sigma(u)\leq C(1+|u|^{\gamma})\) for some \(C>0\) and \(\gamma<\frac{3}{2}\), the solution to the SPDE (1.1) cannot explode in finite time. If \(\sigma(u)\geq c|u|^{\gamma}\) for some \(c>0\) and \(\gamma>\frac{3}{2}\) then solutions will explode with positive probability. The question of whether solutions can explode in finite time in the critical case of \(\gamma=3/2\) was left unsolved. In this paper we prove that solutions cannot explode in the critical regime where \(\gamma=\frac{3}{2}\).
Mueller's results have been generalized to other settings including fractional heat equations [2, 10], nonlinear Schrodinger equation [8], and stochastic wave equation [19]. More recently, researchers have investigated the effects that adding superlinear deterministic forcing terms \(f(u(t,x))\) to the right-hand side of (1.1) has on the finite time explosion of the stochastic heat equation [1, 4, 7, 9, 11, 13, 15, 24, 25]. Similar explosion problems have been investigated for the stochastic wave equation [12, 16]. Interestingly, in [7], for example, the authors prove that if the additional force \(f(u)\) grows like \(|u|\log(|u|)\) then \(\sigma\) can grow like \(|u|(\log(|u|))^{\frac{1}{4}}\) and solutions will never explode - a much slower growth rate than the allowable \(|u|^{\frac{3}{2}}\) growth rate when \(f\equiv 0\). This \(|u|(\log|u|)^{\frac{1}{4}}\) growth rate is not known to be optimal and it will be interesting to see if explosion can occur when \(\sigma(u)\approx C(1+|u|^{\frac{3}{2}})\) and \(f\) grows superlinearly.
In the opposite setting where \(f\) is strongly dissipative, \(\sigma\) can grow faster than \(|u|^{\frac{3}{2}}\) and solutions will not explode because the dissipative forcing counteracts the expansion due to the noise [23]. Specifically, in this space-time white noise setting, if \(f(u)\text{sign}(u)\leq-\mu|u|^{\beta}\) for some \(\beta>3\), then \(\sigma\) can grow like \(C(1+|u|^{\gamma})\) for any \(\gamma<\frac{\beta+3}{4}\) and solutions will not explode. In the setting of the current paper, \(f\equiv 0\) and the maximal allowable growth rate for \(\sigma\) is (1.2).
The mild solution to (1.1) is defined to be the solution to the integral equation
\[u(t,x)=\int_{D}G(t,x-y)u(0,y)dy+\int_{0}^{t}\int_{D}G(t-s,x-y)\sigma(u(s,y))W( dyds) \tag{1.3}\]
where \(G(t,x)\) is the fundamental solution to the heat equation on \(D\) with periodic boundary conditions. Because \(\sigma\) is locally Lipschitz continuous, standard localization arguments prove that there exists a unique _local_ mild solution to (1.3) that exists until the explosion time
\[\tau_{\infty}^{\infty}:=\sup_{n>0}\tau_{n}^{\infty} \tag{1.4}\]
where
\[\tau_{n}^{\infty}:=\inf\left\{t>0:\sup_{x\in D}|u(t,x)|\geq n\right\}. \tag{1.5}\]
A local mild solution _explodes in finite time_ if \(\tau_{\infty}^{\infty}<\infty\). A local mild solution is called a _global_ mild solution if the solution never explodes with probability one, \(\mathbb{P}(\tau_{\infty}^{\infty}=\infty)=1\).
The main result of this paper, Theorem 1.1, proves that when (1.2) is satisfied, the mild solution is global.
**Theorem 1.1**.: _Assume that the initial data \(x\mapsto u(0,x)\) is a bounded, continuous, periodic function on \([-\pi,\pi]\) and assume that \(\sigma\) is locally Lipschitz continuous and satisfies (1.2). Then there exists a unique global mild solution to (1.1)._
The method of proof is inspired by [17, 20], but a new strategy is needed to prove non-explosion in the critical \(\gamma=\frac{3}{2}\) setting. The first step is to prove that the \(L^{1}\) norm of the solutions cannot explode. The fact that the \(L^{1}\) norm cannot explode is easiest to see in the special case where \(u(t,x)\geq 0\) for all \(t>0\) and all \(x\in D\). Imposing the additional assumptions that \(\sigma(0)=0\) and \(u(0,x)\geq 0\), for example, would imply that \(u(t,x)\geq 0\) for all \(t>0\) with probability one because of the comparison principle [14, 18]. In the case of a positive solution, formally integrating mild solutions in space indicates that
\[|u(t)|_{L^{1}}=\int_{D}u(t,x)dx=\int_{D}u(0,x)dx+\int_{0}^{t}\int_{D}\sigma(u( s,x))W(dsdx) \tag{1.6}\]
is a nonnegative one-dimensional martingale and therefore cannot explode in finite time. This argument can be made rigorous with stopping times.
In the more general setting of this paper, where solutions \(u(t,x)\) may take both positive and negative values, we follow the ideas of [20] to construct nonnegative processes \(v(t,x)\) and \(v_{-}(t,x)\) that almost surely dominate \(u(t,x)\) in the sense that
\[-v_{-}(t,x)\leq u(t,x)\leq v(t,x). \tag{1.7}\]
Specifically, let \(\alpha>3\) and let \(f(u)=u^{-\alpha}\). Let \(v(t,x)\) be the mild solution to
\[\frac{\partial v}{\partial t}=\Delta v(t,x)+f(v(t,x))+\sigma(v(t,x))\dot{W}(t,x) \tag{1.8}\]
with initial data \(v(0,x)=\max\{u(0,x),1\}\). Corollary 1.1 of [20] proves that solutions \(v(t,x)\) remain nonnegative. The comparison principle of [14, Theorem 2.5] proves that \(u(t,x)\leq v(t,x)\) with probability one. \(v_{-}(t,x)\) is constructed similarly. Then if we can prove that \(v(t,x)\) and \(v_{-}(t,x)\) do
not explode to \(+\infty\) in finite time, then \(u(t,x)\) cannot explode in finite time either.
We construct several stopping times to analyze these solutions. For any \(n\in\mathbb{N}\) we define the \(L^{\infty}\) stopping times
\[\tau_{n}^{\infty} =\inf\{t>0:\sup_{x\in D}v(t,x)\geq n\}, \tag{1.9}\] \[\tau_{\infty}^{\infty} =\sup_{n}\tau_{n}^{\infty}. \tag{1.10}\]
The solution explodes in finite time if and only if \(\tau_{\infty}^{\infty}<\infty\). Therefore, the goal of this paper is to prove that \(\mathbb{P}(\tau_{\infty}^{\infty}=\infty)=1\). Because \(f\) is unbounded near \(0\), we also need to define the infimum stopping times for any \(\varepsilon>0\),
\[\tau_{\varepsilon}^{\inf}=\inf\left\{t\in[0,\tau_{\infty}^{\infty}):\inf_{x \in D}v(t,x)\leq\varepsilon\right\} \tag{1.11}\]
Because \(f(u)\) is Lipschitz continuous on \([\varepsilon,\infty)\) for any \(\varepsilon>0\) and \(\sigma(u)\) is Lipschitz continuous for \(u\in[0,n]\) for any \(n>0\), there exists a local mild solution for \(v(t,x)\) until the time \(\tau_{\infty}^{\infty}\wedge\tau_{0}^{\inf}\) where \(\wedge\) denotes the minimum.
Corollary 1.1 of [20] proves that \(v(t,x)\) never hits zero. Specifically, for any \(T>0\),
\[\lim_{\varepsilon\to 0}\mathbb{P}(\tau_{\varepsilon}^{\inf}\leq T\wedge\tau_{ \infty}^{\infty})=0. \tag{1.12}\]
For \(M>0\), we define the \(L^{1}\) stopping times
\[\tau_{M}^{1}:=\inf\{t\in[0,\tau_{\infty}^{\infty}):|v(t)|_{L^{1}}>M\} \tag{1.13}\]
and we prove that the \(L^{1}\) norm \(\int_{D}v(t\wedge\tau_{\varepsilon}^{\inf}\wedge\tau_{n}^{\infty},x)dx\) is a submartingale. Using Doob's submartingale inequality we can prove that for any \(T>0\) and \(\varepsilon>0\) the \(L^{1}\) norm cannot explode before \(T\wedge\tau_{\varepsilon}^{\inf}\). The estimates are independent of \(n\).
The novel observation, which is necessary to extend Mueller's results to the critical case where \(\gamma=\frac{3}{2}\), is that we can show that the expected value of the _quadratic variation_ of the \(L^{1}\) norm is also bounded in a way that is independent of \(n\). We prove in Lemma 4.1 that
\[\mathbb{E}\int_{0}^{\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}}(\sigma(v(s,y )))^{2}dyds\leq M^{2}, \tag{1.14}\]
an estimate that is independent of \(n\) and \(\varepsilon\).
In Section 5, we prove an improved \(L^{\infty}\) moment bound on the stochastic convolution, inspired by [4], which may be of independent interest.
**Theorem 1.2**.: _Let \(p>6\). Assume that \(\varphi(t,x)\) is an adapted random field such that_
\[\mathbb{E}\int_{0}^{T}\int_{D}|\varphi(t,x)|^{p}dxdt<+\infty. \tag{1.15}\]
_Define the stochastic convolution_
\[Z^{\varphi}(t,x)=\int_{0}^{t}\int_{D}G(t-s,x,y)\varphi(s,y)W(dyds). \tag{1.16}\]
_For any \(p>6\) there exists \(C_{p}>0\), independent of \(T>0\), such that_
\[\mathbb{E}\sup_{t\in[0,T]}\sup_{x\in D}|Z^{\varphi}(t,x)|^{p}\leq C_{p}T^{ \frac{p}{4}-\frac{3}{2}}\mathbb{E}\int_{0}^{T}\int_{D}|\varphi(s,y)|^{p}dyds. \tag{1.17}\]
We remark that in the case where there exists \(L>0\) such that
\[\mathbb{P}\left(\sup_{t\in[0,T]}\sup_{x\in D}|\varphi(t,x)|\leq L\right)=1, \tag{1.18}\]
an obvious upper bound of (1.17) is
\[C_{p}T^{\frac{p}{4}-\frac{3}{2}}\mathbb{E}\int_{0}^{T}\int_{D}|\varphi(s,y)|^ {p}dyds\leq C_{p}L^{p}T^{\frac{p}{4}-\frac{1}{2}}. \tag{1.19}\]
This looser upper bound can be used to prove non-explosion in the subcritical \(\gamma<\frac{3}{2}\) regime. Unfortunately, this looser bound will not be helpful when we prove the main non-explosion result in the critical setting and we will need the tighter upper bound (1.17).
We then define a sequence of stopping times \(\rho_{n}\) that keep track of when \(|v(t)|_{L^{\infty}}\) doubles or halves. The stopping times are defined so that \(|v(\rho_{n})|_{L^{\infty}}=2^{m}\) for some \(m\in\mathbb{N}\). Using all of the estimates mentioned above, we can prove that for any \(\varepsilon>0\) and \(M>0\), the \(L^{\infty}\) norm \(|v(\rho_{n})|_{L^{\infty}}\) can only double a finite number of times before the time \(\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}\). This estimate relies on estimates of the quadratic variation of the \(L^{1}\) norm (1.14), which were not required in the subcritical setting. Therefore, for any \(\varepsilon>0\) and \(M>0\), the explosion time
\[\tau_{\infty}^{\infty}>(\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1})\text{ with probability one}. \tag{1.20}\]
Taking the limit as \(M\to\infty\) and \(\varepsilon\to 0\), we can prove that explosion cannot occur in finite time.
In Section 2 we introduce some notations and recall the properties of the heat kernel. In Section 3 we introduce the positive solutions \(v(t,x)\) and \(v_{-}(t,x)\) and prove that they dominate \(u(t,x)\). In Section 4 we prove that the \(L^{1}\) norm of the solutions its quadratic variation remain finite in a way that does not depend on the \(L^{\infty}\) norm of the solutions. In Section 5 we prove the stochastic convolution moment bound Theorem 1.2. Finally in Section 6 we prove that \(v(t,x)\) cannot explode in finite time.
## 2 Some notations and definitions
The spatial domain is \(D=[-\pi,\pi]\).
Let \(L^{p}:=L^{p}(D)\), \(p\geq 1\) denote the standard \(L^{p}\) spaces on \(D\) endowed with the norms
\[|\varphi|_{L^{p}} :=\left(\int_{D}|\varphi(y)|^{p}\,dy\right)^{\frac{1}{p}},\quad p\in[1,\infty), \tag{2.1}\] \[|\varphi|_{L^{\infty}} :=\sup_{x\in D}|\varphi(x)|. \tag{2.2}\]
The driving noise \(\dot{W}\) is a space-time white noise defined on a filtered probability space \((\Omega,\mathcal{F},\mathcal{F}_{t},\mathbb{P})\). This means that for any non-random \(\psi,\varphi\in L^{2}([0,T]\times D)\),
\[\int_{0}^{T}\int_{D}\varphi(s,y)W(dyds)\text{ and }\int_{0}^{T}\int_{D}\psi(s,y )W(dyds)\]
are mean-zero Gaussian random variables with covariance
\[\mathbb{E}\left(\int_{0}^{T}\int_{D}\varphi(s,y)W(dyds)\right) \left(\int_{0}^{T}\int_{D}\psi(s,y)W(dyds)\right)\] \[=\int_{0}^{T}\int_{D}\varphi(s,y)\psi(s,y)dyds. \tag{2.3}\]
If \(\varphi(t,x)\) is an \(\mathcal{F}_{t}\)-adapted process then the stochastic integral
\[\int_{0}^{t}\int_{D}\varphi(s,y)W(dyds)\]
is an Ito-Walsh integral [26].
The heat kernel on \(D\) with periodic boundary is defined to be
\[G(t,x)=\frac{1}{\sqrt{2\pi}}+\sum_{k=1}^{\infty}\sqrt{\frac{2}{\pi}}e^{-|k|^{2}t} \cos(kx). \tag{2.4}\]
For any \(\varphi\in L^{2}(D)\), \(h(t,x)=\int_{D}G(t,x-y)\varphi(y)dy\) solves the linear heat equation \(\frac{\partial h}{\partial t}=\Delta h\) with initial data \(h(0,x)=\varphi(x)\).
**Lemma 2.1**.: _The heat kernel has the following properties._
1. _The heat kernel is nonnegative:_ \(G(t,x)\geq 0\) _for all_ \(t>0\)_,_ \(x\in D\)_._
2. \(|G(t,\cdot)|_{L^{1}}=\sqrt{2\pi}\)_._
3. _There exists_ \(C>0\) _such that for any_ \(t>0\)_,_ \[|G(t,\cdot)|_{L^{\infty}}\leq Ct^{-\frac{1}{2}},\] (2.5)
Proof.: The positivity of the heat kernel is a consequence of the comparison principle for linear heat equations. Specifically, let \(\varphi:D\to\mathbb{R}\) be any nonnegative periodic function. \(h(t,x)=\int_{D}G(t,x-y)\varphi(y)dy\) solves the heat equation and therefore satisfies the comparison principle. Therefore \(h(t,x)\geq 0\) for all \(t>0\) and \(x\in D\) because \(\varphi(x)\geq 0\) for all \(x\in D\). This is true for any nonnegative \(\varphi\), implying that \(G(t,x)\geq 0\).
The \(L^{1}\) norm claim can be calculated exactly because \(G(t,x)\) is nonnegative and \(\int_{-\pi}^{\pi}\frac{1}{\sqrt{2\pi}}\,dx=\sqrt{2\pi}\) and \(\int_{-\pi}^{\pi}\cos(kx)dx=0\) for \(k\geq 1\).
For the \(L^{\infty}\) norm we notice that for any \(t>0\) and \(x\in D\)
\[|G(t,x)|\leq G(t,0)\leq\sqrt{\frac{2}{\pi}}+\frac{1}{\sqrt{\pi}} \sum_{k=1}^{\infty}e^{-|k|^{2}t}\] \[\leq\sqrt{\frac{2}{\pi}}+\sqrt{\frac{1}{\pi}}\int_{0}^{\infty}e^{ -|x|^{2}t}dx\] \[\leq\sqrt{\frac{2}{\pi}}+\frac{1}{2}t^{-\frac{1}{2}}\] \[\leq Ct^{-\frac{1}{2}}. \tag{2.6}\]
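The second-to-last inequality uses the value of the Gaussian integral, recorded here as the omitted step:

\[\int_{0}^{\infty}e^{-x^{2}t}\,dx=\frac{1}{2}\sqrt{\frac{\pi}{t}},\qquad\text{so}\qquad\sqrt{\frac{1}{\pi}}\int_{0}^{\infty}e^{-x^{2}t}\,dx=\frac{1}{2}t^{-\frac{1}{2}}.\]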
Throughout the paper we use the notation \(a\wedge b=\min\{a,b\}\) and \(C\) denotes an arbitrary constant whose value may change from line to line.
Comparison to positive solutions
We follow the arguments of [20] to construct nonnegative stochastic processes that dominate \(u(t,x)\). Specifically, let \(f(u)=u^{-\alpha}\) for some \(\alpha>3\).
Let \(v(t,x)\) be the solution to
\[\frac{\partial v}{\partial t}(t,x)=\Delta v(t,x)+f(v(t,x))+\sigma(v(t,x))\dot{ W}(t,x) \tag{3.1}\]
with initial data
\[v(0,x)=\max\{u(0,x),1\} \tag{3.2}\]
and let \(v_{-}(t,x)\) be the solution to
\[\frac{\partial v_{-}}{\partial t}(t,x)=\Delta v_{-}(t,x)+f(v_{-}(t,x))+\sigma (-v_{-}(t,x))\dot{W}(t,x) \tag{3.3}\]
with initial data
\[v_{-}(0,x)=\max\{-u(0,x),1\}. \tag{3.4}\]
\(v(t,x)\) and \(v_{-}(t,x)\) have the same properties. For this reason, we only prove results for \(v(t,x)\) because the proofs for \(v_{-}(t,x)\) are identical.
We now recall the standard arguments for the construction of the unique _local mild solution_ to (3.1). For any \(\varepsilon>0\) define
\[f_{\varepsilon}(u)=(\max\{\varepsilon,u\})^{-\alpha}. \tag{3.5}\]
Notice that for any \(\varepsilon>0\), \(f_{\varepsilon}\) is globally Lipschitz continuous. For any \(n>0\) define
\[\sigma_{n}(u)=\begin{cases}\sigma(-n)&\text{ if }u<-n\\ \sigma(u)&\text{ if }u\in[-n,n]\\ \sigma(n)&\text{ if }u>n\end{cases}. \tag{3.6}\]
For each \(n>0\), \(\sigma_{n}\) is globally Lipschitz continuous. Therefore by standard arguments [3, 6, 26] for any \(\varepsilon>0\) and \(n>0\) there exists a unique global mild solution \(v_{\varepsilon,n}\) solving
\[v_{\varepsilon,n}(t,x)= \int_{D}G(t,x-y)v(0,y)dy+\int_{0}^{t}\int_{D}G(t-s,x-y)f_{ \varepsilon}(v_{\varepsilon,n}(s,y))dyds\] \[+\int_{0}^{t}\int_{D}G(t-s,x-y)\sigma_{n}(v_{\varepsilon,n}(s,y) )W(dyds) \tag{3.7}\]
where \(G(t,x)\) is the heat kernel defined in (2.4).
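In code form, the localization amounts to simple clipping; a minimal sketch, with the exponent \(\alpha>3\) left as a parameter and `sigma` standing for any locally Lipschitz coefficient:

```python
import numpy as np

def f_eps(u, eps, alpha=3.5):
    """f_eps(u) = max(eps, u)**(-alpha); globally Lipschitz for eps > 0."""
    return np.maximum(eps, u) ** (-alpha)

def sigma_n(u, n, sigma):
    """sigma truncated outside [-n, n]; globally Lipschitz for each n."""
    return sigma(np.clip(u, -n, n))
```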
For any \(0<\varepsilon<n\) define the stopping times
\[\tilde{\tau}_{\varepsilon,n}:=\inf\left\{t>0:\inf_{x\in D}v_{\varepsilon,n}(t,x)<\varepsilon\text{ or }\sup_{x\in D}v_{\varepsilon,n}(t,x)>n\right\} \tag{3.8}\]
For any \(0<\varepsilon_{2}<\varepsilon_{1}<n_{1}<n_{2}\), the functions \(f_{\varepsilon_{1}}(u)=f_{\varepsilon_{2}}(u)\) and \(\sigma_{n_{1}}(u)=\sigma_{n_{2}}(u)\) for all \(u\in[\varepsilon_{1},n_{1}]\). Therefore, the uniqueness of solutions implies that these solutions are _consistent_ in the sense that if \(0<\varepsilon_{2}<\varepsilon_{1}<n_{1}<n_{2}\) then
\[v_{\varepsilon_{1},n_{1}}(t,x)=v_{\varepsilon_{2},n_{2}}(t,x)\text{ for all }x\in D\text{ and }t\in[0,\tilde{\tau}_{\varepsilon_{1},n_{1}}]. \tag{3.9}\]
We can, therefore, uniquely define the unique local mild solution by
\[v(t,x):=v_{\varepsilon,n}(t,x)\text{ for all }x\in D\text{ and }t\in[0,\tilde{\tau}_{ \varepsilon,n}] \tag{3.10}\]
and the local mild solution is well defined for all \(t\in[0,\sup_{0<\varepsilon<n}\tilde{\tau}_{\varepsilon,n}]\) and solves the integral equation
\[v(t,x)= \int_{D}G(t,x-y)v(0,y)dy+\int_{0}^{t}\int_{D}G(t-s,x-y)f(v(s,y))dyds\] \[+\int_{0}^{t}\int_{D}G(t-s,x-y)\sigma(v(s,y))W(dyds). \tag{3.11}\]
The construction of \(v_{-}(t,x)\) is identical so we do not repeat the proof.
Define the infimum stopping times for \(\varepsilon\in(0,1)\)
\[\tau_{\varepsilon}^{\inf} :=\inf\left\{t>0:\inf_{x\in D}v(t,x)<\varepsilon\right\} \tag{3.12}\] \[\tau_{\varepsilon,-}^{\inf} :=\inf\left\{t>0:\inf_{x\in D}v_{-}(t,x)<\varepsilon\right\} \tag{3.13}\]
and the \(L^{\infty}\) stopping times for \(n>1\)
\[\tau_{n}^{\infty} :=\inf\left\{t>0:\sup_{x\in D}v(t,x)>n\right\} \tag{3.14}\] \[\tau_{\infty}^{\infty} :=\sup_{n>0}\tau_{n}^{\infty}. \tag{3.15}\]
\[\tau_{n,-}^{\infty} :=\inf\left\{t>0:\sup_{x\in D}v_{-}(t,x)>n\right\} \tag{3.16}\] \[\tau_{\infty,-}^{\infty} :=\sup_{n>0}\tau_{n,-}^{\infty}. \tag{3.17}\]
The comparison principle of [14, Theorem 2.5] guarantees that the following holds.
**Proposition 3.1**.: _With probability one_
\[-v_{-}(t,x)\leq u(t,x)\leq v(t,x) \tag{3.18}\]
_for all \(t\in[0,\tau_{0}^{\inf}\wedge\tau_{0,-}^{\inf}\wedge\tau_{\infty}^{\infty}\wedge \tau_{\infty,-}^{\infty}]\) and for all \(x\in[-\pi,\pi]\)._
Proof.: The comparison principle of [14] is stated for heat equations with globally Lipschitz continuous \(f(v)\) and \(\sigma(v)\). But \(f(v)\) and \(\sigma(v)\) are both Lipschitz continuous for \(v\in[\varepsilon,n]\) for any \(0<\varepsilon<n<\infty\). Therefore, with probability one,
\[-v_{-}(t,x)\leq u(t,x)\leq v(t,x) \tag{3.19}\]
for all \(t\in[0,\tau_{\varepsilon}^{\inf}\wedge\tau_{\varepsilon,-}^{\inf}\wedge\tau_ {n}^{\infty}\wedge\tau_{n,-}^{\infty}]\). Taking the limit as \(\varepsilon\to 0\) and \(n\to\infty\) proves the result.
Corollary 1.1 of [20] proves that the \(f(u)=u^{-\alpha}\) forcing and the non-negative initial data of \(v(0,x)\) prevent \(v(t,x)\) from becoming negative. We restate this result below.
**Proposition 3.2** (Corollary 1.1 of [20]).: _For any \(T>0\)_
\[\lim_{\varepsilon\to 0}\mathbb{P}\left(\inf_{t\in[0,T\wedge\tau_{\infty}^{ \infty}]}\inf_{x\in D}v(t,x)<\varepsilon\right)=0. \tag{3.20}\]
We will prove that under the assumptions of Theorem 1.1, the solutions of \(v(t,x)\) cannot explode in finite time. Because \(v_{-}(t,x)\) satisfies the same assumptions as \(v(t,x)\), \(v_{-}(t,x)\) cannot explode in finite time either.
**Theorem 3.3**.: _Let \(v(t,x)\), \(v_{-}(t,x)\) be the local mild solutions to (3.1) and (3.3). Then both \(\tau_{\infty}^{\infty}=\infty\) and \(\tau_{\infty,-}^{\infty}=\infty\) with probability one._
We will prove Theorem 3.3 in Section 6. Then the comparison principle, Proposition 3.1, guarantees that \(u(t,x)\) cannot explode in finite time. The main result of our paper, Theorem 1.1, will hold once we prove Theorem 3.3.
Proof of Theorem 1.1, assuming that Theorem 3.3 holds.: By the comparison principle, Proposition 3.1,
\[-v_{-}(t,x)\leq u(t,x)\leq v(t,x) \tag{3.21}\]
for all \(t\in[0,\tau_{0}^{\inf}\wedge\tau_{0,-}^{\inf}\wedge\tau_{\infty}^{\infty}\wedge\tau_{\infty,-}^{\infty}]\) and for all \(x\in[-\pi,\pi]\). Theorem 3.3 proves that \(\tau_{\infty}^{\infty}=\tau_{\infty,-}^{\infty}=\infty\) with probability one. Proposition 3.2 proves that
\[\tau_{0}^{\inf}\geq T\wedge\tau_{\infty}^{\infty} \tag{3.22}\]
for any \(T>0\). This is true for arbitrary \(T>0\) and therefore \(\tau_{0}^{\inf}=\tau_{0,-}^{\inf}=\infty\).
Therefore \(u(t,x)\) can never explode.
The rest of the paper is devoted to proving Theorem 3.3.
The \(L^{1}\) norm of \(v(t,x)\)
The first step in proving that the solutions \(v(t,x)\) do not explode is to prove that the \(L^{1}\) norms of the solutions do not explode.
Let \(v(t,x)\) be the nonnegative local mild solution to (3.1). Define for \(t\in[0,\tau_{\infty}^{\infty}]\)
\[|v(t)|_{L^{1}}:=\int_{D}v(t,x)dx. \tag{4.1}\]
Define the \(L^{1}\) stopping times for \(M>0\)
\[\tau_{M}^{1}:=\inf\{t\in[0,\tau_{\infty}^{\infty}]:|v(t)|_{L^{1}}>M\}. \tag{4.2}\]
**Lemma 4.1**.: _For any \(T>0\), \(\varepsilon>0\) and \(M>0\),_
\[\mathbb{P}\left(\sup_{t\in[0,T\wedge\tau_{\infty}^{\infty}\wedge\tau_{\varepsilon}^{\inf}]}|v(t)|_{L^{1}}>M\right)\leq\frac{|v(0)|_{L^{1}}+2\pi T\varepsilon^{-\alpha}}{M}, \tag{4.3}\]
_In particular,_
\[\mathbb{P}\left(\sup_{t\in[0,T\wedge\tau_{\infty}^{\infty}\wedge\tau_{ \varepsilon}^{\inf}]}|v(t)|_{L^{1}}<\infty\right)=1. \tag{4.4}\]
_Furthermore, for any \(M>0\) and \(\varepsilon>0\), the quadratic variation of \(|v(t)|_{L^{1}}\) satisfies_
\[\mathbb{E}\int_{0}^{\tau_{M}^{1}\wedge\tau_{\varepsilon}^{\inf}}\int_{D}|\sigma(v(s,y))|^{2}dyds\leq M^{2}. \tag{4.5}\]
Proof.: Let \(n>0\) be big and \(\varepsilon>0\) be small enough so that \(\varepsilon<v(0,x)<n\) for all \(x\in D\). Let
\[I_{n,\varepsilon}(t):=\int_{D}v(t\wedge\tau_{n}^{\infty}\wedge\tau_{ \varepsilon}^{\inf},x)dx. \tag{4.6}\]
The \(\tau_{\varepsilon}^{\inf}\) stopping time guarantees that \(v(t\wedge\tau_{n}^{\infty}\wedge\tau_{\varepsilon}^{\inf},x)\geq\varepsilon\) so that \(I_{n,\varepsilon}\) is the \(L^{1}\) norm \(|v(t\wedge\tau_{n}^{\infty}\wedge\tau_{\varepsilon}^{\inf})|_{L^{1}}\). Integrating the mild solution (3.11) and using the fact that \(\int_{D}G(t,x-y)dx=1\),
\[I_{n,\varepsilon}(t)= \int_{D}v(0,y)dy+\int_{0}^{t\wedge\tau_{n}^{\infty}\wedge\tau_{ \varepsilon}^{\inf}}\int_{D}f(v(s,x))dxds\] \[+\int_{0}^{t\wedge\tau_{n}^{\infty}\wedge\tau_{\varepsilon}^{\inf }}\int_{D}\sigma(v(s,y))W(dyds). \tag{4.7}\]
\(I_{n,\varepsilon}(t)\) is a nonnegative submartingale because \(f(v)>0\) and because the stochastic integral in (4.7) is a martingale. Therefore, for any \(M>0\) and \(T>0\), by Doob's inequality
\[\mathbb{P}\left(\sup_{t\in[0,T]}I_{n,\varepsilon}(t)>M\right)\leq\frac{\mathbb{ E}I_{n,\varepsilon}(T)}{M}\leq\frac{|v(0)|_{L^{1}}+2\pi T\varepsilon^{-\alpha}}{M} \tag{4.8}\]
because \(f(v)\leq\varepsilon^{-\alpha}\) when \(v>\varepsilon\), the length of \(D=[-\pi,\pi]\) is \(2\pi\), and because the expectation of the stochastic integral in (4.7) is zero. This bound does not depend on \(n\). Therefore,
\[\mathbb{P}\left(\sup_{t\in[0,T\wedge\tau_{\varepsilon}^{\inf}\wedge\tau_{ \infty}^{\infty}]}\int_{D}v(t,x)dx>M\right)\leq\frac{|v(0)|_{L^{1}}+2\pi T \varepsilon^{-\alpha}}{M}. \tag{4.9}\]
Now take \(M\uparrow\infty\) to see that
\[\mathbb{P}\left(\sup_{t\in[0,T\wedge\tau_{\varepsilon}^{\inf}\wedge\tau_{\infty}^{\infty}]}\int_{D}v(t,x)dx<\infty\right)=1. \tag{4.10}\]
Now we apply Itô's formula to (4.7). For any \(M>0\), \(n>0\), \(\varepsilon>0\),
\[\mathbb{E}(I_{n,\varepsilon}(t\wedge\tau_{M}^{1}))^{2}\] \[=\mathbb{E}(I_{n,\varepsilon}(0))^{2}+2\mathbb{E}\int_{0}^{t \wedge\tau_{n}^{\infty}\wedge\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}}\int _{D}f(v(s,y))I_{n,\varepsilon}(s)dyds\] \[\qquad+\mathbb{E}\int_{0}^{t\wedge\tau_{n}^{\infty}\wedge\tau_{ \varepsilon}^{\inf}\wedge\tau_{M}^{1}}\int_{D}|\sigma(v(s,y))|^{2}dyds. \tag{4.11}\]
Each term on the right-hand side is nonnegative and \(\mathbb{E}(I_{n,\varepsilon}(t\wedge\tau_{M}^{1}))^{2}\leq M^{2}\) by the definition of \(\tau_{M}^{1}\). Therefore,
\[\mathbb{E}\int_{0}^{t\wedge\tau_{n}^{\infty}\wedge\tau_{\varepsilon}^{\inf} \wedge\tau_{M}^{1}}\int_{D}|\sigma(v(s,y))|^{2}dyds\leq M^{2}. \tag{4.12}\]
This bound does not depend on \(n\), \(\varepsilon\), or \(t\).
## 5 Moment estimates on the stochastic convolution
In this section we prove the moment estimate Theorem 1.2.
Proof of Theorem 1.2.: Let \(p>6\) and assume that \(\varphi(t,x)\) is adapted and
\[\mathbb{E}\int_{0}^{T}\int_{D}|\varphi(t,x)|^{p}dxdt<+\infty. \tag{5.1}\]
We use Da Prato and Zabczyk's factorization method [5, Theorem 5.10]. Given \(p>6\) let \(\beta\in\left(\frac{3}{2p},\frac{1}{4}\right)\) and define
\[Z_{\beta}^{\varphi}(t,x)=\int_{0}^{t}\int_{D}(t-s)^{-\beta}G(t-s,x,y)\varphi(s,y)W(dyds). \tag{5.2}\]
Then
\[Z^{\varphi}(t,x)=\frac{\sin(\pi\beta)}{\pi}\int_{0}^{t}\int_{D}(t-s)^{\beta-1} G(t-s,x,y)Z_{\beta}^{\varphi}(s,y)dyds. \tag{5.3}\]
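The computational step behind the factorization identity (5.3) is the Beta-function identity, recorded here for completeness: for \(\beta\in(0,1)\) and \(s<t\),

\[\int_{s}^{t}(t-r)^{\beta-1}(r-s)^{-\beta}\,dr=B(1-\beta,\beta)=\Gamma(\beta)\Gamma(1-\beta)=\frac{\pi}{\sin(\pi\beta)},\]

which, combined with the semigroup property of \(G\) and the stochastic Fubini theorem, recovers \(Z^{\varphi}\) from \(Z_{\beta}^{\varphi}\).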
We can get supremum bounds on \(Z^{\varphi}(t,x)\) by Hölder's inequality. This method was used, for example, by Chen and Huang [4, Proof of Theorem 1.6].
\[\sup_{t\in[0,T]}\sup_{x\in D}|Z^{\varphi}(t,x)|\leq C\left(\int_{0}^{t}\int_{D}(t-s)^{\frac{(\beta-1)p}{p-1}}G^{\frac{p}{p-1}}(t-s,x-y)dyds\right)^{\frac{p-1}{p}}\] \[\times\left(\int_{0}^{T}\int_{D}|Z_{\beta}^{\varphi}(t,x)|^{p}dxdt\right)^{\frac{1}{p}}. \tag{5.4}\]
The integral
\[\int_{D}G^{\frac{p}{p-1}}(t-s,x-y)dy\leq|G(t-s)|_{L^{1}}|G(t-s)|_{L^{\infty}}^{ \frac{p}{p-1}-1}\leq C(t-s)^{-\frac{1}{2(p-1)}}\]
because of Lemma 2.1. Because we chose \(p\beta>\frac{3}{2}\), it follows that \(\frac{(\beta-1)p-\frac{1}{2}}{p-1}>-1\) and therefore
\[\mathbb{E}\sup_{t\in[0,T]}\sup_{x\in D}|Z^{\varphi}(t,x)|^{p}\] \[\leq C\left(\int_{0}^{t}(t-s)^{\frac{(\beta-1)p-\frac{1}{2}}{p-1}}ds\right)^{p-1}\mathbb{E}\int_{0}^{T}\int_{D}|Z_{\beta}^{\varphi}(t,x)|^{p}dxdt\] \[\leq CT^{\beta p-\frac{3}{2}}\mathbb{E}\int_{0}^{T}\int_{D}|Z_{\beta}^{\varphi}(t,x)|^{p}dxdt \tag{5.5}\]
It remains to estimate \(\mathbb{E}\int_{0}^{T}\int_{D}|Z^{\varphi}_{\beta}(t,x)|^{p}dxdt\). By the BDG inequality,
\[\mathbb{E}|Z^{\varphi}_{\beta}(t,x)|^{p}\leq C_{p}\mathbb{E}\left(\int_{0}^{t} \int_{D}G^{2}(t-s,x-y)(t-s)^{-2\beta}|\varphi(s,y)|^{2}dyds\right)^{\frac{p}{2}}. \tag{5.6}\]
By Young's inequality for convolutions,
\[\int_{0}^{T}\int_{D}\mathbb{E}|Z^{\varphi}_{\beta}(t,x)|^{p}dxdt\] \[\leq C_{p}\left(\int_{0}^{T}\int_{D}G^{2}(s,y)s^{-2\beta}dyds \right)^{\frac{p}{2}}\left(\int_{0}^{T}\int_{D}\mathbb{E}(|\varphi(s,y)|^{p}) dyds\right)\] \[\leq C_{p}\left(\int_{0}^{T}s^{-\frac{1}{2}-2\beta}ds\right)^{ \frac{p}{2}}\left(\int_{0}^{T}\int_{D}\mathbb{E}(|\varphi(s,y)|^{p})dyds\right)\] \[\leq C_{p}T^{\frac{p}{4}-p\beta}\mathbb{E}\int_{0}^{T}\int_{D}| \varphi(s,y)|^{p}dyds. \tag{5.7}\]
In the second-to-last line we used Lemma 2.1 to estimate that
\[\int_{D}G^{2}(s,y)dy\leq|G(s,\cdot)|_{L^{\infty}}|G(s,\cdot)|_{L^{1}}\leq Cs^{ -\frac{1}{2}}.\]
Combining this with (5.5) we conclude that
\[\mathbb{E}\sup_{t\in[0,T]}\sup_{x\in D}|Z^{\varphi}(t,x)|^{p}\leq CT^{\frac{p}{4}-\frac{3}{2}}\mathbb{E}\int_{0}^{T}\int_{D}|\varphi(s,y)|^{p}dyds. \tag{5.8}\]
## 6 Non-explosion of \(v(t,x)\)
Let \(M>0\) and \(\varepsilon>0\) be arbitrary. We will show that \(v(t,x)\) cannot explode before time \(\tau^{1}_{M}\wedge\tau^{\inf}_{\varepsilon}\). After we prove this, we can take the limits as \(M\to\infty\) in Lemma 4.1 and \(\varepsilon\to 0\) in Proposition 3.2 to prove that explosion cannot ever occur.
Fix \(\varepsilon>0\), \(M>0\) and define a sequence of stopping times \(\rho_{n}\). These stopping times depend on the choices of \(\varepsilon\) and \(M\).
\[\rho_{0}=\inf\{t\in[0,\tau^{\inf}_{\varepsilon}\wedge\tau^{1}_{M}]:|v(t)|_{L^ {\infty}}=2^{m}\text{ for some }m\in\{1,2,3,...\}\}. \tag{6.1}\]
Then if \(|v(\rho_{n})|_{L^{\infty}}=2^{m}\) for \(m\geq 2\) we define
\[\rho_{n+1}=\inf\left\{t\in[\rho_{n},\tau^{\inf}_{\varepsilon}\wedge\tau^{1}_{ M}]:\begin{array}{l}|v(t)|_{L^{\infty}}\geq 2^{m+1}\\ \text{or }|v(t)|_{L^{\infty}}\leq 2^{m-1}\end{array}\right\}, \tag{6.2}\]
and if \(|v(\rho_{n})|_{L^{\infty}}=2\) then
\[\rho_{n+1}=\inf\left\{t\in[\rho_{n},\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}]: |v(t)|_{L^{\infty}}\geq 2^{2}\right\}. \tag{6.3}\]
These times keep track of how long it takes for the \(L^{\infty}\) norm of the process to either double or halve. We use the convention that \(\rho_{n+1}=\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}\) if the process stops doubling or halving after \(\rho_{n}\). If the process were to explode before time \(\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}\), then the \(L^{\infty}\) norm would need to double an infinite number of times before \(\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}\). We prove that the process does not explode by proving that there can only be a finite number of times that the \(L^{\infty}\) norm doubles when \(m\) is big.
Next we recall a result that proves that the \(L^{\infty}\) norm falls quickly if the \(L^{1}\) norm is bounded.
**Lemma 6.1**.: _There exists \(C>0\) such that if \(v\in L^{1}(D)\) then for any \(t\in[0,1]\),_
\[\int_{D}G(t,x-y)v(y)dy\leq Ct^{-\frac{1}{2}}|v|_{L^{1}}. \tag{6.4}\]
Proof.: We proved in Lemma 2.1 that \(|G(t,\cdot)|_{L^{\infty}}\leq Ct^{-\frac{1}{2}}\). Therefore, for any \(v\in L^{1}\), (6.4) holds.
**Lemma 6.2**.: _For any \(p>6\) there exists a nonrandom constant \(C_{p}>0\) and for any \(\varepsilon>0\) and \(M>0\) there exists a nonrandom constant \(m_{0}=m_{0}(\varepsilon,M)>0\) such that for any \(n\in\mathbb{N}\), and \(m>m_{0}\)_
\[\mathbb{P}\left(|v(\rho_{n+1})|_{L^{\infty}}=2^{m+1}\Big{|}|v( \rho_{n})|_{L^{\infty}}=2^{m}\right)\] \[\leq C_{p}M^{\frac{p}{2}-3}\mathbb{E}\left(\int_{\rho_{n}}^{\rho _{n+1}}\int_{D}(\sigma(v(s,y)))^{2}dyds\Bigg{|}|v(\rho_{n})|_{L^{\infty}}=2^{ m}\right) \tag{6.5}\]
_Importantly, the constant \(C_{p}\) is independent of \(m>m_{0}\)._
Proof.: Let \(M>0\) and assume that \(2^{m}=|v(\rho_{n})|_{L^{\infty}}\). By the semigroup property of the heat semigroup, the mild solution for \(t\in[0,\rho_{n+1}-\rho_{n}]\)
satisfies
\[v((t+\rho_{n}),x)\] \[=\int_{D}G(t,x-y)v(\rho_{n},y)dy\] \[\qquad+\int_{0}^{t}\int_{D}G(t-s,x-y)f(v(s+\rho_{n},y))dyds\] \[\qquad+\int_{0}^{t}\int_{D}G(t-s,x-y)\sigma(v(s+\rho_{n},y)) \mathbbm{1}_{\{s\leq\rho_{n+1}-\rho_{n}\}}W(dy(ds+\rho_{n}))\] \[=:S_{n}(t,x)+K_{n}(t,x)+Z_{n}(t,x) \tag{6.6}\]
By Lemma 6.1 and the fact that \(|v(\rho_{n})|_{L^{1}}\leq M\) (remember that by definition \(\rho_{n}\leq\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}\)), it follows that for \(t\in(0,1)\),
\[|S_{n}(t)|_{L^{\infty}}\leq CMt^{-\frac{1}{2}}. \tag{6.7}\]
Let \(T_{m}=\frac{C^{2}M^{2}}{2^{2m-6}}\) so that \(S_{n}(T_{m})\leq 2^{m-3}\). We can bound
\[\sup_{t\leq T_{m}}\sup_{x\in D}|K_{n}(t,x)|\leq 2\pi T_{m}\varepsilon^{-\alpha} \leq\frac{C^{2}M^{2}}{\varepsilon^{\alpha}2^{2m-6}}, \tag{6.8}\]
because \(f(v(s,y))\leq\varepsilon^{-\alpha}\) for all \(s\leq\rho_{n+1}\leq\tau_{\varepsilon}^{\inf}\). Choose \(m_{0}=m_{0}(\varepsilon,M)\) large enough so that for \(m>m_{0}\),
\[\sup_{t\leq T_{m}}\sup_{x\in D}|K_{n}(t,x)|\leq\frac{C^{2}M^{2}}{\varepsilon^{\alpha}2^{2m-6}}<2^{m-3}. \tag{6.9}\]
Theorem 1.2 with
\[\varphi(t,x):=\sigma(v(\rho_{n}+t,x))\mathbbm{1}_{\{t\leq\rho_{n+1}-\rho_{n}\}}\]
and the Chebyshev inequality guarantee that
\[\mathbb{P}\left(\sup_{t\leq T_{m}}\sup_{x\in D}|Z_{n}((t+\rho_{n }),x)|>2^{m-2}\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\\ \leq 2^{-p(m-2)}\mathbb{E}\left(\sup_{t\leq T_{m}}\sup_{x\in D}|Z _{n}((t+\rho_{n}),x)|^{p}\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\\ \leq C2^{-p(m-2)}T_{m}^{\left(\frac{p}{4}-\frac{3}{2}\right)}\\ \times\mathbb{E}\left(\int_{\rho_{n}}^{(\rho_{n}+T_{m})\wedge \rho_{n+1}}\int_{D}(\sigma(v(s,y)))^{p}dyds\Big{|}|v(\rho_{n})|_{L^{\infty}}= 2^{m}\right). \tag{6.10}\]
Because \(|v(s,y)|\leq 2^{m+1}\) for \(s\leq\rho_{n+1}\), our \(\sigma\) growth restriction (1.2) guarantees that \(|\sigma(v(s,y))|\leq C(1+2^{\frac{3(m+1)}{2}})\leq C2^{\frac{3m}{2}}\). We bound
\[|\sigma(v(s,y))|^{p}\leq|\sigma(v(s,y))|^{p-2}|\sigma(v(s,y))|^{2}\leq C2^{ \left(\frac{3pm}{2}-3m\right)}|\sigma(v(s,y))|^{2} \tag{6.11}\]
and therefore
\[\mathbb{P}\left(\sup_{t\leq T_{m}}\sup_{x\in D}|Z_{n}((t+\rho_{n} ),x)|>2^{m-2}\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\] \[\leq C2^{-p(m-2)}2^{\left(\frac{3pm}{2}-3m\right)}T_{m}^{\left( \frac{p}{4}-\frac{3}{2}\right)}\] \[\qquad\times\mathbb{E}\left(\int_{\rho_{n}}^{(\rho_{n}+T_{m}) \wedge\rho_{n+1}}\int_{D}(\sigma(v(s,y)))^{2}dyds\Big{|}|v(\rho_{n})|_{L^{ \infty}}=2^{m}\right)\] \[\leq C2^{\left(\frac{pm}{2}-3m\right)}T_{m}^{\left(\frac{p}{4}- \frac{3}{2}\right)}\] \[\qquad\times\mathbb{E}\left(\int_{\rho_{n}}^{(\rho_{n}+T_{m}) \wedge\rho_{n+1}}\int_{D}(\sigma(v(s,y)))^{2}dyds\Big{|}|v(\rho_{n})|_{L^{ \infty}}=2^{m}\right) \tag{6.12}\]
\(T_{m}\) is defined in such a way that \(T_{m}^{\frac{1}{2}}2^{m}\leq CM\). This means that
\[2^{\left(\frac{pm}{2}-3m\right)}T_{m}^{\left(\frac{p}{4}-\frac{3}{2}\right)} \leq CM^{\frac{p}{2}-3}.\]
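Writing out the substitution behind this inequality (constants absorbed into \(C\)): since \(T_{m}^{\frac{1}{2}}\leq CM2^{-m}\) and \(\frac{p}{2}-3>0\) for \(p>6\),

\[T_{m}^{\frac{p}{4}-\frac{3}{2}}=\big(T_{m}^{\frac{1}{2}}\big)^{\frac{p}{2}-3}\leq C\big(M2^{-m}\big)^{\frac{p}{2}-3}=CM^{\frac{p}{2}-3}\,2^{-\left(\frac{pm}{2}-3m\right)},\]

so that multiplying by \(2^{\frac{pm}{2}-3m}\) leaves exactly \(CM^{\frac{p}{2}-3}\).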
We also can bound \((\rho_{n}+T_{m})\wedge\rho_{n+1}\leq\rho_{n+1}\). Therefore,
\[\mathbb{P}\left(\sup_{t\leq T_{m}}\sup_{x\in D}|Z_{n}((t+\rho_{n }),x)|>2^{m-2}\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\] \[\leq CM^{\frac{p}{2}-3}\mathbb{E}\left(\int_{\rho_{n}}^{\rho_{n+1 }}\int_{D}(\sigma(v(s,y)))^{2}dyds\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m} \right). \tag{6.13}\]
Finally we prove that if the event
\[\left\{\sup_{t\leq T_{m}}\sup_{x\in D}|Z_{n}((t+\rho_{n}),x)|\leq 2^{m-2}\right\} \tag{6.14}\]
occurs, then \(|v(\rho_{n+1})|_{L^{\infty}}\) falls to \(2^{m-1}\) before it can reach \(2^{m+1}\). Specifically, because (6.7)-(6.9) prove that \(|S_{n}(T_{m})|_{L^{\infty}}+|K_{n}(T_{m})|_{L^{\infty}}\leq 2^{m-2}\) it follows that \(|v(\rho_{n}+T_{m})|_{L^{\infty}}\leq 2^{m-1}\) on this event (6.14). On the other hand, \(|S_{n}(t)|_{L^{\infty}}\leq 2^{m}\) for all \(t\in[0,T_{m}]\) and it follows that on this event (6.14), \(\sup_{t\leq T_{m}}|v(\rho_{n}+t)|\leq 2^{m}+2^{m-3}+2^{m-2}<2^{m+1}\). This implies that if the event (6.14) occurs, then \(|v(\rho_{n}+t)|_{L^{\infty}}\) falls to the level \(2^{m-1}\) before it can rise to the level \(2^{m+1}\). Therefore, for \(m>m_{0}\)

\[\mathbb{P}\left(|v(\rho_{n+1})|_{L^{\infty}}=2^{m+1}\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\] \[\leq\mathbb{P}\left(\sup_{t\leq T_{m}}\sup_{x\in D}|Z_{n}((t+\rho_{n})\wedge\tau_{M}^{1},x)|>2^{m-2}\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\] \[\leq C_{p}M^{\frac{p}{2}-3}\mathbb{E}\left(\int_{\rho_{n}}^{\rho_{n+1}}\int_{D}(\sigma(v(s,y)))^{2}dyds\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right). \tag{6.15}\]
Proof of Theorem 3.3.: Fix \(M>0,\varepsilon>0\) and let \(\rho_{n}\) be defined from (6.2). Let \(m_{0}\) be from (6.9). We add up the conditional probabilities from Lemma 6.2 to see that for any \(n\in\mathbb{N}\),
\[\mathbb{P}\left(|v(\rho_{n+1})|_{L^{\infty}}=2|v(\rho_{n})|_{L^{\infty}}\text{ and }|v(\rho_{n})|_{L^{\infty}}>2^{m_{0}}\right)\] \[\leq\sum_{m=m_{0}}^{\infty}CM^{\frac{p}{2}-3}\mathbb{E}\left(\int_{\rho_{n}}^{\rho_{n+1}}\int_{D}(\sigma(v(s,y)))^{2}dyds\Big{|}|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\mathbb{P}\left(|v(\rho_{n})|_{L^{\infty}}=2^{m}\right)\] \[\leq CM^{\frac{p}{2}-3}\mathbb{E}\left(\int_{\rho_{n}}^{\rho_{n+1}}\int_{D}(\sigma(v(s,y)))^{2}dyds\right). \tag{6.16}\]
Now add these probabilities with respect to \(n\)
\[\sum_{n=1}^{\infty}\mathbb{P}\left(|v(\rho_{n+1})|_{L^{\infty}}=2|v(\rho_{n})|_{L^{\infty}}\text{ and }|v(\rho_{n})|_{L^{\infty}}>2^{m_{0}}\right)\] \[\leq CM^{\frac{p}{2}-3}\sum_{n=1}^{\infty}\mathbb{E}\int_{\rho_{n}}^{\rho_{n+1}}\int_{D}(\sigma(v(s,y)))^{2}dyds\] \[\leq CM^{\frac{p}{2}-3}\mathbb{E}\int_{0}^{\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}}\int_{D}(\sigma(v(s,y)))^{2}dyds. \tag{6.17}\]
The last line is a consequence of the fact that all of the \(\rho_{n}\) are defined to be smaller than \(\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}\). The right-hand side of (6.17) is proportional to the expectation of the quadratic variation from (4.5), which is finite. Therefore,
\[\sum_{n=1}^{\infty}\mathbb{P}\left(|v(\rho_{n+1})|_{L^{\infty}}=2|v(\rho_{n})| _{L^{\infty}}\text{ and }|v(\rho_{n})|_{L^{\infty}}>2^{m_{0}}\right)<+\infty. \tag{6.18}\]
The Borel-Cantelli Lemma guarantees that with probability one, the events \(\{|v(\rho_{n+1})|_{L^{\infty}}=2|v(\rho_{n})|_{L^{\infty}}\) and \(|v(\rho_{n})|_{L^{\infty}}>2^{m_{0}}\}\) only occur a finite number of times. This means that the \(L^{\infty}\) norm cannot explode before time \(\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1}\), because the \(L^{\infty}\) norm eventually stops doubling once it exceeds \(2^{m_{0}}\). This proves that for any \(\varepsilon>0\) and \(M<\infty\),
\[\mathbb{P}\left((\tau_{\varepsilon}^{\inf}\wedge\tau_{M}^{1})<\tau_{\infty}^{ \infty}\right)=1. \tag{6.19}\]
Next, we argue via Proposition 3.2 and Lemma 4.1 that for arbitrary \(T>0\) and small enough \(\varepsilon>0\) and large enough \(M>0\), the stopping times \(\tau_{\varepsilon}^{\inf}\) and \(\tau_{M}^{1}\) are both larger than \(T\) with high probability. Let \(\eta\in(0,1)\) and \(T>0\) be arbitrary. Proposition 3.2 implies that there exists \(\varepsilon>0\) small enough so that
\[\mathbb{P}\left(\tau_{\varepsilon}^{\inf}<(T\wedge\tau_{\infty}^{\infty}) \right)=\mathbb{P}\left(\inf_{t\in[0,T\wedge\tau_{\infty}^{\infty})}\inf_{x \in D}v(t,x)\leq\varepsilon\right)<\frac{\eta}{2}. \tag{6.20}\]
With this choice of \(\varepsilon>0\), we estimate the probability that the \(L^{1}\) norm of \(v(t,x)\) is large. For \(M>0\) to be chosen later,
\[\mathbb{P}\left(\tau_{M}^{1}<(T\wedge\tau_{\infty}^{\infty})\text { or }\tau_{\varepsilon}^{\inf}<(T\wedge\tau_{\infty}^{\infty})\right)\] \[=\mathbb{P}\left(\sup_{t\in[0,(T\wedge\tau_{\infty}^{\infty})]} \int_{D}v(t,x)dx>M\text{ or }\tau_{\varepsilon}^{\inf}<(T\wedge\tau_{\infty}^{ \infty})\right)\] \[\leq\mathbb{P}\left(\tau_{\varepsilon}^{\inf}<(T\wedge\tau_{ \infty}^{\infty})\right)\] \[\qquad+\mathbb{P}\left(\sup_{t\in[0,T\wedge\tau_{\infty}^{\infty} ]}\int_{D}v(t,x)dx>M\text{ and }\tau_{\varepsilon}^{\inf}\geq(T\wedge\tau_{\infty}^{ \infty})\right)\] \[\leq\frac{\eta}{2}+\mathbb{P}\left(\sup_{t\in[0,T\wedge\tau_{ \infty}^{\infty}\wedge\tau_{\varepsilon}^{\inf}]}\int_{D}v(t,x)dx>M\right) \tag{6.21}\]
The last line follows from (6.20) and the fact that on the event \(\{\tau_{\varepsilon}^{\inf}\geq T\wedge\tau_{\infty}^{\infty}\}\), \(T\wedge\tau_{\infty}^{\infty}=T\wedge\tau_{\infty}^{\infty}\wedge\tau_{ \varepsilon}^{\inf}\). By Lemma 4.1, we can choose \(M>0\) large enough so that
\[\mathbb{P}\left(\tau_{M}^{1}<(T\wedge\tau_{\infty}^{\infty})\text { or }\tau_{\varepsilon}^{\inf}<(T\wedge\tau_{\infty}^{\infty})\right)<\eta. \tag{6.22}\]
Therefore, with these choices of \(\varepsilon>0\) and \(M>0\), (6.20) and (6.22) imply
\[\mathbb{P}\left(\tau_{\varepsilon}^{\inf}\geq T\wedge\tau_{\infty}^{\infty} \text{ and }\tau_{M}^{1}\geq T\wedge\tau_{\infty}^{\infty}\right)\geq 1-\eta. \tag{6.23}\]
This combined with (6.19) implies that
\[\mathbb{P}(T<\tau_{\infty}^{\infty})>1-\eta. \tag{6.24}\]
The choice of \(\eta>0\) was arbitrary. Therefore,
\[\mathbb{P}(T<\tau_{\infty}^{\infty})=1. \tag{6.25}\]
This is true for arbitrary \(T>0\) and therefore,
\[\mathbb{P}\left(\tau_{\infty}^{\infty}=\infty\right)=1 \tag{6.26}\]
and \(v(t,x)\) cannot explode in finite time.
|
2309.12552 | Adaptive Model Predictive Control for Engine-Driven Ducted Fan Lift
Systems using an Associated Linear Parameter Varying Model | Ducted fan lift systems (DFLSs) powered by two-stroke aviation piston engines
present a challenging control problem due to their complex multivariable
dynamics. Current controllers for these systems typically rely on
proportional-integral algorithms combined with data tables, which rely on
accurate models and are not adaptive to handle time-varying dynamics or system
uncertainties. This paper proposes a novel adaptive model predictive control
(AMPC) strategy with an associated linear parameter varying (LPV) model for
controlling the engine-driven DFLS. This LPV model is derived from a global
network model, which is trained off-line with data obtained from a general mean
value engine model for two-stroke aviation engines. Different network models,
including multi-layer perceptron, Elman, and radial basis function (RBF), are
evaluated and compared in this study. The results demonstrate that the RBF
model exhibits higher prediction accuracy and robustness in the DFLS
application. Based on the trained RBF model, the proposed AMPC approach
constructs an associated network that directly outputs the LPV model parameters
as an adaptive, robust, and efficient prediction model. The efficiency of the
proposed approach is demonstrated through numerical simulations of a vertical
take-off thrust preparation process for the DFLS. The simulation results
indicate that the proposed AMPC method can effectively control the DFLS thrust
with a relative error below 3.5%. | Hanjie Jiang, Ye Zhou, Hann Woei Ho, Wenjie Hu | 2023-09-22T00:49:40Z | http://arxiv.org/abs/2309.12552v1 | Adaptive Model Predictive Control for Engine-Driven Ducted Fan Lift Systems using an Associated Linear Parameter Varying Model
###### Abstract
Ducted fan lift systems (DFLSs) powered by two-stroke aviation piston engines present a challenging control problem due to their complex multivariable dynamics. Current controllers for these systems typically rely on proportional-integral algorithms combined with data tables, which rely on accurate models and are not adaptive to handle time-varying dynamics or system uncertainties. This paper proposes a novel adaptive model predictive control (AMPC) strategy with an associated linear parameter varying (LPV) model for controlling the engine-driven DFLS. This LPV model is derived from a global network model, which is trained off-line with data obtained from a general mean value engine model for two-stroke aviation engines. Different network models, including multi-layer perceptron, Elman, and radial basis function (RBF), are evaluated and compared in this study. The results demonstrate that the RBF model exhibits higher prediction accuracy and robustness in the DFLS application. Based on the trained RBF model, the proposed AMPC approach constructs an associated network that directly outputs the LPV model parameters as an adaptive, robust, and efficient prediction model. The efficiency of the proposed approach is demonstrated through numerical simulations of a vertical take-off thrust preparation process for the DFLS. The simulation results indicate that the proposed AMPC method can effectively control the DFLS thrust with a relative error below 3.5%.
keywords: adaptive model predictive control, radial basis functions, linear parameter varying model, ducted fan lift system, two-stroke piston engine control +
Footnote †: journal: Elsevier
## 1 Introduction
The ducted fan lift system (DFLS) is widely used in current vertical takeoff and landing (VTOL) aircraft, many of which use fuel engines as the ducted fan drive devices[1; 2; 3; 4]. Compared to an electrically powered DFLS, a fuel-engine-powered DFLS is more likely to meet the requirements of high power and high energy density, thereby delivering superior flight performance. Two-stroke aviation piston engines, which serve as the power unit of the DFLS, have rapid, highly nonlinear dynamics with state and input constraints [5; 6]. In addition, the complex geometry of the ducted fan of the DFLS makes it difficult to analyze its aerodynamic properties [7; 8]. These characteristics make the engine-driven DFLS a multivariable system with tightly coupled nonlinear dynamics, posing modeling and control challenges.
Many current spark ignition (SI) engines employ feed-forward control based on a state observer and a proportional-integral (PI) type feedback control[9; 10]. Typically, look-up tables are used to implement the PI controller, which necessitates a laborious process of calibration and tuning. When the state of the engine changes rapidly, control accuracy tends to decrease[10]. To address these challenges, researchers have developed advanced control strategies [11; 12; 13] that can be applied to SI engines, enabling more precise and energy-efficient control. For instance, a global optimal control method based on \(H_{\infty}\) theory was proposed for systematic control of the air-fuel ratio (AFR) with high robustness and quick responses [12]. Experimental results applied to a four-cylinder multi-port injection (MPI) engine indicate that the AFR control error can be limited to within 3% across a broad spectrum of operating conditions. However, the sixth-order controller is unsuitable for real-time computation, restricting its practical applications. Another approach applied dynamical sliding mode control (SMC) with a radial basis function (RBF) neural network model to a piston engine [13]. This work demonstrated that the SMC algorithm is robust, fast, and insensitive to parameter changes and external
disturbances within the context of nonlinear system control problems. However, it's important to acknowledge that as the system state approaches the sliding mode surface, achieving precise sliding along the surface to reach the equilibrium point becomes challenging, resulting in non-convergence.
In contrast, model predictive control (MPC) is widely recognized as an advanced control method in practical control engineering[14, 15]. It can effectively address multivariable constrained optimal control problems and offers the advantages of simplicity, straightforward design, high stability, robustness, and adaptability[16, 17, 18]. MPC has proven to be effective for fuel engine systems containing multiple variables, nonlinear dynamics, and time delays[19, 20]. However, the dynamics of the DFLS can be highly nonlinear, particularly concerning the throttle position (TPS) and the injection fuel mass flow. It may even exhibit open-loop instability during the transition between operation points. Thus, the performance of a linear MPC designed for a specific operating condition will degrade near another operating point. Recent studies[21, 22, 23] have explored gain-scheduled MPC to cope with nonlinear systems when the plant models have different orders or time delays. This approach involves incorporating multiple predictive controllers into a gain-scheduled MPC for various operating points, switchable based on a predetermined scheduling policy. However, gain-scheduled MPC faces limitations in ensuring control accuracy during transitions for nonlinear systems and demands substantial computing resources [21]. In the realm of SI engine control [24, 19], nonlinear MPC has gained traction due to its potential for more precise control performance. Yet, achieving effective and stable control performance necessitates meticulous considerations, such as computational complexity, convergence challenges, and parameter selection, particularly in real-time or large-scale applications.
In piston engine control, adaptive model predictive control (AMPC) has emerged as an alternative, utilizing an online-identified linear model through successive linearization or online model estimation. This approach offers enhanced efficiency compared to nonlinear MPC[25, 13, 26, 20]. The successive linearization method employs a set of nonlinear ordinary differential and algebraic equations to build the plant model and derives the linear time-invariant (LTI) approximation at the current operating condition to update the model parameters. However, when dealing with highly nonlinear systems, successive linearization techniques typically require a large number of iterations and computational resources. Online AMPC, on the other hand, has good control accuracy and robustness, but it still comes at the cost of significantly increased control optimization time and memory requirements due to extensive online operations. During the data sample identification process, online AMPC is susceptible to model errors stemming from issues such as delays, noise, and insufficient excitation.
The objective of this paper is to develop a prediction model for AMPC that mitigates the need for complex and time-consuming online operations. Our proposed approach involves the direct construction of an associated linear parameter varying (LPV) model, derived from a nonlinear global model. To achieve this, the global model will be approximated using neural networks, chosen for their high approximation capability and successful applications in modeling for predictive engine control[26, 20, 27]. This paper investigates and compares the performance of three different network models: the multi-layer perceptron (MLP) [28], radial basis function (RBF) [29], and the Elman network [30]. Based on the experimental results, the RBF network emerges as the most suitable choice for representing the SI engine in the context of DFLS AMPC. By deriving an associated linear parameter varying (LPV) model from the global network model, we depart from the conventional practice of employing the nonlinear model network as a prediction model or identifying a linear model online. This shift reduces runtime computational demands and memory usage while simultaneously bolstering DFLS control robustness. In this paper, the network model undergoes training using the input and output data generated by a mean value engine model (MVEM)[31], and the ducted fan model of the DFLS will be derived using the method of theoretical design and evaluation[32]. This study focuses on the basic thrust control of the DFLS, while the thrust fine-tuning and direction are controlled by the exit louvers configured at the outlets of the duct[33]. Specifically, the control strategy of the DFLS optimizes and constrains the working state of the engine by establishing a desired thrust baseline engaged by the engine output power and a desired AFR. In addition, based on the multistep-ahead prediction of both the aerodynamic thrust and the AFR, the optimal control to track and maintain the desired value is obtained as the engine state changes.
This paper presents the following **main contributions**: 1) This study investigates and compares the MLP, Elman, and RBF network models, and finds that the RBF model is more accurate in its predictions and more robust in the DFLS application. 2) The study proposes an innovative AMPC method employing a novel associated LPV model directly derived from an off-line trained RBF global model, to efficiently update the prediction model state. 3) The proposed RBF model-based AMPC is applied to the nonlinear control of a DFLS, demonstrating the practicality of the method.
The remainder of the paper is structured as follows: Section 2 introduces the DFLS dynamic model consisting of an MVEM and a theoretical ducted fan model. In Section 3, three types of engine neural network models
are established, trained, and compared. Section 4 proposes the AMPC controller with an LPV model generated from the model network trained in Section 3. Section 5 concludes the paper and outlines future research directions.
## 2 The ducted fan lift system
The DFLS dynamic model comprises the SI engine and ducted fan modules. As the propulsion system, the SI engine uses the transmission mechanism to rotate the fan shaft and generate lift force. Figure 1 depicts the control test bench scheme of the DFLS demonstrator, illustrating the assembly relationship between the components.
### The engine's dynamics
For the development of engine controllers and their validation with simulations, mathematical modeling of engine dynamics is essential[34]. The MVEM is a widely used mathematical engine model that has achieved many successes in real-time simulation and control of automobile and ship engines[35; 36; 37]. The general MVEM uses empirical equations to construct engine sub-models, which reduces engine modeling time and computational cost by a significant margin. The implementation of the MVEM combines quasi-static and volumetric models, dividing the two-stroke engine into four independent volumetric control units[31]: the dynamics of intake manifold, the crankcase and cylinder module, the dynamics of fuel injection, and the dynamics of the crankshaft.
The equation for the crankshaft speed state in the general MVEM is[31]
\[\dot{\omega}=\frac{H_{u}\eta_{i}(1-k_{f})\dot{m}_{f}(t-\tau_{d})}{I\omega}- \frac{P_{f}+P_{b}}{I\omega}, \tag{1}\]
where \(\dot{\omega}\) is the angular acceleration of the crankshaft, \(H_{u}\) is the lower heating value of the fuel, \(\eta_{i}\) is the thermal efficiency, \(t\) is the time, \(\tau_{d}\) is the injection-torque time delay, and \(k_{f}\) is the proportionality constant for fuel loss resulting from short-circuiting and overflow losses during the scavenging process. \(I\) is the inertia of the engine, \(P_{f}\) is the friction loss, and \(P_{b}\) is the load power of the engine. The torque of the engine can be expressed as[31]
\[Q_{eng}=\frac{H_{u}\eta_{i}(1-k_{f})\dot{m}_{f}(t-\tau_{d})}{\omega}-\frac{P_ {f}}{\omega}. \tag{2}\]
The intake airflow is mixed with fuel during the engine's intake process, and the normalized air-fuel ratio is
\[\lambda=\frac{\dot{m}_{as}}{\dot{m}_{f}L_{th}}, \tag{3}\]
where \(L_{th}\) is the stoichiometric air-fuel ratio.
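To make the engine dynamics above concrete, the following minimal sketch integrates the crankshaft state of Eq. (1) with an explicit Euler step and evaluates Eqs. (2) and (3). All numerical values (heating value, efficiency, inertia, losses, and so on) are illustrative assumptions, not parameters taken from this paper.

```python
# Minimal sketch of the MVEM crankshaft dynamics, Eqs. (1)-(3).
# All numerical values below are illustrative assumptions.
H_u   = 44.0e6   # lower heating value of the fuel [J/kg] (assumed)
eta_i = 0.30     # thermal efficiency (assumed)
k_f   = 0.15     # scavenging fuel-loss coefficient (assumed)
I     = 0.05     # engine inertia [kg m^2] (assumed)
L_th  = 14.7     # stoichiometric air-fuel ratio (assumed)

def crankshaft_step(omega, mdot_f_delayed, P_f, P_b, dt):
    """One explicit-Euler step of the crankshaft speed state, Eq. (1)."""
    domega = (H_u * eta_i * (1.0 - k_f) * mdot_f_delayed - P_f - P_b) / (I * omega)
    return omega + dt * domega

def engine_torque(omega, mdot_f_delayed, P_f):
    """Engine torque, Eq. (2)."""
    return (H_u * eta_i * (1.0 - k_f) * mdot_f_delayed - P_f) / omega

def normalized_afr(mdot_as, mdot_f):
    """Normalized air-fuel ratio, Eq. (3)."""
    return mdot_as / (mdot_f * L_th)

# Example: one second of simulation at constant fueling and load.
omega, dt = 300.0, 1e-3          # crankshaft speed [rad/s], step size [s]
for _ in range(1000):
    omega = crankshaft_step(omega, mdot_f_delayed=0.002,
                            P_f=500.0, P_b=5.0e3, dt=dt)
print(f"omega after 1 s: {omega:.1f} rad/s, "
      f"torque: {engine_torque(omega, 0.002, 500.0):.1f} N m, "
      f"lambda: {normalized_afr(0.04, 0.002):.2f}")
```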
Figure 1: The DFLS control test bench scheme (1: ducted fan, 2: installation rack for the ducted fan, 3: torque and rotational speed measuring unit, 4: belt pulley, 5: forcemeter, 6: sliding rail, 7: pulley belt, 8: SI engine, 9: belt pulley, 10: test bench base).
### Ducted fan dynamics
Due to the complexity of the geometry, it is difficult to analyze the aerodynamic properties of ducted fans. This paper adopts a modeling method for the DFLS based on the blade element theory (BET) and the momentum theory[32]. According to the BET, propellers are made up of minuscule elements in the shape of airfoils along the radius of each blade[38]. The resulting velocity for each element can be decomposed into rotational and translational components. The resulting aerodynamic force can be decomposed into the drag and lift. When decomposed in the plane of rotation, it produces thrust and the torque-producing force.
The total thrust \(T_{UDF}\) can be expressed as[32]
\[T_{UDF}=\frac{1}{2}\rho V_{trans}{}^{2}B\int_{0}^{R}T_{c}\cdot dr, \tag{4}\]
where \(\rho\) is the air density, \(V_{trans}\) is the translational speed component, \(B\) is the blade number corrective factor, \(R\) is the blade radius, \(T_{c}\) is the thrust coefficient and \(dr\) represents the infinitesimal airfoils along the blade radius. Similarly, the total torque \(Q_{UDF}\) can be expressed as:
\[Q_{UDF}=\frac{1}{2}\rho V_{trans}{}^{2}B\int_{0}^{R}Q_{c}\cdot dr, \tag{5}\]
where \(Q_{c}\) is the torque coefficient. The rate of energy supplied by the engine matches its power \(P_{UDF}\):
\[P_{UDF}=2\pi nQ_{UDF}. \tag{6}\]
Using the momentum theorem, the static thrust ratio between the ducted fan and the unducted fan with the same absorbed power[39] can be calculated as:
\[\frac{T_{DF}}{T_{UDF}}=1.26\left(\frac{S_{3}}{S_{2}}\right)^{\frac{1}{3}}, \tag{7}\]
where \(S_{2}\) is the area of the fan disc and \(S_{3}\) is the area of the duct outlet. Equations 1, 2, 4-7 can be used to calculate the thrust of the ducted fan that corresponds to the engine output in this instance. Specifically, the absorbed power of the ducted fan is determined by calculating the engine output power from the MVEM output. BET is then used to calculate the thrust of an unducted fan with the same absorption power.
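As a rough numerical illustration of how Eqs. (4)-(7) chain together, the sketch below computes the unducted thrust and torque from assumed blade-element integrals, the power absorbed by the fan, and finally the ducted thrust. Every value here is a placeholder assumption; in practice the integrals \(\int T_{c}dr\) and \(\int Q_{c}dr\) come from tabulated airfoil data.

```python
import math

# Illustrative chain of Eqs. (4)-(7). All values are assumptions.
rho     = 1.225   # air density [kg/m^3]
V_trans = 40.0    # translational velocity component [m/s] (assumed)
B       = 1.05    # blade number corrective factor (assumed)
Tc_int  = 0.8     # assumed value of int_0^R T_c dr
Qc_int  = 0.05    # assumed value of int_0^R Q_c dr
S3_S2   = 0.85    # duct outlet area over fan disc area (assumed)
n       = 100.0   # fan rotational speed [rev/s] (assumed)

T_UDF = 0.5 * rho * V_trans**2 * B * Tc_int        # Eq. (4), unducted thrust
Q_UDF = 0.5 * rho * V_trans**2 * B * Qc_int        # Eq. (5), fan torque
P_UDF = 2.0 * math.pi * n * Q_UDF                  # Eq. (6), absorbed power
T_DF  = 1.26 * S3_S2 ** (1.0 / 3.0) * T_UDF        # Eq. (7), ducted thrust

print(f"T_UDF = {T_UDF:.0f} N, P_UDF = {P_UDF/1e3:.1f} kW, T_DF = {T_DF:.0f} N")
```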
Previous studies have validated the dynamic models of the DFLS[31; 32]. Both models are explicit mathematical representations, allowing for flexible combination and application. In this study, the established DFLS dynamic model will be utilized as the plant for control research, and the MVEM data will be employed to create a neural network engine model intended for the network-based controller in the following section.
## 3 Neural network model of the engine
As stated previously, the engine network model is established using MLP, Elman, and RBF neural networks in this study. After comparing the accuracy and robustness of these three network models, the best one will be utilized in the development of the AMPC controller. Figure 2 depicts the expanded engine model, which has four inputs (fuel injection rate \(\dot{m}_{fi}\), throttle position \(TPS\), engine speed \(n\), and normalized AFR \(\lambda\)) and generates three outputs (normalized AFR \(\lambda\), engine speed \(n\), and engine output torque \(Q_{eng}\)) at the next instant.
The engine network model developed will not be employed as an adaptive prediction model for the MPC directly, but rather to facilitate the development of an LPV prediction model generated from the trained network parameters.
Figure 2: Expanded engine model structure
### MLP network model
Based on the gradient descent algorithm, the MLP network is a supervised learning technique[40]. The MLP network structure is depicted in Figure 3 and the mathematical expression of the MLP network output is
\[\mathbf{h}=f_{2}(LW\cdot f_{1}(IW\cdot\mathbf{p}^{T}+\mathbf{b}_{1})+\mathbf{b}_{2}), \tag{8}\]
where \(\mathbf{p}\) represents the input vector, \(IW\) represents the weight matrix from the input layer to the hidden layer, and \(LW\) represents the weight matrix from the hidden layer to the output layer. The bias vectors for the hidden layer and output layer are denoted by \(\mathbf{b}_{1}\) and \(\mathbf{b}_{2}\), respectively.
The primary learning phases of MLP networks are the forward transmission of the input pattern and error back propagation (BP). In the back-propagation process, when the output does not achieve the desired value, the error signal propagates back along the original path, and the weights and biases are updated by gradient descent[41]:
\[W(k+1)=W(k)-\alpha\frac{\partial E_{k}}{\partial W(k)}, \tag{9}\]
\[\mathbf{b}(k+1)=\mathbf{b}(k)-\beta\frac{\partial E_{k}}{\partial b(k)}, \tag{10}\]
where \(W(k+1)\) and \(W(k)\) are the weights at times \(k+1\) and \(k\) respectively, and \(\mathbf{b}(k+1)\) and \(\mathbf{b}(k)\) are the biases at times \(k+1\) and \(k\) respectively. The error function is denoted by \(E_{k}\), and \(\alpha\) and \(\beta\) are the learning rates.
For the MLP network, different numbers of hidden-layer nodes were tested in training experiments, and a structure with a single hidden layer containing 26 nodes is chosen as shown in Figure 2, which gives the minimum prediction error. The activation function of the hidden layer \(f_{1}\) is the hyperbolic tangent, and the activation function of the output layer \(f_{2}\) is a linear transfer function. The learning rates \(\alpha\) and \(\beta\) are set to 0.1 in this MLP network to ensure the convergence of network training. In this paper, the training accuracy target is set to 0.0001.
### Elman network model
The Elman network is a well-known partially recurrent network, which lies between a traditional feed-forward perceptron network and a pure recurrent network[30]. As shown in Figure 4, the Elman network includes a feedback loop that makes it sensitive to the history of the input data[42].
Figure 3: The concise structure of the MLP network.
Figure 4: The structure of the Elman network.
The Elman network is also trained using the dynamic BP algorithm; its hidden-layer activation function \(f_{1}\) is the hyperbolic tangent function, and its output-layer activation function \(f_{2}\) is the linear transfer function. Due to the additional feedback loop, the Elman network's input-output relations differ from those of the MLP network, as illustrated below:
\[\mathbf{a}(k)=f_{1}(IW\cdot\mathbf{p}+LW_{1}\cdot\mathbf{a}(k-1)+\mathbf{b}_{1 }), \tag{11}\]
\[\mathbf{h}(k)=f_{2}(LW_{2}\cdot\mathbf{a}(k)+\mathbf{b}_{2}). \tag{12}\]
After training experiments with different numbers of hidden-layer nodes, the Elman network uses a structure with a single hidden layer containing 12 nodes. The learning rate is set to 0.01, and the training accuracy target is set to 0.0001. The maximum number of training epochs is set to 1000.
### RBF network model
RBF neural networks consist of an input layer, a single hidden layer with a radial basis activation function, and a linear output layer[42]. Gaussian is the most frequently used activation function:
\[\phi(\boldsymbol{p},\boldsymbol{c})=e^{-\frac{\|\boldsymbol{p}-\boldsymbol{c}\|^{2}}{s^{2}}}, \tag{13}\]
where \(\boldsymbol{c}\) is the center of the Gaussian function and \(s\) is the radius, which gives a measure of the spread of the Gaussian curve. Figure 5 depicts the architecture of the RBF network. The distance between the input vector \(\boldsymbol{p}\) and the center vector \(\boldsymbol{c}\) is denoted by \(\|dist\|\), and the RBF network output is:
\[\boldsymbol{h}=LW\cdot e^{-\frac{\|\boldsymbol{p}-\boldsymbol{c}\|^{2}}{s^{2}}}. \tag{14}\]
The learning process of the RBF network can be divided into three stages: in the first stage, based on the distribution of the input samples, the centers and radius values of each node in the hidden layer are determined. In the second stage, the output layer weights are calculated using least-squares methods. In the third stage, the parameters of the hidden layer and output layer are simultaneously adjusted based on the sample signal to improve the network precision. Using the method of Poggio[43] and the r-nearest-neighbor heuristic, the centers \(c\) and the radii \(s\) of the hidden-layer nodes in the RBF network are determined. For training the network weights, the least-mean-square (LMS) algorithm is used. In this study, various numbers of hidden-layer nodes have been tested, and a hidden layer with 25 centers is selected for the RBF network.
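The following is a minimal sketch of the RBF forward pass of Eqs. (13)-(14), with the dimensions used in this paper (4 inputs, 25 centers, 3 outputs); the centers, radii, and output weights are random stand-ins for trained values.

```python
import numpy as np

# Minimal RBF network forward pass following Eqs. (13)-(14).
rng = np.random.default_rng(0)
n_in, n_centers, n_out = 4, 25, 3

C  = rng.uniform(-1.0, 1.0, size=(n_centers, n_in))  # centers c_j (stand-ins)
s  = np.full(n_centers, 0.5)                         # radii s_j (assumed)
LW = rng.normal(0.0, 0.1, size=(n_out, n_centers))   # output weights (stand-ins)

def rbf_forward(p):
    """h = LW . exp(-||p - c_j||^2 / s_j^2), evaluated for each node j."""
    d2  = np.sum((C - p) ** 2, axis=1)               # squared distances ||p-c_j||^2
    phi = np.exp(-d2 / s ** 2)                       # Gaussian activations, Eq. (13)
    return LW @ phi                                  # network output, Eq. (14)

p = rng.uniform(-1.0, 1.0, size=n_in)                # a normalized input sample
print(rbf_forward(p))
```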
### Training and comparison of neural network models
In the engine data collection stage, training data must be representative of typical plant behaviors in order to evaluate the performance of various engine models under practical driving conditions. Consequently, the sampled data should adequately represent the state space of the system to be controlled. The time scale of the MVEM is just sufficient to accurately describe the changes in the engine variables that change the most rapidly, which is advantageous for engine control applications [44]. The trained neural network model will therefore have adequate transient and steady-state performance.
A total of 1000 data samples are generated and separated into two groups: 950 samples for training and 50 samples for validation. Before training and validation, all input and output data were normalized to the range of \([-1,1]\). Gaussian white noise (GWN) with a signal-to-noise ratio (SNR) of 5 dB is added to the neural network training samples. Figures 6-8 compare the prediction results and the corresponding proportional error (PE) over the validation data after training with 950 samples.

Figure 5: The concise structure of the RBF network.
The throttle position is constrained between 5% and 90%, and the fuel injection rate is between 0.0011 and 0.0055 kg/s. The sampling interval has been set to 0.1 s. Table 1 displays the mean absolute percentage error (MAPE) of the prediction results from the three network models.
The comparison results reveal that the predictions of the network models correspond well to the engine output during the model validation phase, where the maximum MAPE is 1.92%. The RBF network model has the highest accuracy, with MAPEs of 0.49%, 0.20% and 1.13%. It is followed by the MLP model and the Elman model, in order of accuracy. Due to the accuracy and robustness advantages of the RBF network model, it will be used in the development of the AMPC in the following section.
Figure 6: Comparison of predicted engine torque.
Figure 7: Comparison of predicted engine speed.
Figure 8: Comparison of predicted engine AFR.
## 4 RBF-based adaptive model predictive control
Adapting the prediction model to changing operating conditions allows the AMPC to address highly nonlinear control problems. As previously mentioned, updating the prediction model states with an off-line LPV model is expected to enable efficient and robust AMPC strategies. Consequently, this section establishes an LPV model based on the parameters of the previously selected RBF network model, facilitating the development of AMPC for the DFLS. Specifically, the AMPC system structure is introduced first, followed by the presentation of the LPV model constructed through a network associated with the RBF model. Finally, simulation studies on the RBF-based AMPC for DFLS control are conducted.
### Control system structure
The idea of adaptive model predictive control has been introduced in detail in the literature[20]. Figure 9 shows the block diagram of the AMPC structure. The nonlinear DFLS model consists of the MVEM and the ducted fan modules in the control simulations. At each time step, the LPV model associated with the RBF network generates a constant linear prediction model based on the current state and input.
The tracking control target is to minimize the error between the system output \(\mathbf{y}=[T_{DF},\lambda]^{T}\) and the reference \(\mathbf{y}^{ref}\) over a specified time horizon. The AMPC controller will be used to optimize two DFLS inputs, \(\mathbf{u}=[TPS,\dot{m}_{fi}]^{T}\), to achieve precise dynamic tracking of the desired DFLS thrust \(T_{DF}{}^{ref}\) and the engine AFR \(\lambda^{ref}\). A cost function \(Z\) is therefore defined as a quadratic function in terms of tracking errors and input increments:
\[\begin{split} Z(k)&=\epsilon\sum_{j=k+N_{1}}^{k+N_{2}}\{[\lambda^{ref}(j)-\hat{\lambda}(j)]^{2}+[T_{DF}{}^{ref}(j)-\hat{T}_{DF}(j)]^{2}\}\\ &+\xi\sum_{j=k}^{k+N_{c}}\{[\dot{m}_{fi}(j)-\dot{m}_{fi}(j-1)]^{2}+[TPS(j)-TPS(j-1)]^{2}\},\end{split} \tag{15}\]
where \(k\) represents the current time instance, \(\hat{\lambda}\) is the predicted AFR, \(\hat{T}_{DF}\) is the predicted thrust of the DFLS, and \(\epsilon\) and \(\xi\) are control weighting factors that penalize excessive modification of the control inputs. The future horizons \(N_{1}\) and \(N_{2}\), which define the prediction horizon, specify the range of upcoming samples. Likewise, the control horizon \(N_{c}\) specifies the number of samples over which optimal inputs are calculated. At each sampling instant, the optimization yields a sequence of input signals, but only the first input is applied to the plant[45; 46], resulting in a receding horizon approach[47].
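To make the receding-horizon objective concrete, the sketch below evaluates the cost \(Z(k)\) of Eq. (15) for made-up reference and predicted trajectories, using the horizon lengths and weights later adopted in the simulations (\(N_{1}=1\), \(N_{2}=8\), \(N_{c}=3\), \(\epsilon=0.8\), \(\xi=0.5\)).

```python
import numpy as np

# Illustrative evaluation of the AMPC cost Z(k) from Eq. (15).
N1, N2, Nc = 1, 8, 3
eps, xi    = 0.8, 0.5

def ampc_cost(lam_ref, lam_hat, T_ref, T_hat, mfi, tps):
    """Tracking-error plus input-increment cost over the two horizons.

    lam_*/T_* are arrays over j = k+N1..k+N2; mfi/tps are arrays over
    j = k-1..k+Nc, so that the Nc+1 first differences are well defined.
    """
    track = eps * np.sum((lam_ref - lam_hat) ** 2 + (T_ref - T_hat) ** 2)
    moves = xi * np.sum(np.diff(mfi) ** 2 + np.diff(tps) ** 2)
    return track + moves

H = N2 - N1 + 1                       # prediction-horizon length
rng = np.random.default_rng(1)        # made-up trajectories for illustration
Z = ampc_cost(np.full(H, 1.0), 1.0 + 0.02 * rng.standard_normal(H),
              np.full(H, 80.0), 80.0 + 0.5 * rng.standard_normal(H),
              mfi=np.linspace(0.002, 0.003, Nc + 2),
              tps=np.linspace(40.0, 45.0, Nc + 2))
print(f"Z = {Z:.3f}")
```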
| | MLP | RBF | Elman |
| --- | --- | --- | --- |
| Engine torque | 0.81% | 0.49% | 1.44% |
| Engine speed | 1.08% | 0.20% | 1.62% |
| AFR | 0.95% | 1.13% | 1.92% |

Table 1: The MAPE of the prediction results.
Figure 9: The block diagram of the AMPC structure.
### The LPV model
The estimation of the state and the update of the prediction model are indispensable to the effectiveness, precision, and robustness of an AMPC[26]. The AMPC has currently adopted three categories of model updating strategies: online parameter estimation, successive linearization, and linear parameter varying (LPV) modeling. The online parameter estimation method estimates model parameters from real-time plant measurements and updates the prediction model. However, it can be computationally intensive, and suitable for control applications with longer control intervals and sufficient resources. The successive linearization method builds the plant model using nonlinear equations and derives a linear approximation to update the model parameters. But dealing with highly nonlinear systems using successive linearization often requires many iterations and computations. An LPV system is a linear state-space model whose dynamics vary as a function of certain time-varying parameters. Using an off-line LPV model has the advantage of supporting rapid batch linearization to obtain a variety of plant models at the desired operating points for model updating. This is advantageous for the computation efficiency and stability of the DFLS AMPC. However, LPV models of nonlinear mechanical systems are typically constructed based on the system's dynamic model, which is challenging for SI engines.
This paper proposes to directly utilize the trained RBF network to construct the LPV model for the AMPC prediction step. Specifically, an associated network can be derived off-line from the RBF model[48]; it directly outputs the Jacobian matrix of the RBF model, which contains the LPV model parameters. The LPV model of the DFLS can be expressed in the discrete linear form with time-varying parameters:
\[\Delta\textbf{x}(t+1)=A(t)\Delta\textbf{x}(t)+B(t)\Delta\textbf{u}(t), \tag{16}\]
\[\Delta\textbf{y}(t)=C(t)\Delta\textbf{x}(t)+D(t)\Delta\textbf{u}(t), \tag{17}\]
where the states, inputs, and outputs of the DFLS are
\[\textbf{x}(t)=[Q_{eng}(t),n(t),\lambda(t)], \tag{18}\]
\[\textbf{u}(t)=[TPS(t),\dot{m}_{fi}(t)],\quad\text{and} \tag{19}\]
\[\textbf{y}(t)=[T_{DF}(t),\lambda(t)]. \tag{20}\]
Because the current state \(Q_{eng}(t)\) has no effect on the subsequent system states, the first column of the system matrix \(A(t)\) contains only zeros, and the linearized system matrices can be written as follows:
\[A(t)=\frac{\partial\textbf{x}(t+1)}{\partial\textbf{x}(t)}=\begin{bmatrix}0&\frac{\partial Q_{eng}(t+1)}{\partial n(t)}&\frac{\partial Q_{eng}(t+1)}{\partial\lambda(t)}\\ 0&\frac{\partial n(t+1)}{\partial n(t)}&\frac{\partial n(t+1)}{\partial\lambda(t)}\\ 0&\frac{\partial\lambda(t+1)}{\partial n(t)}&\frac{\partial\lambda(t+1)}{\partial\lambda(t)}\end{bmatrix}, \tag{21}\]
\[B(t)=\frac{\partial\textbf{x}(t+1)}{\partial\textbf{u}(t)}=\begin{bmatrix}\frac{\partial Q_{eng}(t+1)}{\partial TPS(t)}&\frac{\partial Q_{eng}(t+1)}{\partial\dot{m}_{fi}(t)}\\ \frac{\partial n(t+1)}{\partial TPS(t)}&\frac{\partial n(t+1)}{\partial\dot{m}_{fi}(t)}\\ \frac{\partial\lambda(t+1)}{\partial TPS(t)}&\frac{\partial\lambda(t+1)}{\partial\dot{m}_{fi}(t)}\end{bmatrix}, \tag{22}\]
\[C(t)=\frac{\partial\textbf{y}(t+1)}{\partial\textbf{x}(t+1)}=\begin{bmatrix}\frac{\partial T_{DF}(t+1)}{\partial Q_{eng}(t+1)}&\frac{\partial T_{DF}(t+1)}{\partial n(t+1)}&0\\ 0&0&1\end{bmatrix}, \tag{23}\]
\[D(t)=\frac{\partial\textbf{y}(t+1)}{\partial\textbf{u}(t+1)}=\begin{bmatrix} 0&0\\ 0&0\\ \end{bmatrix}. \tag{24}\]
The output matrix \(C\) can be calculated explicitly using equations 4-7 from the ducted fan model. The system matrix \(A\) and the control effectiveness matrix \(B\) are associated with the SI engine; their elements can be approximated by the partial derivatives of the trained RBF network outputs \(\textbf{h}=[Q_{eng}(t+1),n(t+1),\lambda(t+1)]\) with respect to the network inputs \(\textbf{p}=[TPS(t),\dot{m}_{fi}(t),n(t),\lambda(t)]\).
To determine the Jacobian matrix of the RBF model \(\mathcal{J}\), we can rewrite the \(j\)th Gaussian radial function in the RBF network from equation 13 as below:
\[\phi_{j}(\mathbf{p})=e^{-(\|\mathbf{p}-\mathbf{c}_{j}\|/s_{j})^{2}}, \tag{25}\]
where \(\mathbf{c}_{j}\) is the \(j\)th center point and \(s_{j}\) is its radius. The associated network is linear in the parameters \(LW\), which are the weights connecting the output of each radial function to the output of the network, when \(c_{j}\) and \(s_{j}\) are fixed. Derivatives can therefore be explicitly calculated as a function of the network input \(\mathbf{p}\) and network weights \(LW\):
\[\mathcal{J}(\mathbf{p},LW)=LW^{T}\Phi(\mathbf{p})=LW^{T}\begin{bmatrix}\phi_{1}(\mathbf{p})\frac{-2}{s_{1}^{2}}(\mathbf{p}-\mathbf{c}_{1})\\ \vdots\\ \phi_{J}(\mathbf{p})\frac{-2}{s_{J}^{2}}(\mathbf{p}-\mathbf{c}_{J})\end{bmatrix}. \tag{26}\]
This function can be constructed as another network associated with the RBF model network, as depicted in Figure 10, with \(J\) activation functions, each of which is the partial derivative of the \(j\)th Gaussian radial function output with respect to the network input:
\[\Phi_{j}(\mathbf{p})=\frac{\partial\phi_{j}(\mathbf{p})}{\partial\mathbf{p}}=e^{-(\|\mathbf{p}-\mathbf{c}_{j}\|/s_{j})^{2}}\frac{-2}{s_{j}^{2}}(\mathbf{p}-\mathbf{c}_{j})=\phi_{j}(\mathbf{p})\frac{-2}{s_{j}^{2}}(\mathbf{p}-\mathbf{c}_{j}), \tag{27}\]
where \(\Phi_{j}\) represents the \(j\)th row vector of the derivatives \(\Phi(\mathbf{p})=\partial\phi(\mathbf{p})/\partial\mathbf{p}\).
This associated network model directly outputs the Jacobian matrix of the RBF model \(\mathcal{J}\), whose elements are the partial derivatives of the model outputs with respect to the model inputs, and which can be used to construct the LPV model matrices \(A\) and \(B\) in equations 21 and 22. Note that this concise mathematical relationship derives from the linear-in-parameters property of the system model [48], which effectively enhances the update speed of the AMPC prediction model. Moreover, employing a linear-in-parameters system model can help avoid local minimum traps. The system model network is intended to be nonlinear in its inputs but linear in its parameters, and the associated network can be derived explicitly and precisely from the system model network. This characteristic is indispensable for the computational efficiency of the RBF-based AMPC.
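A minimal sketch of this associated-network computation follows: the analytic Jacobian of Eqs. (26)-(27), verified against finite differences. Here \(LW\) is stored as an (outputs \(\times\) centers) matrix, so the product is written `LW @ Phi`; all parameter values are the same random stand-ins as in the earlier RBF sketch.

```python
import numpy as np

# Analytic Jacobian of the RBF model, Eqs. (26)-(27), checked numerically.
rng = np.random.default_rng(0)
n_in, n_centers, n_out = 4, 25, 3
C  = rng.uniform(-1.0, 1.0, size=(n_centers, n_in))   # centers (stand-ins)
s  = np.full(n_centers, 0.5)                          # radii (assumed)
LW = rng.normal(0.0, 0.1, size=(n_out, n_centers))    # output weights

def rbf_forward(p):
    phi = np.exp(-np.sum((C - p) ** 2, axis=1) / s ** 2)
    return LW @ phi

def rbf_jacobian(p):
    """Rows of Phi are Phi_j(p) = phi_j(p) * (-2/s_j^2) * (p - c_j)."""
    phi = np.exp(-np.sum((C - p) ** 2, axis=1) / s ** 2)   # (J,)
    Phi = phi[:, None] * (-2.0 / s[:, None] ** 2) * (p - C)  # (J, n_in)
    return LW @ Phi                                        # (n_out, n_in)

p = rng.uniform(-1.0, 1.0, size=n_in)
J_analytic = rbf_jacobian(p)
J_fd = np.stack([(rbf_forward(p + 1e-6 * e) - rbf_forward(p - 1e-6 * e))
                 / 2e-6 for e in np.eye(n_in)], axis=1)
print(np.max(np.abs(J_analytic - J_fd)))   # small: the two agree
```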
### AMPC simulation of the DFLS
To validate the proposed AMPC, the control simulation of the DFLS during takeoff thrust preparation is implemented. During the vertical take-off power preparation procedure, the anticipated thrust of the DFLS \(T_{DF}{}^{ref}\) gradually increases from the idling rating (\(10\ kgf\)) to the hovering thrust (\(80\ kgf\)). During this procedure, the engine gradually increases its throttle to a relatively stable state with a low expected normalized AFR (\(0.82\)), ensuring adequate power output, before adjusting the expected normalized AFR (\(1.0\)) to an efficient mode. In the simulation study, the throttle position is constrained between \(5\%\) and \(90\%\), and the fuel injection rate is between 0.0011 and 0.0055 kg/s. The thrust of the DFLS is constrained between 0 and 150 kgf, and the normalized AFR is between 0.68 and 1.26.

Figure 10: The system model network with RBF activation functions and its associated network. These two networks share the same set of centers, denoted by \(c_{j}\), and the to-be-determined parameters \(LW\).[48]
In order to compare the AMPC against a traditional MPC, a linear MPC is designed for the same control process for the DFLS. The linear MPC and the AMPC are designed with the same cost function, as illustrated in equation 15, and the same parameters. The nonlinear optimization parameters were set as follows: \(N_{1}=1\), \(N_{2}=8\), \(N_{c}=3\), \(\epsilon=0.8\), and \(\xi=0.5\). In the control simulation, the normalized AFR measurement and the DFLS thrust measurement were subjected to Gaussian noise \(\mathcal{N}\)(0, 0.005). Figures 11-12 depict the simulation tracking results for the DFLS's desired thrust and AFR. Before time step 100, while the desired thrust is continuously increasing, the tracking control results of the DFLS thrust and the engine AFR with the linear MPC are divergent. After the expected thrust has been stabilized, the linear MPC can only precisely track the expected thrust, while the engine AFR control result is inconsistent with the desired value. As expected, the linear MPC's tracking and regulating performances are unacceptable.
The AMPC parameters are manually adjusted to determine the appropriate values by comparing the control performance. Figure 13 depicts the tracking control results for the desired thrust. Figure 14 depicts the corresponding simulation results of the tracking curve for the desired normalized AFR. Figure 15 illustrates the AMPC-optimized \(TPS\) and \(\dot{m}_{fi}\) engine results, respectively. Figure 16 depicts the corresponding simulation results for the engine speed and output torque.
Figure 11: The DFLS thrust tracking performance with a linear MPC.
Figure 12: The AFR tracking performance with a linear MPC.
Figure 13: The DFLS thrust tracking performance with the proposed AMPC.
Figure 14: The AFR tracking performance with the proposed AMPC.
Figure 15: The control inputs of the engine (the TPS and the fuel injection mass flow \(\dot{m}_{fi}\)) optimized by the proposed AMPC.
Figure 16: The engine crankshaft speed \(n\) and the engine crankshaft torque \(Q_{eng}\) during the control simulation.
The simulation results indicate that the developed AMPC can accurately control the thrust and AFR during the vertical take-off power preparation process. As illustrated in Figures 13 and 14, the relative error ranges for DFLS thrust and SI engine AFR tracking control are -3.5% to 1.0% and -2.1% to 2.2%, respectively. These outcomes validate the effectiveness and robustness of the proposed RBF-based AMPC.
## 5 Conclusions
This paper presents a novel adaptive model predictive control (AMPC) approach for controlling the thrust of an engine-driven ducted fan lift system (DFLS). The proposed method is based on an off-line linear parameter varying (LPV) model whose parameter-updating law is derived from a radial basis function (RBF) network. The use of the RBF network is motivated by its superior prediction accuracy and robustness compared to other network models, such as the multi-layer perceptron and Elman networks. This LPV model enables real-time updates of the AMPC prediction model across the full operating envelope, making it an effective solution for handling the highly nonlinear dynamics of the DFLS. The proposed AMPC receives and updates its LPV parameters from an associated network without massive online operations, which enhances control effectiveness and avoids model errors resulting from issues such as delays, noise, and insufficient excitation. This concise mathematical relationship is derived from the linear-in-parameter property of the system model, avoiding local minimum traps. The DFLS AMPC was validated in numerical simulations, demonstrating its ability to achieve precise control of thrust and air-fuel ratio during the vertical take-off preparation process. The control strategy is designed to track the desired thrust by controlling the engine output power while ensuring reliability and efficiency through synchronized air-fuel ratio control.
The RBF model-based AMPC approach proposed in this paper is efficient and practical. The validation results show its potential for wider application to other nonlinear industrial control problems. However, the LPV model obtained off-line may lack sufficient robustness to handle stochastic and uncertain plant model changes. Further research is needed to explore the combination of the proposed method with online AMPC to preserve control efficiency and enhance its adaptability. In conclusion, this study presents a promising new method for controlling the thrust of engine-driven ducted fan lift systems and opens up avenues for further improvement and refinement.
## Acknowledgement
The corresponding author would like to thank Malaysian Ministry of Higher Education (MOHE) for providing the Fundamental Research Grant Scheme (FRGS): FRGS/1/2020/TK0/USM/03/11.
|
2310.00035 | LoRA ensembles for large language model fine-tuning | Finetuned LLMs often exhibit poor uncertainty quantification, manifesting as
overconfidence, poor calibration, and unreliable prediction results on test
data or out-of-distribution samples. One approach commonly used in vision for
alleviating this issue is a deep ensemble, which constructs an ensemble by
training the same model multiple times using different random initializations.
However, there is a huge challenge to ensembling LLMs: the most effective LLMs
are very, very large. Keeping a single LLM in memory is already challenging
enough: keeping an ensemble of e.g. 5 LLMs in memory is impossible in many
settings. To address these issues, we propose an ensemble approach using
Low-Rank Adapters (LoRA), a parameter-efficient fine-tuning technique.
Critically, these low-rank adapters represent a very small number of
parameters, orders of magnitude less than the underlying pre-trained model.
Thus, it is possible to construct large ensembles of LoRA adapters with almost
the same computational overhead as using the original model. We find that LoRA
ensembles, applied on its own or on top of pre-existing regularization
techniques, gives consistent improvements in predictive accuracy and
uncertainty quantification. | Xi Wang, Laurence Aitchison, Maja Rudolph | 2023-09-29T16:38:38Z | http://arxiv.org/abs/2310.00035v2 | # LoRA ensembles for large language model fine-tuning
###### Abstract
Fine-tuned LLMs often exhibit poor uncertainty quantification, manifesting as overconfidence, poor calibration, and unreliable prediction results on test data or out-of-distribution samples. One approach commonly used in vision for alleviating this issue is a deep ensemble, which constructs an ensemble by training the same model multiple times using different random initializations. However, there is a huge challenge to ensembling LLMs: the most effective LLMs are very, very large. Keeping a single LLM in memory is already challenging enough: keeping an ensemble of e.g. 5 LLMs in memory is impossible in many settings. To address this issue, we propose an ensemble approach using Low-Rank Adapters (LoRA), a parameter-efficient fine-tuning technique. Critically, these low-rank adapters require a very small number of parameters, orders of magnitude less than the underlying pre-trained model. Thus, it is possible to construct large ensembles of LoRA adapters with almost the same computational overhead as using the original model. We find that LoRA ensembles, applied on its own or on top of pre-existing regularization techniques, gives consistent improvements in predictive accuracy and uncertainty quantification.
## 1 Introduction
LLMs have demonstrated state-of-the-art performance in many natural language processing tasks (Radford et al., 2019; Touvron et al., 2023; Brown et al., 2020; Chung et al., 2022; Kojima et al., 2022; OpenAI, 2023). With additional fine-tuning, a pre-trained LLM can be adapted to downstream applications or data. However, fine-tuned LLMs can overfit to training data and often exhibit _overconfidence_ (as visualized in Fig. 1a). Specifically, these models may yield overly certain predictions, especially on incorrectly predicted samples or those from different domains. Ideally, a model should exhibit low confidence when its predictions are likely to be incorrect; otherwise, the outcomes could be dangerously misleading in safety-critical contexts such as medical diagnosis (Singhal et al., 2023), finance (Yang et al., 2023), or decision-making processes (Li et al., 2022).
A widely adopted approach for mitigating overconfidence in deep learning is to make predictions using an _ensemble_ of neural networks rather than a single model. There are many approaches for constructing an ensemble of networks, such as training multiple networks with different random initializations (Lakshminarayanan et al., 2017) or different hyperparameters (Wenzel et al., 2020). However, there are two barriers to applying these approaches to fine-tuning LLMs. First, ensembles require storing multiple copies of the model weights and loading them onto the GPU at test time. This is not practical for modern LLMs. A single LLaMA-13b (Touvron et al., 2023) stored at 16-bit precision is \(25\,\mathrm{GB}\) on disk, and loading it onto the GPU takes around 6 seconds. In addition, random initialization has been noted to play a crucial role in deep ensembles (Lakshminarayanan et al., 2017; Fort et al., 2019). However, starting the fine-tuning of the individual LLMs with the same initialization - the pre-trained weights - eliminates an important source of randomness and may cause a lack of diversity across the ensemble, thereby potentially reducing its benefits.
Work by Gleave & Irving (2022) and Sun et al. (2022) has attempted to build ensembles of fine-tuned LLMs, but due to the limitations above, their methods are restricted to smaller models such as GPT-2 (Radford et al., 2019) with only 1.5 billion parameters. In this paper, we build on recent advances in efficient LLM fine-tuning with low-rank adapters (LoRA) (Hu et al., 2021) and propose an ensemble
method for LLM fine-tuning that scales to models with 13 billion parameters and beyond. We propose LoRA ensembles. LoRA ensembles solve the two aforementioned issues: LoRA requires orders of magnitude less storage than the original model (a low-rank adapter for LLaMA-13b is only 30 MB on disk and takes 0.1 seconds to load onto the GPU). In addition, the random initialization of the adapter provides the necessary randomness for variability across the ensemble components.
Our empirical results on several commonsense reasoning tasks show that LoRA ensembles improve accuracy and calibration over naive LoRA fine-tuning and produce better ensembles than alternatives based on last-layer fine-tuning (Du et al., 2021) or Monte Carlo dropout (Gal and Ghahramani, 2016). As an additional contribution, we study regularized LoRA ensembles. Classical theory (Breiman, 2001) suggests that the generalization performance of ensembling depends on the diversity of the individual components. While no comparable results exist for neural networks, it is believed that this intuition still holds for deep ensembles (Lakshminarayanan et al., 2017; Fort et al., 2019). Initialization of the LoRA ensemble components around the same pre-trained weights already introduces a strong correlation between the ensemble components, and regularization can further strengthen this effect. Yet we find, in an extensive empirical study of LoRA ensembles in combination with different regularization strategies, that LoRA ensembles are compatible with regularization and that their combination typically further improves prediction and calibration accuracy.
## 2 Related work
**Robust fine-tuning of language models.** A body of work has proposed regularization methods to improve generalization and calibration during fine-tuning of language models. For instance, He et al. (2022) explores a mix of KL and L2 regularization on the extracted features to retain the calibration of pre-trained masked language models (MLM). Park and Caragea (2022) introduces mixup (Zhang et al., 2017) into MLM fine-tuning, showcasing enhanced calibration at test time. Our approach complements these methods: we can ensemble fine-tuning with any of these techniques (if compatible with LoRA adapters), and we will discuss such strategies extensively in later sections.
**Ensembling of Neural Networks.** Deep ensembles enhance the robustness and reliability of deep learning models (Ovadia et al., 2019). Typically, they are constructed by training the same model with varied initializations (Lee et al., 2015; Lakshminarayanan et al., 2017) or hyper-parameter settings (Wenzel et al., 2020; Zaidi et al., 2020). Some methods use checkpoints from along the optimization trajectory (Huang et al., 2017) or the Bayesian posterior (Neal, 2012; Zhang et al., 2020; Wenzel et al., 2020; Izmailov et al., 2021). However, naively applying these methods in the LLM setting requires us to store and load complete model checkpoints, which is impractical in many settings due to the very large memory requirements for storing multiple copies of an LLM.
**Ensembling in LLMs.** Two recent papers study ensembling for LLM fine-tuning (Gleave and Irving, 2022; Sun et al., 2022). However, these papers only consider full fine-tuning, optimizing all the
weights, which requires them to store \(M\) copies of the model, where \(M\) is the number of ensemble components. This is impractical for modern LLMs, so instead they are forced to work with smaller models; in particular, they work with GPT-2 (Radford et al., 2019) with only 1.5 billion parameters. Hewitt et al. (2021) consider an ensemble of LLMs consisting of two components: a model trained with full fine-tuning, and a model trained with LoRA. In contrast, we consider ensembling with a large number of LoRA components (e.g. 20) to improve accuracy and calibration. There also exists an efficient ensemble method, BatchEnsemble (Wen et al., 2020), where the ensemble components share a base model that is modified multiplicatively by component-specific parameter-efficient rank-1 matrices, leading to reasonable storage demands comparable to our proposed LoRA ensembles. It has been applied to LLMs by Tran et al. (2022), but for pre-training rather than fine-tuning. While it may be possible to adapt BatchEnsemble to the fine-tuning setting, this has not, to our knowledge, yet been considered. Indeed, we believe that such an adaptation is a non-trivial exercise that may require careful consideration of e.g. how to initialize and train the multiplicative adapters to avoid "overpowering" the pre-trained weights.

Figure 1: **LoRA ensembles with strong weight decay regularization are more accurate and better calibrated than a single fine-tuned LoRA component on multiple-choice QA problems such as in Fig. 1b.** Fig. 1a shows a KDE of the confidence with which a pre-trained LLaMA-13b in the few-shot setting (purple line), a fine-tuned LoRA model (blue line), and our proposed LoRA ensembles (yellow dashed line) make wrong predictions on the cqa dataset. The few-shot approach is well-calibrated but often wrong, while LoRA (M=1) is more accurate but overconfident in its wrong predictions. Our approach provides improvements in both accuracy and calibration in terms of ECE.
**Calibration and uncertainty quantification of LLMs.** Pre-trained LLMs already show reasonably good calibration (OpenAI, 2023). Nonetheless, there are several recent papers that seek to further enhance _pre-trained_ LLMs' calibration and uncertainty quantification ability in open-ended generation tasks. In particular, Lin et al. (2022); Kuhn et al. (2022) propose to use prompts to guide models to provide linguistically calibrated answers. Zhou et al. (2023) studies how LLMs express uncertainty in the natural language form. Our work is very different in that it focuses on mitigating very poor calibration that can emerge from fine-tuning.
## 3 Background
### Fine-tuning large language models
Figure 2: **LoRA ensembles improve both accuracy and calibration under different regularization techniques.** Arrows link the performance of a single LoRA model (arrow tail) to the corresponding ensemble with 5 LoRA components (arrowhead), where the x-axis denotes validation accuracy and the y-axis expected calibration error. Arrow colors indicate regularization methods and opacity reflects regularization strength. The majority of arrows point toward the bottom-right corner, suggesting that ensembling benefits both accuracy and calibration error as measured by ECE.

Fine-tuning assumes access to a pre-trained LLM, denoted by \(\mathbf{W}^{*}\), usually an auto-regressive model based on the transformer architecture. In the tasks we consider, the fine-tuning data consists of prompts, \(\mathbf{X}=\{\mathbf{x}_{n}\}_{n=1}^{N}\), and answers \(\mathbf{y}=\{y_{n}\}_{n=1}^{N}\), where the prompt can, e.g., describe a multiple-choice QA problem (Fig. 1b), and the answer can be in the label set \(\mathcal{T}=\{\text{``a''},\text{``b''},\text{``c''},\text{``d''}\}\), which is a subset of all tokens \(\mathcal{V}\) the LLM can generate. Given this data, fine-tuning entails initializing the parameters at \(\mathbf{W}=\mathbf{W}^{*}\) and minimizing the loss \(-\log p(\mathbf{y}\mid\mathbf{X};\mathbf{W})\). In this paper, we consider tasks where the label set \(\mathcal{T}\subset\mathcal{V}\) of possible answers consists of single tokens.1 Typically, there will be tokens \(v\notin\mathcal{T}\) with nonzero probability under the LLM. To study calibration accuracy and predictive uncertainty of LLM fine-tuning, we introduce the normalized task distribution
Footnote 1: Our method, LoRA ensembles, also applies to open-ended generation tasks.
\[p_{\mathcal{T}}(y\mid\mathbf{x}_{n};\mathbf{W})=\begin{cases}p(y\mid\mathbf{x}_{n};\mathbf{W})/Z_{\mathbf{W}}&\text{if }y\in\mathcal{T}\\ 0&\text{otherwise,}\end{cases}\quad\text{where}\quad Z_{\mathbf{W}}=\sum_{y\in\mathcal{T}}p(y\mid\mathbf{x}_{n};\mathbf{W}). \tag{1}\]
The normalized task distribution allows us to study the quality of predictions beyond accuracy.
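As a minimal sketch of Eq. (1), the snippet below restricts an LLM's next-token distribution to the answer tokens and renormalizes. The vocabulary size and answer-token ids are made-up stand-ins, and the logits are random placeholders for actual model output.

```python
import torch

# Normalized task distribution, Eq. (1): restrict to the label set and renormalize.
vocab_size = 32000
answer_ids = torch.tensor([263, 289, 274, 270])  # ids of "a","b","c","d" (assumed)

logits = torch.randn(vocab_size)                 # stand-in for the LLM's logits
p_full = torch.softmax(logits, dim=-1)           # p(y | x_n; W) over all tokens

Z = p_full[answer_ids].sum()                     # normalizer Z_W
p_task = p_full[answer_ids] / Z                  # p_T(y | x_n; W) over T
print(p_task, p_task.sum())                      # sums to 1 over the label set
```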
### Deep ensembles
Ensembles are a popular tool for improving the predictive uncertainty of deep learning methods. Deep ensembles (Lakshminarayanan et al., 2017) offer a practical alternative to the fully Bayesian treatment of Bayesian neural networks (Neal, 2012) or Monte-Carlo dropout (Gal & Ghahramani, 2016). They simply average the predictions of \(M\) networks which have been trained separately using different random initializations,
\[p_{\text{ens}}(y\mid\mathbf{x}_{n})=\frac{1}{M}\sum_{m=1}^{M}p(y\mid\mathbf{x }_{n};\mathbf{W}_{m}). \tag{2}\]
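A minimal sketch of the deep-ensemble average of Eq. (2), with random stand-ins for each component's predictive distribution over the four answer tokens:

```python
import torch

# Deep-ensemble prediction, Eq. (2): average the component distributions.
M, n_classes = 5, 4
torch.manual_seed(0)

component_probs = [torch.softmax(torch.randn(n_classes), dim=-1)
                   for _ in range(M)]              # stand-ins for p(y | x_n; W_m)
p_ens = torch.stack(component_probs).mean(dim=0)   # ensemble average
print(p_ens, p_ens.sum())
```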
### Efficient fine-tuning with low-rank adapters (LoRA)
LoRA (Hu et al., 2021) is a parameter-efficient fine-tuning technique. Instead of fine-tuning all the model weights, it learns additive correction terms, called _adapters_, whose low-rank structure greatly reduces the number of trainable parameters. Each adapter \(\Delta W=\alpha BA\) consists of trainable _low-rank_ matrices \(B\in\mathbb{R}^{d\times r},A\in\mathbb{R}^{r\times k}\) of rank \(r\) and a constant scaling factor \(\alpha\in\mathbb{R}^{+}\), which is usually fixed. During fine-tuning, we fix \(\mathbf{W}^{*}\), and only optimize \(\Delta W\):
\[\mathcal{L}(\Delta W)=\sum_{n=1}^{N}-\log p(y_{n}\mid\mathbf{x}_{n};\mathbf{W}^{*}+\Delta W). \tag{3}\]
Critically, \(\Delta W\) represents far fewer parameters, so it is much easier to fine-tune in constrained compute environments. As suggested by Hu et al. (2021), it is common to initialize \(A\) randomly and \(B\) as zero, so that \(\mathbf{W}^{*}+\Delta W=\mathbf{W}^{*}\) at the beginning of the optimization, i.e., the fine-tuned model starts from the pre-trained model.
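The sketch below shows the adapter parameterization as a single LoRA-adapted linear layer: the base weight is frozen, \(A\) is initialized randomly, and \(B\) is initialized to zero so the model starts at the pre-trained weights. This is a minimal illustration, not the implementation of any particular library; the \(\alpha/r\) scaling is one common convention and is an assumption here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-adapted linear layer: y = (W* + (alpha/r) B A) x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # W* (and bias) stay frozen
            p.requires_grad_(False)
        d, k = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)   # random init
        self.B = nn.Parameter(torch.zeros(d, r))          # zero init => dW = 0
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)

layer = LoRALinear(nn.Linear(64, 64))
out = layer(torch.randn(2, 64))
print(out.shape)                                  # torch.Size([2, 64])
```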
### Regularization
**Output space regularization via KL regularization.** In LLM fine-tuning, it is common to include a KL regularization (Schulman et al., 2017; Bai et al., 2022; Ouyang et al., 2022; Korbak et al., 2022; He et al., 2022) to make the output distribution of the fine-tuned model close to that of the pre-trained model. In our setting, we consider the following KL regularization objective
\[\beta D_{\text{KL}}(p(\mathbf{y}\mid\mathbf{X};\mathbf{W}^{*}+\Delta W)\mid\mid p(\mathbf{y}\mid\mathbf{X};\mathbf{W}^{*})), \tag{4}\]
which is added to Eq. (3) during optimization, where \(\beta\) controls the strength of the regularization.
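A minimal sketch of this regularizer: computing \(\beta\,D_{\text{KL}}\) between the fine-tuned and frozen pre-trained output distributions from their logits. The logits below are random stand-ins for the two models' outputs on a batch, and the value of \(\beta\) is an assumption.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
beta = 0.1                              # regularization strength (assumed)
logits_ft  = torch.randn(8, 32000)      # fine-tuned model, W* + dW (stand-in)
logits_pre = torch.randn(8, 32000)      # frozen pre-trained model, W* (stand-in)

# F.kl_div(input, target) computes KL(target || input), so this is
# KL(p_finetuned || p_pretrained), matching Eq. (4).
kl = F.kl_div(F.log_softmax(logits_pre, dim=-1),
              F.log_softmax(logits_ft, dim=-1),
              log_target=True, reduction="batchmean")
reg_loss = beta * kl                    # added to the NLL of Eq. (3)
print(reg_loss)
```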
**Implicit regularization via early stopping.** Another commonly adopted regularization method is early stopping. Early stopping halts the optimization when certain criteria are met such that the model is not over-optimized. The fewer epochs used, the stronger the regularization is.
## 4 Method
We propose LoRA ensembles, an ensemble of LLMs where each of the ensemble components is fine-tuned with LoRA. Recall that LoRA learns only low-rank additive correction terms, called _adapters_, with a very small number of parameters. This makes LoRA a practical choice for LLM ensembles. Each ensemble component uses the same pre-trained weights \(\mathbf{W}^{*}\), but has its own adapter term \(\Delta W_{m}\). This introduces strong parameter sharing between the component-specific weights \(\mathbf{W}_{m}=\mathbf{W}^{*}+\Delta W_{m}\), and the adapter initialization becomes a source of diversity. After fine-tuning, LoRA facilitates efficient storage and retrieval, allowing us to swiftly execute ensembles at the prediction stage. Crucially, at test time, the large base model \(\mathbf{W}^{*}\) is loaded only once, while the low-rank adapters can be loaded and unloaded with negligible overhead.
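At prediction time, the procedure could be sketched as follows; `load_adapter_weights` is a hypothetical stand-in for whatever mechanism (e.g. an adapter-swapping utility) loads the small \(\Delta W_{m}\) tensors into the shared base model:

```python
import torch

@torch.no_grad()
def lora_ensemble_predict(model, adapters, inputs):
    """Average predictions over M LoRA adapters sharing one base model (Eq. (2)).

    `adapters` is a list of state dicts holding only the small A/B matrices;
    the large base weights W* are loaded once and never move.
    """
    probs = 0.0
    for state in adapters:
        model.load_adapter_weights(state)   # hypothetical, cheap adapter swap
        probs = probs + torch.softmax(model(inputs), dim=-1)
    return probs / len(adapters)
```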
Importantly, LoRA ensembles can be applied on top of modified fine-tuning protocols that include regularization. For example, these regularization methods might keep all the fine-tuned models in the ensemble close to the pre-trained model, which may additionally improve calibration over just the improvement from ensembling. In extensive experiments (Sec. 5.2), we find that ensembling can offer additional improvements in accuracy and calibration over those offered by regularization alone. Finally, we consider an additional form of regularization that we were not able to find explicitly discussed in prior work. In particular, we consider penalizing the LoRA \(B\) matrix by including a very large weight decay term in AdamW (Loshchilov and Hutter, 2019). At the \(t\)-th time step, the AdamW update for \(B\) is
\[B_{t}\gets B_{t-1}-\gamma(g_{t-1}+\lambda B_{t-1}). \tag{5}\]
Here, \(\gamma\) is the step size, \(g_{t-1}\) is the normalized gradient acquired from standard Adam (Kingma and Ba, 2014) with no regularization, and \(\lambda\) adjusts the strength of regularization. Although weight decay is a very standard regularization technique, we find that the usual setup with a \(\lambda\) of \(1\mathrm{e}{-2}\) (the default setting from PyTorch's AdamW, denoted as "None" in our paper) barely helps resolve the overconfidence issue. Instead, we adopt an extremely large value of \(\lambda\) ranging from \(1\mathrm{e}{2}\) to \(1\mathrm{e}{3}\). In addition, we apply weight decay only on the \(B\) matrix of \(\Delta W\), which we find performs best (Appendix B).
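In PyTorch, this selective decay can be expressed with optimizer parameter groups; the sketch below is our illustration, and the `"lora_B"` name pattern is an assumption about how the adapter parameters are named:

```python
import torch

def make_optimizer(model, lr=5e-5, lam_b=1e2):
    """AdamW with very large weight decay only on the LoRA B matrices (Eq. (5))."""
    b_params, other_params = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue                          # skip frozen base weights W*
        (b_params if "lora_B" in name else other_params).append(p)
    return torch.optim.AdamW(
        [{"params": b_params, "weight_decay": lam_b},
         {"params": other_params, "weight_decay": 1e-2}],  # PyTorch default decay
        lr=lr,
    )
```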
## 5 Empirical Study of LoRA Ensembles
In this section, we evaluate LoRA ensembles on a collection of datasets to show its benefits in various QA settings (described in Sec. 5.1), and find that it leads to better predictive and calibration accuracy than baseline approaches (Sec. 5.2). In addition, we study the effect of regularization in LoRA ensembles, where we find that LoRA ensembles is complementary to many regularization techniques. Finally, we conduct ablation studies to better understand the effect of our modeling choices on ensemble diversity and on the performance of LoRA ensembles (Sec. 5.3).
### Experimental Set-up
**Multiple-Choice Question Answering Datasets.** For our experiments, we choose six popular multiple-choice QA datasets for evaluation: CommonsenseQA (cqa, Talmor et al., 2019), OpenBook (obqa, Mihaylov et al., 2018), the social sciences (mmlu ss.) and STEM (mmlu stem) subsets of MMLU (Hendrycks et al., 2021), and ARC-easy (arce) and ARC-challenge (arcc) from the AI2 Reasoning Challenge (Clark et al., 2018). Questions in cqa have 5 options while the others all have 4 options. We provide details on the training and validation set of each task in Table 2 in the appendix, and we provide example questions for each task in Appendix C.
**Evaluation metrics.** For all 6 tasks, we first measure the accuracy (Acc.) on the validation set. However, problems such as bad calibration or lack of uncertainty quantification cannot be reflected through accuracy. Therefore, we incorporate negative log-likelihood (NLL.), which measures the model uncertainty on held-out validation datasets, and expected calibration error (ECE., Guo et al., 2017), which assesses the alignment between predicted probabilities and actual empirical accuracy. Since safe deployment in real-world applications requires models to behave predictably when the data comes from another domain, we also study OOD performance. In particular, we test models fine-tuned on cqa on test samples from mmlu as OOD, and we test models fine-tuned on a subset of mmlu with test samples from other mmlu subcategories. We then compute the accuracy, NLL., ECE., and additionally, the OOD detection performance measured by AUROC on the OOD test samples, using negative maximum softmax probability (Hendrycks and Gimpel, 2016) as the score.
**Implementation Details of LoRA Ensembles.** We build LoRA ensembles by fine-tuning LLaMA-13b (Touvron et al., 2023), which has 13 billion parameters, using Transformers (Wolf et al.,
2020) and Huggingface PEFT (Mangrulkar et al., 2022) for model and LoRA implementations. In most experiments, we build ensembles with \(M=5\) components, though our ablation study also contains larger ensembles. As in Hu et al. (2021), we apply the adapter \(\Delta W=\alpha BA\) only on the query and value matrices of the self-attention modules of LLaMA-13b, and we fix \(\alpha=32\). With rank \(r=8\), each ensemble component has 6 million trainable parameters. The adapter matrices \(B\) are initialized to be zero, while the entries of \(A\) are randomly initialized using Kaiming Uniform (He et al., 2015). We use AdamW for all experiments and run optimizations for 20 epochs with a fixed step size of \(5\mathrm{e}{-5}\). We use a batch size of 32 for cqa, 16 for obqa, arcc, and arce, and 8 for mmlu ss. and stem. Half-precision is used for all the forward and backward passes, after which we convert the output logits to single precision when computing metrics. For all datasets, we experiment with four variations of LoRA ensembles: the default AdamW configuration with \(\lambda=0.01\), denoted as None in the figures; KL regularization from Eq. (4) with \(\beta\in\{0.01,0.05,0.1\}\); early stopping after \(\{1,2,3\}\) epochs; and very large weight decay on \(B\) from Eq. (5) with \(\lambda\in\{1\mathrm{e}2,5\mathrm{e}2,1\mathrm{e}3\}\).
We consider the following approaches as baselines:
* **LoRA (M=1)** For all variations of LoRA ensembles, we report the averaged performance of the _single_ ensemble members. We represent these results with solid lines in trace figures, in contrast to dashed lines for the ensembled versions.
* **Few shot** For each question in the validation set, we append \((\mathbf{X},\mathbf{y})\) pairs from the training set in front of the prompts as "demonstration" to perform few shot learning (Brown et al., 2020). In our experiments, we randomly draw 3 pairs of \((\mathbf{X},\mathbf{y})\) without replacement and evaluate the performance through the average of 10 random draws. We perform few shot experiments only on the pre-trained model without any fine-tuning.
* **Last-layer ensembles** Last-layer fine-tuning, also known as linear probing (Du et al., 2021; Wu et al., 2020), refers to freezing the model and fine-tuning only the last linear layer. We fine-tuned the rows in the linear head that correspond to the tokens for the options, multiple times starting from the pre-trained weights under different random seeds, to construct an ensemble.
* **Monte Carlo (MC) dropout.** When dropout is employed at training time, we can use MC dropout (Gal and Ghahramani, 2016) to perform ensembling: instead of training multiple LoRA adapters, we can train a single one, keep dropout on at test time, and perform multiple forward passes with units randomly dropped. MC dropout has previously been adopted in masked language models for incorporating uncertainty estimation (Sankararaman et al., 2022; Vazhentsev et al., 2022). We combine dropout with _standard_ LoRA fine-tuning by adding dropout on the input of the LoRA adapter following the implementation of Mangrulkar et al. (2022); a minimal sketch of the prediction procedure is given after this list.
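The following sketch shows the standard MC-dropout prediction trick (our illustration; it assumes `model(inputs)` returns classification logits):

```python
import torch

def mc_dropout_predict(model, inputs, n_passes=5):
    """Monte Carlo dropout: keep dropout active at test time and average."""
    model.eval()
    for mod in model.modules():               # re-enable only the dropout layers
        if isinstance(mod, torch.nn.Dropout):
            mod.train()
    with torch.no_grad():
        probs = [torch.softmax(model(inputs), dim=-1) for _ in range(n_passes)]
    return torch.stack(probs).mean(dim=0)
```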
### Results
In Fig. 2, we present the validation accuracy and ECE. of the different LoRA ensembles fine-tuning variants after 20 epochs. Critically, ensembling usually gives considerable improvements in accuracy and calibration compared with the single-component versions, regardless of the regularization method or strength. LoRA ensembles also shows significantly improved accuracy compared with few shot learning (purple "x"), confirming the value of fine-tuning. Fig. 2 also shows that regularization usually improves calibration, as measured by ECE. However, the effect of regularization on accuracy is more inconsistent: stronger regularization often reduces accuracy (e.g. stronger regularization on arce reduces accuracy), though it sometimes increases accuracy (mmlu stem; early stopping).
Next, we look at the behavior of (regularized) LoRA ensembles across training (Fig. 3). This reinforces the results from Fig. 2. In particular, LoRA ensembles consistently improves both calibration and accuracy, whether applied with or without regularization (here, weight decay). However, weight decay has conflicting effects: it seems to usually reduce accuracy while improving calibration. Interestingly, the NLL metric becomes very large for several of the datasets (e.g. mmlu ss. and stem). This is likely because the NLL heavily penalizes overconfidence: assigning vanishingly low probability to the right answer. Notably, ensembling on its own was not sufficient to prevent this dramatic increase in NLL, while weight decay was sufficient to prevent it (and weight decay in combination with ensembling consistently gave the best NLL).
We present the performance of MC dropout in Fig. 4. We find that MC dropout shows a marginal improvement over the performance of a single model, while LoRA ensembles gives dramatically larger improvements in terms of both accuracy and calibration. The performance of last-layer ensembles is
presented in Table 1, which also shows worse accuracy than LoRA ensembles. Fine-tuning only the linear head might not be expressive enough for downstream task adaptation. Lastly, we find that ensembling is also helpful in the OOD setting (Fig. 5). While it fails to resolve catastrophic forgetting in cqa v.s. mmlu, it shows improvements in terms of both NLL and ECE., providing more reliable predictions on unseen domains.
### Additional Ablations
In this section, we perform extra ablations to gain a better understanding of the effect of the number of ensemble components and of the randomness sources on the performance of LoRA ensembles.
**Number of ensemble components.** To start with, we study the effect of the number of ensemble components \(M\); the results are shown in Fig. 6, where we experiment with LoRA ensembles under different numbers of ensemble components. We collect 20 ensemble components in total and report the average results of 5 random draws from them for each \(M\). We notice that increasing \(M\) improves all metrics; however, the marginal benefit of additional components diminishes as \(M\) becomes larger.
**Ensemble diversity under different regularization strengths.** As shown in Fig. 2, strong KL regularization and aggressive early stopping can cause the performance gain of ensembling to vanish. This is not surprising, in that KL regularization directly forces all ensemble components to make predictions similar to the pre-trained model, while early stopping prevents the different ensemble models from moving further away from the pre-trained model. Weight decay suffers the least from this problem; we suspect this is due to the complicated relationship between the weight space and the output space, and to the fact that optimization runs long enough for the ensemble members to diverge.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Acc. \(\uparrow\)} & \multicolumn{2}{c}{NLL. \(\downarrow\)} & \multicolumn{2}{c}{ECE. \(\downarrow\)} \\ & LoRA Ens. & Last-layer Ens. & LoRA Ens. & Last-layer Ens. & LoRA Ens. & Last-layer Ens. \\ \hline cqa & 0.83 & 0.52 & 0.58 & 1.25 & 0.06 & 0.06 \\ obqa & 0.85 & 0.48 & 0.58 & 1.27 & 0.07 & 0.12 \\ arce & 0.86 & 0.72 & 0.92 & 0.79 & 0.09 & 0.06 \\ arcc & 0.69 & 0.48 & 1.46 & 1.30 & 0.19 & 0.15 \\ mmlu ss. & 0.60 & 0.46 & 2.72 & 1.47 & 0.28 & 0.19 \\ mmlu stem & 0.41 & 0.32 & 4.04 & 1.72 & 0.40 & 0.25 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of last-layer Ensembles. Last-layer Ensembles underfit the data, showing accuracy significantly worse than LoRA ensembles.
Figure 3: **LoRA ensembles improves accuracy while regularization prevents NLL from blowing up.** For all ensemble results we use \(M=5\) components. We use \(\lambda=1\mathrm{e}3\) for mmlu subsets and \(\lambda=1\mathrm{e}2\) for others for weight decay.
**Ensemble diversity under different sources of randomness.** Next, we study the sources of randomness in LoRA ensembles. As discussed in previous sections, the diversity of LoRA ensembles comes from two sources: the random initialization and the randomness from dataset shuffling (i.e. SGD noise). It is often observed that random initialization contributes most to the diversity of an ensemble (Fort et al., 2019). However, it is unclear whether this is the case for LoRA ensembles. To investigate, we conduct experiments on cqa under three settings: dataset shuffling with fixed initialization, random initialization with fixed dataset shuffling, and no randomness (both fixed). For each setting, we conduct experiments with 5 independent trials, and the results are presented in Fig. 7. We observe that LoRA ensembles can work with either source of randomness alone, while randomness from dataset shuffling (i.e. SGD noise) contributes more to the ensemble performance.
Figure 4: **Ensemble of LoRA significantly outperforms MC dropout under the same number of ensemble members.** When employing dropout during fine-tuning, an alternative ensemble strategy becomes available: keeping dropout on at test time to implement Monte Carlo (MC) dropout. However, MC dropout offers only marginal performance gains compared to a standalone model, and is outperformed by ensembles of independently trained LoRA models when both methods employ the same number of ensemble members (chosen as 5 in our experiments).
Figure 5: **Ensembles offer benefits for accuracy and calibration over regularized and unregularized fine-tuning approaches in OOD settings.** Note that all methods show AUROC around or lower than \(0.5\) on the second and third rows; we suspect the models would fail to _detect_ OOD samples if they can _generalize_ to them, as the accuracy increases throughout fine-tuning.
However, it is hard to decompose exactly the contribution of each source of randomness, and in practice, one should incorporate both random initialization and dataset shuffling for better diversity.
## 6 Discussion
In this paper, we develop a new method for LLM fine-tuning: LoRA ensembles. Our empirical results on 6 datasets demonstrate that our proposed method improves both accuracy and calibration over fine-tuning a single model. In addition, we propose to combine regularization techniques with ensembling for better calibration. Broadly, LLMs have demonstrated their power in a variety of scenarios, but their safety issues have started to draw more and more attention (Wei et al., 2023; Jones and Steinhardt, 2022; Perez et al., 2022): real-world applications require the LLM to be not only accurate but also reliable. Our method provides a key ingredient towards addressing these concerns, in that it helps LLMs make not only accurate but also calibrated predictions.
Figure 6: **Increasing the number of ensembles improves accuracy and calibration.** With LoRA ensembles, we can efficiently ensemble with a large number of components; however, the performance gains from increasing the number of components become less substantial with larger \(M\).
Figure 7: **Randomness from initialization and dataset shuffling both contribute to the diversity of LoRA ensembles.** The diversity of ensembles is reflected by the gap between the dashed lines and solid lines. When regularization is used, SGD noise from dataset shuffling alone is more beneficial than random initialization alone. |
2310.20483 | Measuring multidimensional heterogeneity in emergent social phenomena | Measuring inequalities in a multidimensional framework is a challenging
problem which is common to most fields of science and engineering. Nevertheless,
despite the enormous amount of research illustrating the fields of
application of inequality indices, and of the Gini index in particular, very
few consider the case of a multidimensional variable. In this paper, we
consider in some detail a new inequality index, based on the Fourier
transform, that can be fruitfully applied to measure the degree of
inhomogeneity of multivariate probability distributions. This index exhibits a
number of interesting properties that make it very promising in quantifying the
degree of inequality in data sets of complex and multifaceted social phenomena. | Giuseppe Toscani | 2023-10-31T14:20:45Z | http://arxiv.org/abs/2310.20483v1 | # Measuring multidimensional heterogeneity in emergent social phenomena
###### Abstract.
Measuring inequalities in a multidimensional framework is a challenging problem which is common to most fields of science and engineering. Nevertheless, despite the enormous amount of research illustrating the fields of application of inequality indices, and of the Gini index in particular, very few consider the case of a multidimensional variable. In this paper, we consider in some detail a new inequality index, based on the Fourier transform, that can be fruitfully applied to measure the degree of inhomogeneity of multivariate probability distributions. This index exhibits a number of interesting properties that make it very promising in quantifying the degree of inequality in data sets of complex and multifaceted social phenomena.
**Keywords.** Multivariate inequality measures; Gini index; \(T\)-index; Lorenz zonoid; Fourier transforms.
## 1. Introduction
Among other approaches, the description of social phenomena in a multi-agent system can be successfully obtained by resorting to statistical physics, and, in particular, to methods borrowed from kinetic theory of rarefied gases. The main goal of the mathematical modeling is to construct master equations of Boltzmann type, usually referred to as kinetic equations, suitable to describe the time-evolution of some _social_ characteristic of the agents, like wealth, opinion, knowledge, or others [11, 27, 28].
The building block of kinetic theory is represented by the details of microscopic interactions, which, similarly to binary interactions between particle velocities in the classical kinetic theory of rarefied gases, describe the elementary variation law of the selected agent's traits. Then, the kinetic description consequent to the microscopic law of variation is able to capture both the time evolution of the number density and the steady profile, an important equilibrium distribution that should best summarize the characteristics of the phenomenon under investigation.
Once the emergent steady profile relative to a social phenomenon has been identified, various features allow one to obtain a more precise measurement of its social characteristics, and thus to better understand the macroscopic effect of the microscopic behavioral interactions of agents.
Among the various features that can be introduced to measure properties of equilibria emerging from kinetic equations modeling social phenomena, particular importance has been assumed by inequality indices, quantitative scores that take values in the unit interval, with the zero score characterizing perfect equality.
To better clarify the point, we refer to a classical example provided by the kinetic description of wealth distribution in a western society. Among the kinetic models introduced in recent years to study the evolution of wealth distribution in a multi-agent society [28], a Fokker-Planck type equation assumed a leading role. This equation, that reads
\[\frac{\partial f}{\partial t}=\frac{\sigma}{2}\frac{\partial^{2}}{\partial w^{2 }}\left(w^{2}f\right)+\lambda\frac{\partial}{\partial w}\left((w-1)f\right), \tag{1.1}\]
describes the evolution of the wealth distribution density \(f(w,t)\) towards a steady state. In (1.1) \(\lambda\) and \(\sigma\) denote two positive constants related to essential properties of the trade rules of the agents, linked to the saving propensity and, respectively, the risk. Equation (1.1) has been first derived by Bouchaud and Mezard [9] through a mean field limit procedure applied to a stochastic dynamical equation for the wealth density. The same equation was subsequently obtained by the present authors with Cordier and Pareschi [13] via an asymptotic procedure from a Boltzmann-type kinetic model for trading agents.
The unique stationary solution of unit mass of (1.1) is given by the inverse Gamma distribution [9, 13]
\[f_{\infty}(w)=\frac{(\mu-1)^{\mu}}{\Gamma(\mu)}\frac{\exp\left(-\frac{\mu-1} {w}\right)}{w^{1+\mu}}, \tag{1.2}\]
where
\[\mu=1+2\frac{\lambda}{\sigma}>1.\]
This stationary distribution, as predicted by the analysis of the italian economist Villedo Pareto [30], exhibits a power-law tail for large values of the wealth variable.
In this context, the classical feature is to quantify the degree of economic _inequality_ contained in the wealth distribution associated to this equilibrium shape in terms of the parameters \(\lambda\) and \(\sigma\), a quantification that is usually done by resorting to the Gini index, a well-known measure of inequality first proposed by the Italian statistician Corrado Gini more than a century ago [20, 21].
In economics, inequality indices quantify the socio-economic divergence of a given wealth measure from the state of perfect equality. Their relevance is certified by the fact that, in addition to the Gini index, many other inequality indices have been proposed to classify wealth measures [8, 14, 15, 23].
However, as recently discussed in [7, 17, 18], the challenge of measuring the statistical heterogeneity of measures is not limited to economics, but arises in most fields of science and engineering, and it is one of the fundamental features of data analysis.
A marked limitation in this type of analysis is that the inequality indices mainly used in the literature work well for one-dimensional features, while their extension to many dimensions presents several difficulties. This is in contrast with the fact that in several problems arising from socio-economic phenomena the prevailing interest is related to understanding multidimensional phenomena.
Remaining in the field of the kinetic description of socio-economic phenomena, we quote here some examples in which the social aspects of the society under study are intimately connected, and have been treated by resorting to a kinetic framework that naturally gives rise to multivariate equilibria. The first one refers to a kinetic equation for the evolution of the probability distribution of two goods among a huge population of agents [32], where binary exchanges are characterized by Cobb-Douglas utility functions and the Edgeworth box for the description of the common exchange area in which utility is increasing for both agents. This leads to a drift-diffusion equation of Fokker-Planck type in two dimensions for the joint distribution of the two goods.
The second example is related to a deep understanding of the joint action of knowledge and wealth in the formation of stationary wealth profiles [29]. There, the underlying Fokker-Planck equation drives the system towards a steady profile which depends on both the knowledge and wealth variables.
The last example refers to a fully coupled mathematical model in which knowledge and social status of individuals in a western society influence each other [16]. Also in this case, one has to deal with a bivariate equilibrium profile from which one would extract global informations without resorting to one-dimensional inequality measures applied to the marginal distributions.
Measuring inequalities in a multidimensional framework is a question which is nowadays a priority also in the European agenda [3, 4, 25]. Indeed, as outlined in introduction to this action, "the pursuit of a more equal and fairer Europe requires extensive knowledge on prevailing inequalities across multiple life domains. Inequality is a complex and multifaceted phenomenon, and every attempt to assess multidimensional inequalities comes with a number of conceptual and empirical challenges. For example, inequality and poverty do not necessarily move in the same direction: low poverty levels in a society may be combined with high inequality due to large differences between those at the top and those in the middle of the distribution. Against this backdrop, the EU Multidimensional Inequality Monitoring Framework aims to contribute to the measurement, monitoring and analysis of a wide range of different aspects of inequality".1
Footnote 1: [https://composite-indicators.jrc.ec.europa.eu/multidimensional-inequality](https://composite-indicators.jrc.ec.europa.eu/multidimensional-inequality)
Currently, the literature on multidimensional inequality measures is vast, as can be seen by taking a look at the extensive references of some recent contributions [1, 5]. However, as listed below, most of these approaches to multidimensional indices are based on classical arguments, derived from classical economic indices.
An indispensable tool to build inequality indices is the Lorenz function and its graphical representation, the Lorenz curve [26]. The Lorenz curve plots the percentage of total income earned by the various sectors of the population, ordered by the increasing size of their incomes. The Lorenz curve is typically represented as a curve in the unit square of opposite vertices in the origin of the axes and the point \((1,1)\), starting from the origin and ending at the point \((1,1)\).
The diagonal of the square through the origin is the line of perfect equality, representing a situation in which all individuals have the same income. The closer the Lorenz curve is to the diagonal, the more equal the distribution of income.
This idea of _closeness_ between the line of perfect equality and the Lorenz curve can be expressed in many ways, each of which gives rise to a possible measure of inequality. Thus, starting from the Lorenz curve, several indices of inequality can be
defined, including the Gini index [20, 21]. Various indices were obtained by looking at the maximal distance between the line of perfect equality and the Lorenz curve, either horizontally or vertically, or alternatively parallel to the other diagonal of the unit square [12, 17].
Starting from this framework, different multivariate indices have been proposed in the pertinent literature. Most of them are based on the notion of Lorenz zonoid [24], a multi-dimensional generalization of the Lorenz curve.
Despite the enormous amount of research illustrating the fields of application of inequality indices, the use of arguments based on Fourier transforms appears rather limited. In particular, although the Gini index can be easily expressed in terms of the Fourier transform, its Fourier expression seems not to have been considered at all in applications.
The importance of expressing inequality measures in terms of the Fourier transform of measures has been recently outlined in [34], not only by expressing well-known one-dimensional inequality measures in terms of the Fourier transform, but also by introducing and studying a novel inequality index directly expressed in terms of the Fourier transform.
In the rest of the paper, we will show how this new inequality measure can be easily generalized to cover multivariate probability distributions, highlighting its main properties. We restrict the forthcoming analysis to theoretical considerations only, referring the interested reader to its application in the field of multivariate statistics [22].
## 2. A new inequality index for multivariate distributions
The goal of this Section is to present in some detail a novel inequality index which can be applied to measure the heterogeneity of multivariate distributions [22]. This index is obtained by suitably generalizing a new one-dimensional index introduced in [34].
In what follows, for a given \(1\leq n\in\mathbb{N}\), we denote by \(P_{s}(\mathbb{R}^{n})\), \(s\geq 1\), the class of all probability measures \(F=F(\mathbf{x})\) on the Borel subsets of \(\mathbb{R}^{n}\) such that
\[m_{s}(F)=\int_{\mathbb{R}^{n}}|\mathbf{x}|^{s}dF(\mathbf{x})<+\infty,\]
where, for a given column vector \(\mathbf{v}\) of dimension \(n\), \(\mathbf{v}^{T}=(x_{1},x_{2},\ldots,x_{n})=\mathbf{x}\) is a point in \(\mathbb{R}^{n}\), and \(|\mathbf{v}|=|\mathbf{x}|=\sqrt{\mathbf{x}\mathbf{v}}\) is the modulus of the vector, i.e. the distance of the point \(\mathbf{x}\) from the origin of the cartesian axes.
Further, we denote by \(\tilde{P}_{s}(\mathbb{R}^{n})\) the class of probability measures \(F\in P_{s}(\mathbb{R}^{n})\) which possess a mean value vector \(\mathbf{m}\) with positive components \(m_{k}\), \(k=1,2,\ldots,n\), i.e.
\[m_{k}(F)=\int_{\mathbb{R}^{n}}x_{k}\,dF(\mathbf{x})>0,\quad k=1,2,\ldots,n,\]
and with \(P_{s}^{+}(\mathbb{R}^{n})\) the subset of probability measures \(F\in P_{s}(\mathbb{R}^{n})\) such that \(F(\mathbf{x})=0\) if at least one component \(x_{k}\leq 0,k=1,2,\ldots,n\).
On the set \(\tilde{P}_{s}(\mathbb{R}^{n})\) of probability measures, we consider the set \(\mathcal{F}_{s}^{n}\) of their \(n\)-dimensional Fourier transforms, where, for \(F=F(\mathbf{x})\in\tilde{P}_{s}(\mathbb{R}^{n})\),
\[\widehat{f}(\boldsymbol{\xi})=\int_{\mathbb{R}^{n}}e^{-i\mathbf{x}\boldsymbol {\xi}}\,dF(\mathbf{x}). \tag{2.1}\]
In (2.1) we denoted by \(\boldsymbol{\xi}\) the \(n\)-dimensional column vector of components \(\xi_{k},k=1,2,\ldots,n\).
When \(n=1\), as an alternative to well-known inequality indices, for a given distribution \(F\in P_{s}(\mathbb{R})\), the following measure of heterogeneity was proposed in [34]
\[T(F)=\frac{1}{2m}\sup_{\xi\in\mathbb{R}}\left|\left.\frac{d\widehat{f}(\xi)}{d \xi}\right|_{\xi=0}\widehat{f}(\xi)-\frac{d\widehat{f}(\xi)}{d\xi}\right|. \tag{2.2}\]
In definition (2.2), \(m>0\) denotes the mean value of the distribution \(F\).
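To make the definition concrete, here is a minimal numerical sketch (ours, not from [34]) that evaluates (2.2) for an empirical sample by approximating the supremum on a finite grid of \(\xi\) values; the grid bounds are arbitrary choices.

```python
import numpy as np

def t_index(samples, xi_max=50.0, n_xi=2001):
    """Evaluate the T index (2.2) for an empirical distribution.

    Uses the empirical characteristic function f_hat(xi) = mean(exp(-i*x*xi))
    and its derivative, approximating the supremum on a finite xi grid.
    """
    x = np.asarray(samples, dtype=float)
    m = x.mean()                                  # (2.2) requires m > 0
    xi = np.linspace(-xi_max, xi_max, n_xi)
    phase = np.exp(-1j * np.outer(xi, x))         # shape (n_xi, n_samples)
    f_hat = phase.mean(axis=1)
    df_hat = (-1j * x * phase).mean(axis=1)       # d f_hat / d xi on the grid
    df0 = -1j * m                                 # d f_hat / d xi at xi = 0
    return np.abs(df0 * f_hat - df_hat).max() / (2.0 * m)

# A point mass sitting at its own mean has zero inequality:
print(t_index(np.full(500, 3.0)))                 # ~ 0.0
```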
Apparently, the index (2.2) is completely disconnected from the other most-used indices, including the well-known Gini index [20, 21], strongly related to the Lorenz curve [17]. However, looking at the Gini index from the Fourier transform side, an interesting relationship appears.
Let us consider a probability measure \(F\in P_{s}^{+}(\mathbb{R})\), of mean value \(m>0\). As shown in [34], the classical Gini index
\[G(F)=1-\frac{1}{m}\int_{\mathbb{R}_{+}}(1-F(x))^{2}\,dx. \tag{2.3}\]
can be expressed in terms of one-dimensional Fourier transform as follows:
\[G(F)=1-\frac{1}{2\pi m}\int_{\mathbb{R}}\frac{|1-\widehat{f}(\xi)|^{2}}{|\xi| ^{2}}\,d\xi. \tag{2.4}\]
Expression (2.4) clarifies that the Fourier expression of the classical Gini index is a function of a certain distance between probability measures \(F\) and \(G\)[33], namely
\[d_{2}(F,G)=\int_{\mathbb{R}}\frac{|\widehat{f}(\xi)-\widehat{g}(\xi)|^{2}}{| \xi|^{2}}\,d\xi.\]
Resorting to this analogy, in [34] new inequality measures have been introduced, some of them related to the supremum distance
\[d_{\infty}(F,G)=\sup_{\xi\in\mathbb{R}}\frac{|\widehat{f}(\xi)-\widehat{g}( \xi)|}{|\xi|}. \tag{2.5}\]
This type of metric has been extensively studied in connection with the convergence to equilibrium of kinetic equations, as an alternative to more classical entropies [19, 10, 33].
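As a quick numerical sanity check of the Fourier representation (2.4), the following sketch (our own, in Python) recovers the well-known value \(G=1/2\) for the unit exponential distribution, whose Fourier transform is \(\widehat{f}(\xi)=1/(1+i\xi)\):

```python
import numpy as np
from scipy.integrate import quad

# Unit exponential: f_hat(xi) = 1/(1 + i*xi), mean m = 1, Gini index = 1/2.
m = 1.0

def integrand(xi):
    f_hat = 1.0 / (1.0 + 1j * xi)
    # |1 - f_hat|^2 / xi^2 = 1/(1 + xi^2); the limit at xi = 0 equals 1
    return 1.0 if xi == 0.0 else abs(1.0 - f_hat) ** 2 / xi ** 2

val, _ = quad(integrand, 0.0, np.inf)             # integrand is even in xi
print(1.0 - 2.0 * val / (2.0 * np.pi * m))        # ~ 0.5, matching (2.4)
```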
Let \(X\) be a random variable of mean value \(m>0\), characterized by a differentiable probability measure \(F\in P_{s}^{+}(\mathbb{R})\), and denote by \(f(x)=dF(x)/dx\) its probability density. In this case, in addition to the classic expression (2.3), the Gini index can be expressed in alternative forms, one of which is particularly interesting to highlight points of contact with the \(T\)-index defined by (2.2). This alternative expression reads
\[G(F)=2\int_{\mathbb{R}_{+}}(1-F(x))\left(f(x)-\frac{x}{m}f(x)\right)\,dx. \tag{2.6}\]
Indeed,
\[\int_{\mathbb{R}_{+}}(1-F(x))f(x)\,dx=-\frac{1}{2}\int_{\mathbb{R}_{+}}\frac{d}{dx}(1-F(x))^{2}\,dx=-\frac{1}{2}\left.(1-F(x))^{2}\right|_{0}^{\infty}=\frac{1}{2},\]
while, integrating by parts
\[\int_{\mathbb{R}_{+}}(1-F(x))f(x)\frac{x}{m}\,dx=-\frac{1}{2}\int_{\mathbb{R}_{+}}\frac{x}{m}\,\frac{d}{dx}(1-F(x))^{2}\,dx=\frac{1}{2m}\int_{\mathbb{R}_{+}}(1-F(x))^{2}\,dx.\]
Consequently, if we set \(H(x)=1-F(x)\), thanks to Plancherel identity we obtain
\[\begin{split} G(F)=& 2\int_{\mathbb{R}_{+}}(1-F(x))\left(f(x)-\frac{x}{m}f(x)\right)\,dx=\\ &\frac{1}{m\pi}\int_{\mathbb{R}}\overline{\widehat{H}(\xi)}\left(\left.\frac{d\widehat{f}(\xi)}{d\xi}\right|_{\xi=0}\widehat{f}(\xi)-\frac{d\widehat{f}(\xi)}{d\xi}\right)\,d\xi,\end{split} \tag{2.7}\]
where \(\overline{\widehat{H}(\xi)}\) is the complex conjugate of the Fourier transform of \(1-F(x)\), given by

\[\widehat{H}(\xi)=\frac{1-\widehat{f}(\xi)}{i\xi}.\]
Consequently, the value of the Gini index depends on the product of two different quantities, working in opposite directions. Indeed, according to (2.5), the term \(\overline{\widehat{H}(\xi)}\) quantifies the distance of \(\widehat{f}(\xi)\) from the state of perfect inequality, represented by the value \(1\), the Fourier transform of a Dirac delta function located in \(x=0\). On the contrary, the term
\[\left.\frac{d\widehat{f}(\xi)}{d\xi}\right|_{\xi=0}\widehat{f}(\xi)-\frac{d \widehat{f}(\xi)}{d\xi},\]
that vanishes in correspondence to \(e^{-im\xi}\), the Fourier transform of a Dirac delta function localized in the mean value \(m\) of \(f(x)\), quantifies the distance of \(\widehat{f}(\xi)\) from the state of perfect equality. This clarifies both the nonlinearity of Gini index, and the advantages of the choice of the inequality index (2.2) as alternative measure of the heterogeneity of the distribution. A further advantage is represented by the possibility to easily extend the measure (2.2) to higher dimensions.
Following [22], we introduce on \(\mathcal{F}_{s}^{n}\) the multivariate inequality index \(T_{n}(F)\), expressed by the formula
\[T_{n}(F)=\frac{1}{2|\mathbf{m}|}\sup_{\boldsymbol{\xi}\in\mathbb{R}^{n}}\left| \nabla\widehat{f}(\boldsymbol{\xi}=\mathbf{0})\widehat{f}(\boldsymbol{\xi})- \nabla\widehat{f}(\boldsymbol{\xi})\right|. \tag{2.8}\]
In definition (2.8), \(\nabla\widehat{f}(\boldsymbol{\xi})\) denotes the gradient of the scalar function \(\widehat{f}(\boldsymbol{\xi})\). Indeed, \(F\in P_{s}(\mathbb{R}^{n})\) implies that \(\widehat{f}(\boldsymbol{\xi})\) is continuously differentiable.
It is immediate to show that the functional \(T_{n}(F)\) is invariant with respect to the scaling (dilation)
\[F(\mathbf{x})\to F(c\mathbf{x}),\quad c>0.\]
Moreover, as shown in [34] for the one-dimensional index, \(T_{n}\) is bounded from above by \(1\). Indeed, since for any given \(F\in P_{s}^{+}(\mathbb{R}^{n})\) it holds \(|\widehat{f}(\boldsymbol{\xi})|\leq\widehat{f}(\boldsymbol{0})=1\), and
\[\frac{\partial\widehat{f}(\boldsymbol{\xi})}{\partial\xi_{k}}=-i\int_{(\mathbb{R}_{+})^{n}}x_{k}e^{-i\mathbf{x}\boldsymbol{\xi}}\,dF(\mathbf{x}),\quad k=1,2,\ldots,n,\]
one obtains the bound
\[\left|\frac{\partial\widehat{f}(\boldsymbol{\xi})}{\partial\xi_{k}}\right|\leq\int_{(\mathbb{R}_{+})^{n}}x_{k}\left|e^{-i\mathbf{x}\boldsymbol{\xi}}\right|\,dF(\mathbf{x})=m_{k}, \tag{2.9}\]
which implies \(|\nabla\widehat{f}(\boldsymbol{\xi})|\leq|\nabla\widehat{f}(\boldsymbol{\xi} =\boldsymbol{0})|=|\mathbf{m}|\).
Hence, by the triangular inequality one concludes that \(T_{n}(F)\) satisfies the usual bounds
\[0\leq T_{n}(F)\leq 1, \tag{2.10}\]
and \(T_{n}(F)=0\) if and only if \(\widehat{f}(\boldsymbol{\xi})\) satisfies the differential equations
\[\frac{\partial\widehat{f}(\boldsymbol{\xi})}{\partial\xi_{k}}=\left.\frac{ \partial\widehat{f}(\boldsymbol{\xi})}{\partial\xi_{k}}\right|_{\boldsymbol{ \xi}=\boldsymbol{0}}\widehat{f}(\boldsymbol{\xi}),\quad k=1,2,\ldots,n,\]
with \(\widehat{f}(\boldsymbol{0})=1\).
Thus, as in the one-dimensional case, \(T_{n}(F)\) vanishes if and only if \(\widehat{f}(\boldsymbol{\xi})=e^{-i\mathbf{m}\boldsymbol{\xi}}\), namely if \(\widehat{f}(\boldsymbol{\xi})\) is the Fourier transform of a Dirac delta function located at the point \(\mathbf{x}=\mathbf{m}^{T}(F)\). Note however that, even if the functional \(T_{n}\) is defined on the whole class \(\tilde{P}_{s}(\mathbb{R}^{n})\), the upper bound is lost if the probability measure \(F\notin P_{s}^{+}(\mathbb{R}^{n})\), since in this case inequality (2.9) is no longer valid.
It is remarkable that the functional \(T_{n}(F)\) defines a measure of inequality for multivariate densities which satisfies most of the properties satisfied by its one-dimensional version \(T(F)\).
Proceeding as in the one-dimensional case, we can check that the upper bound in (2.10) is reached simply by evaluating the multivariate index for a multivariate random variable \(\mathbf{X}\) taking only the two values \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{n})\) and \(\mathbf{b}=(b_{1},b_{2},\ldots,b_{n})\) in \(\mathbb{R}^{n}\) with probabilities \(1-p\) and, respectively, \(p\), where \(0<p<1\). As we will see in Section 3, this example clarifies the advantages of measuring multidimensional heterogeneity by means of this new index, with respect to the use of existing generalizations of the Gini index to the multidimensional setting [5, 6]. For this reason, we work it out in detail.
The Fourier transform of the distribution \(F\) of \(\mathbf{X}\) is given by
\[\widehat{f}(\boldsymbol{\xi})=(1-p)e^{-i\mathbf{a}\boldsymbol{\xi}}+pe^{-i \mathbf{b}\boldsymbol{\xi}}. \tag{2.11}\]
Consequently
\[\nabla\widehat{f}(\boldsymbol{\xi})=-i\left[(1-p)\mathbf{a}^{T}e^{-i\mathbf{a }\boldsymbol{\xi}}+p\mathbf{b}^{T}e^{-i\mathbf{b}\boldsymbol{\xi}}\right]\]
and
\[\nabla\widehat{f}(\boldsymbol{\xi}=\boldsymbol{0})=-i\left[(1-p)\mathbf{a}^{ T}+p\mathbf{b}^{T}\right],\]
so that
\[\nabla\widehat{f}(\boldsymbol{\xi}=\boldsymbol{0})\widehat{f}(\boldsymbol{ \xi})-\nabla\widehat{f}(\boldsymbol{\xi})=i\,p(1-p)(\mathbf{b}^{T}-\mathbf{a} ^{T})\left[e^{-i\mathbf{a}\boldsymbol{\xi}}-e^{-i\mathbf{b}\boldsymbol{\xi}} \right].\]
Therefore
\[T_{n}(F)= \frac{1}{2|\mathbf{m}|}p(1-p)|\mathbf{b}^{T}-\mathbf{a}^{T}|\sup_{ \boldsymbol{\xi}\in\mathbb{R}^{n}}\big{|}e^{-i\mathbf{a}\boldsymbol{\xi}}-e^{-i \mathbf{b}\boldsymbol{\xi}}\big{|}=\] \[\frac{1}{2|\mathbf{m}|}p(1-p)|\mathbf{b}^{T}-\mathbf{a}^{T}|\sup_{ \boldsymbol{\xi}\in\mathbb{R}^{n}}\big{|}1-e^{-i(\mathbf{b}-\mathbf{a}) \boldsymbol{\xi}}\big{|}=\frac{1}{|\mathbf{m}|}p(1-p)|\mathbf{b}^{T}-\mathbf{a }^{T}|.\]
Hence, expanding the value of the mean \(\mathbf{m}\) we get the formula
\[T_{n}(F)=\frac{p(1-p)|\mathbf{b}^{T}-\mathbf{a}^{T}|}{|(1-p)\mathbf{a}^{T}+p \mathbf{b}^{T}|}. \tag{2.12}\]
This expression has a structure which does not differ from the one-dimensional formula computed in [34], which reads
\[T(F)=T_{1}(F)=\frac{p(1-p)|b-a|}{(1-p)a+pb}.\]
In fact, when the mean is fixed, the value of the index does not depend on the positions of the two points \(\mathbf{a}\) and \(\mathbf{b}\), but only on their distance.
To show that formula (2.12) can be used to reach the upper bound, let us now consider the case in which, for a given positive constant \(\epsilon\ll 1\), \(p=\epsilon\), the point \(\mathbf{a}=\mathbf{0}\), while \(\mathbf{b}=\mathbf{m}/\epsilon\) is located far away, leaving the mean value \(\mathbf{m}\) unchanged. In this case \(T_{n}(F)=1-\epsilon\), a value which, as \(\epsilon\to 0\), converges to the upper bound expressed by the value \(1\).
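A short numerical illustration of formula (2.12) and of this limiting construction (a sketch with an arbitrary mean vector of our choosing):

```python
import numpy as np

def t_two_point(a, b, p):
    """Closed form (2.12) of T_n for a two-point distribution."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    mean = (1.0 - p) * a + p * b
    return p * (1.0 - p) * np.linalg.norm(b - a) / np.linalg.norm(mean)

# The extreme configuration of the text: a = 0, b = m/eps carrying mass eps,
# so the mean stays equal to m while T_n approaches 1 as eps -> 0.
m, eps = np.array([1.0, 2.0]), 1e-3
print(t_two_point(np.zeros(2), m / eps, eps))     # ~ 1 - eps
```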
As with its one-dimensional version, we can further show that the inequality index \(T_{n}\) satisfies a number of properties [22].
Let \(F,G\in\tilde{P}_{s}(\mathbb{R}^{n})\) be two probability measures with the same mean value, say \(\mathbf{m}\). Then, for any given \(\tau\in(0,1)\) it holds
\[T_{n}(\tau F+(1-\tau)G)\leq\tau\,T_{n}(F)+(1-\tau)T_{n}(G). \tag{2.13}\]
Inequality (2.13) shows the convexity of the functional \(T_{n}\) on the set of probability measures with the same mean.
Another interesting property characterizing the inequality index \(T_{n}\) is linked to its behavior when evaluated on convolutions. Let \(\mathbf{X}\) and \(\mathbf{Y}\) be independent multivariate random variables with probability measures in \(\tilde{P}_{s}(\mathbb{R}^{n})\), and mean values \(\mathbf{m}_{X}\) and \(\mathbf{m}_{Y}\), respectively. Then, if \(\widehat{f}(\boldsymbol{\xi})\) and \(\widehat{g}(\boldsymbol{\xi})\) denote the Fourier transforms of their respective probability measures, the Fourier transform \(\widehat{h}(\boldsymbol{\xi})\) of the distribution measure of the sum \(\mathbf{X}+\mathbf{Y}\) is equal to the product \(\widehat{f}(\boldsymbol{\xi})\widehat{g}(\boldsymbol{\xi})\).
Then, it can be shown that [22]
\[T_{n}(\mathbf{X}+\mathbf{Y})\leq\frac{|\mathbf{m}_{X}|}{|\mathbf{m}_{X}+\mathbf{m}_{Y}|}T_{n}(\mathbf{X})+\frac{|\mathbf{m}_{Y}|}{|\mathbf{m}_{X}+\mathbf{m}_{Y}|}T_{n}(\mathbf{Y}). \tag{2.14}\]
It is remarkable that, in contrast with the one-dimensional case, in (2.14) the sum of the coefficients in front of the inequality indices \(T_{n}(\mathbf{X})\) and \(T_{n}(\mathbf{Y})\) is always greater than one. Nevertheless, one can extract from (2.14) some useful consequences.
In particular, if \(\mathbf{X}\) and \(\mathbf{Y}\) belong to \(P_{s}^{+}(\mathbb{R}^{n})\) and \(\mathbf{Y}\) is a random variable that takes the value \(\mathbf{m}\) with positive components with probability \(1\) (so that \(\widehat{g}(\boldsymbol{\xi})=e^{-i\mathbf{m}\boldsymbol{\xi}}\) and \(T_{n}(\mathbf{Y})=0\)), then
\[T_{n}(\mathbf{X}+\mathbf{Y})=\frac{|\mathbf{m}_{X}|}{|\mathbf{m}_{X}+\mathbf{m}_ {Y}|}T_{n}(\mathbf{X})<T_{n}(\mathbf{X}), \tag{2.15}\]
since in this case the length of the sum of two vectors with positive components is bigger than the length of either. It is remarkable that the same result holds even if only one component of the \(\mathbf{Y}\) variable is bigger than zero, while the others are zero. The meaning of inequality (2.15) is clear. Since in this case \(\mathbf{X}+\mathbf{Y}\) is nothing but \(\mathbf{X}+\mathbf{m}\), which corresponds to adding the constant values \(m_{k}\) to \(X_{k}\), for \(k=1,2,\ldots,n\), this property asserts that adding a positive constant value to one or more components of each agent's state decreases inequality.
Also, if the independent random variables \(\mathbf{X}_{1}\) and \(\mathbf{X}_{2}\) are distributed with the same law as \(\mathbf{X}\), so that their mean values are equal, thanks to the scale property
\[T_{n}\left(\frac{\mathbf{X}_{1}+\mathbf{X}_{2}}{2}\right)=T_{n}\left(\mathbf{ X}_{1}+\mathbf{X}_{2}\right)\leq T_{n}(\mathbf{X}), \tag{2.16}\]
while the mean of \((\mathbf{X}_{1}+\mathbf{X}_{2})/2\) is equal to the mean of \(\mathbf{X}\).
A third important consequence of inequality (2.14) is related to the situation in which the random variable \(\mathbf{Y}=\mathbf{N}\) represents noise (of mean value \(\mathbf{m}>0\)) that is present when measuring the inequality index of \(\mathbf{X}\). The classical choice is that the additive noise is represented by a Gaussian variable of mean \(\mathbf{m}\) and covariance matrix \(\Sigma\).
We have in this case
\[T_{n}(\mathbf{X}+\mathbf{N})\leq\frac{|\mathbf{m}_{X}|}{|\mathbf{m}_{X}+ \mathbf{m}|}T_{n}(\mathbf{X})+\frac{|\mathbf{m}|}{|\mathbf{m}_{X}+\mathbf{m}| }T_{n}(\mathbf{N}). \tag{2.17}\]
Hence, a precise upper bound can be obtained once we know the explicit value of the inequality index \(T_{n}(\mathbf{N})\). This leads to the interesting question relative to the (explicit) evaluation of the inequality index \(T_{n}\) of a random multivariate Gaussian variable.
This evaluation has been done in [22]. The Fourier transform of the distribution function \(F\) of a multivariate Gaussian variable \(\mathbf{N}=(N_{1},N_{2},\ldots,N_{n})\) in \(\mathbb{R}^{n}\), \(n>1\), is given by the expression
\[\widehat{f}(\boldsymbol{\xi})=\exp\left\{-i\mathbf{m}^{T}\boldsymbol{\xi}- \frac{1}{2}\boldsymbol{\xi}^{T}\Sigma\boldsymbol{\xi}\right\}, \tag{2.18}\]
where \(\mathbf{m}\) is the vector of the mean values \(\langle N_{k}\rangle\), and \(\Sigma\) is the \(n\times n\) covariance matrix, with elements
\[\sigma_{ij}=\langle(N_{i}-m_{i})(N_{j}-m_{j})\rangle.\]
Then, for the multivariate Gaussian one obtains the expression
\[T_{n}(N)=\frac{1}{2\sqrt{e}}\frac{1}{|\mathbf{m}|}\sqrt{\frac{\sum_{k=1}^{n} \lambda_{k}^{2}}{\sum_{k=1}^{n}\lambda_{k}}}, \tag{2.19}\]
where the \(\lambda_{k}\)'s are the positive eigenvalues of the covariance matrix of the multivariate Gaussian distribution.
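Formula (2.19) is straightforward to evaluate numerically; the sketch below (with an arbitrary mean vector and covariance matrix of our choosing) computes \(T_{n}(\mathbf{N})\) from the eigenvalues of \(\Sigma\).

```python
import numpy as np

def t_gaussian(mean, cov):
    """Closed form (2.19) of T_n for a multivariate Gaussian N(mean, cov)."""
    lam = np.linalg.eigvalsh(np.asarray(cov, dtype=float))  # eigenvalues of Sigma
    norm_m = np.linalg.norm(mean)
    return np.sqrt(lam @ lam / lam.sum()) / (2.0 * np.sqrt(np.e) * norm_m)

# Arbitrary illustrative mean vector and covariance matrix:
print(t_gaussian([2.0, 1.0], [[1.0, 0.3], [0.3, 0.5]]))
```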
Going back to formula (2.17), in the presence of additive noise represented by a Gaussian variable of mean \(\mathbf{m}\) and covariance matrix \(\Sigma\), one has the bound
\[T_{n}(\mathbf{X}+\mathbf{N})\leq\frac{|\mathbf{m}_{X}|}{|\mathbf{m}_{X}+ \mathbf{m}|}T_{n}(\mathbf{X})+\frac{1}{2\sqrt{e}|\mathbf{m}_{X}+\mathbf{m}|} \sqrt{\frac{\sum_{k=1}^{n}\lambda_{k}^{2}}{\sum_{k=1}^{n}\lambda_{k}}}. \tag{2.20}\]
It is interesting to remark that formula (2.20) continues to hold in the presence of a centered Gaussian random noise of covariance matrix \(\Sigma\), and in this case
\[T_{n}(\mathbf{X}+\mathbf{N})\leq T_{n}(\mathbf{X})+\frac{1}{2\sqrt{e}| \mathbf{m}_{X}|}\sqrt{\frac{\sum_{k=1}^{n}\lambda_{k}^{2}}{\sum_{k=1}^{n} \lambda_{k}}}. \tag{2.21}\]
## 3. About the Gini-type index for multivariate distributions
Section 2 has been devoted to the definition of a new multivariate inequality index, to its main properties, and to its evaluation for a multivariate Gaussian distribution. This analysis takes great advantage of the possibility to express the index in terms of a multidimensional Fourier transform.
It is therefore fair to ask whether the use of the Fourier transform can also bring advantages in the definition of a multivariate Gini index. As a matter of fact, the extension of the classical Gini index to measure inequality in multivariate distributions has seen numerous attempts, as certified by the references of the recent paper [6], in which the author is motivated by the objective of designing a multidimensional Gini index of inequality, to quantify standard of living, that would satisfy a number of reasonable properties. This is because, as noticed in the introduction of [6], many existing multidimensional inequality indices of Gini type proposed by economists from time to time have remained elusive in this respect.
To better understand the difficulties that appear when trying to build a multidimensional generalization of the Gini index, following the line considered in this paper, we will build a multivariate version of the index obtained by resorting to its one-dimensional Fourier representation. Indeed, the Fourier expression of the one-dimensional Gini index considered in [34] appears ready to be extended to higher dimensions, still preserving scale invariance.
As shown in Section 2, for any probability measure \(F\in P_{s}^{+}(\mathbb{R})\) of mean value \(m>0\), the Gini index has a simple expression in terms of the Fourier transform, given by (2.4).

Considering that the value zero in (2.4) is obtained when \(\widehat{f}(\xi)=e^{-im\xi}\), the Gini index can be fruitfully rewritten as
\[G(F)=\frac{1}{2\pi m}\left[\int_{\mathbb{R}}\frac{|1-e^{-im\xi}|^{2}}{\xi^{2} }\,d\xi-\int_{\mathbb{R}}\frac{|1-\widehat{f}(\xi)|^{2}}{\xi^{2}}\,d\xi\right]. \tag{3.1}\]
Taking into account that inequality measures are scale invariant, expression (3.1) can be easily extended to measure the inequality of a multivariate distribution \(F\in P_{s}^{+}(\mathbb{R}^{n})\), \(n>1\) by setting
\[G_{n}(F)=\frac{\mu_{n}}{|\mathbf{m}|}\left[\int_{\mathbb{R}^{n}}\frac{|1-e^{- i\mathbf{m}\xi}|^{2}}{|\boldsymbol{\xi}|^{n+1}}\,d\boldsymbol{\xi}-\int_{ \mathbb{R}^{n}}\frac{|1-\widehat{f}(\boldsymbol{\xi})|^{2}}{|\boldsymbol{\xi }|^{n+1}}\,d\boldsymbol{\xi}\right], \tag{3.2}\]
where the constant \(\mu_{n}\) is such that
\[\frac{1}{\mu_{n}}=\frac{1}{|\mathbf{m}|}\int_{\mathbb{R}^{n}}\frac{|1-e^{-i \mathbf{m}\boldsymbol{\xi}}|^{2}}{|\boldsymbol{\xi}|^{n+1}}\,d\boldsymbol{\xi}. \tag{3.3}\]
Evaluating the integral on the right-hand side by resorting to an \(n\)-dimensional spherical coordinate system, one realizes that the value of the constant \(\mu_{n}\) does not depend on the vector \(\mathbf{m}\), and
\[\mu_{n}=\Gamma\left(\frac{n-1}{2}+1\right)\sqrt{(2\pi)^{n-1}},\]
where \(\Gamma(\cdot)\) denotes as usual the Gamma function. Formula (3.2) is valid for all values of \(n\in\mathbb{N}\), including \(n=1\), that consistently gives \(\mu_{1}=1\).
Resorting to (3.3), we can express the multivariate Gini-type index (3.2) in the (simpler) form
\[G_{n}(F)=1-\frac{\mu_{n}}{|\mathbf{m}|}\int_{\mathbb{R}^{n}}\frac{|1-\widehat{ f}(\boldsymbol{\xi})|^{2}}{|\boldsymbol{\xi}|^{n+1}}\,d\boldsymbol{\xi}. \tag{3.4}\]
The same idea can be applied to recover an expression for a multivariate Pietra index [31, 34]. However, despite their possible theoretical interest, it does not seem that this type of expression, if compared to the multivariate \(T_{n}\) index considered in this paper, shares equally good properties.
The problems which appear when passing from the one-dimensional version (2.4) to its natural multivariate version (3.4) can be easily understood by evaluating the value of the Gini index \(G_{n}\), \(n>1\), in correspondence to the multivariate random variable \(\mathbf{X}\) taking only two values, introduced in Section 2. This variable is characterized by the Fourier transform (2.11), so that, to compute the value of Gini index, as expressed by formula (3.4), we need to evaluate the integral
\[I_{n}(F)=\frac{\mu_{n}}{|\mathbf{m}|}\int_{\mathbb{R}^{n}}\frac{|1-(1-p)e^{-i\mathbf{a}\boldsymbol{\xi}}-pe^{-i\mathbf{b}\boldsymbol{\xi}}|^{2}}{|\boldsymbol{\xi}|^{n+1}}\,d\boldsymbol{\xi},\]
where \(|\mathbf{m}|=|(1-p)\mathbf{a}^{T}+p\mathbf{b}^{T}|\). It is immediate to show that the integral \(I_{n}\) can be split into three terms, i.e.
\[\begin{split} I_{n}(F)&=\frac{\mu_{n}}{|\mathbf{m}|}\int_{\mathbb{R}^{n}}\frac{2(1-p)(1-\cos\mathbf{a}\boldsymbol{\xi})}{|\boldsymbol{\xi}|^{n+1}}\,d\boldsymbol{\xi}+\\ &\frac{\mu_{n}}{|\mathbf{m}|}\int_{\mathbb{R}^{n}}\frac{2p(1-\cos\mathbf{b}\boldsymbol{\xi})}{|\boldsymbol{\xi}|^{n+1}}\,d\boldsymbol{\xi}-\frac{\mu_{n}}{|\mathbf{m}|}\int_{\mathbb{R}^{n}}\frac{2p(1-p)(1-\cos(\mathbf{b}-\mathbf{a})\boldsymbol{\xi})}{|\boldsymbol{\xi}|^{n+1}}\,d\boldsymbol{\xi}.\end{split} \tag{3.5}\]
The three integrals on the right-hand side of (3.5) can be easily evaluated by resorting to an \(n\)-dimensional spherical coordinate system to give
\[I_{n}(F)=\frac{1}{|\mathbf{m}|}\left[(1-p)|\mathbf{a}^{T}|+p|\mathbf{b}^{T}|-p(1-p)|\mathbf{b}^{T}-\mathbf{a}^{T}|\right],\]
so that
\[\begin{split}& G_{n}(F)=1-\frac{1}{|\mathbf{m}|}\left[(1-p)|\mathbf{a}^{T}|+p|\mathbf{b}^{T}|-p(1-p)|\mathbf{b}^{T}-\mathbf{a}^{T}|\right]=\\ &\frac{|(1-p)\mathbf{a}^{T}+p\mathbf{b}^{T}|-\left[(1-p)|\mathbf{a}^{T}|+p|\mathbf{b}^{T}|\right]+p(1-p)|\mathbf{b}^{T}-\mathbf{a}^{T}|}{|(1-p)\mathbf{a}^{T}+p\mathbf{b}^{T}|}\leq\\ &\frac{p(1-p)|\mathbf{b}^{T}-\mathbf{a}^{T}|}{|(1-p)\mathbf{a}^{T}+p\mathbf{b}^{T}|}=T_{n}(F).\end{split} \tag{3.6}\]
Hence, in contrast with the multivariate \(T_{n}\) index, the multivariate Gini index \(G_{n}\) does not coincide, even in the case of a simple two-valued distribution, with the \(n\)-dimensional extension of the univariate index. In other words, while the one-dimensional indices depend only on the modulus of the difference between the two values assumed by the random variable, in the multivariate case only the \(T_{n}\) index retains this property, while the multivariate Gini index \(G_{n}\), apparently derived from a natural extension of the one-dimensional index by maintaining the scaling property, does not. In fact the additional term appearing in the numerator of formula (3.6), given by
\[|(1-p)\mathbf{a}^{T}+p\mathbf{b}^{T}|-\left[(1-p)|\mathbf{a}^{T}|+p|\mathbf{ b}^{T}|\right],\]
even in the presence of two vectors with positive components, in dimension \(n>1\) depends on the positions of the points \(\mathbf{a}\) and \(\mathbf{b}\) in the space \(\mathbb{R}^{n}\), and it is equal to zero if and only if the two vectors \(\mathbf{a}\) and \(\mathbf{b}\) are parallel. This unpleasant fact shows that even the passage to the Fourier transform does not allow a simple extension of the Gini index to multivariate distributions.
On the contrary, given the heavy difficulties in defining an easy-to-treat inequality index able to measure multivariate distributions, the \(T_{n}\) index considered in this paper stands out as a good candidate for future applications.
## 4. Conclusions
The description of social phenomena in a multi-agent system by means of kinetic equations often leads to the identification of multidimensional universal steady profiles, equilibrium distributions of paramount importance that should best summarize the characteristics of the phenomenon under investigation, dependent in general on several factors. Among the various features considered to obtain a more precise measurement of the social characteristics of the steady profile, multivariate inequality indices represent a primary tool [1, 3, 4, 5].
In this paper, we highlighted various properties of a new inequality index \(T_{n}\), considered in [22] and characterized in terms of the multidimensional Fourier transform, which appears to have a number of good properties in the general case of multivariate distributions. The interest in applications of the index \(T_{n}\) is amplified by the fact that the natural Fourier-transform generalization of the one-dimensional Gini index to multivariate distributions does not lead to a definition which satisfies the basic properties required of inequality indices.
## Acknowledgements
This work has been written within the activities of GNFM (Gruppo Nazionale per la Fisica Matematica) of INdAM (Istituto Nazionale di Alta Matematica), Italy. The research was partially supported by the Italian Ministry of Education, University and Research (MIUR) through the "Dipartimenti di Eccellenza" Programme (2018-2022) - Department of Mathematics "F. Casorati", University of Pavia.
|
2309.16052 | OceanChat: Piloting Autonomous Underwater Vehicles in Natural Language | In the trending research of fusing Large Language Models (LLMs) and robotics,
we aim to pave the way for innovative development of AI systems that can enable
Autonomous Underwater Vehicles (AUVs) to seamlessly interact with humans in an
intuitive manner. We propose OceanChat, a system that leverages a closed-loop
LLM-guided task and motion planning framework to tackle AUV missions in the
wild. LLMs translate an abstract human command into a high-level goal, while a
task planner further grounds the goal into a task sequence with logical
constraints. To assist the AUV with understanding the task sequence, we utilize
a motion planner to incorporate real-time Lagrangian data streams received by
the AUV, thus mapping the task sequence into an executable motion plan.
Considering the highly dynamic and partially known nature of the underwater
environment, an event-triggered replanning scheme is developed to enhance the
system's robustness towards uncertainty. We also build a simulation platform
HoloEco that generates photo-realistic simulation for a wide range of AUV
applications. Experimental evaluation verifies that the proposed system can
achieve improved performance in terms of both success rate and computation
time. Project website: \url{https://sites.google.com/view/oceanchat} | Ruochu Yang, Mengxue Hou, Junkai Wang, Fumin Zhang | 2023-09-27T22:16:56Z | http://arxiv.org/abs/2309.16052v1 | # OceanChat: Piloting Autonomous Underwater Vehicles in Natural Language
###### Abstract
In the trending research of fusing Large Language Models (LLMs) and robotics, we aim to pave the way for innovative development of AI systems that can enable Autonomous Underwater Vehicles (AUVs) to seamlessly interact with humans in an intuitive manner. We propose OceanChat, a system that leverages a closed-loop LLM-guided task and motion planning framework to tackle AUV missions in the wild. LLMs translate an abstract human command into a high-level goal, while a task planner further grounds the goal into a task sequence with logical constraints. To assist the AUV with understanding the task sequence, we utilize a motion planner to incorporate real-time Lagrangian data streams received by the AUV, thus mapping the task sequence into an executable motion plan. Considering the highly dynamic and partially known nature of the underwater environment, an event-triggered replanning scheme is developed to enhance the system's robustness towards uncertainty. We also build a simulation platform HoloEco that generates photo-realistic simulation for a wide range of AUV applications. Experimental evaluation verifies that the proposed system can achieve improved performance in terms of both success rate and computation time. Project website: [https://sites.google.com/view/oceanchat](https://sites.google.com/view/oceanchat)
## I Introduction
AUVs have been widely used in ocean engineering for a range of applications, including algal bloom monitoring, hurricane prediction, underwater acoustics, and ocean observation systems [1, 2, 3]. However, piloting AUVs during real-world missions is usually laborious, demanding mechanical manuals, mission configuration files, and terminal commands. From the perspective of an operator, it would be a relief to simplify AUV piloting by abstracting away technical complexities with natural language. Moreover, such abstraction opens the intriguing possibility of making AUVs usable by everyone. Rather than requiring a specialized engineer to control the AUV, our blueprint is to have a non-technical user in the loop, i.e., deploying underwater missions through natural language. The emergence of LLMs offers a promising avenue toward this vision, as they can learn to project real-world concepts into the language space. While LLMs are believed to capture open-world knowledge in textual form, how to exploit such knowledge so that robots can physically act in the real world remains a critical challenge [4, 5, 6]. The question then arises: how can we ground abstract language instructions into AUVs' physical actions? For example, given an AUV pilot's command "go through the canyon", how can LLMs trigger basic AUV controllers and sensors, such as moving forward and taking photos, to accomplish the overarching goal?
We propose a system OceanChat, which is able to pilot AUVs in natural language as shown in Fig. 1. The proposed system leverages a closed-loop LLM-guided task and motion planning framework to perform challenging AUV missions and replan in case of execution failure. The main contributions of this paper are summarized as follows.
* We develop OceanChat, a simulator and planner for using natural language to control a calibrated AUV agent EcoMapper.
* We establish a closed-loop LLM-guided task and motion planning framework to endow OceanChat with executable robotic control.
This paper is organized as follows. Section II outlines related work. Section III presents the HoloEco simulation platform along with the refined AUV model EcoMapper. Section IV illustrates the OceanChat system featuring the closed-loop LLM-task-motion planning framework. Section V evaluates OceanChat in terms of achieving underwater missions given human commands. Section VI provides conclusions and future work.
## II Related Works
LLMs have exhibited considerable capabilities in deciphering natural language within the context of real-world scenarios, including language understanding, sentiment analysis, text completion, etc. Recently, harnessing LLMs for robotic applications has emerged as a rapidly evolving field of research. One notable benefit of connecting LLMs with robotics is allowing robotic agents to interact with the world and humans in a natural way. A large body of work aims to integrate LLMs into a planning and reasoning pipeline for robotic execution. One common approach relies on prompting strategies for LLMs to derive a sequential plan aimed at achieving a user prompt [7, 8]. [9] maps LLM-guided steps to pre-defined robot skills by listing a collection of high-level functions in the prompt. [10] uses LLMs to score a candidate skill with the highest probability of completing the overall instruction. [11, 12] utilize LLMs to create code for undefined functions, thus generalizing to unseen instructions in different robotic scenarios. Some works also encourage LLMs to demonstrate their chain of thought [13] or supplement plan explanations in a structured JSON format [14], which enables LLMs to perform step-by-step reasoning and allows users to rectify infeasible LLM responses. In the robotics realm, the conventional open-loop
system pertains to executing tasks without actively sensing the environment or responding to possible failure. While this approach offers advantages of simplicity and speed, it is accompanied by several limitations, including the absence of error-correction mechanisms and vulnerability to disturbances [15]. Closed-loop planning serves as a suitable solution to these challenges, spanning from directly re-prompting LLMs with corrective instructions [16, 17], to involving real-time environmental feedback [18], to integrating multiple modalities such as vision and touch [19, 20, 21]. However, a significant barrier still lies in building an intrinsic connection between the LLM model and the AUV. To bridge this gap between open-ended human commands and executable robotic actions, we resort to a well-established planning track in the robotics discipline.
Task and Motion Planning (TAMP) is an extensively investigated problem in the robotics community [22, 23, 24]. Classical works address TAMP in deterministic and fully observable spaces, branching out into topics like pick-place planning [25, 26], manipulation planning [27, 28], navigation [29, 30, 31], and rearrangement planning [32, 33, 34]. [35] efficiently incorporates geometric or kinematic constraints with heuristic plan search. [36, 37] present a regression-based framework by generating goal regression and pre-images in a reversible logical chain. The Planning Domain Definition Language (PDDL) [38, 39] standardizes formulations of AI planning, thus providing a universal interface for TAMP planners regardless of domain. Furthermore, it is a fundamental extension to consider the inevitable uncertainty in real-world planning, which derives from stochasticity or partial observability of object states [40, 41]. Generally, belief-space planning needs to address two types of uncertainty: future-state uncertainty [42, 43] and current-state uncertainty [44, 45]. One distinguished approach for solving TAMP in belief space is to temporally decompose long-horizon problems into a sequence of short horizons in an interleaved manner [46, 47]. On the basis of this approach, [48] develops a bi-level algorithm leveraging Depth First Search to achieve lower computation cost and guaranteed optimality.
It should be noted that testing empirical algorithms on real-world AUVs is impractical, as this could cause AUVs to drift to unexpected areas or even abort entirely. Due to the huge expense of AUV field trials, the imperative for a high-fidelity underwater robotics simulator becomes evident, serving as an algorithm testing tool. Numerous underwater simulators [49, 50] have actively come into view, some of which are equipped with various agents, sensors, and communication models [51, 52]. MarineSIM [53] focuses on multi-agent acoustic communications. UUV Simulator [54] possesses accurate dynamic models and easy set-up. UWSim [55] is an open-source simulator with multi-agent and sonar support. HoloOcean [56] is a mature one with full support of underwater robotics, light package dependencies, and ease of adding new assets.
## III HoloEco Simulation Platform
We establish an all-encompassing ocean simulation platform HoloEco upon HoloOcean [56], which offers a rich environment of diverse underwater activities. Included within it are a myriad of objects such as coral reefs, gliders, warships, underwater mountains and canyons, etc. The primary motivation behind developing HoloEco is to prevent AUVs from being exposed to empirical setbacks in the real-world ocean. Thus, it is indispensable for HoloEco to create a simulated environment where we can continually enhance quality of AUV solutions. We make available a suite of underwater applications in which a real-world AUV might engage, such as surveying coral reefs or navigating through an unknown canyon. These provisions contribute to the platform's realistic simulation of underwater missions.
Fig. 1: Our proposed system **OceanChat** can decompose **natural language commands** into a task sequence for **controlling EcoMapper** in the HoloEco simulation platform. We establish a three-level framework of LLM-guided task and motion planning. The event-triggered replanning module is designed to withstand unpredictable errors during execution.
Additionally, our simulator involves a finely tuned AUV model EcoMapper [57] that closely aligns with its real-world counterpart. EcoMapper is meticulously crafted with efficient controllers and multimodal sensors, supporting primitives such as moving forward and capturing depth maps. This level of detailed functionality supports a solid claim that AUVs can be similarly controlled in real-world missions.
## IV Method
To accomplish AUV missions specified by human commands, we develop for OceanChat a three-level framework of closed-loop LLM-guided task and motion planning. Starting from a human command, the framework operates in three stages: LLMs interpreting the command, task planners sequencing tasks, and motion planners controlling EcoMapper in the ocean environment.
### _Hierarchy of Closed-loop LLM-task-motion Planning_
Numerous gaps must be bridged all the way down from natural language commands to real robotic execution. For example, LLMs may generate an infeasible goal such as directly traversing an unknown canyon in a straight line. Task planners may propose an incorrect sequence of tasks such as moving forward a long distance before detecting the unexplored surroundings. Motion planners might compose roundabout actions that waste significant time circling inside the canyon. We demonstrate that the proposed three-level planning framework can bridge these gaps in a hierarchical manner by enhancing representations of AUV and environment dynamics level by level. The high-level LLM planner interprets the human command to offer the middle-level task planner an overall goal. The task planner then generates a feasible task sequence for the goal, which the low-level motion planner follows to enable robotic control over EcoMapper. The whole framework is shown in Fig. 2.
Specifically, the high-level LLM planner translates the human command into a _goal_ formatted as a starting state, an ending state, and a set of tasks for EcoMapper to achieve. By incorporating EcoMapper's capabilities, underwater objects, and abstraction of environmental maps, we tailor prompts to the LLM planner so that it can gain a symbolic understanding of the underwater scene, i.e., an abstracted representation of AUV and environment dynamics. The LLM planner can also seek clarification for ambiguous queries and outline intermediate steps by going through a chain of thought. It can also achieve generalized interpretation of unseen human commands through few-shot prompting.
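For concreteness, such a goal might be represented as follows. This is a hypothetical sketch: the field names are our own illustration rather than OceanChat's exact schema, while the listed actions are those selected by the LLM planner in Section V.

```python
# Hypothetical LLM-generated goal for the command "go through the canyon".
# Field names and values are illustrative, not the exact OceanChat format.
goal = {
    "start_state": {"position": "canyon_entrance"},
    "end_state": {"position": "canyon_exit"},
    "tasks": ["perception", "turn_left", "turn_right",
              "move_forward", "check"],
}
```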
Although the LLM planner is filled with semantic information, it still lacks information about the physical environment. It remains unresolved whether the goal can actually be achieved and how it should be translated into a feasible task sequence (plan) for EcoMapper in the uncertain underwater environment. Given the spatial and temporal scale of AUV applications, a task sequence often extends across long planning horizons. Simultaneously, the rich action space of the AUV introduces computational complexity when the task planner searches for an effective plan. Therefore, to ground the LLM-generated goal into an executable plan, we propose a middle-level task planner that refines the goal into an abstracted _task sequence_. At this middle level, we represent tasks by logical representations, i.e., preconditions and effects that can be evaluated as true or false. In this way, the middle-level task planner supplements the LLM-guided goal with a task sequence based on predicate transitions learned from AUV dynamics and sensor observation.
Finally, the low-level motion planner solves for a _motion plan_ to accomplish the task sequence given by the middle-level task planner. On the representation side, the motion planner supports the task planner by generating physical representations of AUV and environment dynamics from prior knowledge or real-time collected Lagrangian data streams. On the planning side, the motion planner optimizes continuous robotic control by taking physical dynamics and environmental observation into consideration, thus adding real-world constraints to the middle-level task sequence.
Despite meticulous planning, EcoMapper may still encounter unexpected outcomes owing to observation and action uncertainties in the underwater environment. Therefore, it is crucial to incorporate an event-triggered replanning module that addresses execution errors with real-time feedback. Triggered by a logical state transition, the replanning module assesses EcoMapper's status. If the status deviates from the expected outcome, the task planner instructs the motion planner to execute corrective actions.
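The loop below sketches this closed-loop behaviour; the interface names (`observe`, `execute`, `status_matches`, `replan`) are placeholders we introduce for illustration, not the system's actual API.

```python
def run_mission(task_sequence, task_planner, motion_planner, auv):
    """Execute a task sequence with event-triggered replanning (illustrative)."""
    while task_sequence:
        task = task_sequence.pop(0)
        plan = motion_planner.solve(task, auv.observe())  # low-level motion plan
        auv.execute(plan)
        # Each logical state transition triggers an assessment of the AUV status:
        if not auv.status_matches(task.expected_effects):
            # Deviation from the expected outcome: request corrective tasks.
            task_sequence = task_planner.replan(auv.observe(), task_sequence)
```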
### _Design of Middle-level Task Planner_
Considering that AUVs cover distances of hundreds of kilometers with limited sensors amid unpredictable environmental factors, we must account for uncertainties in the real-world ocean to actually fulfill the LLM-interpreted goal. On the side of validity, while LLMs possess some degree of planning capability, relying on them alone as task planners raises issues. Specifically, LLMs struggle to determine proper transitional timing for a series of specific tasks due to limited knowledge about real-world dynamics. We refer to this pipeline as the LLM planning method. On the side of efficiency, if we let the LLM planner directly guide the motion planner at the level of physical representations, the LLM planner will likely generate redundant and invalid attempts of combining primitive actions to navigate through the canyon. Evaluating such a tremendous number of motion trajectories is time-consuming in terms of real-time operation. We refer to this pipeline as the LLM-motion planning method. In order to achieve a good balance between validity and efficiency, we turn to establishing a middle-level task planner as a pivot to connect the LLM planner with the motion planner. The task planner can provide additional logical representations of the world to produce a feasible plan over the goal's extended horizons and pass the plan to low-level robotic controllers or sensors. This pipeline is our proposed LLM-task-motion planning method.
We formulate a generic TAMP problem \(\Pi=\langle\Omega,\mathcal{A},\mathbf{x}_{0},S_{g}\rangle\) by a state space \(\Omega\), an action space \(\mathcal{A}\), an
initial state \(\textbf{x}_{0}\in\Omega\) and a set of goal states \(S_{g}\subseteq\Omega\). Let \(\textbf{x}_{k}\in\Omega\) and \(\textbf{u}_{k}\in\mathcal{A}\) denote the system state and action at timestamp \(k\), respectively. The goal of TAMP is to drive the state \(\textbf{x}_{k}\) to be inside the goal set \(S_{g}\) under a plan of sequential actions \(\pi=[\textbf{u}_{0},...,\textbf{u}_{N}]\). Since there may be an infinite number of feasible plans, we minimize the cost function \(J\) of moving along the trajectory as follows:
\[\pi^{*}=\arg\min_{\pi}J=\sum_{k=0}^{N}L(\textbf{x}_{k},\textbf{u}_{k}), \tag{1}\]
where \(L\) is a one-step cost function, and the optimal plan \(\pi^{*}\) should yield the minimum cost \(J^{*}\). To solve the TAMP problem (1), the task planner generates a task sequence to decompose a long-horizon planning problem into short horizons, and the motion planner then optimizes the specific actions of each task.
#### IV-B1 HTN Task Planner
The task planner should decide a viable sequence out of pre-defined tasks. The key aspect of designing such a task planner is to construct logical representations that include preconditions necessary to activate the tasks and effects of executing the tasks. In this work, we employ the HTN task planner to solve for the logical constraints generated by preconditions and effects of these tasks in order to achieve a target goal. In a general HTN planner, tasks are defined as higher level actions composed of primitive actions that alter Boolean-valued predicates representing the state space. Through this abstraction, the HTN planner is able to construct a task sequence that solves a complex goal without deep search.
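As a minimal illustration of this abstraction, the sketch below decomposes a compound navigation task into primitive actions whose preconditions and effects are Boolean predicates; the predicate names and the single decomposition method are our own simplification of the canyon example, not the planner's actual domain definition.

```python
# Minimal HTN-style decomposition (illustrative; not the actual domain file).
# The state is a set of true predicates; each action lists preconditions,
# added predicates, and deleted predicates.
PRIMITIVE = {
    "perception":   {"pre": set(),           "add": {"area_sensed"}, "del": set()},
    "move_forward": {"pre": {"area_sensed"}, "add": {"moved"},       "del": {"area_sensed"}},
    "check":        {"pre": {"moved"},       "add": {"verified"},    "del": set()},
}

def decompose_navigate(state):
    """Refine the compound task 'navigate' into a primitive action sequence."""
    plan = []
    for name in ["perception", "move_forward", "check"]:
        act = PRIMITIVE[name]
        if not act["pre"] <= state:        # precondition not satisfied
            return None                    # this decomposition method fails
        state = (state - act["del"]) | act["add"]
        plan.append(name)
    return plan
```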
#### IV-B2 A* Motion Planner
Within a given task sequence, the motion planner aims to find the optimal way of executing the tasks with minimum cost. Since EcoMapper mostly maintains a constant depth during operation, we introduce its dynamics as a unicycle model:
\[\begin{split}\dot{x}&=v\cos(\theta)\\ \dot{y}&=v\sin(\theta)\\ \dot{\theta}&=\omega,\end{split} \tag{2}\]
where \(x\) and \(y\) are EcoMapper's Cartesian coordinates, \(\theta\) is the orientation, \(v\) is the forward velocity, and \(\omega\) is the turning rate. Thus, the system state is defined as \(\textbf{x}=(x,y,\theta)^{T}\), and the control input is defined as \(\textbf{u}=(v,\omega)^{T}\). We represent the dynamics model (2) in discrete form as
\[\textbf{x}_{k+1}=f(\textbf{x}_{k},\textbf{u}_{k}). \tag{3}\]
Let \(R=\{r_{1},r_{2},...,r_{M}\}\) denote the regions that EcoMapper travels through. Given the EcoMapper position \(\textbf{x}_{0}\) in the current region \(r_{i},\forall i=1,2,...,M\), the dynamics model, and the region's environment map \(V\), we can formulate the motion planning problem as an optimization problem as follows:
\[\begin{split}\min_{[\textbf{u}_{0},...,\textbf{u}_{N}]}J=\sum_{k= 0}^{N}L(\textbf{x}_{k},\textbf{u}_{k})\\ \text{s.t.}&\textbf{x}_{k+1}=f(\textbf{x}_{k}, \textbf{u}_{k}),k=0,1,...,N-1\\ V,&\textbf{x}_{0}\in r_{i},i=1,2,...,M.\end{split} \tag{4}\]
In this work, we leverage the A* algorithm to solve problem (4) and compute the best action.
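A condensed sketch of this pipeline is given below: the unicycle model (2) is discretized with a forward-Euler step (one concrete choice of the discrete map \(f\) in (3)), and A* expands a small set of motion primitives over the resulting states. The grid resolution, primitive set, and goal tolerance are illustrative choices of ours rather than the tuned values used in the experiments.

```python
import heapq
import math

def step(x, y, th, v, w, dt=1.0):
    """Forward-Euler discretization of the unicycle model (2)-(3)."""
    return x + v * math.cos(th) * dt, y + v * math.sin(th) * dt, th + w * dt

def a_star(start, goal, collision_free,
           primitives=((1.0, 0.0), (1.0, 0.3), (1.0, -0.3))):
    """A* over discretized unicycle states; `collision_free` encodes the map V."""
    def h(s):  # admissible heuristic: straight-line distance to the goal
        return math.hypot(goal[0] - s[0], goal[1] - s[1])
    frontier = [(h(start), 0.0, start, [])]
    visited = set()
    while frontier:
        _, g, s, plan = heapq.heappop(frontier)
        if h(s) < 1.0:                     # within goal tolerance
            return plan
        key = (round(s[0]), round(s[1]), round(s[2], 1))
        if key in visited:
            continue
        visited.add(key)
        for v, w in primitives:            # candidate controls u = (v, w)
            nxt = step(*s, v, w)
            if collision_free(nxt):
                cost = g + v               # one-step cost L ~ distance travelled
                heapq.heappush(frontier, (cost + h(nxt), cost, nxt, plan + [(v, w)]))
    return None
```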
## V Experimental Evaluation
In order to validate our system's ability to execute human commands on AUVs, we conduct experimental evaluation in the simulated underwater world. We deliberately design a comprehensive mission of EcoMapper autonomously navigating through an unknown canyon. We know the canyon's starting and ending positions, but we lack information about the topography inside the canyon. Due to the unavailability of HD maps and GPS localization in the ocean, we develop multimodal perception consisting of depth maps and laser range sensors as shown in Fig. 3. Since the UDepth network [58] offers fast depth prediction on power-limited AUVs, it is employed to generate depth maps for coarse environmental perception. For precise distance measurement, we install laser range sensors on EcoMapper in eight directions: forward, down, 30-degree left, 45-degree
Fig. 2: Framework of closed-loop LLM-guided task and motion planning. The high-level LLM planner composes an overall goal by precisely comprehending the human command in the context of underwater scenes. The middle-level task planner generates a logical sequence of tasks to achieve the overall goal. The low-level motion planner maps the task sequence to EcoMapper robotic control by utilizing perception and optimization models. In case of execution error, the replanning module makes real-time adjustments in a closed-loop fashion.
left, 60-degree left, 30-degree right, 45-degree right, and 60-degree right. The results of successfully navigating through the canyon are shown in Fig. 6. A full video is available on the project website.
Given prompts including a set of available actions, objects, and the underwater environment, the LLM planner translates the human command into a goal to be achieved by the task and motion planners. As shown in Fig. 4, the LLM planner leverages its semantic understanding to formulate the goal and select the actions for accomplishing the goal as \(perception()\), \(turn\_left(angle)\), \(turn\_right(angle)\), \(move\_forward(distance)\), and \(check()\). In this work, we use GPT-4 (OpenAI) as the LLM planner.
The task planner should decide a viable task sequence out of the actions provided by the LLM planner. Given predicates representing the state space, the task planner will enhance each action with its corresponding preconditions and effects as shown in Fig. 5. Equipped with this logical representation, the task planner can ground the LLM-generated goal into a reasonable task sequence from the starting state to the ending state. Meanwhile, the task planner can guide the motion planner with estimated future cost represented in the depth map and laser ranges. The task planner will plan only necessary perception actions to detect the environment, so that EcoMapper moves and senses step by step to explore more regions of the unknown canyon. The observed region will only be used in the current motion planning horizon, and any part of the unobserved region is marked as an obstacle. The \(check()\) action plays the role of event-triggered replanning with real-time environmental feedback. Specifically, it uses laser range sensors to check if EcoMapper is collision free. If not, the task planner regenerates a new plan for the motion planner to perform corrective actions.
The motion planner will optimize the specific motion plan for executing the task sequence, i.e., which turning angle or moving distance is best? To leverage the A* planner, we select the heuristic cost as a combination of the moving distance and the distance from the rollout position to the canyon's ending position. In this way, the motion planner can perform optimal actions that make EcoMapper travel a sufficient distance while not deviating too much from the canyon's ending position.
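In code, this heuristic could take the following form; the relative weight `w` between the two terms is an assumption on our part.

```python
import math

def heuristic_cost(distance_travelled, rollout_pos, canyon_exit, w=1.0):
    """A* cost: moving distance plus (weighted) straight-line distance
    from the rollout position to the canyon's ending position."""
    remaining = math.hypot(canyon_exit[0] - rollout_pos[0],
                           canyon_exit[1] - rollout_pos[1])
    return distance_travelled + w * remaining
```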
The LLM planner is unable to generate an executable plan because of limited real-world knowledge, while the motion planner spends excessive time searching for a trajectory over long and uncertain horizons. We claim that our method can compose a valid plan and save computation time by incorporating the task planner to refine the LLM-guided goal and the motion plan according to the task logical chain and the physical dynamics, respectively. To support our claim, we compare three methods: LLM planning, LLM-motion planning, and the proposed LLM-task-motion planning introduced in Section IV-B. The quantitative results, averaged over 10 simulation runs, are shown in Fig. 7. Since the LLM planning method directly composes a task sequence, it takes the least computation time to achieve the goal. However, the solution quality is usually compromised, because LLMs can only consider high-level semantic representations of the tasks without any constraints. The LLM-motion planning method relies on the LLM planner to guide the motion planner for execution in the physical world. Although it can try out a solution, it takes much more computation
Fig. 4: Goal formulated by the LLM planner based on its semantic understanding of how to finish the human command “navigating through the canyon”.
Fig. 5: Actions with preconditions and effects to achieve the goal of canyon navigation.
Fig. 3: Multimodal perception consisting of depth maps and laser range sensors. **Top right**: an agentview photo of EcoMapper. **Top left**: an inferred depth map from the photo. **Middle**: green lines emitted by the laser sensors.
time to exhaustively search the solution space. The proposed method accomplishes the goal with a success rate comparable to the LLM-motion planning method, but with only 30% of the computation time. Compared to the LLM planning method, the proposed method takes more computation time, but it can guarantee an executable task plan.
## VI Conclusion and Future Works
In this paper, we present OceanChat, an AI system that utilizes LLMs' knowledge and robotic planning schemes to complete AUV missions given a human command. We incorporate three levels of LLM planning, task planning, and motion planning in a hierarchical framework. Starting with the human command, a sequential process unfolds in which LLMs supply a contextually proper goal, the task planner generates a task sequence, and the motion planner seeks the optimal motion plan within the given sequence. Meanwhile, an event-triggered replanning module is designed to manage unexpected execution failures. We assess the proposed system across a comprehensive AUV mission of autonomously navigating through an unknown canyon in the simulated ocean environment.
There are several promising avenues for future work. First, domain-independent PDDLStream algorithms [59, 60] are a powerful alternative for solving TAMP by modeling a determinized version of stochastic shortest path problems (SSPPs) [61, 62, 63]. Second, by combining language instructions, environmental observations, and robot states as inputs, reinforcement learning [64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74] can directly guide the optimization of robotic actions in an end-to-end way. Third, our system assumes the flow effect is negligible, while zero-shot generalization [75, 76, 77, 78] can benefit from online interactions to learn flow dynamics, effectively adapting to environmental nuances.
Fig. 6: _Canyon Navigation_: OceanChat interprets the human command and autonomously navigates EcoMapper through the unknown canyon. The bottom figures are the viewport capture of the HoloEco scenes. The top left figures are the agentview capture of EcoMapper. The top right figures are the depth maps generated from the agentview capture. The green lines are the lasers emitted by the range sensors.
Fig. 7: Quantitative results of comparing the proposed method with two other methods. The comparison is performed in terms of both the success rate and the computation time of finishing the overall goal. |
2302.14537 | Signatures of quark deconfinement through the r-modes of twin stars | The observation and distinction of two compact stars with an identical mass
but a different radius would be a clear sign of hadron-quark phase transition
in nuclear matter. Motivated by studies searching for significant deviations in
the observables of twin stars, we investigate the differences that manifest in
their r-mode instability windows and spin-down evolution. Firstly, we obtain a
set of hybrid equations of state (which predict the existence of a third stable
branch of compact objects) by employing the well-known Maxwell construction
within the phenomenological framework of constant speed of sound
parametrization. Then, we systematically study the influence of certain
parameters, such as the energy density jump (in the resulting hybrid equation
of state) and the crust elasticity, on the deviations appearing in the r-mode
instability windows and spin-down evolution of twin stars. We conclude that two
stars with an identical mass and fairly similar spin frequency and temperature,
may behave differently with respect to r-modes. Thus, the future possible
detection of gravitational waves (due to unstable r-modes) from a star laying
in the stable region of the frequency-temperature plane would be a strong
indication for the existence of twin stars. Furthermore, we consider current
data for the spin frequencies and temperatures of observed pulsars and compare
them to the predictions made from equations of state employed in this study. We
find that, depending on the transition density and the rigidness of the crust,
hybrid equations of state may be a viable solution for the explanation of
existing data. | P. Laskos-Patkos, Ch. C. Moustakidis | 2023-02-28T12:47:42Z | http://arxiv.org/abs/2302.14537v2 | # Signatures of quark deconfinement through the r-modes of twin stars
###### Abstract
The observation and distinction of two compact stars with identical mass but different radius would be a clear sign of hadron-quark phase transition in nuclear matter. Motivated by studies searching for significant deviations in the observables of twin stars, we investigate the differences that manifest in their r-mode instability windows and spin-down evolution. Firstly, we obtain a set of hybrid equations of state (which predict the existence of a third stable branch of compact objects) by employing the well-known Maxwell construction, within the phenomenological framework of constant speed of sound parametrization. Then, we systematically study the influence of certain parameters, such as the energy density jump (in the resulting hybrid equation of state) and the crust elasticity, on the deviations between the r-mode instability windows and spin-down evolution of twin stars. We conclude that two stars with identical mass and fairly similar spin frequency and temperature, may behave differently with respect to r-modes. Thus, the future possible detection of gravitational waves (due to unstable r-modes) from a star laying in the stable region of the frequency-temperature plane would be a strong indication for the existence of twin stars. Furthermore, we consider current data for the spin frequencies and temperatures of observed pulsars and compare them to the predictions made from equations of state employed in this study. We find that, depending on the transition density and the rigidness of the crust, hybrid equations of state may be a viable solution for the explanation of existing data.
Neutron stars, Phase transitions, Strange quark matter, r-modes
## I Introduction
Compact stars serve as excellent astrophysical laboratories for the study of dense nuclear matter [1; 2; 3; 4; 5]. The systematic study of pulsars and the detection of gravitational waves (GW) have already yielded significant constraints on the nuclear equation of state (EOS) [6; 7; 8; 9; 10; 11; 12; 13; 14]. A question that still remains unanswered concerns the relevant degrees of freedom up to densities appearing in neutron star cores [15; 16]. Compact stars could be purely hadronic, but the very dense environment indicates the possible existence of exotic forms of matter such as deconfined quarks. The latter opens up new scenarios that predict strange quark stars, composed purely of strange quark matter, or hybrid stars where a quark core is surrounded by a mantle of hadronic matter. In practice, the distinction between neutron, strange and hybrid stars is not an easy task as their radius around the observed mass region of 1.4 \(M_{\odot}\) is rather similar. Alternative approaches that may assist in identifying the phases of nuclear matter within compact stars include the study of their thermal evolution [17; 18], binary neutron star mergers [19; 20; 21] and phenomena related to vibration or rotation [22; 23; 24; 25; 26; 27; 28; 29].
The construction of hybrid EOSs often requires describing the hadronic and quark phases separately. Depending on the dynamics of the phase transition and mainly on the speed of sound structure in quark matter, a third family of compact objects may appear in the mass-radius plane. The aforementioned family of compact stars gives rise to the existence of twin stars, i.e., stars with identical mass but fairly different radius [30; 31; 32; 33; 34]. Recently, the scenario of twin stars has drawn a lot of attention, mainly because of the discovery of GW and thus the possibility of detecting them [35; 36; 37; 38; 39]. Note that identifying twins would be smoking gun evidence of a hadron-quark phase transition in compact stars. In a recent study, Lyra et al. [18] investigated the impact of compactness on the cooling of twin pairs, finding that only stars with significantly different radius exhibit considerable deviations in their thermal evolution. Furthermore, Landry and Chakravarti [40] argued for the possibility of distinguishing twins with next-generation GW detectors through their tidal deformabilities. In the present work we study for the first time the deviation of the r-mode instability windows of twin stars and hence the differences that appear in their rotational limits.
It is well-established that relativistic stars may suffer a number of different instabilities. Among them, the r-mode instability (rotational mode) has been proposed as an explanation for the fact that neutron stars do not spin up to the theoretically allowed limit known as the Kepler frequency [41; 42; 43; 44; 45; 46; 47; 48]. The r-modes are oscillations appearing in rotating stars, and their restoring force is the Coriolis force. In principle, the r-mode instability can only take place if the gravitational-radiation driving timescale is shorter than the timescales of the various dissipation mechanisms that may occur in the neutron star interior. By equating the driving and damping timescales one obtains the so-called r-mode instability window, which defines a critical frequency (maximum spin frequency for stable r-modes) as a function of temperature.
In the past decades there has been an extensive study of the r-modes (and numerous other types of oscillation) due to the possible detection of their GW [49; 50]. There are several studies predicting that accreting stars in low
mass X-ray binaries (LMXBs) may be subject to long-lasting r-modes. In particular, compact stars containing exotic matter, such as deconfined quarks or hyperons, may be persistent sources of GW emission [51; 52]. In addition, some authors [53] argue for the existence of a large unobserved population of quiescent (post-accretion) LMXBs characterised by long-lived (\(\sim\)10\({}^{9}\) yr) r-mode emission. Specifically, Chugunov et al. [53] suggested the existence of a new class of neutron stars, the so-called HOFNARs (HOt and Fast Non-Accreting Rotators). Such stars retain a high temperature due to heating associated with unstable r-modes. Following the discovery of gravitational radiation from binary neutron star mergers, the search for GW signals associated with r-modes has started [54; 55; 56]. It is notable that the absence of a detection so far has provided the opportunity to set upper limits on the GW emission and the r-mode saturation amplitude [55; 56].
It has been shown that the r-mode instability window of purely hadronic neutron stars is too wide to be compatible with current LMXB data (assuming that all observed stars are stable with respect to r-modes, i.e., there are no HOFNARs). Specifically, a very strong dissipation mechanism, such as a perfectly rigid crust, is essential for the stabilization of r-modes. Numerous studies have attempted to treat this problem by considering the presence of exotic degrees of freedom in compact star cores [57; 58; 59; 60; 61; 62]. In particular, it has been shown that the bulk viscosity of hyperon or deconfined quark matter may be sufficient to stabilize r-modes for the frequencies and temperatures of the observed pulsars. However, it is important to comment that hyperons are expected to appear at densities of 2-3 \(n_{0}\) (where \(n_{0}=0.16\) fm\({}^{-3}\) is the nuclear saturation density). Thus, the fraction of the core where hyperons are present, and hence the effective damping due to their viscosity, is limited in low mass neutron stars. Subsequently, the fastest rotating pulsars can only be explained if they are massive enough [58; 59]. Similarly, it has been shown that the width of the r-mode instability window of hybrid stars is determined mainly by the amount of quark matter in the core [63; 26].
In Ref. [64], the authors employed a set of analytical solutions of the Tolman-Oppenheimer-Volkoff (TOV) equations in order to study the influence of neutron star bulk properties on the r-modes. They found that the instability window is quite sensitive to the radius of a star [64]. The latter leads to the conclusion that if twin stars do exist, their instability windows would deviate due to their radius difference. In addition, taking into account that the relevant degrees of freedom are different in the center of the two twins, the damping mechanisms that suppress the growth of the r-mode instability (bulk and shear viscosities) are going to be different as well [22]. This opens up an intriguing new scenario where two stars with identical mass, and similar rotational frequency and temperature profiles, may behave differently with respect to r-modes. In particular, if we assume that a star having angular velocity \(\Omega_{i}\) and temperature \(T_{i}\) is stable with respect to the r-modes, then any other (same mass) star with similar temperature and \(\Omega\leq\Omega_{i}\) should be stable as well. However, this is not necessarily the case if the two stars are twins since their instability windows are expected to be different. Thus, the future detection of GW due to unstable r-modes, from multiple sources, may allow us to identify a third family of compact objects.
The motivation of the present study is twofold. Firstly, we wish to systematically study the parameters (energy density gap, crust elasticity, transition density) that affect the deviation between the r-mode instability windows of twin stars. In addition, we wish to clarify how these parameters affect the differences that appear in the spin-down evolution (due to unstable r-modes) of twins. Secondly, we wish to examine whether EOSs that predict a third family of compact objects are a viable solution for the explanation of current LMXB data.
This paper is organized as follows. Section II is devoted to the presentation of the hadronic models employed in this work and the construction of hybrid EOSs that predict twin star configurations. In Section III we discuss in detail the r-mode instability formalism, while in Section IV we present a simplified model for the spin-down of compact stars (due to unstable r-modes). In Section V we present our results and discuss their implications. Section VI contains a summary of our findings.
## II Hadron-Quark Phase Transition
A hybrid EOS often results from the combination of a low density hadronic model and a high density quark EOS. The key ingredient for the construction is the matching process between the two phases. In particular, there are two widely employed methods in order to obtain hybrid EOSs: a) the Maxwell construction and b) the Gibbs construction. The main difference of the aforementioned approaches is the number of charges that are globally conserved in the system [65]. In the former case, the phase transition is abrupt (i.e. the two phases are separate), while in the latter scenario a mixed phase is present.
In the present work we adopt the Maxwell construction for the description of the phase transition. This particular approach is the favored one in the scenario where the surface tension \(\sigma_{s}\) at the hadron-quark interface is larger than the critical value of \(\sim 40\) MeV fm\({}^{-2}\) and less than the highest allowed one of \(\sim 100\) MeV fm\({}^{-2}\), according to lattice QCD calculations [66]. In this case the phase transition is sharp, resulting in a discontinuity in the energy density. Specifically, the energy density reads [35; 36; 37; 38]
\[\mathcal{E}(P)=\begin{cases}\mathcal{E}_{\rm{HADRON}}(P),&P\leq P_{\rm{tr}}\\ \mathcal{E}(P_{\rm{tr}})+\Delta\mathcal{E}+(c_{s}/c)^{-2}(P-P_{\rm{tr}}),&P>P_ {\rm{tr}}.\end{cases} \tag{1}\]
where \(P\) stands for the pressure, \(c_{s}\) is the speed of sound and \(c\) is the speed of light. Furthermore, \(P_{\rm{tr}}\) and \(\Delta\mathcal{E}\) de
note the transition pressure and the energy density jump, respectively. It is important to comment that the first line of Eq. (1) refers to the hadronic phase while the second one to the quark model. We treat the quark phase using a phenomenological approach known as the constant speed of sound (CSS) parametrization [35; 38]. More precisely, the second line of Eq. (1) can be thought of as a first-order Taylor expansion of the energy density around the transition pressure. Even though such a treatment lacks a rigorous theoretical basis, it is widely employed as it mimics the dynamics of the phase transition and also allows an easy construction of EOSs predicting twin star configurations. In the present work the speed of sound is set equal to the speed of light in order to obtain EOSs consistent with the \(2\)\(M_{\odot}\) constraint [38; 39].
A first-order phase transition between hadronic and quark matter is not sufficient by itself for the appearance of a third family of compact objects. In particular, the appearance of twin stars requires the existence of an unstable region in the \(M\)-\(R\) plane where the mass decreases with increasing central pressure. The condition that needs to be satisfied in order to obtain a third family was first studied by Seidov [67] and is formulated as follows
\[3P_{\rm tr}+3\mathcal{E}_{1}-2\mathcal{E}_{2}<0, \tag{2}\]
where \(\mathcal{E}_{1}\equiv\mathcal{E}(P_{\rm tr})\) and \(\mathcal{E}_{2}\equiv\mathcal{E}(P_{\rm tr})+\Delta\mathcal{E}\). Thus, by reorganising Eq. (2) one obtains the minimum energy density jump for the existence of twin star configurations, which is written as
\[\Delta\mathcal{E}_{\rm cr}=\frac{1}{2}\mathcal{E}_{\rm tr}+\frac{3}{2}P_{\rm tr}. \tag{3}\]
For EOSs that predict \(\Delta\mathcal{E}\geq\Delta\mathcal{E}_{\rm cr}\) two distinct stable branches appear in the \(M\)-\(R\) plane.
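Equations (1) and (3) are straightforward to implement numerically; the sketch below assumes a callable `eos_hadron` returning the hadronic energy density at a given pressure, with all quantities in MeV fm\({}^{-3}\) and \(c=1\).

```python
def hybrid_energy_density(P, eos_hadron, P_tr, delta_e, cs2=1.0):
    """CSS hybrid EOS of Eq. (1); cs2 = (c_s/c)^2 equals 1 in this work."""
    if P <= P_tr:
        return eos_hadron(P)
    return eos_hadron(P_tr) + delta_e + (P - P_tr) / cs2

def seidov_critical_jump(P_tr, e_tr):
    """Minimum energy density jump for a third stable branch, Eq. (3)."""
    return 0.5 * e_tr + 1.5 * P_tr
```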
The resulting hybrid EOSs ought to be consistent with neutron star observations. For example, if one assumes that the \(\sim 1.4\)\(M_{\odot}\) compact stars involved in GW170817 [12] or in PSR J0030+0451 [68] are purely hadronic, then the low density sector of the EOS has to satisfy tight constraints (\(\Lambda_{1.4}=190^{+390}_{-120}\) and \(R_{1.4}\leq 14\) km, where \(\Lambda\) denotes the dimensionless tidal deformability) [40]. On the other hand, if these compact objects are hybrid stars, the aforementioned constraints are lifted from the hadronic part of the EOS. In the present work we adopt the GRDF-DD2 (simply DD2 from now on for practical purposes) [69] and the NL3 [70] EOSs for the description of the low density phase. It is worth commenting that both of these EOSs have been previously employed in the study of twin stars [71; 39]. Finally, for the description of the outer crust (in the case of the NL3 model) the well-known EOS of Baym et al. [72] is employed.
## III R-mode instability formalism
Thermodynamics and the influence of various dissipative processes define the time evolution of the _r_-modes according to the law \(e^{i\omega t-t/\tau}\), where \(\omega\) is the real part of the frequency, given by
\[\omega=-\frac{(l-1)(l+2)}{l+1}\Omega. \tag{4}\]
In Eq. (4), \(\Omega\) is the angular velocity of the unperturbed star [73] and \(l\) defines the kind of mode. In the present study we will consider the case \(l=2\). The imaginary part \(1/\tau\) is related to the effects of gravitational radiation and the various kinds of viscosity (shear, bulk, etc.) [73; 74; 75]. We consider the case of small-amplitude limit where a mode is a driven, damped harmonic oscillator and the exponential damping time scale is given by
\[\frac{1}{\tau(\Omega,T)} = \frac{1}{\tau_{{}_{GR}}(\Omega)}+\frac{1}{\tau_{{}_{EL}}(\Omega,T )}+\frac{1}{\tau_{{}_{BV}}(\Omega,T)} \tag{5}\] \[+ \frac{1}{\tau_{{}_{SV}}(\Omega,T)},\]
where \(\tau_{{}_{GR}}\), \(\tau_{{}_{EL}}\), \(\tau_{{}_{BV}}\) and \(\tau_{{}_{SV}}\) are the gravitational radiation time scale, the damping time scale due to viscous dissipation at the boundary layer of the rigid crust and fluid core, and the bulk and shear viscosity dissipation time scales, respectively. It is notable that there is a competition between gravitational radiation, which tends to drive the r-mode unstable, and the stabilization induced by the various dissipation mechanisms. The critical angular velocity \(\Omega_{\rm c}\) (or critical spin frequency \(f_{\rm c}=\Omega_{\rm c}/2\pi\)) corresponds to the velocity at which the two mechanisms (amplification and damping) are balanced, and it is found through the equation \(1/\tau(\Omega_{c})=0\).
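Numerically, the critical angular velocity at fixed temperature follows from a one-dimensional root find on \(1/\tau(\Omega,T)\). A minimal bisection sketch, assuming the individual timescale contributions defined below are lumped into two callables, reads:

```python
def critical_omega(T, inv_tau_gr, inv_tau_damp, lo=1.0, hi=1.0e5):
    """Locate Omega_c from 1/tau(Omega_c, T) = 0 by bisection (illustrative).

    inv_tau_gr(Omega) is negative (driving), inv_tau_damp(Omega, T) positive
    (sum of viscous contributions); the root is assumed bracketed in [lo, hi]."""
    def f(omega):
        return inv_tau_gr(omega) + inv_tau_damp(omega, T)
    while hi - lo > 1.0e-8 * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:   # damping still dominates: Omega_c lies above mid
            lo = mid
        else:              # gravitational driving dominates: root is below mid
            hi = mid
    return 0.5 * (lo + hi)
```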
The contribution of gravitational radiation to the imaginary part of the frequency of the mode \(1/\tau_{{}_{GR}}\) is given by the expression [73; 74]
\[\frac{1}{\tau_{{}_{GR}}} = -\frac{32\pi G\Omega^{2l+2}}{c^{2l+3}}\frac{(l-1)^{2l}}{[(2l+1)!!]^{2}}\left(\frac{l+2}{l+1}\right)^{2l+2} \tag{6}\] \[\times \int_{0}^{R}\rho(r)r^{2l+2}dr\quad\left({\rm s}^{-1}\right),\]
where \(\rho(r)\) is the mass density profile of a star.
The bulk viscosity \(\xi_{{}_{BV}}\) is the dominant damping mechanism at high temperatures. It originates from the variations of pressure and density due to the pulsation modes, and in nucleonic matter it is given by the formula [73]
\[\xi_{{}_{BV}}^{H} = 6.0\times 10^{-59}\left(\frac{l+1}{2}\right)^{2}\left(\frac{{\rm Hz }}{\Omega}\right)^{2} \tag{7}\] \[\times \left(\frac{\rho}{{\rm gr~{}cm}^{-3}}\right)^{2}\left(\frac{T}{{ \rm K}}\right)^{6}\quad({\rm gr~{}cm}^{-1}~{}{\rm s}^{-1}).\]
For quark matter, the bulk viscosity is mainly determined by the nonleptonic weak process \(u+d\leftrightarrow u+s\) [22]. Following the discussion of Refs. [22; 26] we will use an approximate expression which is appropriate for small oscillations of the fluid and when \(2\pi T\gg\delta\mu=\mu_{s}-\mu_{d}\). Specifically,
\[\xi_{{}_{BV}}^{Q}=\frac{\alpha T^{2}}{\omega^{2}+\beta T^{4}}\quad({\rm g~{}cm }^{-1}~{}{\rm s}^{-1}), \tag{8}\]
where
\[\alpha T^{2}=6.66\times 10^{20}\left(\frac{\mu_{d}}{\rm MeV}\right)^{3}\left(\frac{m _{s}}{\rm MeV}\right)^{4}T_{9}^{2}\quad({\rm g~{}cm^{-1}~{}s^{-3}}),\]
\[\beta T^{4}=3.57\times 10^{-8}\left(\frac{\mu_{d}}{\rm MeV}\right)^{6}\left(1+ \frac{m_{s}^{2}}{4\mu_{d}^{2}}\right)^{2}T_{9}^{4}\quad({\rm s^{-2}}),\]
where \(T_{9}=T/(10^{9}\rm K)\), \(\mu_{d}\) is the chemical potential of the down quark and \(m_{s}\) is the mass of the strange quark. Since our model for quark matter does not provide information about the chemical potential profiles we will rely on the approximate expression \(\mu_{d}=235\) MeV \((\rho/\rho_{0})^{1/3}\)[22], which has been employed in numerous r-mode studies [22; 24; 51; 61; 62; 76]. For the strange quark mass we assume that \(m_{s}=100\) MeV. Finally, the bulk viscosity timescale is given by [73; 77]
\[\frac{1}{\tau_{{}_{BV}}} =\frac{4\pi}{690}\left(\frac{\Omega}{\Omega_{0}}\right)^{4}R^{2l -2}\left(\int_{0}^{R}\rho(r)r^{2l+2}dr\right)^{-1}\] \[\times\int_{0}^{R}\xi_{{}_{BV}}\left(\frac{r}{R}\right)^{6}\left[ 1+0.86\left(\frac{r}{R}\right)^{2}\right]r^{2}dr, \tag{9}\]
where \(\Omega_{0}=\sqrt{\pi G\overline{\rho}}\) and \(\overline{\rho}=3M/4\pi R^{3}\) is the mean density of the star.
The shear viscosity is the dominant mechanism at low temperature. This mechanism is due to momentum transport when particle-particle scattering processes take place. In particular, the viscosities associated with neutron-neutron scattering and electron-electron scattering are given, respectively, by [74]
\[\eta_{nn}=347\left(\frac{\rho}{\rm gr~{}cm^{-3}}\right)^{9/4}\left(\frac{T}{ \rm K}\right)^{-2}\quad({\rm g~{}cm^{-1}~{}s^{-1}}), \tag{10}\]
\[\eta_{ee}=6.0\cdot 10^{6}\left(\frac{\rho}{\rm gr~{}cm^{-3}}\right)^{2}\left(\frac{T}{\rm K}\right)^{-2}\quad({\rm g~{}cm^{-1}~{}s^{-1}}). \tag{11}\]
For quark matter the shear viscosity is dominated by quark-quark scattering in QCD. Following Ref. [26] we have
\[\eta_{q}=5\times 10^{15}\left(\frac{0.1}{\alpha_{s}}\right)^{3/2}\left(\frac{ \rho}{\rho_{0}}\right)^{14/9}T_{9}^{-5/3}\quad({\rm g~{}cm^{-1}~{}s^{-1}}), \tag{12}\]
where \(\alpha_{s}\) is the coupling constant for the strong interaction. In the present work we will use a typical value of \(\alpha_{s}=0.1\). The dissipation time scale due to the shear viscosity is given by [73]
\[\frac{1}{\tau_{{}_{SV}}} =(l-1)(2l+1)\left(\int_{0}^{R}\rho(r)r^{2l+2}dr\right)^{-1}\] \[\times\int_{0}^{R}\eta_{{}_{SV}}r^{2l}dr,\quad({\rm s^{-1}}). \tag{13}\]
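These fits translate directly into code before entering the radial integrals (9) and (13); a sketch in cgs units (\(\rho\) in g cm\({}^{-3}\), \(T\) in K) follows, where the numerical value adopted for the saturation mass density is our own conversion of \(n_{0}\):

```python
def xi_bulk_hadron(rho, T, Omega, l=2):
    """Nucleonic bulk viscosity, Eq. (7); Omega in Hz."""
    return 6.0e-59 * ((l + 1) / (2.0 * Omega)) ** 2 * rho ** 2 * T ** 6

def eta_shear_nn(rho, T):
    """Neutron-neutron shear viscosity, Eq. (10)."""
    return 347.0 * rho ** 2.25 * T ** -2.0

def eta_shear_ee(rho, T):
    """Electron-electron shear viscosity, Eq. (11)."""
    return 6.0e6 * rho ** 2 * T ** -2.0

def eta_shear_quark(rho, T, alpha_s=0.1, rho0=2.7e14):
    """Quark-quark shear viscosity, Eq. (12); rho0 ~ n0 in g/cm^3 (assumed)."""
    return 5.0e15 * (0.1 / alpha_s) ** 1.5 \
        * (rho / rho0) ** (14.0 / 9.0) * (T / 1.0e9) ** (-5.0 / 3.0)
```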
In the special case where the dissipation effect due to the crust has been included, the corresponding time scale is given by [74]
\[\tau_{{}_{EL}} =\frac{1}{2\Omega}\frac{2^{l+3/2}(l+1)!}{l(2l+1)!!\mathcal{C}_{l}}\sqrt{\frac{2\Omega R_{c}^{2}\rho_{cr}}{\eta_{cr}}}\] \[\times\int_{0}^{R_{c}}\frac{\rho(r)}{\rho_{cr}}\left(\frac{r}{R_{c}}\right)^{2l+2}\frac{dr}{R_{c}}\quad({\rm s}). \tag{14}\]
In Eq. (14), \(R_{c}\) is the core's radius, while \(\eta_{cr}\) and \(\rho_{cr}\) are the viscosity and density of the fluid at the outer edge of the core, respectively. The factor \(\mathcal{C}_{l}\), for \(l=2\), takes the value \(\mathcal{C}_{2}=0.080411\). The expression (14) refers to the case where the crust is rigid and consequently static in the rotating frame. However, in a more realistic case, the motion of the crust (due to the mechanical coupling with the core) induces an increase of the timescale \(\tau_{{}_{EL}}\) by a factor of \(1/\mathcal{S}^{2}\), where \(\mathcal{S}\) is the slippage factor defined as \(\mathcal{S}=\Delta v/v\). In particular, \(v\) denotes the velocity of the core and \(\Delta v\) is the difference between the velocities in the inner edge of the crust and the outer edge of the core [78].
It has been shown that the critical frequency \(\Omega_{c}\) is quite sensitive to the radius of a star [64]. More precisely, it has been found that for relatively low and high values of temperature, \(\Omega_{c}\) scales with the radius as \(\Omega_{c}\sim 1/R^{3/2}\) and \(\Omega_{c}\sim 1/R^{3/4}\), respectively. The latter leads to the conclusion that the r-mode instability windows of twin stars are going to be different. Furthermore, if one takes into account that the damping mechanisms in quark matter are, in principle, stronger than those in hadronic matter, then the instability window of a hybrid star is expected to be shifted to larger \(\Omega_{c}\) compared to the one of its hadronic twin. Thus, there are two mechanisms which act additively and may drastically affect the instability windows of the two different branches. The above findings are essentially a strong motivation for investigating the possible identification of twin stars due to the implications of their different instability windows.
Another crucial issue is the limitation of the instability window, at high frequencies, by the corresponding Kepler angular velocity \(\Omega_{\rm K}\) (the maximum rotation frequency of the star). To a very good approximation, the Kepler velocity is given by \(\Omega_{\rm K}=\frac{2}{3}\Omega_{0}\). It is interesting that in the case of twin stars the Kepler frequency of the hybrid branch may be even 20% higher compared to the hadronic branch, due to the different radius values. This apparent differentiation at the upper limit of the instability window can by itself be a criterion for separating the two branches. Connecting the analysis presented above with the fact that newly born compact stars are expected to rotate close to their mass shedding limit, we conclude that the spin-down evolution paths of twin stars are going to exhibit distinct deviations.
## IV Spin-down and cooling
We are now going to present a simplified model to describe the spin-down (due to unstable r-modes) of a hadronic or hybrid star simultaneously with its cooling. During the phase in which angular momentum is radiated away to infinity by gravitational waves, the angular velocity of a star evolves as follows [75]
\[\frac{d\Omega}{dt}=\frac{2\Omega}{\tau_{GR}}\frac{\alpha^{2}Q}{1-\alpha^{2}Q}, \tag{15}\]
where \(\alpha\) is the dimensionless r-mode amplitude parameter. This parameter strongly affects the r-mode evolution and usually takes values in the wide interval \(\alpha=10^{-8}-1\). Moreover, \(\alpha\) in general depends both on the viscosity (and consequently on the temperature \(T\) and cooling process) and on time. However, following Ref. [73] we consider that \(d\alpha/dt=0\). In addition, the quantity \(Q\) is related to the bulk properties of a star and is defined as \(Q=3\tilde{J}/2\tilde{I}\), where
\[\tilde{J}=\frac{1}{MR^{4}}\int_{0}^{R}\rho(r)r^{6}dr,\qquad\tilde{I}=\frac{8\pi}{3MR^{2}}\int_{0}^{R}\rho(r)r^{4}dr. \tag{16}\]
Under the aforementioned assumptions, one can solve Eq. (15) analytically and obtain [79; 80]
\[\Omega(t)=\left(\frac{1}{\Omega_{in}^{-6}-6\mathcal{C}t}\right)^{1/6}, \tag{17}\]
where
\[\mathcal{C}=\frac{2\alpha^{2}Q}{\tilde{\tau}_{GR}(1-\alpha^{2}Q)}\frac{1}{ \Omega_{0}^{6}},\quad\tilde{\tau}_{GR}=\left(\frac{\Omega}{\Omega_{0}}\right) ^{6}\tau_{GR}. \tag{18}\]
\(\Omega_{in}\) is a free parameter which corresponds to the initial angular velocity and \(\tilde{\tau}_{GR}\) is the fiducial gravitational-radiation time scale.
In order to combine the concurrent processes of the spin-down and cooling of twin stars, we use the standard description for the cooling of hot and young neutron stars proposed by Owen et al. [75] (where it is considered that the cooling is primarily due to the emission of neutrinos via the modified URCA process). In this case, the temperature drops according to the law
\[T(t)=\left(\frac{t}{t_{c}}+\left(\frac{10^{9}\text{ K}}{T_{i}}\right)^{6} \right)^{-1/6}10^{9}\text{ K} \tag{19}\]
where \(T_{i}\) is the initial temperature of the star (a typical value is \(T_{i}\simeq 10^{11}\) K) and \(t_{c}\) is the cooling rate parameter (\(t_{c}\simeq 1\) year [75]).
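Since both Eq. (17) and Eq. (19) are closed-form, a star's path on the \(f-T\) plane is obtained by direct evaluation; the sketch below traces such a track (the time grid and default values are illustrative).

```python
import numpy as np

def spin_down_track(omega_in, C, T_i=1.0e11, t_c=3.15e7, t_max=3.15e9, n=400):
    """(T(t), f(t)) from Eqs. (17) and (19); times in s, C < 0 for spin-down."""
    t = np.linspace(0.0, t_max, n)
    omega = (omega_in ** -6 - 6.0 * C * t) ** (-1.0 / 6.0)      # Eq. (17)
    T = (t / t_c + (1.0e9 / T_i) ** 6) ** (-1.0 / 6.0) * 1.0e9  # Eq. (19)
    return T, omega / (2.0 * np.pi)
```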
It is worth mentioning that this is a simplified model, especially concerning the cooling process. In particular, one may expect that the two twins may cool in different ways, considering the different cooling mechanisms in hadronic and quark matter. Obviously, a more elaborate study is necessary if one is interested in an accurate quantitative description of the cooling process. However, we need to comment that, according to the findings of Lyra et al. [18], the thermal evolution of twin stars is only distinct when there is a large difference in their compactness. More precisely, in the case where there is a 10 % compactness difference (which is the case for the models employed in the present study), the thermal evolution of the two twins is nearly identical [18]. Therefore, we conclude that the present model will provide, at least, a good qualitative picture for the evolution of twin stars.
From the above analysis it is obvious that young, hot and rapidly rotating twin stars follow different spin-down paths on the \(f-T\) plane. This is mainly due to the following three reasons: a) the different instability windows, b) the different Kepler frequencies (different spin frequencies at birth) and c) the different spin evolution with time.
## V Results and discussion
### Mass-Radius diagrams
In order to study the differences that manifest in the r-mode instability windows and spin evolution of twin stars, we constructed a set of hybrid EOSs using the analysis presented in Section II. In particular, the low density phase is described by the DD2 and NL3 EOSs, while for the quark matter a phenomenological constant speed of sound model is employed. The values of the energy density jump are selected in order to obtain EOSs that are consistent with the constraints from astrophysical observations. For both hadronic models, the resulting EOSs predict twin stars with mass of 1.2 or 1.4 \(M_{\odot}\).
Fig. 1 depicts the mass-radius dependence for the EOSs employed in this study. In the left panel the hadronic phase is described using the DD2 EOS, while for the results of the right panel the NL3 model was employed. The solid black curves stand for the case where no phase transition occurs (i.e. the \(M\)-\(R\) diagrams for the purely hadronic EOSs). In addition, the shaded areas correspond to constraints based on the analysis of the GW170817 event [12; 13]. Finally, the horizontal lines are drawn to indicate the twin configurations with 1.2 and 1.4 \(M_{\odot}\).
As is evident from Fig. 1, increasing the energy density jump results in a softening of the EOS. Thus, the largest values for \(\Delta\mathcal{E}\) are selected so that the EOSs remain consistent with the 2 \(M_{\odot}\) constraint. Furthermore, we need to highlight that as \(\Delta\mathcal{E}\) increases the radius difference of the two twins becomes larger. The latter is expected to play a critical role concerning the deviation of r-mode instability windows [64]. It is important to note that our analysis does not include the limiting case where \(\Delta\mathcal{E}=\Delta\mathcal{E}_{cr}\), as in such a scenario the separation of the two twins is almost negligible. In particular, if the phase transition occurs at relatively low baryon density a third family may not even appear [39].
### Qualitative analysis
Fig. 2 presents the r-mode instability windows of 1.4 \(M_{\odot}\) twin stars for the case where \(\Delta\mathcal{E}=\Delta\mathcal{E}_{cr}\)+100 MeV fm\({}^{-3}\). The damping mechanism due to the presence of a solid crust is not included. The results for the hadronic and hybrid stars are indicated using dashed and solid curves, respectively. Additionally, the horizontal lines stand for the corresponding Kepler frequencies. Note that the x-axis of the plot shows not the temperature \(T\) appearing in the formalism of Section III, but the so-called redshifted temperature, given by \(T^{\infty}=T\sqrt{1-2C}\), where \(C=GM/Rc^{2}\) is the compactness of a star.
Firstly, we need to underline the sensitivity of the instability window to the employed EOS. Specifically, by comparing the instability windows of the purely hadronic configurations one finds that the predicted critical frequency is lower (in the low temperature region) when the NL3 EOS is employed. This results from the fact that the radius of a 1.4 \(M_{\odot}\) compact star is smaller when the DD2 model is used [64]. Incidentally, the radius of the hybrid star constructed using the NL3 EOS coincides with the radius of the hadronic configuration using the DD2 EOS. The latter results in an overlap of their instability windows in the low temperature regime. However, the existence of a quark core in the hybrid star leads to significant differences in the critical frequencies for \(T^{\infty}\geq 10^{8}\) K, where the bulk viscosity plays a crucial role.
For a qualitative comparison of the r-mode instability windows of twin pairs one can divide Fig. 2 in three representative regions. In particular, for \(T^{\infty}\leq 10^{8}\) K (where the shear viscosity is the dominant dissipation mechanism), the radius difference plays a crucial role for the apparent critical frequency deviations. For \(10^{8}\) K \(\leq\)
Figure 2: Critical spin frequency \(f_{c}\) as a function of the redshifted temperature \(T^{\infty}\) (r-mode instability windows) for 1.4 \(M_{\odot}\) twin stars for the DD2 (blue) and NL3 (red) EOSs. The dashed and solid lines correspond to the hadronic and hybrid twins, respectively. The horizontal lines denote the Kepler frequency of each star. The value of the energy density gap is \(\Delta\mathcal{E}_{cr}\)+100 MeV fm\({}^{-3}\) for both EOSs.
Figure 1: Mass-Radius diagrams for the DD2 (left panel) and NL3 (right panel) EOSs. The black solid curves indicate the original EOSs. The solid (dashed) horizontal line is set to 1.4 (1.2) \(M_{\odot}\). The shaded areas correspond to the constraints from the analysis of the GW170817 event [12; 13]. Each hybrid EOS is identified from the baryon density where the phase transition occurs and by the energy density jump. The energy density gap is given in units of MeV fm\({}^{-3}\) which are omitted in the legend for simplicity.
\(T^{\infty}\leq 10^{10}\) K, the bulk viscosity is the major damping mechanism and the trend of the \(f_{c}(T^{\infty})\) curve is altered for the hybrid twin. More precisely, the critical frequency increases and then decreases with temperature, leading to a local maximum. This topological difference derives from the fact that the bulk viscosity of quark matter is not a monotonic function of temperature. Finally, for \(T^{\infty}\geq 10^{10}\) K the bulk viscosity dominates and this leads to appreciable differences in \(f_{c}\) for the two twins. From an observational perspective, the differences that appear in the low temperature regime will lead to different limits on the spin up of accreting pulsars in LMXBs. On the other hand, the deviations in the high temperature region may affect the evolution of a rapidly rotating proto-neutron star. Apart from the differences appearing in the limiting spin frequencies of twin stars due to unstable r-modes, we need to comment that there is a \(\sim 17\) % difference in their Kepler frequencies. Consequently, a young hybrid star can rotate much faster than its hadronic twin.
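To make the structure of these windows concrete, a critical-frequency curve can be sketched as the locus where the gravitational-wave driving rate balances the total viscous damping rate. The power laws and prefactors below are generic illustrations, not the detailed r-mode timescales evaluated in this work.

```python
# Schematic sketch only: the r-mode is unstable when the GW growth rate
# exceeds the total viscous damping rate; f_c(T) is the balance point.
# All scalings and coefficients here are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

def inv_tau_gw(f):   # GW driving rate, grows steeply with spin
    return (f / 1000.0) ** 6

def inv_tau_sv(T):   # shear-viscosity damping, dominant at low T
    return 0.05 * (T / 1e9) ** -2

def inv_tau_bv(T):   # bulk-viscosity damping, dominant at high T
    return 0.05 * (T / 1e9) ** 6

def f_crit(T):
    g = lambda f: inv_tau_gw(f) - inv_tau_sv(T) - inv_tau_bv(T)
    return brentq(g, 1.0, 1e4)   # root of the balance condition

for T in (1e8, 1e9, 1e10):
    print(f"T = {T:.0e} K  ->  f_c ~ {f_crit(T):.0f} Hz (schematic)")
```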
### Energy density jump and crust effects
At this point we wish to systematically study the influence of certain parameters on the instability window deviations of twin stars. In particular, we are going to vary the value of the energy density gap and examine its effects. Furthermore, up to this point the only dissipative mechanisms considered in our calculations were the bulk and shear viscosities. Now, we also include the damping mechanism due to the presence of a viscous boundary layer. Interestingly, as the aforementioned mechanism is strong and common to both twins, the critical frequency deviations due to different viscosities are expected to be less pronounced.
Firstly, we are going to investigate the importance of the energy density jump. As we mentioned, \(\Delta\mathcal{E}\) regulates the radius difference between twin configurations. Fig. 3a depicts the dependence of \(\Delta R\) on \(\Delta\mathcal{E}\) for 1.4 \(M_{\odot}\) twin stars. Surprisingly, we find that the aforementioned quantities are connected through a linear formula (see the sketch after this paragraph). Even though the exact \(\Delta R\)-\(\Delta\mathcal{E}\) relation is sensitive to the low density model, the slopes of the resulting fitted lines appear to be very similar. It is worth pointing out that, following the analysis presented in the previous section, an increase of \(\Delta\mathcal{E}\) will result in larger deviations in the instability windows due to an increase of \(\Delta R\). However, as is evident from Fig. 3b, a larger value of \(\Delta\mathcal{E}\) also results in a hybrid twin with a larger quark core fraction \(x_{q}=R_{q}/R\) (where \(R_{q}\) is the quark core radius).
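The linear trend of Fig. 3a can be reproduced with a simple least-squares fit; the sketch below uses the NL3 values listed in Table 1, with \(\Delta\mathcal{E}\) counted above \(\Delta\mathcal{E}_{cr}\).

```python
# Least-squares fit of the Delta-R vs Delta-E trend of Fig. 3a,
# using the NL3 entries of Table 1 (Delta-E measured above the critical jump).
import numpy as np

dE = np.array([100.0, 150.0, 200.0])   # MeV fm^-3 above Delta-E_cr
dR = np.array([1.75, 2.40, 2.99])      # km, from Table 1
slope, intercept = np.polyfit(dE, dR, 1)
print(f"dR ~ {slope:.4f} * dE + {intercept:.2f} km")  # slope ~ 0.0124 km per MeV fm^-3
```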
\begin{table}
\begin{tabular}{c c c}
\(S\) & \(T^{\infty}\,(10^{8}\) K) & \(\Delta f_{c}\) (Hz) \\ \hline
 & 1 & 102.279 \\
0.2 & 5 & 199.014 \\
 & 10 & 103.426 \\ \hline
 & 1 & 165.56 \\
1 & 5 & 169.475 \\
 & 10 & 129.472 \\
\end{tabular}
\end{table}
Table 2: The difference in the critical frequencies for 1.4 \(M_{\odot}\) twin stars for different values of temperature and crust elasticity. The results were obtained using the NL3 model with \(\Delta\mathcal{E}=\Delta\mathcal{E}_{cr}\)+200 MeV fm\({}^{-3}\). The results for this EOS, in the case where the crust damping mechanism is not considered, can be found in Table 1.
\begin{table}
\begin{tabular}{c c c c}
\(\Delta\mathcal{E}\) (MeV fm\({}^{-3}\)) & \(\Delta R\) (km) & \(T^{\infty}\) (\(10^{8}\) K) & \(\Delta f_{c}\) (Hz) \\ \hline
 & & 1 & 81.93 \\
\(\Delta\mathcal{E}_{cr}\)+100 & 1.75 & 5 & 192.47 \\
 & & 10 & 82.62 \\ \hline
 & & 1 & 136.77 \\
\(\Delta\mathcal{E}_{cr}\)+150 & 2.40 & 5 & 306.32 \\
 & & 10 & 149.006 \\ \hline
 & & 1 & 187.07 \\
\(\Delta\mathcal{E}_{cr}\)+200 & 2.99 & 5 & 402.53 \\
 & & 10 & 207.14 \\
\end{tabular}
\end{table}
Table 1: The difference in the critical frequencies for 1.4 \(M_{\odot}\) twin stars using the NL3 model for different values of temperature and energy density jump. The damping due to a viscous boundary layer (rigid crust) is not considered for the results presented in this table.
Figure 3: Panel a: Radius difference between 1.4 \(M_{\odot}\) twin stars as a function of the energy density gap, Panel b: The quark core fraction of a 1.4 \(M_{\odot}\) hybrid star as a function of the energy density jump.
Hence, the damping due to the bulk viscosity of quark matter is going to be even more effective. It is noteworthy that \(\Delta\mathcal{E}\) and \(x_{q}\) are also linearly related and that the slopes of the lines are, once again, not strongly sensitive to the employed hadronic model. The relations presented in Fig. 3 can be added to the other correlations found in the detailed analysis of Ref. [71]. Finally, we need to underline that, through the relations found above, knowledge of the radius difference of twin stars may provide important information concerning the phase transition and the interior of hybrid stars.
Fig. 4 depicts the dependence of the critical frequency on temperature for 1.4 \(M_{\odot}\) twin stars constructed using different \(\Delta\mathcal{E}\) values. In addition, the crust elasticity \(\mathcal{S}\) is varied from 0 to 1 in order to investigate the effects of a viscous boundary layer. The circular points stand for observational data inferred from LMXBs and millisecond pulsars, while the uncertainties visualised through the error bars derive from the process of converting surface temperature measurements to estimates of the core temperature [81]. Tables 1 and 2 contain numerical data of \(\Delta f_{c}\) for different values of the energy density jump, crust elasticity, and temperature.
It is worth pointing out that the instability window differences are more pronounced in the case where the NL3 model is employed. This results from the fact that, as the NL3 model is stiffer, it allows the construction of EOSs that satisfy observational constraints even for large \(\Delta\mathcal{E}\) values. Furthermore, Fig. 4 illustrates the strong impact of \(\Delta\mathcal{E}\) on the resulting r-mode instability window of the hybrid twin. In particular, in the case of \(\Delta\mathcal{E}_{cr}\) + 200, the spin frequency difference for the two twins may reach values of \(\sim\) 400 Hz (see Table 1).
The most important effect when the damping due to a solid crust is included is that the peak appearing in the instability window of the hybrid star (in a tempera
Figure 4: The effect of energy density gap in the deviation of the r-mode instability windows of 1.4 \(M_{\odot}\) twin stars for increasing crust elasticity values. Panel a) DD2 EOS and \(\Delta\mathcal{E}=\Delta\mathcal{E}_{cr}\)+75 MeV fm\({}^{-3}\), b) DD2 EOS and \(\Delta\mathcal{E}=\Delta\mathcal{E}_{cr}\)+100 MeV fm\({}^{-3}\), c) NL3 EOS and \(\Delta\mathcal{E}=\Delta\mathcal{E}_{cr}\)+100 MeV fm\({}^{-3}\), d) NL3 EOS and \(\Delta\mathcal{E}=\Delta\mathcal{E}_{cr}\)+100 MeV fm\({}^{-3}\). Dashed (solid) lines indicate the hadronic (hybrid) twin. The dotted points correspond to observational data taken from Ref. [81].
ture region around \(\sim\) 5\(\times\)10\({}^{8}\) K) drops. However, depending on the selected \(\Delta\mathcal{E}\) value, large \(f_{c}\) differences for the two twins may remain. In accordance with the results presented by Lyra et al. [18], we conclude that the role of the compactness is not only critical concerning the thermal evolution of twin pairs, but also significantly affects the r-mode instability window of the hybrid configuration.
### Comparison with observational data
As previously mentioned, it is rather difficult to explain the observational data in the context of a purely hadronic star. In particular, the unrealistic assumption of a perfectly rigid crust is essential [57]. For this reason, several studies have investigated the r-mode instability window of compact stars containing exotic forms of matter. In a recent work, Ofengeim et al. [58; 59] examined whether the existence of hyperons in the core of compact stars can lead to results compatible with current LMXB data. What they found is that for neutron stars with \(M\leq 1.9\)\(M_{\odot}\), the bulk viscosity of hyperonic matter leads to r-mode stabilization in the \(f-T^{\infty}\) regime where observed neutron stars appear [58; 59].
At this point we wish to examine whether the hybrid EOSs constructed in this study are in accordance with the observed spin frequencies and temperatures in LMXBs. As is evident from Fig. 4, the r-mode instability window of the hybrid twin is always narrower. In addition, in a minimal scenario where the effects of the crust are not included, the explanation of the observational data is not possible. However, depending on the energy density jump, a moderate crust elasticity value would suffice for the construction of instability windows compatible with observations. Specifically, for the NL3 model (\(\Delta\mathcal{E}_{cr}\)+ 200) and a relatively small crust elasticity of 0.2, most of the observed stars lie in the stable region of the \(f-T^{\infty}\) plane. Another critical point is that, in all cases, there are stars (from the dataset) that lie in the region between
Figure 5: R-mode instability windows of compact stars in the mass range 1.2\(-\)1.9 \(M_{\odot}\). The EOSs used are: a) DD2, \(n_{tr}=0.32\) fm\({}^{-3}\), \(\Delta\mathcal{E}_{cr}\)+ 150, b) DD2, \(n_{tr}=0.35\) fm\({}^{-3}\), \(\Delta\mathcal{E}_{cr}\)+ 100, c) NL3, \(n_{tr}=0.25\) fm\({}^{-3}\), \(\Delta\mathcal{E}_{cr}\)+ 150, d) NL3, \(n_{tr}=0.27\) fm\({}^{-3}\), \(\Delta\mathcal{E}_{cr}\)+ 150. The dotted points correspond to observational data and are taken from Ref. [81]. The (h) appearing in the legend stands for the most massive purely hadronic configuration. The value of the crust elasticity is taken to be \(\mathcal{S}\) = 0.1.
the \(f_{c}(T^{\infty})\) curves for the two twins. Hence, while such stars can be considered stable with respect to r-modes in the framework of the hybrid twin, they would be unstable if they were purely hadronic. The latter comment is of utmost importance, as the detection of GW emission from stars lying in a \(f-T^{\infty}\) region where r-modes are considered to be stable would be a strong indication of a hadron-quark phase transition.
In Fig. 5 we present the r-mode instability windows for compact stars in the mass range 1.2\(-\)1.9 \(M_{\odot}\) and a relatively low crust elasticity value \(\mathcal{S}=0.1\). In panels a and c the twin configurations have a mass of 1.2 \(M_{\odot}\), while in panels b and d their mass is 1.4 \(M_{\odot}\). In the first case we find that the bulk viscosity of quark matter is sufficient to stabilize r-modes for moderately massive compact stars (\(M\leq 1.6\)\(M_{\odot}\)) in the whole \(f-T^{\infty}\) range occupied by the observed stars in LMXBs. In the latter case, where the phase transition occurs at higher baryon density, more massive compact star configurations (1.8 or 1.9 \(M_{\odot}\), depending on the hadronic EOS) are essential for the explanation of current LMXB data.
Another observation that can be made from Fig. 5 is that right after the phase transition occurs, a narrowing of the instability window is evident. Then, as the mass further increases, the instability window becomes wider for low temperature values (\(T^{\infty}\leq 10^{8}\) K). The fact that higher mass configurations have wider instability windows is a known result from previous studies [82]. It is interesting that while a higher mass is necessary for the stabilization of r-modes in observed stars with \(T^{\infty}\geq 10^{8}\) K, it fails to provide an explanation for the stars appearing in the lower temperature regime. However, the low temperature region can be covered by hybrid star configurations of lower mass. In principle, if a star slightly surpasses a critical mass, after which a phase transition occurs, then its instability window will also be only slightly different from the
Figure 6: Panel (a): Spin frequency as a function of time for 1.4 \(M_{\odot}\) twin stars constructed with the DD2 EOS, (b) Spin-down rate as a function of time for 1.4 \(M_{\odot}\) twin stars constructed with the DD2 EOS, (c) Spin frequency as a function of time for 1.4 \(M_{\odot}\) twin stars constructed with the NL3 EOS, (d) Spin-down rate as a function of time for 1.4 \(M_{\odot}\) twin stars constructed with the NL3 EOS. In all panels two different values of \(\Delta\mathcal{E}\) were used (see legends).
one of the most massive purely hadronic configuration. In contrast, if the structure of the phase transition predicts the existence of a third family, then stars with mass equal to or slightly larger than the aforementioned critical mass are going to exhibit considerable deviations in their r-mode instability windows. The non-trivial behavior of stars having narrower instability windows than lower mass stars (for low \(T^{\infty}\)) is characteristic of an EOS predicting twin configurations.
### Spin down and thermal evolution
In Fig. 6 we display the time evolution of the frequency and the corresponding spin-down rate of 1.4 \(M_{\odot}\) twin stars. The upper and lower panels contain results for the DD2 and NL3 EOSs, respectively. For comparison purposes, we consider the same initial frequency of 650 Hz for both twins. Furthermore, in accordance with previous studies [79, 82], the selected value for the r-mode saturation amplitude is \(\alpha\) = 2\(\times\)10\({}^{-7}\). The spin-down rate is slower in hybrid stars right after their birth. Specifically, the higher the energy density gap, the lower the rate. However, after a certain amount of time the spin-down rates of twin stars converge to the same value. The latter is reflected in the distinct time evolution of the frequency in the two cases. In particular, a hybrid star retains its initial rotation frequency for a longer period of time compared to its hadronic twin.
Usually, it is more convenient to study the spin-down evolution of a compact star in the \(f-T^{\infty}\) plane. The latter demands the simultaneous knowledge of the spin and thermal evolution of a star. By employing the toy model for the fall of temperature presented in Section IV, we obtain the different evolution paths of twin stars in the \(f-T^{\infty}\) plane. In addition, instead of considering the same initial frequency for the two twins, we set as initial condition the corresponding Kepler frequencies. The results presented in Fig. 7 were constructed using the NL3 EOS with \(n_{tr}\) = 0.27 fm\({}^{-3}\) (hence 1.4 \(M_{\odot}\) twins) and \(\Delta\mathcal{E}\) = \(\Delta\mathcal{E}_{cr}\)+ 100 MeV fm\({}^{-3}\). The latter EOS predicts twins with a \(\sim\) 13 % difference in compactness, and therefore a similar thermal evolution is not an unreasonable assumption [18]. For the crust elasticity a low value of \(\mathcal{S}\) = 0.1 was chosen. Moreover, we consider three different values of the amplitude \(\alpha\), since the results are very sensitive to it. From Fig. 7 it is obvious that there are three main reasons which differentiate the time evolution of the two branches. The first one is the different Kepler velocities. The second one is connected to the spin-down rates of the two twins, even though this effect is less pronounced. The third one is the deviation of the instability windows. In particular, the unstable region is more extended in the case of the hadronic branch. The latter is of utmost importance, as the r-mode instability window essentially sets the resulting frequency of a star as it comes
Figure 7: The spin-down evolution of 1.4 \(M_{\odot}\) twin stars (NL3 EOS, \(n_{tr}\) = 0.27 fm\({}^{-3}\) and \(\Delta\mathcal{E}\) = \(\Delta\mathcal{E}_{cr}\)+ 100 MeV fm\({}^{-3}\)) in the frequency-temperature plane for different values of the saturation amplitude. The initial frequencies for the twins are their corresponding Kepler frequencies. The blue (red) solid lines indicate the evolution for the hybrid (hadronic) twin. The blue and red dashed lines denote the r-mode instability window of the hybrid and hadronic star, respectively. The dotted points stand for observational data taken from Ref. [81].
out of the unstable region. Of course, we need to stress that the paths presented in Fig. 7 can be improved if one considers a more realistic cooling process for the two branches. However, the general picture will not change noticeably and the main conclusions of the present study are not expected to be significantly altered.
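A minimal sketch of such a coupled spin/thermal integration is given below; the saturated spin-down law and the exponential toy cooling are schematic stand-ins for the formalism of Sections III and IV, and all coefficients besides the saturation amplitude are illustrative assumptions.

```python
# Schematic coupled evolution in the f-T plane, in the spirit of Fig. 7.
# The GW timescale prefactor, the cooling law, and Q are assumed values.
import numpy as np
from scipy.integrate import solve_ivp

alpha_sat = 2e-7       # r-mode saturation amplitude quoted in the text
Q = 0.094              # schematic structure constant (assumption)

def tau_gw_yr(f):      # GW growth timescale in years; scaling assumed
    return 1.3e-6 * (f / 1000.0) ** -6

def rhs(t, y):
    f, T = y
    dfdt = -2.0 * alpha_sat**2 * Q * f / tau_gw_yr(f)   # saturated spin-down
    dTdt = -(T - 1e7) / 1e6   # toy cooling towards an assumed 1e7 K floor (yr)
    return [dfdt, dTdt]

# start at an assumed Kepler frequency and a hot proto-star temperature
sol = solve_ivp(rhs, [0.0, 1e9], [900.0, 1e10], rtol=1e-9, max_step=1e7)
f_path, T_path = sol.y   # the evolution path traced in the f-T plane
```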
## VI Conclusion
The present work was dedicated to the study of twin stars and their r-mode instability windows. In particular, we have conducted a detailed investigation of the parameters that affect the deviation between the instability windows of twin stars. This is of utmost importance, as two stars with identical mass may have different rotational limits. More precisely, two stars in the same region of the frequency-temperature diagram may behave differently with respect to r-modes. Consequently, the future detection of (r-mode) GW emission from stars that are considered to be stable with respect to r-modes (based on existing observations) would be a clear sign of the existence of a third family and hence of a hadron-quark phase transition.
Firstly, we studied the influence of the energy density jump \(\Delta\mathcal{E}\) on the deviation between the instability windows of twin stars. We found that \(\Delta\mathcal{E}\) regulates the radius difference between twin configurations. In addition, hybrid twins predicted by EOSs with higher \(\Delta\mathcal{E}\) exhibit larger quark core fractions. Thus, the differences in the critical spin frequencies of twins become more pronounced as the energy density jump increases. Secondly, we took into consideration the strong and common (for both twins) dissipation mechanism due to the presence of a viscous boundary layer. What we found is that the characteristic peak appearing in the r-mode instability windows of hybrid stars (around \(T^{\infty}\)\(\sim\) 5 \(\times\) 10\({}^{8}\) K) flattens as the crust elasticity increases. However, depending on the selected value of \(\Delta\mathcal{E}\), considerable differences in the limiting frequency of the two twins may remain.
Furthermore, we examined whether the EOSs constructed in this study (i.e. EOSs predicting a third family of compact objects) are a viable option for the explanation of current LMXB data. We found that, depending on the phase transition onset (transition density) and also the masses of stars in LMXBs, our EOSs may be compatible with the existing observational data. In particular, for EOSs that predict twin stars with 1.2 \(M_{\odot}\), the bulk viscosity of quark matter is adequate to stabilize r-modes for moderately massive stars (\(M\leq\) 1.6 \(M_{\odot}\)) in the whole \(f-T^{\infty}\) region occupied by the observed stars in LMXBs. As the critical compact star mass for the phase transition to occur increases, more massive configurations are needed for the stabilization of r-modes.
Finally, we studied the differences that manifest in the spin-down evolution of twin pairs. We found that the hybrid star retains its initial spin frequency for a longer period of time, because its spin-down rate is lower compared to its hadronic twin. Furthermore, we noticed that larger \(\Delta\mathcal{E}\) values result in lower spin-down rates for hybrid stars. In addition, by employing a simplified thermal evolution model, we evaluated the evolution paths of twin stars in the \(f-T^{\infty}\) plane. The resulting path differences derive from: a) the fact that the Kepler frequencies (initial conditions) of twin stars are different, b) the different spin evolution, which depends on the bulk properties of a star, c) the different instability windows of twin stars, which essentially control when, and with what frequency, a star is going to pass into the r-mode stable region.
There are some other issues that a more elaborate study should take into account, such as additional damping mechanisms or a more rigorous treatment of the thermal evolution. In addition, it would be interesting to explore the effects of a mixed phase (EOSs constructed with the Gibbs method). Even though such a study (already in progress) would be more complete from a quantitative point of view, we do not expect that our main conclusions will be significantly altered. Finally, we need to highlight that even though there are a few studies focusing on the r-mode instability of hybrid stars, this is the first work dealing with the possible existence of two stars with identical mass and different r-mode instability windows. The future detection of GW associated with unstable r-modes may finally allow us to distinguish twin stars.
## Acknowledgements
The authors would like to thank Prof. K. Kokkotas for their useful insight and comments.
|
2309.11729 | The possibility of detecting our solar system through astrometry | Searching for exoplanets with different methods has always been the focus of
astronomers over the past few years. Among multiple planet detection
techniques, astrometry stands out for its capability to accurately determine
the orbital parameters of exoplanets. In this study, we examine the likelihood
of extraterrestrial intelligent civilizations detecting planets in our solar
system using the astrometry method. By conducting injection-recovery
simulations, we investigate the detectability of the four giant planets in our
solar system under different observing baselines and observational errors. Our
findings indicate that extraterrestrial intelligence could detect and
characterize all four giant planets, provided they are observed for a minimum
of 90 years with signal-noise ratios exceeding 1. For individual planets such
as Jupiter, Saturn, and Neptune, a baseline that surpasses half of their
orbital periods is necessary for detection. However, Uranus requires longer
observing baselines since its orbital period is roughly half of that of
Neptune. If the astrometry precision is equal to or better than 10 $\mu$as, all
8,707 stars located within 30 pcs of our solar system possess the potential to
detect the four giant planets within 100 years. Additionally, our prediction
suggests that over 300 stars positioned within 10 pcs from our solar system
could detect our Earth if they achieve an astrometry precision of 0.3 $\mu$as. | Dong-Hong Wu | 2023-09-21T02:03:05Z | http://arxiv.org/abs/2309.11729v1 | # The possibility of detecting our solar system through astrometry
###### Abstract
Searching for exoplanets with different methods has been a major focus of astronomers over the past few years. Among multiple planet detection techniques, astrometry stands out for its capability to accurately determine the orbital parameters of exoplanets. In this study, we examine the likelihood of extraterrestrial intelligent civilizations detecting planets in our solar system using the astrometry method. By conducting injection-recovery simulations, we investigate the detectability of the four giant planets in our solar system under different observing baselines and observational errors. Our findings indicate that extraterrestrial intelligence could detect and characterize all four giant planets, provided they are observed for a minimum of 90 years with signal-to-noise ratios exceeding 1. For individual planets such as Jupiter, Saturn, and Neptune, a baseline that surpasses half of their orbital periods is necessary for detection. However, Uranus requires longer observing baselines since its orbital period is roughly half of that of Neptune. If the astrometry precision is equal to or better than 10 \(\mu\)as, all 8,707 stars located within 30 pc of our solar system possess the potential to detect the four giant planets within 100 years. Additionally, our prediction suggests that over 300 stars positioned within 10 pc from our solar system could detect our Earth if they achieve an astrometry precision of 0.3 \(\mu\)as.
astrometry - planets and satellites: detection - (stars:) planetary systems
## 1 Introduction
More than 5400 exoplanets have been detected and confirmed to date (exoplanets.nasa.gov, July 2023). Earth-sized habitable-zone planets turn out to orbit about one out of ten stars (Petigura et al., 2013; Dressing & Charbonneau, 2013), and the search for life outside the Solar System has gained substantial impetus. Whether a planet is habitable or not depends on how far it is from the central star and on its composition (Kasting et al., 1993; Gomez-Leal et al., 2018). To date, more than 60 planets have been found to be potentially habitable (Jones et al., 2006; Lovis et al., 2006; Anglada-Escude et al., 2012; Robertson et al., 2014; Tuomi
et al., 2023), most of which were detected by the transit and radial velocity methods. Neither the transit nor the radial velocity method provides the complete physical parameters of a planet, and both methods are biased toward planets close to the central star. On the contrary, the astrometry method can provide a three-dimensional characterization of the orbit of a planet (Perryman et al., 2014; Wu et al., 2016) and has the advantage of detecting planets far away from the host star.
To date, only one giant planet has been detected by the astrometry method (Sahlmann et al., 2013), because of the limited detection precision. The detection of a habitable Earth-sized planet orbiting a sun-like star located 10 pc away from us would require a precision of sub-\(\mu\)as, which is hardly achieved by current astrometric observations such as Gaia (Perryman et al., 2014). However, the prospects are very promising, with a new era of \(\mu\)as-level astrometric precision approaching (Yu et al., 2019; Ji et al., 2022; Jin et al., 2022; Tan et al., 2022).
Here we pose a probing question: supposing that extraterrestrial observers use the astrometry method and are also surveying the galaxy for habitable worlds, which of them could discover the planets in the solar system, or even the Earth? Previous works have investigated the region in which the Earth would be observed transiting in front of the Sun (Heller and Pudritz, 2016; Kaltenegger and Pepper, 2020; Kaltenegger and Faherty, 2021) and the frequency with which the Earth would be detected by other civilisations through photometric microlensing (Suhapolthaworn et al., 2022).
In this work, we study the possibility that extraterrestrial life could detect the planets in the solar system via the astrometry method with different observational precisions. We describe how we simulate astrometric data in Section 2. In Section 3, we present how to identify planetary signals and how to fit the orbital parameters of the planets. The detection of the four giants in the solar system by nearby stars is discussed in Section 4. We briefly conclude our results in Section 5.
## 2 Simulation of Astrometric Data
The astrometry method measures the movements of stars projected onto the celestial sky. Following the method described in previous works (Black and Scargle, 1982; Wu et al., 2016; Yu et al., 2019), the projected movement of the star in right ascension (\(x\)) and declination (\(y\)) at time \(t\) can be modeled as:
\[x(t)=x_{0}+\mu_{x}(t-t_{0})-P_{x}\pi+X(t)+\sigma_{x} \tag{1}\]
and
\[y(t)=y_{0}+\mu_{y}(t-t_{0})-P_{y}\pi+Y(t)+\sigma_{y}, \tag{2}\]
where \(x_{0}\) and \(y_{0}\) are the coordinate offsets, \(\mu_{x}\) and \(\mu_{y}\) are the proper motions of the star, \(P_{x}\) and \(P_{y}\) are the parallax parameters which will be provided by the observation. \(\pi\) is the annual parallax of the star. \(X(t)\) and \(Y(t)\) are the movements of the host star around the barycenter of the system due to the planetary companions. \(\sigma_{x}\) and \(\sigma_{y}\) are single-measurement astrometric errors.
In our fiducial simulations, we made several assumptions regarding the extraterrestrial observer's location and observational parameters. As part of our simulations, we placed the observer at a distance of 10 pc from our Sun. The observer orbits its central star in a circular orbit with a period of 1.25 years and measures the position of the Sun every 0.2 years. We also tested a sampling cadence of 0.1 years and found that the results change very little. To account for the astrometry precision, we assumed that the observer has a measurement uncertainty of 10 \(\mu\)as. Therefore, the individual coordinate uncertainties \(\sigma_{x}\) and \(\sigma_{y}\) were chosen from a Gaussian distribution with a median value of 0 and a standard deviation of 10 \(\mu\)as. For the coordinate offsets \(x_{0}\) and \(y_{0}\), we assume both to be 10 mas. Additionally, the proper motion of the Sun with respect to the observer is assumed to be 50 mas/year and -30 mas/year in the \(x\) and \(y\) directions, respectively. To model the parallax effect, we used the observer's orbit and defined the functions \(P_{x}\) and \(P_{y}\) as follows: \(P_{x}(t)=\sin(1.6\pi t+\phi)\), \(P_{y}(t)=\cos(1.6\pi t+\phi)\), where \(\phi\) represents the orbital phase of the observer. Finally, we assumed an observing baseline of 170 years for the simulations.
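Putting Eqs. (1)-(2) and these fiducial choices together, synthetic observations can be generated along the following lines; the reflex motion \(X\), \(Y\) is left as a placeholder to be filled by the N-body integration described next, and the orbital phase and reference epoch \(t_{0}=0\) are arbitrary assumptions.

```python
# Minimal sketch of the astrometric model of Eqs. (1)-(2) with the
# fiducial parameters above; all angles are in microarcseconds.
import numpy as np

t = np.arange(0.0, 170.0, 0.2)              # observing epochs (yr), t0 = 0 assumed
x0, y0 = 10e3, 10e3                         # 10 mas coordinate offsets
mux, muy = 50e3, -30e3                      # proper motion, uas/yr
plx = 1e5                                   # parallax of a 10 pc star: 100 mas
phi = 0.0                                   # assumed observer orbital phase
Px, Py = np.sin(1.6*np.pi*t + phi), np.cos(1.6*np.pi*t + phi)
X = np.zeros_like(t); Y = np.zeros_like(t)  # reflex motion placeholder
sigma = 10.0                                # single-epoch error, uas

x = x0 + mux*t - Px*plx + X + np.random.normal(0.0, sigma, t.size)
y = y0 + muy*t - Py*plx + Y + np.random.normal(0.0, sigma, t.size)
```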
The movement of the Sun due to the presence of the eight planets, \(X(t)\) and \(Y(t)\), is simulated using the REBOUND code (Rein & Liu, 2012). All eight planets in the Solar system are included. The orbital parameters of the planets are given by the JPL Solar System Dynamics web site1, with respect to the mean ecliptic and equinox of J2000. In our fiducial simulation, the line of sight of the extraterrestrial observer is assumed to be perpendicular to the mean ecliptic. We integrate the solar system over a duration of 170 years and record the coordinates of the Sun (\(X(t)\) and \(Y(t)\)) relative to the barycenter of the solar system every 0.2 years.
Footnote 1: [http://www.jpl.nasa.gov/](http://www.jpl.nasa.gov/)
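The reflex-motion time series itself can be produced with REBOUND along these lines; the two-planet setup with rounded elements below is merely illustrative, whereas the paper includes all eight planets with JPL elements.

```python
# Hedged sketch: solar reflex motion X(t), Y(t) from an N-body integration.
import numpy as np
import rebound

sim = rebound.Simulation()
sim.units = ('yr', 'AU', 'Msun')
sim.add(m=1.0)                          # the Sun
sim.add(m=9.55e-4, a=5.20, e=0.049)     # Jupiter (approximate elements)
sim.add(m=2.86e-4, a=9.58, e=0.057)     # Saturn  (approximate elements)
sim.move_to_com()                       # work in the barycentric frame

times = np.arange(0.0, 170.0, 0.2)
X = np.empty_like(times); Y = np.empty_like(times)
for i, ti in enumerate(times):
    sim.integrate(ti)
    X[i], Y[i] = sim.particles[0].x, sim.particles[0].y   # Sun's position, AU

# 1 AU at 1 pc subtends 1 arcsec, so the projected angle in microarcsec
# at distance d_pc is (X / d_pc) * 1e6.
```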
## 3 Planetary signal identification and orbital parameter fitting
Assume that we are an extraterrestrial civilization and that we have measured the movement of the Sun for 170 years. We now analyze the data to see if we have any detection. Although we have included the gravitational interactions between planets when simulating the astrometric data of the host star, they are ignored when we fit the orbital parameters, since they have little influence on the motion of the host star (Sozzetti et al., 2001; Casertano et al., 2008). In our parameter fitting procedure, \(X(t)\) and \(Y(t)\) are modeled as (Catanzarite, 2010):
\[X(t)=\sum_{i=1}^{i=N}(\cos E_{i}-e_{i})A_{i}+\sqrt{1-e_{i}^{2}}(\sin E_{i})F_ {i} \tag{3}\]
and
\[Y(t)=\sum_{i=1}^{i=N}(\cos E_{i}-e_{i})B_{i}+\sqrt{1-e_{i}^{2}}(\sin E_{i})G_ {i}, \tag{4}\]
where \(N\) is the number of planets orbiting around the central star, \(i\) represents the \(i_{\rm th}\) planet, \(E_{i}\) is the eccentric anomaly, \(e_{i}\) is the orbital eccentricity, \(A_{i}\), \(F_{i}\), \(B_{i}\) and \(G_{i}\) are Thiele-Innes constants, given as:
\[\begin{split} A_{i}=\alpha_{i}(\cos\Omega_{i}\cos\omega_{i}- \sin\Omega_{i}\sin\omega_{i}\cos I_{i}),\\ B_{i}=\alpha_{i}(\sin\Omega_{i}\cos\omega_{i}+\cos\Omega_{i}\sin \omega_{i}\cos I_{i}),\\ F_{i}=\alpha_{i}(-\cos\Omega_{i}\sin\omega_{i}-\sin\Omega_{i} \cos\omega_{i}\cos I_{i}),\\ G_{i}=\alpha_{i}(-\sin\Omega_{i}\sin\omega_{i}+\cos\Omega_{i} \cos\omega_{i}\cos I_{i}),\end{split} \tag{5}\]
where \(\alpha_{i}\) is the astrometric signature of the host star due to the reflex motion in the presence of the \(i_{\rm th}\) planet. \(\Omega\), \(\omega\) and \(I\) are the longitude of ascending node, arguments of pericenter and the orbital inclination of the planets, respectively.
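For reference, Eq. (5) translates directly into code:

```python
# Thiele-Innes constants of Eq. (5) for one planet, given the astrometric
# signature alpha and the angles Omega, omega, I (radians).
import numpy as np

def thiele_innes(alpha, Omega, omega, I):
    cO, sO = np.cos(Omega), np.sin(Omega)
    co, so = np.cos(omega), np.sin(omega)
    cI = np.cos(I)
    A = alpha * ( cO*co - sO*so*cI)
    B = alpha * ( sO*co + cO*so*cI)
    F = alpha * (-cO*so - sO*co*cI)
    G = alpha * (-sO*so + cO*co*cI)
    return A, B, F, G
```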
We search for the planetary signal and then fit the orbital parameters of the planets following the procedures described in Wu et al. (2016). Here we briefly describe the steps.
Step 1, ignore the planetary influence on the star and use the linear least squares method to fit the five stellar parameters \(x_{0}\), \(y_{0}\), \(\mu_{x}\), \(\mu_{y}\) and \(\pi\).
Step 2, remove the coordinate offsets, stellar proper motion and parallax from the data and search for periodic signals in the residuals using the Lomb-Scargle periodogram (Black & Scargle, 1982). We calculate the periodogram of the residuals in the \(x\) and \(y\) directions and record the most significant peak in each direction. Then we choose the peak with the smaller false alarm probability (FAP). If the peak has a FAP \(<10^{-4}\), we claim to have identified a planet signal, and the corresponding orbital period is adopted as \(P_{1}\).
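A possible implementation of this step with a standard Lomb-Scargle routine (here astropy's, as a stand-in for the Black & Scargle formulation cited above) is:

```python
# Sketch of Step 2: find the strongest periodic signal in one residual
# series and its false alarm probability.
import numpy as np
from astropy.timeseries import LombScargle

def strongest_peak(t, residual):
    ls = LombScargle(t, residual)
    freq, power = ls.autopower()
    k = np.argmax(power)
    fap = ls.false_alarm_probability(power[k])
    return 1.0 / freq[k], fap        # candidate period (yr) and its FAP

# Apply to both x and y residuals, keep the peak with the smaller FAP,
# and accept it as a planet signal only if FAP < 1e-4.
```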
Step 3, fit \(x_{0}\), \(y_{0}\), \(\mu_{x}\), \(\mu_{y}\), \(\pi\), \(P_{1}\), \(e_{1}\) and \(t_{01}\), where \(t_{01}\) is the perihelion moment of the planet. This is processed via the Levenberg-Marquardt (LM) algorithm (Marquardt, 1963) and a Markov Chain Monte Carlo (MCMC) fitting procedure. Given \(P_{1}\), \(e_{1}\) and \(t_{01}\), the terms \(\cos E_{1}-e_{1}\) and \(\sqrt{1-e_{1}^{2}}(\sin E_{1})\) can be determined. Then Equations 3 and 4 are easily inverted by linear least squares to yield the Thiele-Innes constants. The motion of the Sun produced by the planet is calculated using Equations 3 and 4. Together with the five stellar parameters, we can calculate the fitted projected motion of the star using Equations 1 and 2. We first fit \(x_{0}\), \(y_{0}\), \(\mu_{x}\), \(\mu_{y}\), \(\pi\), \(P_{1}\), \(e_{1}\) and \(t_{01}\) using the LM method. Initial values of \(x_{0}\), \(y_{0}\), \(\mu_{x}\), \(\mu_{y}\), \(\pi\) and \(P_{1}\) are given by Steps 1 and 2, while \(e_{1}\) is randomly chosen between 0 and 1, and \(t_{01}\) is randomly chosen between 0 and \(P_{1}\). The LM fitting process is repeated 100 times. Then we choose the best-fit parameters with the smallest reduced \(\chi^{2}\) as initial values for the following MCMC fitting procedure. We adopt the open-source Python package emcee (Goodman & Weare, 2010; Foreman-Mackey et al., 2013) to sample the parameter space and estimate the posterior distributions of the parameters. We run emcee with 64 walkers for \(20000+30000\times N\) iterations (\(N\) is the number of planets identified). The initial positions of the walkers are drawn from Gaussian distributions with median values given by the best-fit parameters of the LM fitting process and standard deviations of \(10^{-3}\), to accelerate the fitting process. We conduct autocorrelation analysis and find that all chains have converged in our fitting procedure.
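The skeleton of this MCMC stage might look as follows; `astrometric_model` and `best_lm_fit` are hypothetical placeholders for the model of Eqs. (1)-(4) and for the output of the LM stage, not actual functions of the paper.

```python
# Sketch of the Step-3 MCMC stage with emcee; log_prob returns -0.5*chi^2
# of the astrometric model against the data.
import numpy as np
import emcee

def log_prob(theta, t, x, y, sigma):
    # theta = (x0, y0, mux, muy, pi, P1, e1, t01)
    model_x, model_y = astrometric_model(theta, t)   # hypothetical helper
    return -0.5 * np.sum(((x - model_x)**2 + (y - model_y)**2) / sigma**2)

ndim, nwalkers, N = 8, 64, 1                 # one identified planet
p0 = best_lm_fit + 1e-3 * np.random.randn(nwalkers, ndim)  # LM best fit
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t, x, y, 10.0))
sampler.run_mcmc(p0, 20000 + 30000 * N)
```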
Step 4, remove the coordinate offsets, proper motion, parallax and stellar motion due to the planet identified in Step 2 using the best-fit parameters calculated in Step 3 from the original astrometric data. Then we continue to search for periodic signals in the new residuals. If there is one, then we fit the data with a two-planet reflex motion model.
Step 5, repeat Step 2 to Step 4 until no periodic signals are identified.
In our fitting procedure, we have a total of \(5+3\times N\) parameters to fit, since we have assumed Keplerian orbits for each planet, which largely reduces the number of parameters to be fitted and ensures the parameter precision at the same time. The semi-major axes of the planets can be obtained using Kepler's third law given the orbital periods of the planets, while the planetary masses are calculated via \(m_{i}a_{i}=m_{\odot}a_{\odot,i}\), where \(m_{\odot}\) is the solar mass (assumed to be precisely determined via other methods by the extraterrestrial intelligence, such as spectrometry or asteroseismology) and \(a_{\odot,i}\) is the semi-major axis of the Sun when orbiting around the barycenter determined by the Sun and the \(i_{\rm th}\) planet (which is obtained in Step 3). Readers are referred to Wu et al. (2016) for more details.
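In units where \(G=4\pi^{2}\) (AU, yr, \(M_{\odot}\)), this back-out step is one line each:

```python
# Recover planet properties from the fitted solar orbit: Kepler's third
# law gives a from P (for m_p << m_sun), and m_p * a_p = m_sun * a_sun.
def planet_from_fit(P_years, a_sun_AU, m_sun=1.0):
    a_planet = (m_sun * P_years**2) ** (1.0/3.0)   # AU
    m_planet = m_sun * a_sun_AU / a_planet          # solar masses
    return a_planet, m_planet

# e.g. a Jupiter-like fit: P ~ 11.86 yr, a_sun ~ 4.97e-3 AU
print(planet_from_fit(11.86, 4.97e-3))   # ~ (5.2 AU, ~9.6e-4 Msun)
```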
In Figure 1, we present the data residuals and power spectrum for each step of the fitting process in our fiducial simulations. After Step 1 and Step 2, the power spectrum of the data residuals (labeled as 1st O-C) exhibits a prominent peak at approximately 11.9 years, indicating the successful identification of Jupiter. We then move to Step 3 and Step 4; the updated data residuals (labeled as 2nd O-C) are also shown in Figure 1, with their power spectrum peaking around 28.8 years. This peak signifies the detection of Saturn. Continuing this iterative process, we repeated the aforementioned steps, ultimately leading to the identification of Neptune and Uranus. After the detection of the four giants, no peak with FAP\(<10^{-4}\) appears in the final residuals, suggesting that none of the smaller planets in the Solar System is detectable in our simulations.
Figure 1: Data residuals and power spectrum of the four giants after each fitting step. **Top:** The simulated astrometric data in the x (shown in red) and y (shown in blue) directions. **Left:** The data residuals after each fitting step. **Right:** The power spectrum of the data residuals shown on the left.
## 4 Results
### The characterization of the four giants in the solar system
The amplitude of the astrometric motion of the Sun produced by a planet with mass \(m_{p}\) and semi-major axis \(a\) observed by an observer with a distance of \(d\) is:
\[\alpha=3\left(\frac{m_{p}}{10\,m_{\oplus}}\right)\left(\frac{a}{1\,\mathrm{AU}} \right)\left(\frac{d}{10\,\mathrm{pc}}\right)^{-1}\,\,\mu as. \tag{6}\]
With an observing baseline of 170 years and observational error down to 10 \(\mu\)as, all four giants are successfully detected and characterized. This is expected, since the signal-to-noise ratios (SNRs, defined as \(\alpha/\sigma\), where \(\sigma\) is the observational error) of the four giants calculated using Equation 6 are far larger than 3, according to the detection criterion given by Wu et al. (2016). The other, smaller planets in the Solar system are hardly detectable because of their small SNRs. We show the posterior distributions for all fitted parameters in Figure 2. The first half of the iterations is discarded as burn-in. We find that the five stellar parameters (\(x_{0}\), \(y_{0}\), \(\mu_{x}\), \(\mu_{y}\) and \(\pi\)) and the orbital parameters of the inner three giants (Jupiter, Saturn and Uranus) are all well-constrained, with nearly Gaussian posteriors, indicating that the parameters converge well. For the outermost planet Neptune, the orbital eccentricity (\(e_{3}\)) and perihelion moment (\(t_{03}\)) are not well-constrained, since the planet completes only one orbit during the 170 years. The planetary masses and semi-major axes of the planets can be easily calculated using the best-fit parameters, as mentioned in Section 3. All four giants are well characterized, with relative fitting errors smaller than 1% for both the orbital period and the planet mass. The relative fitting error of parameter \(\theta\) is given as \(\epsilon_{\theta}=|\theta_{\mathrm{fit}}-\theta_{\mathrm{true}}|/\theta_{\mathrm{true}}\), where \(\theta_{\mathrm{fit}}\) is calculated as the median value of the posterior distribution of \(\theta\).
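For orientation, Equation 6 gives the following signatures for the four giants at the fiducial distance; the rounded masses and semi-major axes are assumed, standard values.

```python
# Astrometric signature (Eq. 6) and SNR at d = 10 pc with sigma = 10 uas.
def alpha_uas(m_earth, a_AU, d_pc):
    return 3.0 * (m_earth / 10.0) * (a_AU / 1.0) / (d_pc / 10.0)

giants = {"Jupiter": (317.8, 5.20), "Saturn": (95.2, 9.58),
          "Uranus": (14.5, 19.2), "Neptune": (17.1, 30.1)}
for name, (m, a) in giants.items():
    sig = alpha_uas(m, a, 10.0)
    print(f"{name}: alpha = {sig:6.1f} uas, SNR = {sig/10.0:4.1f}")
# All four SNRs come out well above 3; the Earth (1 M_earth, 1 AU)
# gives only 0.3 uas at 10 pc, consistent with the text.
```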
### The influence of observing baseline and observational error
We also investigated the detection of the four giants with different observational errors and observing baselines. In our fiducial simulations described in Section 2, we fixed the observing baseline to 170 years and the observational error to 10 \(\mu\)as. Now we gradually decrease the observing baseline from 170 years to 10 years, with a step of 20 years. To account for the detection of our Solar system by missions like Gaia (Perryman et al., 2014) and CHES (Ji et al., 2022), we further extend the observing baseline down to 4 years. We also considered different observational errors: 1 \(\mu\)as, 3 \(\mu\)as, 10 \(\mu\)as, 30 \(\mu\)as, 100 \(\mu\)as, 300 \(\mu\)as, 1000 \(\mu\)as, 3000 \(\mu\)as and 10000 \(\mu\)as. Other assumptions, such as the distance of the observer and the sampling cadence, remain the same. In the new simulations, we assume that the coordinate offsets, proper motion and parallax of our Sun are already well determined and carefully removed from the astrometric data by the extraterrestrial intelligence before the fitting process starts. This assumption largely reduces the computational time (we have conducted a small group of simulations including the coordinate offsets, proper motion and parallax, and we find that they have little influence on the characterization of the planets). Then we start the fitting procedure as described in Section 3, but now we skip Step 1.
We show the relative fitting errors of the planet mass as a function of the observing baseline and observational error for each of the four giants in Figure 3. Only planets with relative fitting errors of the orbital period smaller than 0.1 (\(\epsilon_{P}<0.1\)) are shown. For planets with \(\epsilon_{P}<0.1\), the planetary masses are mostly well fitted, with small relative fitting errors of the planet mass. There are several cases where planets are detected with \(\epsilon_{P}>0.1\); however, their planet masses are generally poorly fitted, with \(\epsilon_{m}>1\). Therefore, we claim a planet is well characterized if it has \(\epsilon_{P}<0.1\).
As we have pointed out in Wu et al. (2016), the detection of a planet using the astrometry method relies on the SNR of the planet and on the observing baseline. Here we show the contours of the SNRs in Figure 3. We find that all four giants can be successfully detected and well characterized as long as their SNRs \(>1\) and the observing baseline exceeds 90 years. In general, planets with SNRs \(>1\) and an observing baseline longer than about half an orbital period can be detected. However, this is not the case for Uranus, because the fitting of the orbital period of Uranus is largely influenced by that of Neptune, whose orbital period is about two times that of Uranus. Not until the orbital period of Neptune is successfully identified will Uranus be well characterized. There are exceptions where planets are detected with SNR \(<1\); however, these detections are generally not reliable.
Figure 2: Posterior distribution of \(x_{0}\), \(y_{0}\), \(\mu_{x}\), \(\mu_{y}\), \(\pi\), \(P_{1}\), \(e_{1}\), \(t_{0,1}\), \(P_{2}\), \(e_{2}\), \(t_{0,2}\), \(P_{3}\), \(e_{3}\), \(t_{0,3}\), \(P_{4}\), \(e_{4}\), \(t_{0,4}\). Four planets are detected by extraterrestrial intelligence located 10 pc away with an observing baseline of 170 years and observational precision of 10 \(\mu\)as.
If astrometric missions conducted by extraterrestrial civilizations are similar to Gaia or CHES, with typical observing baselines ranging from 5 to 10 years, the detectability of the giant planets in our simulations would be limited. Specifically, only Jupiter would be detectable under these circumstances. However, if Gaia were able to achieve an observational error down to 100 \(\mu\)as, it would be possible to characterize Jupiter with an accuracy of \(\epsilon_{m}<0.5\). Alternatively, if CHES, with an observational error down to 1 \(\mu\)as and an observational time of approximately 6 years, conducted the mission, Jupiter could be characterized with an accuracy of \(\epsilon_{m}<0.5\).
Figure 3: The relative fitting errors of the planet mass \(\epsilon_{m}\) as a function of observing baseline and observational error. Different colors represent different \(\epsilon_{m}\). Gray circles represent non-detections of planet signals or identified planets with large fitting errors of the orbital period (i.e. \(\epsilon_{P}>0.1\)). The gray dashed lines represent the contours of different SNRs.
### Which stars could detect the four giants in the solar system
We move forward to estimate how many neighbouring stars in the Galaxy could detect the four giants in our solar system. We identify 8707 stars from the Gaia Catalog of Nearby Stars (GCNS) (Gaia Collaboration et al., 2021) that lie within 30 pc of the Solar system. We calculate the SNR of each giant planet as observed by each star for different assumed observational errors. We find that all 8707 stars have the possibility to detect and well characterize the four giants if they could achieve an astrometric error down to 10 \(\mu\)as and observe the solar system for a sufficiently long time (such as 90 years). If the observational error is as large as 100 \(\mu\)as, only 183 neighbouring stars could detect all four giants, but all of them could detect Jupiter within 10 years. We also estimate the number of neighbouring stars that could detect our Earth. About 310 neighbouring stars located within 10 pc from our Sun have the potential to detect the Earth if the observational error is as small as 0.3 \(\mu\)as. With a larger observational error, such as 1 \(\mu\)as, only 8 stars located within 3 pc from the Sun could possibly detect the Earth.
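Schematically, such a census amounts to rescaling Equation 6 by each catalogue distance; the file and column name below are assumptions about a local copy of the GCNS, not its actual data model.

```python
# Sketch of the neighbourhood census: count stars for which even the
# weakest giant signature (Uranus) exceeds the assumed precision.
import pandas as pd

gcns = pd.read_csv("gcns.csv")              # hypothetical local GCNS table
d_pc = 1000.0 / gcns["parallax"]            # Gaia parallax in mas -> pc
nearby = d_pc[d_pc < 30.0]

sigma = 10.0                                # assumed precision, uas
alpha_uranus = 3.0 * (14.5/10.0) * 19.2 / (nearby / 10.0)   # Eq. (6)
print((alpha_uranus / sigma > 1.0).sum(), "stars reach SNR > 1 on Uranus")
```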
## 5 Conclusion
In this paper, we study the possibility that extraterrestrial intelligence could detect the planets in our solar system. We find that all four giants in our solar system could be detected and well characterized as long as they are observed for at least 90 years with SNR \(>1\). All 8707 stars lying within 30 pc of the solar system have the potential to detect the four giants within 100 years if they could achieve an observational precision down to 10 \(\mu\)as. If the astrometry method can achieve sub-\(\mu\)as precision, such as 0.3 \(\mu\)as, then even our Earth would be detectable by extraterrestrial intelligence.
In each of our simulations, we assume a constant observational error during the long observing baseline for simplicity. A more reasonable assumption may be an observational error that decreases as the observing baseline grows. Besides, the sampling cadence is fixed to 0.2 years in our simulations, which is hardly achievable in real observations. Finally, our simulations are truncated at 170 years, since a longer observing baseline requires longer computational time to fit the orbital parameters of the planets. However, we expect that a longer observing baseline would allow the detection of planets with larger observational errors. These issues should be further considered in future works.
This study primarily addresses the likelihood of extraterrestrial civilizations in the vicinity of our solar system detecting our own system. However, it is important to note that the existence of extraterrestrial life remains uncertain, and if such civilizations do exist in planetary systems similar to ours, their presence could be incredibly rare. Cumming et al. (2008) demonstrated that the occurrence rate of cold Jupiters around stars similar to the Sun is only 10%. Consequently, the chances of our solar system being in proximity to a significant population of extraterrestrial civilizations are currently considered to be very small.
## 6 Acknowledgements
This work is supported by the National Natural Science Foundation of China (NSFC) (grant No. 12103003), |
2309.09166 | Complete steady gradient Yamabe solitons with positive scalar curvature
are rotationally symmetric | In this paper, we solve the Yamabe soliton version of the Perelman
conjecture. We show that any nontrivial complete steady gradient Yamabe
solitons with positive scalar curvature are rotationally symmetric. | Shun Maeta | 2023-09-17T05:41:42Z | http://arxiv.org/abs/2309.09166v2 | # Complete Steady Gradient Yamabe Solitons with Positive Scalar Curvature Are Rotationally Symmetric
###### Abstract.
In this paper, we solve the Yamabe soliton version of the Perelman conjecture. We show that any nontrivial complete steady gradient Yamabe solitons with positive scalar curvature are rotationally symmetric.
Key words and phrases: steady gradient Yamabe solitons; Yamabe soliton version of the Perelman conjecture; rotationally symmetric. 2010 Mathematics Subject Classification: 53C21, 53C25, 53C20. The author is partially supported by the Grant-in-Aid for Scientific Research (C), No. 23K03107, Japan Society for the Promotion of Science.
**Remark 1.2**.: _It is known that any compact gradient Yamabe solitons are trivial ([9, 10]). The original Perelman conjecture [12] is that any \(3\)-dimensional complete noncompact \(\kappa\)-noncollapsed gradient steady Ricci soliton with positive curvature is rotationally symmetric, which was proven by S. Brendle [3]. However, in higher dimensions, it is not well understood (but see, for example, [4])._
## 2. Preliminary and the proof
An \(n\)-dimensional Riemannian manifold \((M^{n},g)\) is called a gradient Yamabe soliton if there exists a smooth function \(F\) on \(M\) and a constant \(\lambda\in\mathbb{R}\), such that \(\nabla\nabla F=(R-\lambda)g,\) where \(\nabla\nabla F\) is the Hessian of \(F\), and \(R\) is the scalar curvature on \(M\). If \(F\) is constant, \(M\) is called trivial. If \(\lambda>0\), \(\lambda=0\), or \(\lambda<0\), then the Yamabe soliton is called shrinking, steady, or expanding.
Tashiro's theorem ([14], see also [5, 6, 11]) is used for proving Theorem 1.1.
**Theorem 2.1** ([14]).: _A Riemannian manifold \((M^{n},g)\) admitting smooth functions \(F\) and \(\varphi\) on \(M\) with \(\nabla\nabla F=\varphi g\) is either \((1)\) compact and rotationally symmetric, or \((2)\) rotationally symmetric and equal to the warped product \(([0,\infty),dr^{2})\times_{|\nabla F|}(\mathbb{S}^{n-1},\bar{g}_{S})\), where \(\bar{g}_{S}\) is the round metric on \(\mathbb{S}^{n-1}\), or \((3)\) the warped product \((\mathbb{R},dr^{2})\times_{|\nabla F|}(N^{n-1},\bar{g})\,,\) where the scalar curvature \(\bar{R}\) of \(N\) satisfies_
\[|\nabla F|^{2}R=\bar{R}-(n-1)(n-2)\varphi^{2}-2(n-1)g(\nabla F,\nabla\varphi). \tag{2.1}\]
**Remark 2.2**.: _The potential function \(F\) depends only on \(r\), and \(F^{\prime}(r)>0\) (see, for example, the proof of Theorem \(1.1\) of [11]). The manifold \((M,g,F,\varphi)\) satisfying the condition \(\nabla\nabla F=\varphi g\) was also studied by Cheeger and Colding [7]._
As pointed out in [5], it follows as a corollary of Theorem 2.1 that any nontrivial complete gradient Yamabe solitons with positive Ricci curvature are rotationally symmetric.
Proof of Theorem 1.1.: To show rotational symmetry of \(M\), we only have to consider \((3)\) of Theorem 2.1. By the soliton equation, Remark 2.2 and (2.1), one has
\[\rho^{\prime}\rho^{2}+(n-1)(n-2)\rho^{\prime 2}+2(n-1)\rho\rho^{\prime\prime}= \bar{R}, \tag{2.2}\]
where \(\rho=F^{\prime}\). Since the left hand side of (2.2) depends only on \(r\), the scalar curvature \(\bar{R}\) of \(N\) is constant. Positivity of the scalar curvature shows that \(\bar{R}>0\) (which was shown in [5]). In fact, if \(\bar{R}\) is nonpositive, then \(\rho^{\prime\prime}\) is nonpositive, hence the positive function \(\rho\) is concave. Therefore, \(\rho\) is constant, which cannot happen. Since \(\rho^{\prime}>0\), \(\rho\) is monotone
increasing. Furthermore, one can show that \(\rho\) goes to infinity. Assume that \(\rho\) is bounded from above, that is, \(F^{\prime}=\rho\leq c\) for some positive constant \(c\). Then, the convex function \(F\) satisfies \(F\leq cr+b\) on \(\mathbb{R}\), which cannot happen.
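For the reader's convenience, the substitution behind (2.2) can be spelled out. In case \((3)\), \(F=F(r)\) with \(F^{\prime\prime}=\varphi\); on a steady soliton (\(\lambda=0\)) one has \(\varphi=R\), hence \(R=\rho^{\prime}\) and \(g(\nabla F,\nabla\varphi)=F^{\prime}\varphi^{\prime}=\rho\rho^{\prime\prime}\). Substituting into (2.1),

\[\rho^{2}\rho^{\prime}=\bar{R}-(n-1)(n-2)\rho^{\prime 2}-2(n-1)\rho\rho^{\prime\prime},\]

which rearranges to (2.2).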
The equation (2.2) is an autonomous second order equation and can be made into a first order equation by using \(\rho\) as a new independent variable. If \(\rho^{\prime}=G(\rho)\), then \(\rho^{\prime\prime}=\dot{G}G\), and one has
\[G\rho^{2}+(n-1)(n-2)G^{2}+2(n-1)\rho\dot{G}G=\bar{R}. \tag{2.3}\]
By differentiating the equation, one has
\[\dot{G}\rho^{2}+2\rho G+2(n-1)^{2}\dot{G}G+2(n-1)\rho\ddot{G}G+2(n-1)\rho\dot{ G}^{2}=0. \tag{2.4}\]
Assume that \(\dot{G}>0\) at some point \(\rho_{0}\in(0,+\infty)\), that is, \(\dot{G}>0\) on some open interval \(\Omega=(\rho_{1},\rho_{2})\ni\rho_{0}\). If \(\Omega=(\rho_{1},+\infty)\), then by (2.3), \(G\rho^{2}+(n-1)(n-2)G^{2}<\bar{R}\) on \((\rho_{1},+\infty).\) However, the left hand side goes to infinity as \(\rho\nearrow+\infty\), which cannot happen. Thus, one can assume that \(\dot{G}=0\) at \(\rho_{2}\). Then, by (2.4), \(2(n-1)\rho_{2}\ddot{G}G+2\rho_{2}G=0\) at \(\rho_{2}\). Hence, \(\ddot{G}<0\) at \(\rho_{2}\), and \(G\) is monotone decreasing on \((\rho_{2},\rho_{3})\) for some \(\rho_{3}\). Iterating the same argument, one can extend \(\rho_{3}\) to \(+\infty\), that is, \(G\) is monotone decreasing on \((\rho_{2},+\infty)\). Hence, if there exists such an open interval \(\Omega\), it must be \((0,\rho_{2})\). Therefore, \(G\) has a maximum, say \(C(=G(\rho_{2})>0)\), that is, \(F^{\prime\prime}=G\leq C\). Thus, one has \(0<F^{\prime}\leq Cr+D\) on \(\mathbb{R}\), which cannot happen. We finally obtain \(\Omega=\emptyset\).
Therefore, one has \(0\geq\dot{G}=\frac{\rho^{\prime\prime}(r)}{\rho^{\prime}(r)}\) for every \(r\in\mathbb{R}\), and \(\rho^{\prime\prime}\leq 0\) on \(\mathbb{R}\). Since the positive smooth function \(\rho\) is concave on \(\mathbb{R}\), \(\rho\) is constant, which cannot happen.
By the same argument, we also obtain a similar result for shrinking solitons:
**Theorem 2.3**.: _Any nontrivial complete shrinking gradient Yamabe solitons with \(R>\lambda\) are rotationally symmetric._
**Remark 2.4**.: _The assumption \(R>\lambda\) is optimal. In fact, for any \(b\in\mathbb{R}\) and any Riemannian manifold \((N,\bar{g}_{N})\) with constant positive scalar curvature \(\bar{R}\), \((\mathbb{R}\times N,dr^{2}+\frac{\bar{R}}{\lambda}\,\bar{g}_{N},\sqrt{\frac{ \bar{R}}{\lambda}}\,r+b)\) is a nontrivial shrinking gradient Yamabe soliton with \(R=\lambda\)._
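One can check the example directly: for \(F=\sqrt{\bar{R}/\lambda}\,r+b\) on the product \(\mathbb{R}\times N\), the Hessian is \(\nabla\nabla F=F^{\prime\prime}(r)\,dr^{2}=0\), while rescaling \(\bar{g}_{N}\) by the constant \(\bar{R}/\lambda\) rescales the scalar curvature of \(N\) by \(\lambda/\bar{R}\), so that

\[R=\frac{\lambda}{\bar{R}}\,\bar{R}=\lambda,\qquad\text{and hence}\qquad\nabla\nabla F=0=(R-\lambda)g,\]

i.e. the soliton equation holds with \(R=\lambda\), as claimed.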
**Acknowledgements.**
The author would like to express his gratitude to Ken Shirakawa for valuable discussions.
**Data availability statement**
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
**Conflict of interest**
There is no conflict of interest in the manuscript.
|
2302.14523 | Automatic Heteronym Resolution Pipeline Using RAD-TTS Aligners | Grapheme-to-phoneme (G2P) transduction is part of the standard text-to-speech
(TTS) pipeline. However, G2P conversion is difficult for languages that contain
heteronyms -- words that have one spelling but can be pronounced in multiple
ways. G2P datasets with annotated heteronyms are limited in size and expensive
to create, as human labeling remains the primary method for heteronym
disambiguation. We propose a RAD-TTS Aligner-based pipeline to automatically
disambiguate heteronyms in datasets that contain both audio with text
transcripts. The best pronunciation can be chosen by generating all possible
candidates for each heteronym and scoring them with an Aligner model. The
resulting labels can be used to create training datasets for use in both
multi-stage and end-to-end G2P systems. | Jocelyn Huang, Evelina Bakhturina, Oktai Tatanov | 2023-02-28T12:33:12Z | http://arxiv.org/abs/2302.14523v1 | # Automatic Heteronym Resolution Pipeline Using RAD-TTS Aligners
###### Abstract
Grapheme-to-phoneme (G2P) transduction is part of the standard text-to-speech (TTS) pipeline. However, G2P conversion is difficult for languages that contain heteronyms - words that have one spelling but can be pronounced in multiple ways. G2P datasets with annotated heteronyms are limited in size and expensive to create, as human labeling remains the primary method for heteronym disambiguation. We propose a RAD-TTS Aligner-based pipeline to automatically disambiguate heteronyms in datasets that contain both audio with text transcripts. The best pronunciation can be chosen by generating all possible candidates for each heteronym and scoring them with an Aligner model. The resulting labels can be used to create training datasets for use in both multi-stage and end-to-end G2P systems.
Jocelyn Huang\({}^{1}\), Evelina Bakhturina\({}^{1}\), Oktai Tatanov\({}^{2}\)\({}^{\dagger}\)
\({}^{1}\)NVIDIA, \({}^{2}\)AXB Research
[email protected], [email protected]
Footnote †: Work done while at NVIDIA.
**Index Terms**: grapheme-to-phoneme, text-to-speech, heteronym disambiguation
## 1 Introduction
Modern text-to-speech (TTS) models can learn pronunciations from raw text input and its corresponding audio data, but in languages such as English, phonemes provide more precise pronunciation information than graphemes. As a result, many TTS systems use phonemic input during training to directly access and correct pronunciations for new vocabulary at inference time. One of the hardest problems for grapheme-to-phoneme (G2P) systems is the resolution of heteronyms, i.e., words that have a single spelling but different pronunciations. For example, _"read"_ in _"I will read the book"_ vs. _"She read her project last week"_. Some heteronyms, such as _"bass"_, have multiple pronunciations with the same part of speech, and they need to be disambiguated based on semantic context.
In this work, we focus on the heteronym disambiguation task and propose a pipeline for labeling heteronyms in training data for both multi-stage and end-to-end (E2E) G2P models. Some multi-stage G2P systems [1, 2] use a set of rules for heteronym disambiguation, but high-quality rule-based systems require expert knowledge and are difficult to scale and maintain. An alternative machine learning approach for heteronym disambiguation is to treat this task as a part-of-speech tagging or a classification problem [3, 4]. Emerging E2E G2P systems use sentence-level training data [5, 6] and aim to handle out-of-vocabulary (OOV) and heteronyms in a single pass. Neural multi-stage and E2E solutions for heteronym disambiguation require labeled data where heteronyms appear in context, but unfortunately, there is a dearth of such data.
Due to the domain expertise required for labeling phonemes, G2P datasets are few and far between. In datasets like TIMIT [7] and The Buckeye Speech Corpus [8], phoneme transcriptions of audio are provided along with grapheme transcriptions. In TIMIT, transcriptions were human-verified, but the number of unique sentences is too small to train a G2P model. The Buckeye Speech Corpus consists of around 26 hours of conversational speech that was transcribed and phonemically labeled. Since the phoneme labels were automatically generated from the audio, the labels are noisy and sometimes contain alignment errors despite some corrections made by human research assistants, which makes the dataset less reliable for G2P training.
To our knowledge, the Wikipedia Homograph Data [4] (WikiHomograph) is the only open-source dataset with a sufficient number of samples to train a neural model for heteronym disambiguation. WikiHomograph is a text-only dataset where each sample is an entire sentence with a labeled heteronym. Unfortunately, this dataset does not contain a comprehensive list of English homographs. Moreover, some pronunciations in the WikiHomograph set of heteronyms are significantly underrepresented, leading to class imbalance [9]. For example, the corpus contains multiple sentences with the noun form of the heteronyms "desert", "addict" and "subject" and no samples with the verb forms. The WikiHomograph dataset was annotated by linguists, and manual annotation remains the mainstream method of data creation. In addition, some preprocessing is required to train an E2E G2P model on the WikiHomograph dataset, as only the target homograph is labeled in each example sentence. [6] uses CMUdict [10] to label known words while dropping sentences with OOV words.
As a heteronym data augmentation technique, Nishiyama et al. [11] introduced a method to match each sense of a heteronym to a synonymous word with a unique pronunciation and to substitute the heteronym for its synonym in a text corpus. This method requires a large textual database for queries, as well as expert knowledge and evaluators to confirm that the resulting sentences are correct. As the method was applied to Japanese heteronyms, there is no available data for English. Other relevant methods of heteronym resolution and verification include morphological rewriting rules [12] and context-dependent phone-based HMMs that use acoustic features [13]. [14] skips the phoneme representation altogether, passing graphemes directly into a language model. These approaches provide broader context for the present work.
We propose an automatic heteronym disambiguation approach that can generate examples for underrepresented or missing heteronym forms. Our proposed pipeline annotates speech data with heteronym phoneme labels automatically. The
labeled sentences can then be used in conjunction with dictionary lookups for unambiguous known words and "\(<\)unk\(>\)" tokens for OOV words to create training data for neural G2P or heteronym classification models without human labeling. To get target phonetic labels for heteronyms, we train the RAD-TTS Aligner [15] on transcribed audio data. Then we use the Aligner to score possible heteronym pronunciation options and choose the one that matches the corresponding audio best. To evaluate the quality of the generated data, we train a BERT-based classification model and an E2E ByT5 G2P model. The results show that the proposed data augmentation technique improves heteronym disambiguation accuracy for both models. We release code\({}^{1}\) and all aligner-generated and hand-checked data for heteronym disambiguation model training.
Footnote 1: [https://github.com/NVIDIA/NeMo](https://github.com/NVIDIA/NeMo)
## 2 Heteronym resolution pipeline
We propose using a RAD-TTS Aligner [15] model to automatically select correct heteronym forms. The RAD-TTS Aligner [15] is a speech-to-text alignment model based on the alignment mechanism introduced in RAD-TTS [16], which allows for easy visualization and human-understandable scores when comparing candidate pronunciations. The Aligner takes a mix of graphemes and phonemes as input: phonemes for known unambiguous words and graphemes for ambiguous or OOV words. It learns to align text tokens and audio frame encodings using the \(L_{2}\) distance between the representations, generating a soft alignment that can be converted to a hard alignment using the Viterbi algorithm.
These hard alignments between text tokens and audio frames can be used in tandem with the predicted \(L_{2}\) distance matrix in order to determine the distances between a token encoding and each of its corresponding audio frames' encodings. Thus, given a word \(T\) consisting of \(N\) input tokens \(t_{1},...,t_{N}\), where token \(t_{i}\) has been aligned with \(M_{i}\) audio frames \(a_{1}^{(i)},...,a_{M_{i}}^{(i)}\) out of audio \(A\), the average distance, \(D_{avg}\), between a word and the audio can be found as:
\[D_{avg}\big(T,A\big)=\frac{\sum\limits_{i=1}^{N}\sum\limits_{j=1}^{M_{i}}L_{2}\big(\mathrm{enc}(t_{i}),\,\mathrm{enc}(a_{j}^{(i)})\big)}{\sum\limits_{i=1}^{N}M_{i}} \tag{1}\]
In essence, the average distance between a word and its acoustic form is a sum of distances between its constituent tokens and their aligned audio frames, divided by the number of audio frames corresponding to the word. We can use these distances to disambiguate heteronyms with an audio sample.
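Concretely, this scoring can be sketched in a few lines of NumPy; the array shapes and helper names below are our own illustration, not the authors' released implementation.

```python
import numpy as np

def average_distance(dist: np.ndarray, alignment: np.ndarray) -> float:
    """Average token-to-frame L2 distance for one word (Eqn. 1).

    dist:      (N, M) pairwise L2 distances between the N token encodings
               and the M audio-frame encodings.
    alignment: (N, M) binary hard (Viterbi) alignment; entry (i, j) is 1
               iff audio frame j is aligned to text token i.
    """
    return float((dist * alignment).sum() / alignment.sum())

def pick_pronunciation(candidates):
    """candidates: list of (phonemes, dist, alignment) triples, one per
    candidate form of the heteronym; returns the best-matching phonemes."""
    return min(candidates, key=lambda c: average_distance(c[1], c[2]))[0]
```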
Figure 1 shows the proposed automatic phoneme-labeling process for generating sentences with disambiguated heteronyms for sentence-level G2P model training. We first convert known unambiguous words to their phonetic pronunciations with dictionary lookups. This work uses the CMUdict training split defined in [17]. OOV words are left as graphemes. Next, we generate multiple candidates by substituting the heteronym with each possible phonemic form in the dictionary.
Figure 1: Data labeling pipeline for sentence-level G2P model training includes the following steps: 1) Input text. 2) Replace known unambiguous words with phoneme forms from the dictionary. 3) For sentences with heteronyms: generate sentences with all possible heteronym forms. 4) Score candidate pronunciations with context using the Aligner. 5) Select a sentence with the minimum score. 6) Mask remaining OOV words.
Figure 2: A comparison between the L2 distance matrices between the aligned text and audio embeddings when disambiguating the word “read” from the entry: “... and therefore far pleasanter and easier to read”. Values shown correspond to the audio frames that were aligned with each text token, and the average distance is taken across this diagonal to find the overall score for a given pronunciation; the rest of the values are disregarded. The average embedding distances for /ɹɛd/ and /ɹid/ are 452.9 and 403.3, respectively. The latter would be picked, as it is closer to the audio embedding across the aligned frames.
Then, we pass each candidate along with the corresponding audio file through a trained Aligner model to automatically label heteronyms by picking the pronunciation whose phoneme encodings are closer on average to the audio encodings, i.e., the smallest \(D_{avg}\). Figure 2 shows an example of the alignments and distances for two potential pronunciations of _"read"_ from an entry that ends _"and therefore far pleasanter and easier to read."_ Using this method, we can disambiguate all known heteronyms in our speech dataset. Finally, we mask out OOV words with a special masking token, "\(<\)unk\(>\)", and force the G2P model to produce the same masking token as a phonetic representation during training. During inference, the model generates phoneme predictions for OOV words without emitting the masking token as long as this token is not included in the grapheme input.
To control the quality of the disambiguated data, we propose thresholding with a confidence score that represents how much closer the best candidate pronunciation is to the audio than the worst. Specifically, the score is the difference between the highest and lowest L2 distances over all candidates, divided by the average of the highest and lowest L2 distances. For the example in Figure 2, this would be \((452.9-403.3)/\big((452.9+403.3)/2\big)=0.116\). The higher the score, the more likely it is for the disambiguation to be correct. We can then remove any samples whose disambiguations have confidence scores lower than the desired threshold. Once heteronym disambiguations have been performed, the sentences can be converted to phonemes for use in sentence-level G2P training. As before, we use a dictionary lookup for known unambiguous words, and we can now replace heteronyms with the disambiguated phoneme forms. Samples with OOV words can either be dropped, or OOV labels can be replaced with an "\(<\)unk\(>\)" token for training.
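As a minimal sketch (variable names are ours), the confidence score and the threshold filter can be written as:

```python
def confidence_score(avg_dists):
    """Normalized spread between the best and worst candidate distances."""
    lo, hi = min(avg_dists), max(avg_dists)
    return (hi - lo) / ((hi + lo) / 2)

def keep_sample(avg_dists, threshold=0.0001):
    # threshold=0.0001 corresponds to the 0.01% setting used below
    return confidence_score(avg_dists) >= threshold

# Figure 2 example: confidence_score([452.9, 403.3]) ≈ 0.116
```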
## 3 Aligner training and dataset generation
We use the LJSpeech [18] and Hi-Fi TTS [19] (speakers 9017 and 12787) datasets to generate G2P data with disambiguated heteronyms, and train one Aligner model per speaker. Speaker 9017's data contains 57.8 hours and its Aligner model was trained for 250 epochs; speaker 12787's data contains 27.1 hours and its Aligner model was trained for 400 epochs; the LJSpeech model was trained for 1000 epochs on 22.8 hours of data. All models were trained on a single RTX 8000 GPU using the Adam optimizer, a learning rate of 0.001, and a weight decay of 1e-6. A Cosine Annealing scheduler was used, with a minimum learning rate of 5e-5 and a warmup ratio of 0.35.
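For reference, a minimal PyTorch sketch of this optimization recipe follows; the warmup-plus-cosine behavior is approximated here with a LambdaLR (the actual training used NeMo's scheduler), and `max_steps` must be supplied by the training loop.

```python
import math
import torch

def configure_optimization(model, max_steps, lr=1e-3, min_lr=5e-5,
                           weight_decay=1e-6, warmup_ratio=0.35):
    opt = torch.optim.Adam(model.parameters(), lr=lr,
                           weight_decay=weight_decay)
    warmup = int(warmup_ratio * max_steps)

    def scale(step):  # multiplier applied to the base learning rate
        if step < warmup:                           # linear warmup
            return (step + 1) / warmup
        t = (step - warmup) / max(1, max_steps - warmup)
        cos = 0.5 * (1.0 + math.cos(math.pi * t))   # cosine annealing
        return (min_lr + (lr - min_lr) * cos) / lr  # decay to min_lr

    # call sched.step() once per training step
    sched = torch.optim.lr_scheduler.LambdaLR(opt, scale)
    return opt, sched
```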
For disambiguation, sentences without heteronyms were discarded. The Aligner-disambiguated training sets of speakers 9017, 12787, and LJSpeech were compiled into the **Aug** set. We also created subsets of the data by filtering out samples where the Aligner confidence score was below a threshold value: **Aug-0.01%** consists of samples with a confidence score of at least 0.01%, and similarly for thresholds of 0.02% and 0.03%. For each augmented subset, we created a "balanced" version that aims to equalize the number of occurrences of each heteronym form in the combined WikiHomograph and Aug training data to mitigate model bias (Table 1).
## 4 Evaluation
To assess the quality of heteronym resolution with the Aligner model, we hand-checked sentences from the LJSpeech dev set, which contains 26 heteronyms. The LJSpeech Aligner model chose the grammatically correct candidate 23 times. However, two of the grammatically incorrect selections accurately reflected the pronunciation of the speaker. We also performed a limited human evaluation of the heteronym labels derived from the Hi-Fi TTS speaker 9017 model for _"read"_ and _"subject"_. Out of 179 occurrences of the word _"read"_ (87 /ɹid/, 92 /ɹɛd/), the Aligner model picked the correct form 176 times (an accuracy of 98.3%), with only three errors. However, it performs poorly on the heteronym _"subject"_, which has two forms that sound similar: /səbˈdʒɛkt/ and /ˈsʌbdʒɪkt/. We conclude that the Aligner model is highly dependent on the enunciation and pronunciation of the speaker, and is prone to error if the audio is noisy or if the speaker mispronounces a heteronym. It also tends to have trouble with heteronyms whose forms sound similar, but this can be mitigated by confidence thresholding, as seen in Table 2.
We also manually verified heteronyms from the dev and test sets of the selected Hi-Fi TTS speakers. We then combined these samples with some proprietary sentences to create a test set that covers most of the heteronym forms missing from the evaluation set of the WikiHomograph dataset. This dataset (hereafter **Hard-eval**) contains 195 sentences and is used to evaluate the effect of the Aug data on the G2P models' performance.
To perform automatic quality estimation, we train a token classification BERT-based [20] heteronym disambiguation model on the WikiHomograph dataset. The model takes a sentence as input, and then for every word, it selects a heteronym option out of the available dictionary forms. The model handles multiple heteronyms simultaneously. We mask irrelevant forms to disregard the model's predictions for non-ambiguous words. E.g., given the input "The Poems are simple to read and easy to comprehend.", the model scores the possible 'read present' and 'read past' options for the word "read". We fine-tuned our model from the pre-trained "bert-base-cased"\({}^{2}\) model for ten epochs on a 16GB GPU with batch size 32, the AdamW optimizer, a learning rate of 5e-5, a WarmupAnnealing scheduler, and a weight decay of 0.01.
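A sketch of this masked scoring with the Transformers library is shown below; the label inventory, the single-subtoken assumption for the heteronym position, and `NUM_FORMS` are illustrative placeholders rather than the paper's exact setup.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

NUM_FORMS = 2  # hypothetical: e.g. 0 = 'read present', 1 = 'read past'
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=NUM_FORMS)

def pick_form(sentence, het_pos, candidate_ids):
    """Score only the dictionary forms of the heteronym at subtoken
    position het_pos; classes of other heteronyms are masked out."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0, het_pos]   # (NUM_FORMS,)
    mask = torch.full_like(logits, float("-inf"))
    mask[candidate_ids] = 0.0                      # keep relevant forms
    return int((logits + mask).argmax())
```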
| **Threshold** | **0.00%** | **0.01%** | **0.02%** | **0.03%** |
|---|---|---|---|---|
| Num samples (bal) | 1230 | 794 | 620 | 572 |
| Num samples (non bal) | 3883 | 2939 | 2286 | 1805 |

Table 1: Number of aligner-generated samples added depending on the confidence threshold values and balancing strategy.
| **“Subject” Eval** | **/səbˈdʒɛkt/ (v.) TP** | **/səbˈdʒɛkt/ (v.) FP** | **/ˈsʌbdʒɪkt/ (adj./n.) TP** | **/ˈsʌbdʒɪkt/ (adj./n.) FP** | **Total** |
|---|---|---|---|---|---|
| Threshold: 0.00% | 1 | 30 | 48 | 0 | 79 |
| Threshold: 0.01% | 1 | 5 | 25 | 0 | 31 |
| Threshold: 0.02% | 1 | 1 | 13 | 0 | 15 |
| Threshold: 0.03% | 0 | 0 | 4 | 0 | 4 |

Table 2: True positives and false positives of each pronunciation of “subject” as predicted by the speaker 9017 Aligner with various confidence thresholds.
Table 3 summarizes experiments with the BERT classification model trained on WikiHomograph data and various amounts of Aligner-generated data. The results are averages of 3 runs. The highest accuracy was achieved on the WikiHomograph and Hard-eval sets with "non-balanced 0.02" aligner data augmentation: 99.07% and 91.04%, respectively. Performance with balanced augmentation is more consistent on the WikiHomograph set (99+%) and slightly below the best result. Non-balanced data augmentation leads to better results on the Hard-eval set than balanced augmentation, 90+% vs. about 89%. We hypothesize that this is because the augmented data provides more non-Wikipedia examples with a vocabulary closer to the Hard-eval set. A confidence threshold of at least 0.01% is recommended, as it ensures higher quality of the augmented data; note the performance drop from 86.64% to 83.02% when no thresholding is used. The heteronym disambiguation task has a low tolerance for errors, as these errors propagate down the text-to-speech pipeline. Using higher Aligner threshold values reduces the number of augmented samples but assures a higher quality of the training data.
To check the validity of our sentence-level labeling pipeline on E2E G2P models, we follow [5] and [17] and train a sentence-level ByT5 G2P model. The training data for our E2E G2P model consists of CMUdict [10] and WikiHomograph with various amounts of Aligner-augmented data. We used the same CMUdict split proposed in [17] for labeling known words and the "\(<\)unk\(>\)" token for OOV words. We fine-tuned our model from the pre-trained "google/byt5-small" model for five epochs on eight 16GB GPUs with batch size 8, the AdamW optimizer, a learning rate of 1e-3, a WarmupAnnealing scheduler, and a weight decay of 0.01. Experiments with the E2E ByT5 model (Table 4) confirm the positive effect of the data augmentation while keeping the phoneme error rate (PER) on the CMUdict test set nearly the same. PER measures the generation capabilities of E2E G2P models.
## 5 Conclusions
In this paper, we propose a data augmentation method that can automatically disambiguate heteronyms to generate data for sentence-level G2P model training. This data labeling technique can be used to balance out existing heteronym forms in gold standard data, add new heteronyms without manual labeling, or simply create more training data as labeled heteronym data is scarce. The proposed method is also controllable using confidence threshold filtering, depending on whether a particular application may need more data with potentially lower quality, or high confidence labels at the cost of the number of samples generated. Additionally, we introduce a masking token that opens the door to sentence-level G2P model training without human annotation. We show through human evaluation and experimentation that the resulting automatically-generated data improves the performance of both BERT classification and E2E G2P systems. We hope that this method will help to remedy this lack of data both for more robust training and for more informative evaluation.
|
2309.16046 | Confidence and second-order errors in cortical circuits | Minimization of cortical prediction errors has been considered a key
computational goal of the cerebral cortex underlying perception, action and
learning. However, it is still unclear how the cortex should form and use
information about uncertainty in this process. Here, we formally derive neural
dynamics that minimize prediction errors under the assumption that cortical
areas must not only predict the activity in other areas and sensory streams but
also jointly project their confidence (inverse expected uncertainty) in their
predictions. In the resulting neuronal dynamics, the integration of bottom-up
and top-down cortical streams is dynamically modulated based on confidence in
accordance with the Bayesian principle. Moreover, the theory predicts the
existence of cortical second-order errors, comparing confidence and actual
performance. These errors are propagated through the cortical hierarchy
alongside classical prediction errors and are used to learn the weights of
synapses responsible for formulating confidence. We propose a detailed mapping
of the theory to cortical circuitry, discuss entailed functional
interpretations and provide potential directions for experimental work. | Arno Granier, Mihai A. Petrovici, Walter Senn, Katharina A. Wilmes | 2023-09-27T21:58:18Z | http://arxiv.org/abs/2309.16046v3 | # Precision estimation and second-order prediction errors in cortical circuits
###### Abstract
Minimization of cortical prediction errors is believed to be a key canonical computation of the cerebral cortex underlying perception, action and learning. However, it is still unclear how the cortex should form and use knowledge about uncertainty in this process of prediction error minimization. Here we derive neural dynamics minimizing prediction errors under the assumption that cortical areas must not only predict the activity in other areas and sensory streams, but also jointly estimate the precision of their predictions. This leads to a dynamic modulatory balancing of cortical streams based on context-dependent precision estimates. Moreover, the theory predicts the existence of second-order prediction errors, i.e. errors on precision estimates, computed and propagated through the cortical hierarchy alongside classical prediction errors. These second-order errors are used to learn weights of synapses responsible for precision estimation through an error-correcting synaptic learning rule. Finally, we propose a mapping of the theory to cortical circuitry.
## Introduction
The cerebral cortex has been described as an organ of prediction, where cortical areas attempt to predict the activity in other areas or sensory streams. The computational goal of the cortex would then be to minimize differences between these predictions and actual activity--prediction errors. Neural computations realizing this goal have been proposed as canonical cortical computations [1, 2, 3, 4, 5] and as mechanisms supporting the emergence of cognition [6, 7]. Additionally, adopting a probabilistic or Bayesian framework for cortical processing, where uncertainty is taken into account, has proven useful [8, 9]. To harness the power of the probabilistic framework, predictions made by cortical areas should not simply be single potential representations in the target area but rather distributions over the space of potential representations.
In that case, normative theories based on variants of maximum likelihood estimation suggest that cortical prediction errors should be weighted by the precision (reliability, inverse uncertainty) of the predictive distributions. Humans and other animals have indeed been shown to weight prior knowledge and data from multiple modalities by their relative precision during perceptual integration [10, 11], decision-making [12] and sensorimotor control [13, 14], even when the precision changes dynamically [15, 16, 17, 18]. This modulatory weighting of prediction errors has gained a central place in the branch of cognitive sciences based on predictive coding [19, 20, 21], most notably in models of attention [22, 23, 24, 25] and in neuropsychiatric [26, 27, 28, 29, 30]. Potential implementations in the cerebral cortex have been discussed, notably in cortico-pulvinar loops [31] or more generally through the action of neuromodulation [32, 33, 34, 35]. However, a neurally plausible theoretical formalization of learned and context-dependent prediction error modulation is still missing.
In this work we start with the idea that top-down cortico-cortical gain modulation implements a form of precision weighting of prediction errors [36]. To formalize this idea, we introduce precision estimates computed as a function of current higher-level representations. This is in line with the rare cases where precision
was defined as a function of current neuronal activity [22, 31], and in contrast with the majority of literature which formally defines precision as a parameter of the model (e.g. [2, 37, 38]). With this formulation, precision estimates can have a fast, dynamic and context-dependent influence on neural dynamics, while parameters of the precision estimation function, encoded in synaptic weights, slowly integrate statistics of the environment.
We then derive neural dynamics of predictive coding with this additional ingredient. In the resulting neuronal dynamics, the relative importance accorded to bottom-up and top-down cortical streams is dynamically adapted based on estimated precision, in line with Bayes-optimal computation. Additionally, the estimated precision does not have a purely modulatory influence, as the (additive) correction of second-order errors (i.e. errors on the precision estimates) also plays a major role. Moreover, we show that the natural way for a cortical area to learn to estimate the precision of its predictions is through a local synaptic learning rule correcting for postsynaptic second-order errors. Finally, we propose a mapping of our dynamics to cortical circuitry that is consistent with the known laminar target pattern of feedforward and feedback cortico-cortical connections and neural responses of specific cortical cell types.
## Results
### An energy for cortical function
We propose that one of the goals of the cerebral cortex is to infer a set of latent representations \(\mathbf{z}\) coherent with an internal model \(p(\mathbf{x},\mathbf{z}|\mathbf{\theta})\) for the current observation \(\mathbf{x}\) coming from input streams (e.g. sensory information, thalamic activity, etc.). Following the organization of the cortex into specialized areas, we decompose latent representations \(\mathbf{z}\) into representations \(\mathbf{u_{1}},\dots,\mathbf{u_{n}}\) corresponding to the membrane potentials of neuronal populations in \(n\) areas, and denote \(\mathbf{u_{0}}\) the observation \(\mathbf{x}\) (see Fig. 1a). For example, a part of the observation \(\mathbf{u_{0}}\) might be encoded in the activity of cells in the retina or the lateral geniculate nucleus (LGN), and latent cortical representations \(\mathbf{u_{1}},\dots,\mathbf{u_{n}}\) might then encode local orientation (V1), shape (IT), color (V4), motion (MT), etc. On longer timescales the cortex should learn parameters \(\mathbf{\theta}\) of the internal model, corresponding to weights of synaptic connections, so as to better represent its environment.
As a simplifying assumption, we organize areas in a strict generative hierarchy, such that area \(l\!+\!1\) tries to predict the activity of area \(l\), and nothing else (see Fig. 1a). It does so by sending its output rates \(\mathbf{r_{l+1}}=\phi(\mathbf{u_{l+1}})\) through top-down synapses with plastic weights \(\mathbf{W_{l}}\), where \(\phi\) represents the neuronal activation function. Additionally, area \(l\!+\!1\) similarly estimates and conveys to area \(l\) the precision of its prediction through top-down synapses with non-negative plastic weights \(\mathbf{A_{l}}\). We further hypothesise that the resulting predictive distribution is the (entropy-maximizing) normal distribution with mean \(\mathbf{W_{l}}\mathbf{r_{l+1}}\) and precision vector \(\mathbf{\lambda_{l}}=\mathbf{A_{l}}\mathbf{r_{l+1}}\) (see Fig. 1b). Crucially, the precision is a parameterized function of current higher-level representations, similarly to the mean, and not simply a parameter of the model (see Fig. 1c). This is simply an extension of the notion of prediction, where cortical areas predict the precision (second-order information) in addition to the mean (first-order information).
We can now write our energy or objective for cortical function as the negative log-likelihood
\[E=-\log p(\mathbf{x},\mathbf{z}|\mathbf{\theta})=\frac{1}{2}\sum_{l=0}^{n-1}\|\mathbf{e_{l}} \|_{\mathbf{\lambda_{l}}}^{2}-\frac{1}{2}\sum_{l=0}^{n-1}|\log\mathbf{\lambda_{l}}|+ \text{const}\;, \tag{1}\]
where \(\mathbf{e_{l}}=\mathbf{u_{l}}-\mathbf{W_{l}}\mathbf{r_{l+1}}\) is a prediction error, \(\|\cdot\|_{\mathbf{\lambda_{l}}}\) denotes the norm with \(\mathbf{\lambda_{l}}=\mathbf{A_{l}}\mathbf{r_{l+1}}\) as a metric (i.e. a variance-normalized norm) and \(|\cdot|\) denotes (unusually) the sum of components. Note that \(\|\mathbf{e_{l}}\|_{\mathbf{\lambda_{l}}}\) is the classical Euclidean norm of standardized errors \(\|\mathbf{e_{l}}/\mathbf{\sigma_{l}}\|\), where \(\mathbf{\sigma_{l}}^{2}=\mathbf{1}/\mathbf{\lambda_{l}}\) is the estimated variance vector. In other words, here we measure distances as numbers of (estimated) standard deviations away from the (estimated) mean rather than more simply as the Euclidean distance to the (estimated) mean (see Fig. 1d).
From the Bayesian perspective, minimization of \(E\) with respect to \(\mathbf{z}\) leads to a maximum a posteriori estimate of latent variables \(\mathbf{z^{*}}\). Then, we can update parameters \(\mathbf{\theta}\) such that the model assigns a higher probability to the pair of current observation \(\mathbf{x}\) and optimal latent variables \(\mathbf{z^{*}}\), which can be done again by minimizing \(E\), this time with respect to \(\mathbf{\theta}\). This can be seen as a simplified version of the expectation-maximization
algorithm [39] where we compute a point estimate of latent variables instead of a full posterior distribution.
From the perspective of energy-based models, \(E\) as described in the right-hand side of Eqn. 1 seems to be an energy worth minimizing. The first term is a measure of distance between actual representations and predictions. This measure takes into account the precision of predictions: the more a prediction was deemed precise, the more a deviation from it matters. The second term indicates that high precision is preferable. That is, as long as estimating a high precision does not excessively drive up the first term: there must be a balance between the estimated precision and the (average) magnitude of prediction errors. Moreover, this same second term also acts as a regularizer to avoid very small precision estimates, which would be a non-informative solution to minimize the first term.
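For concreteness, the energy of Eqn. 1 can be evaluated directly; in this NumPy sketch the tanh activation is an assumption, and `W[l]` and `A[l]` hold the prediction and precision weights from area \(l\!+\!1\) to area \(l\).

```python
import numpy as np

def energy(u, W, A):
    """Energy E of Eqn. 1 (up to a constant) for membrane potentials
    u[0..n], where u[0] is the observation x."""
    E = 0.0
    for l in range(len(u) - 1):
        r = np.tanh(u[l + 1])           # output rates r_{l+1} (assumed tanh)
        e = u[l] - W[l] @ r             # prediction error e_l
        lam = A[l] @ r                  # precision estimate λ_l (A_l ≥ 0)
        E += 0.5 * np.sum(lam * e**2)   # precision-weighted squared errors
        E -= 0.5 * np.sum(np.log(lam))  # log-precision term
    return E
```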
### Neuronal dynamics with precision estimation
Similarly to previous work [1, 2, 40, 41], we now derive neuronal dynamics minimizing the energy \(E\) through gradient descent. Moreover, here we make use of our precision estimates \(\mathbf{\lambda_{l}}\) as metrics for our descent [42]. Note that, since the precision is the Hessian of the Gaussian negative log-likelihood, the resulting dynamics can be interpreted as an approximate second-order optimization scheme (see Methods). This leads to the leaky neuronal dynamics
\[\tau\dot{\mathbf{u_{l}}}=-\mathbf{\sigma_{l}^{2}}\circ\partial E/\partial\mathbf{u_{l}}=- \mathbf{u_{l}}+\mathbf{W_{l}}\mathbf{r_{l+1}}+\mathbf{\sigma_{l}^{2}}\circ\mathbf{a_{l}}\;, \tag{2}\]
integrating top-down predictions \(\mathbf{W_{l}}\mathbf{r_{l+1}}\) and (uncertainty-weighted) total propagated errors
\[\mathbf{a_{l}}=\mathbf{r_{l}^{\prime}}\circ(\mathbf{W_{l-1}^{T}}(\mathbf{\lambda_{l-1}}\circ \mathbf{e_{l-1}})+\mathbf{A_{l-1}^{T}}\mathbf{\delta_{l-1}}) \tag{3}\]
defined as the sum of precision-weighted prediction errors \(\mathbf{\lambda_{l}}\circ\mathbf{e_{l}}\) and second-order errors \(\mathbf{\delta_{l}}=(\mathbf{\sigma_{l}^{2}}-\mathbf{e_{l}^{2}})/2\), both propagated upwards from the lower area (see Fig. 2a). Here \(\circ\) is the componentwise (Hadamard) product and \(\mathbf{e_{l}^{2}}=\mathbf{e_{l}}\circ\mathbf{e_{l}}\). The second-order errors \(\mathbf{\delta_{l}}\) are not errors on the prediction of the mean
but errors on the precision estimate \(\mathbf{\lambda_{l}}=\mathbf{A_{l}}\mathbf{r_{l+1}}\), which are expected to be on average 0 if and only if \(\mathbf{\lambda_{l}}\) correctly captures the true precision. Following previous work [43], we suppose that total propagated errors \(\mathbf{a_{l}}\) are encoded in the apical dendrites of cortical neurons with somatic membrane potential \(\mathbf{u_{l}}\).
These neuronal dynamics (Eqs. 2 and 3) entail two major points of interest, one of gain modulation of errors based on precision estimates (see Fig. 2b) and one of second-order error propagation (see Fig. 2c). In the following section we complete our theoretical framework by deriving synaptic learning rules for parameters \(\mathbf{W_{l}}\) and \(\mathbf{A_{l}}\). We then come back to neuronal dynamics and further unpack these two points of interest.
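A minimal simulation of these dynamics is sketched below, assuming a tanh activation and explicit Euler integration; it is an illustration of Eqs. 2 and 3, not the authors' code.

```python
import numpy as np

def step(u, W, A, x, dt=0.01, tau=1.0):
    """One Euler step of Eqs. 2 and 3; u[0] is clamped to the observation
    x, and the top area, which has a uniform prior (see Methods), is
    driven by its propagated errors alone."""
    n = len(u) - 1
    u[0] = x
    r = [np.tanh(ul) for ul in u]
    rp = [1.0 - rl**2 for rl in r]                        # φ'(u) for tanh
    e = [u[l] - W[l] @ r[l + 1] for l in range(n)]        # prediction errors
    lam = [A[l] @ r[l + 1] for l in range(n)]             # precisions λ_l
    dlt = [(1 / lam[l] - e[l]**2) / 2 for l in range(n)]  # 2nd-order errors
    for l in range(1, n + 1):
        a = rp[l] * (W[l - 1].T @ (lam[l - 1] * e[l - 1])
                     + A[l - 1].T @ dlt[l - 1])           # Eqn. 3
        if l < n:
            u[l] += dt / tau * (-u[l] + W[l] @ r[l + 1]
                                + (1 / lam[l]) * a)       # Eqn. 2
        else:
            u[l] += dt / tau * a                          # top area
    return u
```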
### Error-correcting synaptic learning of precision
At equilibrium of neuronal dynamics, weights of synapses carrying predictions can be learned following the gradient
\[\dot{w}_{l}^{ij}\!\!\propto-\partial E/\partial w_{l}^{ij}=\lambda_{l}^{i}e_{l }^{i}r_{l+1}^{j}\;, \tag{4}\]
where \(w_{l}^{ij}\) is the prediction weight from neuron \(j\) in area \(l\!+\!1\) to neuron \(i\) in area \(l\), \(\lambda_{l}^{i}e_{l}^{i}\) is the postsynaptic precision-weighted prediction error and \(r_{l+1}^{j}\) is the presynaptic rate. This is the classical learning rule for prediction weights in predictive coding.
Weights of synapses carrying precision estimates can also be learned following the gradient of \(E\). The partial derivative \(-\partial E/\partial a_{l}^{ij}=\delta_{l}^{i}r_{l+1}^{j}\) indicates that the energy-minimizing update for precision estimation weights \(a_{l}^{ij}\) is one that corrects for postsynaptic second-order errors. To ensure that all components of \(\mathbf{\lambda_{l}}=\mathbf{A_{l}}\mathbf{r_{l+1}}\) remain positive, we additionally want weights \(a_{l}^{ij}\) to remain non-negative at all times. That is necessary as \(\mathbf{\lambda_{l}}\) approximates an inverse variance, which enters both in the energy (Eqn. 1) and the neuronal dynamics (Eqn. 2) as a metric. To enforce this, we postulate that all weights are initialized to positive values and that learning is modulated by the current weight, essentially preventing weights from crossing 0. Since all
Figure 2: Neuronal dynamics of predictive coding with adaptive precision estimation. (a) A schematic depiction of neuronal dynamics (Eqs. 2 and 3). Representations \([\mathbf{u_{l}}]\) are encoded in the somatic membrane potential of pyramidal cells. Prediction errors \([\mathbf{e_{l-1}}]\) are first computed by comparing predictions \([\mathbf{W_{l-1}}\mathbf{r_{l}}]\) with actual activity or data \([\mathbf{u_{l-1}}]\). (b) Adaptive balancing of cortical streams based on precision, realized through prediction error modulation. Prediction errors are weighted multiplicatively by the estimated precision of the prediction \([\mathbf{\lambda_{l-1}}=\mathbf{A_{l-1}}\mathbf{r_{l}}]\). The weighted errors are then propagated upwards \([\mathbf{W_{l-1}^{T}}]\), and weighted divisively by the estimated precision of the higher-level prediction \([\mathbf{\lambda_{l}}=\mathbf{A_{l}}\mathbf{r_{l+1}}]\) (multiplication by the prior variance \([\mathbf{\sigma_{l}^{2}}]\)). (c) Second-order error propagation. Second-order errors \([\mathbf{\delta_{l-1}}]\) are computed by comparing inverse precision estimates \([\mathbf{\sigma_{l-1}^{2}}=\mathbf{1}/\mathbf{\lambda_{l-1}}]\) and squared prediction errors \([\mathbf{e_{l-1}^{2}}]\). They are then uppropagated \([\mathbf{A_{l-1}^{T}}]\) and integrated alongside uppropagated prediction errors into the total error \([\mathbf{a_{l}}]\) which is then used in inference dynamics.
weights then stay positive, we can interpret this as a simple modulation of the learning rate for precision learning. This leads to the learning rule
\[\dot{a}_{l}^{ij}\propto-a_{l}^{ij}\,\partial E/\partial a_{l}^{ij}=a_{l}^{ij}\delta_{l}^{i}r_{l+1}^{j}\;. \tag{5}\]
We proceed to show in simulations that Eqs. 4 and 5 can indeed learn correct mean and precision estimates as a function of higher-level activity. In our simulations, we first randomly select an underlying context. This context determines both the data distribution, from which we sample a data point, and the higher-level representation (see Fig. 3a). The prediction and the precision estimate are functions of the higher-level representation associated with this context. Prediction errors are computed as the distance between the sampled data point and the prediction and are used to learn prediction weights following Eqn. 4, so as to estimate the mean of the data distribution associated with this context (see Fig. 3b). Second-order errors are then computed as the distance between precision estimates and the squared prediction errors and are used to learn the precision estimation weights following Eqn. 5, so as to estimate the precision of the data distribution associated with this context (see Fig. 3c).
These two similar learning rules simply state that synaptic weights evolve towards values that lead to smaller remaining errors. Importantly, all the information needed for learning, namely the presynaptic rate, postsynaptic error and current synaptic weight, is present close to the synapse. With our use of precision estimates as metrics, Eqn. 5 might be seen as a rule for metric learning. Having developed a way to learn top-down precision estimates, we will now further examine how these are used in neuronal dynamics and demonstrate their computational utility.
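At equilibrium of the neuronal dynamics, both rules reduce to simple outer-product updates, as in this NumPy sketch (the tanh activation and learning rates are our assumptions):

```python
import numpy as np

def learn(u, W, A, eta_w=1e-2, eta_a=1e-2):
    """One update of the error-correcting rules of Eqs. 4 and 5,
    applied at equilibrium of the neuronal dynamics."""
    for l in range(len(W)):
        r = np.tanh(u[l + 1])                      # presynaptic rates
        e = u[l] - W[l] @ r                        # prediction error
        lam = A[l] @ r                             # precision estimate
        delta = (1 / lam - e**2) / 2               # second-order error
        W[l] += eta_w * np.outer(lam * e, r)       # Eqn. 4
        A[l] += eta_a * A[l] * np.outer(delta, r)  # Eqn. 5, weight-gated
    return W, A
```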
### Adaptive balancing of cortical streams based on precision
For the neuronal dynamics in our model (Eqs. 2 and 3), the relative importance given to top-down predictions and bottom-up prediction errors is controlled by two mechanisms that both modulate the gain of prediction errors. First, the estimated precision of top-down predictions about what the activity of a neuron should be (the prior) impacts divisively the importance of bottom-up errors in the inference dynamics of this neuron (see Fig. 4a). Second, the estimated precision of predictions that a neuron makes about what the activity of other neurons should be impacts multiplicatively the importance of errors entailed by these predictions (see Fig. 4b). This weighting is proportional to the more classical Bayes-optimal weighting of top-down prediction (akin to prior) and bottom-up errors (akin to data) by their respective reliabilities, and leads to a maximum a posteriori estimate of latent variables at equilibrium of neuronal dynamics (Eqn. 2). This is useful when
Figure 3: **Error-correcting synaptic learning.****(a)** In these simulations, we consider a higher area with \(N_{l+1}\) neurons and a lower area with \(N_{l}\) neurons. Specifically, here we take \(N_{l+1}=N_{l}=100\). The activity vector in the higher area can take \(N_{c}\) different values \([\mathbf{r_{n}},\ n=1,\dots,N_{c}]\), to each of which is associated a different mean \([\mathbf{\mu_{n}}]\) and a different variance \([\mathbf{\sigma_{n}^{2}}]\). The activity in the lower area is then sampled from the Gaussian distribution with this mean and variance. Predictions \([\mathbf{W}\mathbf{r_{i}}]\) and precision estimates \([\mathbf{Ar_{i}}]\) are formed from the higher-level representation and prediction errors \([\mathbf{e}=\mathbf{x}-\mathbf{W}\mathbf{r_{i}}]\) and second-order errors \([\mathbf{\delta}=\mathbf{1}/\mathbf{Ar_{i}}-\mathbf{e}^{2}]\) are computed and used to learn parameters \([\mathbf{W}\) and \(\mathbf{A}]\). For simulations marked (random), higher-level representations are random binary vectors with an average of \(50\%\) of ones. For simulations marked (one-hot), higher-level representations are one-hot encoded. **(b)** Here we show that with the learning rule Eqn. 4 the network correctly learns to estimate the means \([\mathbf{\mu_{n}},\ n=1,\dots,N_{c}]\) from higher-level activity \([\mathbf{r_{n}},\ n=1,\dots,N_{c}]\). In these simulations we suppose that the precision estimate is 1. **(c)** Here we show that with the learning rule Eqn. 5 the network correctly learns to estimate the precisions \([\mathbf{1}/\mathbf{\sigma_{n}^{2}}]\) from higher-level activity \([\mathbf{r_{n}}]\).
integrating information from sources with different levels of reliability (or noise), as is, for example, necessary during multimodal integration (see Fig. 4c and Methods).
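For a single unit at the fixed point of Eqn. 2, this weighting reduces to the familiar precision-weighted average of prior and data; the following toy sketch (our own example) illustrates the behavior.

```python
def fuse(mu_prior, lam_prior, d, lam_data):
    """Posterior mean of a Gaussian prior N(mu_prior, 1/lam_prior)
    combined with a noisy cue d of precision lam_data."""
    return (lam_prior * mu_prior + lam_data * d) / (lam_prior + lam_data)

fuse(0.0, 1.0, 2.0, 10.0)   # ≈ 1.82: the reliable cue dominates
fuse(0.0, 10.0, 2.0, 1.0)   # ≈ 0.18: the confident prior dominates
```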
At the level of a cortical area, the top-down precision estimate controls the balance of bottom-up and top-down information on a neuron-by-neuron basis, providing fine-grained control over what is attended to. We emphasize that, with our formulation of precision estimates as a function of higher-level representations, we can encompass state-, context-, task-, or feature-dependent precision signals, depending on what the higher-level representations encode. Moreover, as higher-level representations change, so do the precision signals, providing a mechanism to explain the observed trial-to-trial variability of precision weighting in animals.
### Second-order error propagation
In neuronal dynamics (Eqs. 2 and 3), second-order errors \(\mathbf{\delta_{l}}\) are propagated through the cortical hierarchy alongside classical precision-weighted prediction errors \(\mathbf{\lambda_{l}}\circ\mathbf{e_{l}}\). This forms a second-order stream where cortical areas exchange precision estimates and second-order errors (see Fig. 5a).
To better understand the computational role of this second-order error propagation, we place a network without hidden layers (see Fig. 5b) in a supervised learning setting on simple nonlinear binary classification tasks (see Fig. 5ci and Methods). Parameters are learned following Eqs. 4 and 5 and, as expected, the precision signal after learning represents the class-specific precision. With our dynamics (see Fig. 5cii) but not with classical predictive coding dynamics (see Fig. 5ciii), the network without hidden layers can solve these nonlinear classification tasks (see Fig. 5d).
At a computational level, this difference can be understood by looking at the way we measure distances. With our model we use the variance-normalized distance between the input and the class distributions, whereas classical predictive coding uses the Euclidean distance between the input and the means of class distributions. At an algorithmic level, the capacity of our network to solve these tasks comes from the computation and propagation of second-order errors. To minimize second-order errors, the network must not only choose the class whose point prediction is closest to the data point (non-informative here), but also the class that best predicts the remaining distance between point prediction and data.
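The following self-contained sketch illustrates this point with a toy example of our own: two classes share a mean and differ only in variance, so a Euclidean nearest-mean rule is at chance while the variance-normalized rule implied by Eqn. 1 separates them.

```python
import numpy as np

def classify(x, mus, sigma2s):
    """Choose the class minimizing the per-class energy of Eqn. 1,
    i.e. a variance-normalized rather than Euclidean distance."""
    scores = [np.sum((x - mu)**2 / s2) + np.sum(np.log(s2))
              for mu, s2 in zip(mus, sigma2s)]
    return int(np.argmin(scores))

mus = [np.zeros(2), np.zeros(2)]                 # identical class means
sigma2s = [np.full(2, 0.1), np.full(2, 4.0)]     # different variances
classify(np.array([0.1, -0.2]), mus, sigma2s)    # -> 0 (near, low variance)
classify(np.array([3.0, -2.5]), mus, sigma2s)    # -> 1 (far, high variance)
```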
Figure 4: **Adaptive balancing of cortical streams based on precision.****(a)** Divisive weighting of errors by the estimated precision of top-down predictions about what the activity of a neuron should be (the prior), corresponding to the multiplicative term \([\mathbf{\sigma_{l}^{2}}=\mathbf{1}/\mathbf{A_{l}r_{l+1}}\) in Eqn. 2]. **(b)** Multiplicative weighting of errors by the estimated precision of predictions that a neuron makes about what the activity of other neurons should be \([\mathbf{\lambda_{l-1}}]\). **(c)** Approximate Bayes-optimal computation in a volatile environment. We consider \(N_{c}\) different classes to which we associate \(N_{c}\) different priors \((\mathbf{\mu_{i}},\mathbf{\sigma_{i}^{2}})\) and data precisions \(\mathbf{\lambda_{i}}\), \(i\in[1,N_{c}]\). The goal is to infer the true latent \(\mathbf{x}\sim\mathcal{N}(\mathbf{\mu_{i}},\mathbf{\sigma_{i}^{2}})\) from noisy data \(\mathbf{d}\sim\mathcal{N}(\mathbf{x},1/\mathbf{\lambda_{i}})\) and prior \(\mathbf{\mu_{i}}\). We do that in four different ways that differ in how they take uncertainty and precision into account: (Bayes-optimal) a Bayes-optimal estimate, with knowledge of the true prior uncertainty and true data precision; (precision estimates) our dynamics, with knowledge of the true prior uncertainty and an estimate of data precision as a function of the current representation; (mean precision) an estimate with knowledge only of the mean prior uncertainty and mean data precision across classes; (no weighting) an estimate blind to uncertainty and precision. We plot the average distance between each estimate and the true latent \(\mathbf{x}\). The error bars indicate the standard deviation.
### Precision estimation in cortical circuits
We now turn to the task of exploring how our dynamics could be realized in cortical circuits (see Fig. 6). We classically postulate that latent variables \(\mathbf{u_{l}}\) are encoded in the somatic activity of a population of pyramidal neurons L6p situated in infragranular cortical layers. Here we choose specifically intracortical pyramidal cells of layer 6 since, as demanded by our theoretical framework, they receive the majority of their input from intracortical long-range projection neurons [44] and send top-down projections to lower cortical areas [45, 46, 47]. We propose that these projections notably carry predictions [48, 49, 50, 51], but also precision estimates. Following experimental evidence of error encoding in pyramidal cells of cortical layer 2/3 [52, 53, 54, 55], we propose that precision-weighted prediction errors \(\mathbf{\lambda_{l}}\circ\mathbf{e_{l}}\) and second-order errors \(\mathbf{\delta_{l}}\) are computed by two populations of pyramidal neurons situated in supragranular layers, respectively L3\(e\) and L3\(\delta\). Recent evidence suggests that L3\(e\) expresses _Adamts2_ and _Rrad_[56], while no functional role has yet been proposed for the third class of supragranular pyramidal cells expressing _Agmat_, which could be L3\(\delta\). Additionally, our theory suggests that both type of errors should be integrated into the total propagated errors \(\mathbf{a_{l}}\) (as defined in Eqn. 3), which we propose takes place in distal apical dendrites of L6p situated in L4/5a [57], in line with previous work postulating error encoding in segregated dendritic compartments [43, 58].
We now concern ourselves with the precision-balancing of cortical streams entailed by our theory through inhibition and disinhibition of errors. We propose that raw prediction errors \(\mathbf{e_{l}}\) are computed in dendrites of L3\(e\) by comparing local and top-down inputs from L6p. Precision-weighting of prediction errors might then be realized through top-down gain modulation targeting these dendrites. We propose that this is (at least partially) achieved through a well-known disinhibitory circuit motif involving VIP-expressing interneurons receiving top-down input and inhibiting SST-expressing interneurons, which in turn inhibit dendrites of L3\(e\) [59, 60, 61]. This would entail that VIPs encode a precision signal and supragranular SSTs a variance signal. This is supported by recent 2-photon imaging on rodents placed in an oddball paradigm, where activity ramps up in VIPs and decays in SSTs as a stimulus is repeated (and both show no sign of prediction error computation, contrary to pyramidal cells) [62]. Moreover, we propose that the uncertainty modulation of total bottom-up errors entailed by our theory (the factor \(\mathbf{\sigma_{l}^{2}}\) in Eqn. 2) is elicited through modulation of L6p apical dendrites by infragranular SST interneurons, which would then encode a precision signal. The laminar specificity of SSTs activity [63] supports this hypothesis.
Finally, we make tentative propositions for circuit-level mechanisms underlying second-order error computation in L3\(\delta\). To compute second-order errors, precision estimates must be compared to the magnitude of
Figure 5: **Second-order error propagation for classification.****(a)** A second-order cortical stream where precision estimates and second-order errors are exchanged between cortical areas. **(b)** A 2\(\times\)2 network for binary classification. During learning, the X and Y data are sampled from one of the two class distributions and the activity of neurons representing the class is clamped to the one-hot encoded correct class. Parameters \([\mathbf{W},\mathbf{A}]\) are then learned following Eqs. 4 and 5. During inference, the activity of neurons representing the class follows neuronal dynamics (without top-down influence) and we read out the selected class as the one corresponding to the most active neuron. Prediction error (first-order) propagation is omitted in the depiction. **(c)** The two columns depict two different tasks. **(ci)** True class distributions. **(cii)** Classification with second-order error propagation. **(ciii)** Classification without second-order error propagation. **(d)** Classification accuracy on the task presented in (ci), left.
prediction errors. We propose that the magnitude of (precision-weighted) prediction errors is computed in PV-expressing basket cells [64] from local L3\(e\) inputs. At a circuit level, L3\(e\) is believed to be separated into two populations L3\(e^{+}\) and L3\(e^{-}\) encoding the positive and negative parts of \(\mathbf{\lambda_{l}}\circ\mathbf{e_{l}}\) respectively [54]. If this is the case, then excitatory projections from L3\(e^{+}\) and L3\(e^{-}\) to local basket cells, possibly followed by a nonlinear integration by basket cells [65], would be sufficient to perform the needed computation. Basket cells would then project to L3\(\delta\), realizing a subtractive lateral inhibition [66]. Additionally, we suppose that L3\(\delta\) receives top-down precision estimates. With this setup, the quantity encoded in L3\(\delta\) would be \(\mathbf{\lambda_{l}}-(\mathbf{\lambda_{l}}\circ\mathbf{e_{l}})^{2}=\mathbf{\lambda_{l}^{2}}\circ\mathbf{\delta_{l}}\). This is in fact another form of second-order errors, which could be interesting on their own, but to send up \(\mathbf{\delta_{l}}\) as suggested by our theoretical framework, we postulate that the output of L3\(\delta\) is modulated by chandelier cells, the other main class of PV-expressing interneurons, which would then encode a squared precision signal. In accordance with this hypothesis, chandelier cells almost exclusively target the axonal initial segment of pyramidal cells and have been shown to be capable of both promoting and inhibiting action potential generation [67]. These propositions, though they are unlikely to prove exactly correct, could provide starting points for experimental investigation of cortical second-order errors.
Figure 6: **Precision estimation in cortical circuits.** Cortical circuit for neuronal dynamics of inference (as described in Eqn. 2 [\(\tau\dot{\mathbf{u_{l}}}=-\mathbf{u_{l}}+\mathbf{W_{l}}\mathbf{r_{l+1}}+\mathbf{\sigma_{l}^{2}}\circ\mathbf{a_{l}}\)] and Eqn. 3 [\(\mathbf{a_{l}}=\mathbf{r_{l}^{\prime}}\circ(\mathbf{W_{l-1}^{T}}(\mathbf{\lambda_{l-1}}\circ\mathbf{e_{l-1}})+\mathbf{A_{l-1}^{T}}\mathbf{\delta_{l-1}})\)]). Representations [\(\mathbf{u_{l}}\)] are held in the somatic membrane potential of L6p. Top-down synapses carrying predictions [\(\mathbf{W_{l+1}}\)] directly excite L6p at proximal dendrites (5). Bottom-up precision-weighted prediction errors [\(\mathbf{W_{l-1}^{T}}(\mathbf{\lambda_{l-1}}\circ\mathbf{e_{l-1}})\)] and second-order errors [\(\mathbf{A_{l-1}^{T}}\mathbf{\delta_{l-1}}\)] are integrated into the total error [\(\mathbf{a_{l}}\)] in the distal dendrites of L6p as described in Eqn. 3 (3). This total error is then weighted by the prior uncertainty [\(\mathbf{\sigma_{l}^{2}}\)] through divisive dendritic inhibition realized by infragranular SST-expressing interneurons (L56-SST) (4). Top-down predictions [\(\mathbf{W_{l}}\mathbf{r_{l+1}}\)] and local representations [\(\mathbf{u_{l}}\)] are compared in dendrites of L3\(e\). Precision-weighting is then realized through gain modulation of these dendrites by the disinhibitory VIP-expressing (VIP) and SST-expressing (L23-SST) interneuron motif (1). L3\(\delta\) integrates top-down precision estimates [\(\mathbf{\lambda_{l}}\)] and local squared precision-weighted prediction errors [\((\mathbf{\lambda_{l}}\circ\mathbf{e_{l}})^{2}\)] encoded in basket cells (BC) into re-weighted second-order errors [\(\mathbf{\lambda_{l}}-(\mathbf{\lambda_{l}}\circ\mathbf{e_{l}})^{2}=\mathbf{\lambda_{l}^{2}}\circ\mathbf{\delta_{l}}\)]. Second-order errors [\(\mathbf{\delta_{l}}\)] are then sent up using the modulatory influence of chandelier cells (ChC) on the axonal initial segment of L3\(\delta\).
## Discussion
In this work we introduced diagonal estimates of the precision matrix as a function of current higher-level activity and derived neural dynamics of predictive coding with this additional ingredient. In the resulting neuronal dynamics, the relative importance of bottom-up and top-down cortical streams is controlled based on precision estimates, enabling efficient integration of cues with different context-dependent reliabilities. We proposed that in cortical circuits this weighting takes the form of top-down gain modulation realized through a combination of disinhibitory interneuron circuits targeting layer 2/3 pyramidal cells and apical modulation of layer 5/6 pyramidal cells. Moreover, the conditioning of precision estimates on current activity also led to the appearance of second-order prediction errors. Like classical prediction errors, second-order errors are propagated through the cortical hierarchy, leading to nonlinear classification capabilities in a single area. Additionally, these new errors are used for learning weights of synapses responsible for precision estimation.
The brain may use different forms of precision estimates and not only diagonal (vector) estimates as we explored in this work. Obvious alternatives would be scalar and full matrix precision estimates. First, scalar estimates would define the importance granted to all errors in an area, and in that case precision-weighting of errors might be realized through nonspecific release of neuromodulators. Such estimates might be useful for multimodal integration, where one modality as a whole might be reliable or not given context (e.g. vision during the day or during the night). For example, noradrenaline seems to encode environmental volatility [34]. Second, at the other extreme, we might consider full precision matrices. We would then be minimizing an approximate Mahalanobis distance [68] between representations and predictions, taking into account not only stretch but also skew in our metric. Doing so might lead to a theoretically grounded account of lateral connections between prediction error nodes [2], with links to the notion of partial correlations [69]. Moreover, we have conditioned precision estimates on the activity of the same population on which predictions (mean estimates) are conditioned. An alternative would have been to condition precision estimates on a new set of latent variables potentially held by another population of cortical neurons, disentangling the tasks of mean and precision estimation. Furthermore, these estimates might not only be conditioned on cortical but also on subcortical activity. This might help assign computational roles to interactions between the cortex and subcortical structures. Of course all those mechanisms need not be mutually exclusive and could be combined into more complex precision estimates, potentially increasing the explanatory power of predictive coding models of cortical circuits [70]. Note that adaptive precision-weighting and second-order errors might be crucial not only for sensory processes, but also in action selection following the growing tradition of active inference [71].
The dynamics that we presented share some classical weaknesses of predictive coding dynamics concerning biological plausibility that have been tackled elsewhere: weight transport [72, 73], long inference [74], encoding of signed errors [43, 54] and one-to-one connections [75]. Moreover, our learning rules Eqs. 4 and 5 only fulfill weak criteria of locality, as all the information necessary for learning is indeed present in a local patch of cortex around the synapse, but not necessarily precisely at the synapse. Note that to derive our synaptic learning rules we chose implicitly to use the Euclidean metric in our descent scheme. We could consider other metrics, as we did for neuronal dynamics (Eqn. 2), and this might lead to more biologically plausible learning rules. For example, previous work has argued that weights of synapses carrying predictions and targeting proximal dendrites of infragranular pyramidal cells might be learned using the apical activity at equilibrium of neuronal dynamics [43], which in our case would correspond to the learning rule entailed by using, again, precision estimates as a metric for synaptic learning. Additionally, our assumption of a strict hierarchy of latent variables seems at odds with known connectivity between cortical areas. Finally, our account of cortical and notably interneuron circuitry is still incomplete and should definitely be refined and challenged through interactions with experimental work. We believe that our work might provide a theoretical framework to interpret existing experimental results and fruitful directions for future experiments. More than a specific set of predictions, we would like to convey that looking for cortical precision estimates and second-order error signals might be an interesting avenue. More specifically, one could look for precision, uncertainty or error magnitude signals in interneuron activity.
In our model, precision estimates computed at each level of the hierarchy as a function of current representations are used as soft multiplicative attention masks on errors, not unlike modern machine learning takes on attention [76]. We hope that our formulation will help link accounts of attention in the predictive coding framework [22] and in machine learning. Anecdotally, VIP-expressing interneurons were described by experimentalists as "generating a spotlight of attention" [77].
Precision-weighting of prediction errors is often a central element in leading models in the field of neuropsychiatry [26, 27, 28, 29, 30]. We hope that this step towards a better theoretical and computational grasp of this mechanism will help in gaining a more holistic understanding of psychopathologies and altered states of mind under the predictive processing framework. The separation of neural mechanisms for prior and data weighting in our model (respectively L6p apical modulation and disinhibition of L3p dendrites) might prove critical to extend models based on pathological over- or under-weighting of either prior or data in a process of Bayesian integration to the whole cortical hierarchy, where activity at one level both represents data for the level above and generates prior for the level below. Moreover, our proposed computational roles for interneuron circuitry might help link accounts of neuropsychiatric disorders in terms of precision-weighting of errors to accounts in terms of cortical excitation-inhibition balance [78, 79]. Future work might include studying the effects of pathological precision estimation in simulations.
Finally, the somatic integration of apical activity in infragranular pyramidal cells has recently been shown to be crucial for perceptual decision-making [80], impaired in anesthesia [81] and placed at the center of theories of conscious processing [82]. Of particular importance is the gain of the coupling compartment between apical and perisomatic regions, controlling the balance between bottom-up and top-down cortical streams. The authors of [82] proposed higher-order thalamus as a major player in the game of controlling this coupling, but also added cortico-cortical and potentially more selective control as an outstanding area of investigation. In our framework, the uncertainty modulation of L6p apical dendrites (or more precisely, of the coupling compartment) would play such a role of locally controlling the relative importance of top-down predictions and bottom-up prediction errors in the inference process (see e.g. Fig. 4a).
## Methods
### Probabilistic model
Here we make precise the form of the probabilistic model. We introduce a notion of strict hierarchy between levels of latent representations by supposing that the joint distribution can be decomposed as
\[p(\mathbf{u_{0}},\mathbf{u_{1}},\ldots,\mathbf{u_{n}}|\mathbf{\theta})\propto p(\mathbf{u_{0}}|\mathbf{u_{1}},\mathbf{\theta})\,p(\mathbf{u_{1}}|\mathbf{u_{2}},\mathbf{\theta})\cdots p(\mathbf{u_{n-1}}|\mathbf{u_{n}},\mathbf{\theta}) \tag{6}\]
which can be justified by assuming a Markov property \(\forall k\), \(p(\mathbf{u_{k}}|\mathbf{u_{k+1}},\ldots,\mathbf{u_{n}},\mathbf{\theta})=p(\mathbf{u_{k}}|\mathbf{u_{ k+1}},\mathbf{\theta})\) and a uniform top level prior \(\mathbf{u_{n}}\sim\mathcal{U}\). Since the distribution of \(\mathbf{u_{l}}\) is conditioned on \(\mathbf{u_{l+1}}\), we call this a generative hierarchy.
We further assume that predictions \(\mathbf{u_{l}}|\mathbf{u_{l+1}}\) follow a multivariate Gaussian distribution
\[\mathbf{u_{l}}|\mathbf{u_{l+1}}\sim\mathcal{N}\left(\mathbf{W_{l}}\mathbf{r_{l+1}},\mathrm{ diag}(\mathbf{A_{l}}\mathbf{r_{l+1}})^{-1}\right) \tag{7}\]
with mean at point predictions \(\mathbf{W_{l}}\mathbf{r_{l+1}}\) and diagonal covariance matrix with positive diagonal \(\mathbf{\sigma}_{l}^{2}=\mathbf{1}/\mathbf{A_{l}}\mathbf{r_{l+1}}\).
Under the two assumptions described in Eqs. 6 and 7, we obtain the right-hand side equality in Eqn. 1, which we derive in more detail in Supplementary Note 1.
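To make the resulting objective concrete, the following is a minimal Python/NumPy sketch of the energy (negative log-joint, up to additive constants) of the hierarchy defined by Eqs. 6 and 7, under our reading of the model. The names `us`, `Ws` and `As` are ours, precision estimates are assumed strictly positive, and the paper's reference implementation is in Julia (see the Code availability statement).

```python
import numpy as np

def relu(u):
    return np.maximum(u, 0.0)

def energy(us, Ws, As):
    """Negative log-joint of the hierarchy (Eqs. 6-7), up to additive constants.

    us : activities [u_0, ..., u_n], with u_0 at the bottom (data) level
    Ws : prediction weights, Ws[l] maps the rates of level l+1 to level l
    As : precision-estimation weights, As[l] maps the rates of level l+1
         to the precision of the prediction at level l (assumed positive)
    """
    E = 0.0
    for l in range(len(us) - 1):
        r_up = relu(us[l + 1])        # rates r_{l+1} of the level above
        mean = Ws[l] @ r_up           # point prediction W_l r_{l+1}
        lam = As[l] @ r_up            # precision estimate lambda_l = A_l r_{l+1}
        err = us[l] - mean            # first-order prediction error
        # Gaussian NLL with diagonal covariance 1/lam:
        # 1/2 * (lam * err^2 - log lam), summed over units
        E += 0.5 * np.sum(lam * err**2 - np.log(lam))
    return E
```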
### Precision estimates as metrics
Modern machine learning has made extensive use of Euclidean gradient descent, to the point that we now often conflate the gradient and the partial derivative [42]. More generally, for a metric characterized by the positive definite metric tensor \(\mathbf{D}\), the gradient of the energy is given by
\[\left(\nabla E\right)(\mathbf{x})=\mathbf{D}^{-1}\frac{\partial E}{\partial\mathbf{x}} \tag{8}\]
In this work we chose precision estimates as a metric for neuronal dynamics Eqn. 2, i.e. \(\mathbf{D}=\mathrm{diag}(\mathbf{\lambda_{l}})\). There are two justifications for this. First and foremost, the resulting neuronal dynamics Eqn. 2 appears to us more neurally plausible, with the explicit leak \(-\mathbf{u_{l}}\) and the apical modulation factor \(\mathbf{\sigma}_{l}^{2}\). Second, remark that the precision is the Hessian of the Gaussian negative log-likelihood
\[\frac{\partial^{2}-\log f(\mathbf{u};\mathbf{m},\mathbf{v})}{\partial\mathbf{u}^{2}}=\mathrm{ diag}(\mathbf{1}/\mathbf{v}) \tag{9}\]
with \(f\) the density of a multivariate Gaussian and, importantly, \(\mathbf{m},\mathbf{v}\) not functions of \(\mathbf{u}\). Second derivatives of the objective are known to have desirable properties as metrics, from Newton's method to natural gradient descent [83]. Of course, the precision is only a crude approximation of the actual Hessian \(\partial^{2}E/\partial\mathbf{u_{l}}^{2}\), since both means and variances in \(E\) are in fact functions of current activity \(\mathbf{u}\). In other words, if we make the approximation of ignoring dependencies of distribution parameters on current activity, the precision is the Hessian of the energy. This is equivalent to considering, at each level \(l\), that the activity in level \(l+1\) is fixed. In short, our approximation abandons mathematical exactness but retains the idea of a second-derivative metric. Intuitively, the effect on the neuronal dynamics Eqn. 2 is to normalize the balance of importance between local and lower prediction errors such that the importance of local errors is \(1\).
Now, to take precision estimates \(\mathbf{\lambda_{l}}=\mathbf{A_{l}r_{l+1}}\) as metrics, we do need to be cautious that elements of \(\mathbf{\lambda_{l}}\) are strictly positive. Indeed, this is a condition for \(\mathbf{\lambda_{l}}\) to even define a proper (Riemannian) metric. Let us look at how the precision estimation weights evolve through time as defined by Eqn. 5. For each weight \(\mathbf{a_{l}}^{ij}\) in \(\mathbf{A_{l}}\), we have

\[\lim_{\mathbf{a_{l}}^{ij}\to 0^{+}}\dot{\mathbf{a}}_{\mathbf{l}}^{ij}=0 \tag{10}\]

Hence, if we initialize elements of \(\mathbf{A_{l}}\) to positive values, then at all times \(\mathbf{a_{l}}^{ij}>0\). If we additionally assume that at all times at least one element of \(\mathbf{r_{l+1}}\) is nonzero, then at all times the elements of \(\mathbf{\lambda_{l}}\) are strictly positive. With this, skeptics about this change of metric can at least be reassured that we are following a descent direction on \(E\).
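As a minimal illustration of Eqn. 8, the sketch below performs one descent step under the metric \(\mathbf{D}=\mathrm{diag}(\mathbf{\lambda_{l}})\); the function name and step size are ours, purely for illustration.

```python
import numpy as np

def metric_descent_step(u, dE_du, lam, eta=0.1):
    """One descent step on E under the metric D = diag(lam) (Eqn. 8):
    u <- u - eta * D^{-1} dE/du. Dividing the raw partial derivative by the
    precision estimate rescales the error balance so that local prediction
    errors enter with weight 1 and lower-level errors are modulated by the
    variance 1/lam, as in the neuronal dynamics of Eqn. 2."""
    assert np.all(lam > 0), "the metric requires strictly positive precisions"
    return u - eta * dE_du / lam
```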
## Simulation details
Pseudocode for simulations is available in the Supplementary Information, and an implementation in Julia is available at github.com/arnogranier/precision-estimation.
For all simulations, we take \(\phi\) the ReLU activation function.
### Precision learning
For simulations presented in Fig. 3c, we follow the simulation setup presented in Fig. 3a and described in more detail below and in Supplementary Algorithm 1.
We consider a higher area with \(N_{l+1}\) neurons and a lower area with \(N_{l}\) neurons. We consider \(N_{c}\) different classes of inputs, each with its own distribution \(\mathcal{N}(\mathbf{\mu_{i}},\mathbf{\sigma_{i}^{2}}),i\in[1,N_{c}]\), where \(\mathbf{\mu_{i}}\) and \(\mathbf{\sigma_{i}^{2}}\) are vectors of size \(N_{l}\). We initialize all \(\mathbf{\mu_{i}}\) following a \(U(-1,1)\) and all \(\mathbf{\sigma_{i}^{2}}\) following a \(U(1/4,1)\). Then we choose the representational mode of the higher area, either random binary vectors or one-hot encoding, and initialize higher-level representations \(\mathbf{r_{i}},i\in[1,N_{c}]\) as random binary vectors of size \(N_{l+1}\) with on average \(p\) ones, or as the one-hot encoding of \(i\) in \(N_{l+1}\), respectively. The precision estimation matrix \(\mathbf{A}\) is then initialized as a matrix filled with \(\alpha\), with \(\alpha=1/pN_{l+1}\) for the random binary vector case and \(\alpha=1\) for the one-hot encoded case. We then repeat the following procedure for multiple epochs (see the sketch below): (1) for each class, sample a data point \(\mathbf{x_{i}}\) from \(\mathcal{N}(\mathbf{\mu_{i}},\mathbf{\sigma_{i}^{2}})\); (2) set the higher-level representation to \(\mathbf{r_{i}}\); (3) compute the precision estimate \(\mathbf{\lambda_{i}}=\mathbf{A}\mathbf{r_{i}}\); (4) compute the second-order error \(\mathbf{\delta_{i}}=(1/\mathbf{\lambda_{i}}-(\mathbf{x_{i}}-\mathbf{\mu_{i}})^{2})/2\); (5) update \(\mathbf{A}\) following Eqn. 5. In Fig. 3c, we plot the evolution of \((\sqrt{N_{l}}N_{c})^{-1}\sum_{i}\|\mathbf{\sigma_{i}^{2}}-1/\mathbf{A}\mathbf{r_{i}}\|\) through epochs. For Fig. 3c, parameters are \(T=10000,N_{l+1}=N_{l}=100,\eta=0.001\) with \(N_{c}\) varying depending on the simulation.
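The following sketch is a minimal Python/NumPy rendition of steps (1)-(5) in the one-hot case. Since Eqn. 5 is not reproduced in this excerpt, the update \(\Delta\mathbf{A}=\eta\,\mathbf{\delta_{i}}\mathbf{r_{i}}^{T}\) used below is our plain gradient-descent reading of it and may differ from the paper's rule, e.g. by a metric factor; either way the fixed point \(1/\mathbf{A}\mathbf{r_{i}}=\mathbf{\sigma_{i}^{2}}\) (i.e. \(\mathbf{\delta_{i}}=0\) on average) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

N_hi = N_lo = 100
N_c, epochs, eta = 10, 10000, 1e-3

mus = rng.uniform(-1, 1, size=(N_c, N_lo))          # class means mu_i
sig2 = rng.uniform(1 / 4, 1, size=(N_c, N_lo))      # class variances sigma_i^2
R = np.eye(N_hi)[:N_c]                              # one-hot representations r_i
A = np.ones((N_lo, N_hi))                           # alpha = 1 (one-hot case)

curve = []
for epoch in range(epochs):
    for i in range(N_c):
        x = mus[i] + np.sqrt(sig2[i]) * rng.standard_normal(N_lo)  # (1) sample
        r = R[i]                                                   # (2) clamp
        lam = A @ r                                                # (3) estimate
        delta = (1 / lam - (x - mus[i]) ** 2) / 2                  # (4) 2nd-order error
        A += eta * np.outer(delta, r)                              # (5) our reading of Eqn. 5
    # quantity plotted in Fig. 3c; it decays as 1/(A r_i) approaches sigma_i^2
    curve.append(sum(np.linalg.norm(sig2[i] - 1 / (A @ R[i])) for i in range(N_c))
                 / (np.sqrt(N_lo) * N_c))
```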
A similar procedure is used for Fig. 3b, but following Eqn. 4.
### Approximate Bayes-optimal integration
For simulations presented in Fig. 4c, we follow the simulation procedure described below. Pseudocode for these simulations is presented in Supplementary Algorithm 2 and a mathematical intuition is given in Supplementary Note 3.
We consider a higher area with \(N_{l+1}\) neurons and a lower area with \(N_{l}\) neurons. We consider \(N_{c}\) different classes of inputs, each with its own distribution \(\mathcal{N}(\mathbf{\mu_{i}},\mathbf{\sigma_{i}^{2}}),i\in[1,N_{c}]\), where \(\mathbf{\mu_{i}}\) and \(\mathbf{\sigma_{i}^{2}}\) are vectors of size \(N_{l}\). We initialize all \(\mathbf{\mu_{i}}\) following a \(U(0,2/N_{l})\) and all \(\mathbf{\sigma_{i}^{2}}\) by randomly choosing each component in \(\{0.1,2\}\) with a 50% chance. We initialize the precision estimation matrix \(\mathbf{A}\) following a \(U(0,2)\). We additionally collect the mean prior variance vector across classes \(\mathbf{\bar{\sigma}}^{2}=\frac{1}{N_{c}}\sum_{i}\mathbf{\sigma_{i}^{2}}\) and the mean data precision vector across classes \(\mathbf{\bar{\lambda}}=\frac{1}{N_{c}}\sum_{i}\mathbf{A\phi}(\mathbf{\mu_{i}})\). We then repeat across epochs the following procedure. For each class \(i\) (1) we sample a true target latent \(\mathbf{x}\sim\mathcal{N}(\mathbf{\mu_{i}},\mathbf{\sigma_{i}^{2}})\). We consider that the precision estimation weights are correct such that the precision of the data is \(\mathbf{\lambda}=\mathbf{A\phi}(\mathbf{x})\). (2) Then we sample noisy data. Here we want to focus on precision estimation and not mean prediction, so we suppose that the prediction function is the identity, and we then sample data \(\mathbf{d}\sim\mathcal{N}(\mathbf{x},1/\mathbf{\lambda})\). The goal is then to infer \(\mathbf{x}\) from data \(\mathbf{d}\) and prior \(\mathbf{\mu_{i}}\). We do that in four different ways that differ in how they take into account uncertainty and precision:
(3i) a Bayes-optimal estimate, with knowledge of true prior variance and true data precision
\[\mathbf{u}=(\mathbf{\lambda}\circ\mathbf{d}+\mathbf{\sigma_{i}^{-2}}\circ\mathbf{\mu_{i}})/(\mathbf{ \lambda}+\mathbf{\sigma_{i}^{-2}}) \tag{11}\]
(3ii) our dynamics, with knowledge of true prior variance and data precision estimation
\[\tau\dot{\mathbf{u}}=-\mathbf{u}+\mathbf{\mu_{i}}+\mathbf{\sigma_{i}^{2}}\circ\mathbf{A}\mathbf{\phi}(\mathbf{u})\circ(\mathbf{d}-\mathbf{u}) \tag{12}\]
(3iii) an estimate with knowledge only of the mean prior variance and data precision across classes
\[\tau\dot{\mathbf{u}}=-\mathbf{u}+\mathbf{\mu_{i}}+\mathbf{\bar{\sigma}}^{2}\circ\mathbf{\bar{\lambda}}\circ(\mathbf{d}-\mathbf{u}) \tag{13}\]
(3iv) an estimate blind to variance and precision
\[\tau\dot{\mathbf{u}}=-\mathbf{u}+\mathbf{\mu_{i}}+(\mathbf{d}-\mathbf{u}) \tag{14}\]
In Fig. 4c, we plot the average over classes and epochs of the normalized distance \((\sqrt{N_{l}})^{-1}\|\mathbf{x}-\mathbf{u}\|\) between each estimate and the true latent, together with its standard deviation.
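The sketch below mirrors this comparison for a single class, contrasting the Bayes-optimal estimate (3i), our dynamics (3ii) integrated with forward Euler, and the precision-blind estimate (3iv); estimator (3iii) is omitted, since it only differs when statistics are pooled across several classes. The constants, seed and integration scheme are ours, and the data precision \(\mathbf{\lambda}\) is assumed strictly positive.

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda v: np.maximum(v, 0.0)

def euler(u0, f, steps=4000, dt=0.005, tau=1.0):
    """Integrate tau * du/dt = f(u) with forward Euler."""
    u = u0.copy()
    for _ in range(steps):
        u += (dt / tau) * f(u)
    return u

N = 50
mu = rng.uniform(0, 2 / N, size=N)            # prior mean of the class
sig2 = rng.choice([0.1, 2.0], size=N)         # prior variance of the class
A = rng.uniform(0, 2, size=(N, N))            # precision-estimation weights

x = mu + np.sqrt(sig2) * rng.standard_normal(N)     # (1) true target latent
lam = A @ relu(x)                                   # data precision (assumed > 0)
d = x + rng.standard_normal(N) / np.sqrt(lam)       # (2) noisy data

# (3i) Bayes-optimal posterior mean, Eqn. 11
u_opt = (lam * d + mu / sig2) / (lam + 1 / sig2)
# (3ii) our dynamics, Eqn. 12; its fixed point matches Eqn. 11 whenever
# A phi(u) recovers the true data precision
u_dyn = euler(mu, lambda u: -u + mu + sig2 * (A @ relu(u)) * (d - u))
# (3iv) precision-blind estimate, Eqn. 14; its fixed point is (mu + d) / 2
u_blind = euler(mu, lambda u: -u + mu + (d - u))

for name, u in [("Bayes-optimal", u_opt), ("dynamics", u_dyn), ("blind", u_blind)]:
    print(name, np.linalg.norm(x - u) / np.sqrt(N))
```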
### Nonlinear binary classification
For simulations presented in Fig. 5cd, we built the datasets by sampling \(N=1000\) points \((x_{1},y_{1}),\ldots,(x_{N},y_{N})\) from each of the Gaussian distributions represented in Fig. 5ci (first column: \(\mathcal{N}([0,0],\mathrm{diag}([1,1/4]))\) and \(\mathcal{N}([0,0],\mathrm{diag}([1/4,1]))\); second column: \(\mathcal{N}([0,0],\mathrm{diag}([9,9]))\) and \(\mathcal{N}([0,0],\mathrm{diag}([1/3,1/3]))\), represented by their \(99.7\%\) confidence ellipses) and attaching the corresponding class label (either red or blue).
We then build a 2x2 network where the top-level activity is a one-hot representation of the class and the bottom-level activity is the coordinate in space \((x,y)\). We train this network in a supervised learning setting on the dataset by clamping [84] both top and bottom areas to the corresponding elements of the dataset and performing one step of parameter learning as described in Eqs. 4 and 5.
We then test the capacity of our network to classify data by clamping only the bottom level to the data and letting the top-level activity follow Eqn. 2. We then select as the output class the index of the maximum top-level activity, and plot the corresponding classification in Fig. 5cii.
Pseudocode for the training and testing procedures is provided in Supplementary Algorithms 3 and 4.
For comparison, we also plot in Fig. 5ciii the classification results obtained with the same 2x2 architecture but using classical predictive coding dynamics
\[\tau\dot{\mathbf{u_{l}}}=-\mathbf{u_{l}}+\mathbf{W_{l}}\mathbf{r_{l+1}}+\mathbf{r_{l}^{\prime}} \circ\mathbf{W_{l-1}^{T}}\mathbf{e_{l-1}} \tag{15}\]
\[\dot{\mathbf{W_{l}}}\propto\mathbf{e_{l}}\mathbf{r_{l+1}^{T}} \tag{16}\]
and following the same training and testing procedures.
In Fig. 5d we plot the associated performance, with the addition of the maximum likelihood estimate with perfect knowledge of the means and variances.
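As a point of reference for Fig. 5d, the following sketch generates the first-column dataset of Fig. 5ci and reproduces only the maximum likelihood baseline with perfect knowledge of the variances; the trained 2x2 networks themselves follow Supplementary Algorithms 3 and 4. All names and the seed are ours.

```python
import numpy as np

rng = np.random.default_rng(2)

# two overlapping zero-mean classes distinguished only by their covariance
# (the first-column dataset of Fig. 5ci)
N = 1000
var_red = np.array([1.0, 0.25])
var_blue = np.array([0.25, 1.0])
X = np.vstack([np.sqrt(var_red) * rng.standard_normal((N, 2)),
               np.sqrt(var_blue) * rng.standard_normal((N, 2))])
labels = np.array([0] * N + [1] * N)

def diag_gauss_loglik(X, var):
    """Log-likelihood (up to constants) under a zero-mean diagonal Gaussian."""
    return -0.5 * np.sum(X**2 / var + np.log(var), axis=1)

# maximum likelihood classification with perfect knowledge of the variances
pred = (diag_gauss_loglik(X, var_blue) > diag_gauss_loglik(X, var_red)).astype(int)
print("ML accuracy:", np.mean(pred == labels))
```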
## Data availability
All data is generated by the simulation code (see Code availability statement below).
## Code availability
Simulation code for this paper can be accessed at github.com/arnogranier/attention-pc.
|
2310.20497 | On the matrix code of quadratic relationships for a Goppa code | In this article, we continue the analysis started in \cite{CMT23} for the
matrix code of quadratic relationships associated with a Goppa code. We provide
new sparse and low-rank elements in the matrix code and categorize them
according to their shape. Thanks to this description, we prove that the set of
rank 2 matrices in the matrix codes associated with square-free binary Goppa
codes, i.e. those used in Classic McEliece, is much larger than what is
expected, at least in the case where the Goppa polynomial degree is 2. We build
upon the algebraic determinantal modeling introduced in \cite{CMT23} to derive
a structural attack on these instances. Our method can break in just a few
seconds some recent challenges about key-recovery attacks on the McEliece
cryptosystem, consistently reducing their estimated security level. We also
provide a general method, valid for any Goppa polynomial degree, to transform a
generic pair of support and multiplier into a pair of support and Goppa
polynomial. | Rocco Mora | 2023-10-31T14:35:07Z | http://arxiv.org/abs/2310.20497v2 | # On the matrix code of quadratic relationships for a Goppa code
###### Abstract.
In this article, we continue the analysis started in [1] for the matrix code of quadratic relationships associated with a Goppa code. We provide new sparse and low-rank elements in the matrix code and categorize them according to their shape. Thanks to this description, we prove that the set of rank 2 matrices in the matrix codes associated with square-free binary Goppa codes, i.e. those used in Classic McEliece, is much larger than what is expected, at least in the case where the Goppa polynomial degree is 2. We build upon the algebraic determinantal modeling introduced in [1] to derive a structural attack on these instances. Our method can break in just a few seconds some recent challenges about key-recovery attacks on the McEliece cryptosystem, consistently reducing their estimated security level. We also provide a general method, valid for any Goppa polynomial degree, to transform a generic pair of support and multiplier into a pair of support and Goppa polynomial.
## 1. Introduction
### The McEliece scheme and its cryptanalysis
The McEliece cryptosystem [1] is the oldest code-based encryption scheme, dating back to 1978, i.e. just a few months after the ubiquitously used RSA cryptosystem [13]. Contrary to the latter [21], the former is also widely believed to be a quantum-resistant alternative, meaning that quantum algorithms are not expected to break it exponentially faster than classical ones. This is mirrored in the NIST Post-Quantum Standardization Process, where the IND-CCA secure version Classic McEliece [1] is currently one of the few candidates that survived to the fourth round. Despite the public key size being huge, the McEliece encryption scheme benefits from extremely fast encryption and decryption algorithms and very small ciphertexts. This potentially makes it an attractive option for several use-cases1.
Footnote 1: [https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/jgevye&ehcM](https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/jgevye&ehcM)
The other important argument in favor of McEliece is that all the known general decoding algorithms for message recovery developed in more than 60 years of research, be they classical or quantum, have barely improved the exponent of the exponential cost [14, 15, 1, 1, 1, 1, 1, 1, 1, 1, 1, 16]. However, these are still used to design secure parameters, because key recovery attacks are immensely more expensive than message recovery techniques.
Key recovery attacks try to exploit the algebraic structure of the underlying family of codes. Indeed, in order to decrypt a message, the receiver must be able to decode a codeword and therefore a code equipped with an efficient decoding algorithm must be adopted. The original proposal of McEliece, as well as Classic McEliece, builds upon the class of binary Goppa codes. An element of this family is uniquely determined by a vector, called the support, of length equal to the code length and an irreducible polynomial of relatively small degree, defined over an extension of the binary field and called the Goppa polynomial. On the other hand, a Goppa code corresponds to several pairs of supports and Goppa polynomials. Recovering any of them allows one to decode any message efficiently.
For a long time, the only key recovery attack consisted of guessing a valid pair of support and Goppa polynomial [10] and then checking via the Support Splitting Algorithm [11] whether it defines a Goppa code that is permutation equivalent to the public one. The total complexity is exponential and, as already mentioned, the exponent is much bigger than the one for message recovery approaches.
Even the potentially easier task of efficiently distinguishing whether a generator matrix comes from a Goppa code or a random one (this takes the name of the Goppa distinguishing problem) had been considered difficult for a long time. This is because Goppa codes share a lot of properties in common with random ones. For instance, they asymptotically meet the Gilbert-Varshamov distance, they have approximately the same weight distribution and also a trivial permutation group. Based on the intractability of the Goppa distinguishing problem and the hardness of decoding a generic linear code, it is possible to devise a security proof for the McEliece scheme [11].
An important step in the understanding of the structure of a Goppa code is the high-rate distinguisher presented in [12]. Here it was shown that a linear system associated with Goppa codes has an unusually small rank when the code rate is high enough. This is not the case for Classic McEliece, but it impacts other schemes like the CFS digital signature [13]. Much later, a different perspective on the distinguisher was given in [14], exploiting the link between the linear system and square codes revealed in [15]. More precisely, the distinguisher was explained in terms of the dimension of the _square code_ of the dual of a Goppa code. Tight upper bounds for this dimension have been provided in [14], thus making the distinguisher more rigorous. This result can be seen as an extension of the analysis of the square code of a generalized Reed-Solomon (GRS) code [12]; indeed, Goppa codes are subfield subcodes of particular GRS codes. More generally, the subfield subcode of a GRS code is an alternant code, which is in fact a generalization of classical Goppa codes. Later, this kind of analysis has also been adapted to Goppa-like algebraic geometry codes [10].
The square code analysis given in [14] resulted in the first-ever polynomial-time cryptanalysis of unstructured alternant codes with high rate [1]. This attack is composed of two steps: first, it computes a filtration of alternant codes and then it solves an algebraic system that models the key recovery problem from an easier instance obtained from the filtration, using Gröbner basis techniques. The attack works with overwhelming probability on random alternant codes but it fails if a Goppa code is chosen instead. Still, it shows that key recovery methods can be effective, which had been previously demonstrated only for structured versions of alternant or Goppa codes [1, 2, 13, 14]. This approach is limited to the high-rate regime, and the problem of attacking, or even simply distinguishing, a Goppa code with rate comparable to those used in Classic McEliece was left open. This question has been addressed in [14], where a completely new approach based on quadratic forms to attack the McEliece cryptosystem has been introduced.
### The matrix code of quadratic relationships
In [13], a new object is associated with a linear code: the matrix code of quadratic relationships. This notion is strictly related to that of the square code. If \(\mathscr{C}=\langle\,\boldsymbol{c}_{1},\ldots,\boldsymbol{c}_{k}\,\rangle_{\mathbb{F}}\subseteq\mathbb{F}^{n}\) is a \(k\)-dimensional linear code, then a natural set of generators for its square code \(\mathscr{C}^{\star 2}\) is given by all component-wise products of codewords \(\boldsymbol{c}_{i}\star\boldsymbol{c}_{j}\), \(1\leq i\leq j\leq k\). Some of these generators may be linearly dependent, and this is even expected when the dimension \(k\) is big enough compared to \(n\). When \(\mathscr{C}\) is the dual of an alternant or Goppa code, some very structured linear dependencies appear among the generators of \(\mathscr{C}^{\star 2}\). In other words, quadratic relationships among the generators of \(\mathscr{C}\) are guaranteed to exist. The high-rate distinguisher relies on this fact, by counting the dimension of the space generated by such quadratic dependencies and comparing it with the dimension for a random code, if any. However, unless the rate of the Goppa code is very high, the dimensions of the two corresponding spaces are the same, even if the hidden structure is different. The matrix code serves as a tool to distinguish Goppa codes even when the two dimensions match.
We are going to describe more formally and accurately all these objects and constructions in the following sections. Here we just briefly highlight the two main contributions that the study of the matrix code of quadratic relationships in [13] led to. For simplicity, we will denote in this subsection the matrix code of quadratic relationships originated by an alternant/Goppa code by \(\mathscr{M}_{\mathscr{A}}\), ignoring that this depends on the chosen basis. We indicate by \(\mathscr{M}_{\mathscr{R}}\) the matrix code of quadratic relationships originated by a random code with the same parameters of the alternant/Goppa code. The contributions of [13] can be summarized as follows:
1. A general and simple procedure to distinguish alternant and Goppa codes of rate at least \(2/3\) from random ones has been introduced. This consists of defining a determinantal ideal, i.e. an ideal generated by minors of a given size. In the mentioned rate regime, \(\mathscr{M}_{\mathscr{R}}\) is not expected to contain nonzero matrices of rank at most \(3\). Therefore, the variety associated with the ideal of minors of size \(4\) is trivial. The opposite happens for \(\mathscr{M}_{\mathscr{A}}\): low-rank matrices are guaranteed to exist in it. Hence, solving the polynomial system of minors allows one to discriminate between random and alternant/Goppa codes.
In particular, a dedicated algebraic approach for finding low-rank elements in the matrix code in the case of characteristic \(2\) has been proposed. This is related to the notion of Pfaffians, which allows one to describe the determinantal ideal with polynomials of smaller degree, thus reducing the complexity of the system resolution. The behavior of a Gröbner basis computation has been studied in depth in the random case. This analysis does not provide the cost of finding low-rank matrices in \(\mathscr{M}_{\mathscr{A}}\), but ensures an upper bound on the distinguisher, that is, the number of operations needed to ensure that \(\mathscr{M}_{\mathscr{R}}\) does not contain rank \(2\) matrices. The complexity of this new distinguisher smoothly interpolates between polynomial (in the regime already distinguishable by [12, 13]) and superexponential for constant rates. In other words, it has subexponential complexity for families of alternant/Goppa codes whose dimension \(k\) grows between linear and quadratic with respect to the codimension \(n-k\). To the best of our knowledge, this has been the first improvement on this topic since the high-rate distinguisher first appeared in [11], with the only exception of [12, Chapter 5], whose impact is however very limited and does not affect binary Goppa codes.
2. A new polynomial-time structural attack on alternant and Goppa codes in the regime of parameters distinguishable by [11, 12] has been devised. This contribution extends the results of [10], where generic alternant codes with binary or ternary field size have been attacked. In particular, the algorithm from [13] in concert with [10] breaks all distinguishable alternant codes (for any field size) and all distinguishable Goppa codes whose Goppa polynomials have degree \(r<q-1\). The idea is that, under the constraints above, \(\mathscr{M}_{\mathscr{A}}\) has a block diagonal structure if expressed with respect to a (secret) canonical basis. Each block "corresponds" to a GRS code that is the image through one of the \(m\) Frobenius automorphisms of the GRS supercode. By sampling matrices in \(\mathscr{M}_{\mathscr{A}}\) of rank close to the maximum, such a GRS code can be isolated and reconstructed. Once a basis of it has been recovered, it is enough to apply any known attack on the GRS code (for instance the Sidelnikov-Shestakov attack [14] or the square code attack [11]) to retrieve a valid pair of support and multiplier for it. Such vectors represent a valid pair of support and multiplier for the original alternant code as well.
### Contributions and organization of this work
The contribution of the paper is twofold. In a nutshell:
1. We illustrate a new efficient key-recovery attack on binary Goppa codes with a Goppa polynomial degree equal to 2. The attack can be split into two parts. First, we perform algebraic cryptanalysis on the Pfaffian system introduced in [13], thus finding low-rank elements in the matrix code of quadratic relationships. Then we exploit the knowledge of such matrices to reconstruct the secret key of the Goppa code, i.e. a valid pair of support and Goppa polynomial. As we will explain, this attack is tailored specifically to Goppa codes and does not affect generic alternant codes of order 2. The previous result is made possible by an in-depth analysis of structured elements lying within the matrix code of quadratic relationships originated by an alternant or a Goppa code. Indeed, we prove that, when the Goppa polynomial degree is 2, the variety associated with the Pfaffian system is large. This fact is instrumental for the algebraic attack, as it shows that several variables can be specialized. Our investigation, however, is more general than the attack, as it covers any code order. Motivations and more details about it are explained in the following.
In [13], a set of structured and low-rank matrices in \(\mathscr{M}_{\mathscr{A}}\) has been described. These were only the matrices strictly necessary to ensure the existence of a distinguisher. However, they do not represent the totality of structured matrices in \(\mathscr{M}_{\mathscr{A}}\), and a complete description is certainly beneficial for improving the results of [13]. For instance, a better understanding of \(\mathscr{M}_{\mathscr{A}}\) could lead to a sharper upper bound on the distinguisher complexity. Indeed, [13] exploits only the fact that at some degree \(d\) (the degree of regularity), the Hilbert function of the Pfaffian ideal evaluates to 0 in the case of a random code. This value \(d\) is chosen to compute a truncated Gröbner basis and therefore the upper bound. However, it may happen that, even if the Hilbert function in the random case is positive at some degree \(d^{\prime}<d\), the Hilbert function in the alternant/Goppa case is still larger, and thus the degree \(d^{\prime}\) can be used to derive a better upper bound. Understanding the structure of \(\mathscr{M}_{\mathscr{A}}\) is necessary to find such a \(d^{\prime}\). Secondly, an accurate description of \(\mathscr{M}_{\mathscr{A}}\) can lead to an extension of the polynomial-time attack from [10] to cases where the matrix code is not block diagonal. The efficient attack on binary Goppa codes of degree \(2\) presented in this work fits exactly in the latter direction.
After recalling some preliminary notions and results in Section 2, in Section 3 we categorize a set of structured matrices lying in \(\mathscr{M}_{\mathscr{A}}\) into \(5\) types, including those from [10] but also enriching the classification with new ones. We carry out this analysis with respect to a canonical basis. For instance, we exhibit rank \(4\) matrices whose entries are null inside the main diagonal blocks. Some of them contain constant entries and exist for any alternant code. Others are specific to binary Goppa codes and involve the Goppa polynomial coefficients.
Thanks to this knowledge, we are able to prove that the variety associated with the Pfaffian ideal originated by a square-free binary Goppa polynomial of degree \(2\) is bigger than expected. In [10], an upper bound on the solution space dimension is given as \(2r-3\), where \(r\) is the alternant code degree. For \(r\geq 3\), this upper bound is tight and usually attained for rates \(\geq 2/3\). Here we show instead that the dimension of the variety is at least \(3\) for square-free binary Goppa polynomials of degree \(2\). This allows the specialization of \(3\) variables instead of \(1\) when solving the Pfaffian system.
Note that the Goppa codes used in Classic McEliece are defined over \(\mathbb{F}_{2}\) and the Goppa polynomial does not have multiple roots. Recall that alternant codes of degree \(2\) are usually excluded from the discussion because the distinguishers do not work: the square code construction does not reveal any collision of products of codewords. Binary Goppa codes are an exception to this, as revealed by Type \(3\) and Type \(5\) matrices.
Indeed, our new attack, presented in Section 4, is specific to binary Goppa codes and does not work on generic alternant codes. This goes in the opposite direction with respect to other recent advances in the algebraic cryptanalysis of unstructured alternant/Goppa codes [1], [10], where Goppa codes surprisingly seemed to resist better than generic alternant codes.
In particular, here we exploit the analysis made for the Pfaffian ideal to mount a polynomial-time attack on square-free binary Goppa codes of degree \(2\). Indeed, this attack adapts the efficient attack from [10], but samples rank \(2\) matrices instead of almost full-rank ones.
This specific focus has been partially motivated by some recent key-recovery challenges for the McEliece scheme2, which we will call "TII challenges" from now on. We have been able to break all TII challenges using a Goppa polynomial of degree \(r=2\) and length \(n>3rm-3\), where \(m\) is the field extension degree, within a few seconds, as can be consulted at [https://www.herox.com/TIIMcElieceChallenges/leaderboard](https://www.herox.com/TIIMcElieceChallenges/leaderboard). We provide a table with the broken instances. Notably, one of them has a claimed bit complexity of \(\lambda=68\) (computed with respect to the Support Splitting Algorithm [12]).
In Section 5, we address and solve a common issue arising from the attacks of Section 4 and [12], but also from [13]. When the attacked code is a Goppa code, recovering a pair of support and multiplier does not necessarily directly provide a valid Goppa polynomial. In particular, the multiplier is not the inverse of the component-wise evaluation of the support through a degree \(r\) polynomial. We can thus distinguish between alternant and "Goppa code representations" of a Goppa code. This is not just a matter of form: if the attacker wants to correct errors above the \(r/2\) threshold, the Goppa code representation is needed. We show under which circumstances a Goppa code representation is obtained and explain how to move from a generic alternant representation to a Goppa one. We remark that this method is not limited to degree \(2\) but works for any Goppa polynomial degree. This is also useful for the TII challenges, where a pair of support and Goppa polynomial is explicitly required.
## 2. Preliminaries
### Notation
#### General notation
The closed integer interval between \(a\) and \(b\) is denoted with \(\llbracket a,b\rrbracket\).
#### Finite fields
We denote by \(\mathbb{K}\) a generic field and by \(\overline{\mathbb{K}}\) its algebraic closure. Instead, \(\mathbb{F}\) stands for a generic finite field and \(\mathbb{F}_{q}\) for the finite field of size a prime power \(q\). We will often consider the finite field extension \(\mathbb{F}_{q^{m}}/\mathbb{F}_{q}\), where \(\mathbb{F}_{q^{m}}\) is the finite field with \(q^{m}\) elements, for some positive integer \(m\).
#### Vectors and matrices
Vectors are indicated by lowercase bold letters \(\boldsymbol{x}\) and matrices by uppercase bold letters \(\boldsymbol{M}\). By convention, vector coordinates are indexed starting from \(1\), but we will write matrix blocks using indexes that start from \(0\). We denote the component-wise image with respect to a vector \(\boldsymbol{x}=(x_{i})_{1\leqslant i\leqslant n}\in\mathbb{F}^{n}\) of a function \(f\) with domain \(\mathbb{F}\) by using the expression \(f(\boldsymbol{x})\), i.e. \(f(\boldsymbol{x})=(f(x_{i}))_{1\leqslant i\leqslant n}\). In a similar manner, given \(\boldsymbol{x},\boldsymbol{y}\in\mathbb{F}^{n}\) and two positive integers \(a,b\), we denote by \(\boldsymbol{x}^{a}\boldsymbol{y}^{b}\) the vector \((x_{i}^{a}y_{i}^{b})_{1\leqslant i\leqslant n}\). Given a matrix \(\boldsymbol{M}=(m_{i,j})\in\mathbb{F}_{q^{m}}^{m\times n}\), we write \(\boldsymbol{M}^{(q)}=(m_{i,j}^{q})\), i.e. the matrix where the Frobenius automorphism \(a\mapsto a^{q}\) has been applied to all the entries. The set of \(k\times k\) symmetric matrices over \(\mathbb{F}\) is denoted by \(\mathbf{Sym}(k,\mathbb{F})\), whereas the corresponding set of skew-symmetric matrices is denoted by \(\mathbf{Skew}(k,\mathbb{F})\).
#### Vector spaces
The \(\mathbb{K}\)-linear space generated by \(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{m}\in\mathbb{K}^{n}\) is denoted by \(\left\langle\,\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{m}\,\right\rangle_{ \mathbb{K}}\). If \(\mathbb{K}=\mathbb{F}\) then \(\mathscr{C}=\left\langle\,\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{m}\, \right\rangle_{\mathbb{F}}\) is an \([n,k]\)-linear code, where \(k\) is the dimension as an \(\mathbb{F}\)-vector space.
#### Polynomial ideals
Polynomial ideals are indicated by calligraphic capital letters. Given the multivariate polynomials \(f_{1},\ldots,f_{m}\in\mathbb{K}[x_{1},\ldots,x_{n}]\), we denote by \(\left\langle f_{1},\ldots,f_{m}\right\rangle\) the polynomial ideal generated by them. The variety associated with a polynomial ideal \(\mathcal{I}\subseteq\mathbb{K}[x_{1},\ldots,x_{n}]\) is \(\boldsymbol{V}(\mathcal{I})=\{\boldsymbol{a}\in\overline{\mathbb{K}}^{n}\mid \forall f\in\mathcal{I},\;f(\boldsymbol{a})=0\}\).
### GRS and Goppa codes
We first recall the definition of GRS codes, a family of evaluation codes.
**Definition 1** (Generalized Reed-Solomon (GRS) code).: _Let \(\boldsymbol{x}=(x_{1},\ldots,x_{n})\in\mathbb{F}^{n}\) be a vector of pairwise distinct entries and \(\boldsymbol{y}=(y_{1},\ldots,y_{n})\in\mathbb{F}^{n}\) a vector of nonzero entries, where \(\mathbb{F}\) is a finite field. The generalized Reed-Solomon (GRS) code over \(\mathbb{F}\) of dimension \(k\) with support \(\boldsymbol{x}\) and multiplier \(\boldsymbol{y}\) is_
\[\mathbf{GRS}_{k}(\mathbf{x},\mathbf{y})\stackrel{{\text{def}}}{{=}}\{(y_{1 }P(x_{1}),\ldots,y_{n}P(x_{n}))\mid P\in\mathbb{F}[z],\deg P<k\}.\]
An alternant code is defined as the subfield subcode of a GRS code. Here we exploit the following proposition to define the former as the subfield subcode of the dual of a GRS code.
**Proposition 2**.: _[_13_, Theorem 4, p. 304]_ _Let \(\mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})\) be a GRS code of length \(n\). Its dual is also a GRS code. In particular \(\mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})^{\perp}=\mathbf{GRS}_{n-r}(\mathbf{x},\mathbf{y}^{ \perp}),\) with \(\mathbf{y}^{\perp}\stackrel{{\text{def}}}{{=}}\left(\frac{1}{\pi_{ \mathbf{x}}^{\prime}(x_{1})y_{1}},\ldots,\frac{1}{\pi_{\mathbf{x}}^{\prime}(x_{n})y_{n }}\right)\), where \(\pi_{\mathbf{x}}(z)\stackrel{{\text{def}}}{{=}}\prod_{i=1}^{n}(z-x_ {i})\) and \(\pi_{\mathbf{x}}^{\prime}\) is its derivative._
**Definition 3** (alternant code).: _Let \(n\leq q^{m}\), for some positive integer \(m\). Let \(\mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})\) be the GRS code over \(\mathbb{F}_{q^{m}}\) of dimension \(r\) with support \(\mathbf{x}\in\mathbb{F}_{q^{m}}^{n}\) and multiplier \(\mathbf{y}\in(\mathbb{F}_{q^{m}}^{*})^{n}\). The alternant code with support \(\mathbf{x}\), multiplier \(\mathbf{y}\) and degree \(r\) over \(\mathbb{F}_{q}\) is_
\[\mathscr{A}_{r}(\mathbf{x},\mathbf{y})\stackrel{{\text{def}}}{{=}} \mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})^{\perp}\cap\mathbb{F}_{q}^{n}=\mathbf{GRS}_{n-r }(\mathbf{x},\mathbf{y}^{\perp})\cap\mathbb{F}_{q}^{n}.\]
_The integer \(m\) is called extension degree of the alternant code._
A Goppa code is an alternant code where the support and multiplier are linked by a very particular relation.
**Definition 4** (Goppa code).: _Let \(\mathbf{x}\in\mathbb{F}_{q^{m}}^{n}\) be a support vector and \(\Gamma\in\mathbb{F}_{q^{m}}[z]\) a polynomial of degree \(r\) such that \(\Gamma(x_{i})\neq 0\) for all \(i\in\{1,\ldots,n\}\). The Goppa code of degree \(r\) with support \(\mathbf{x}\) and Goppa polynomial \(\Gamma\) is defined as \(\mathscr{G}(\mathbf{x},\Gamma)\stackrel{{\text{def}}}{{=}}\mathscr{A} _{r}(\mathbf{x},\mathbf{y}),\) where \(\mathbf{y}\stackrel{{\text{def}}}{{=}}\left(\frac{1}{\Gamma(x_{1})}, \ldots,\frac{1}{\Gamma(x_{n})}\right).\)_
**Theorem 5**.: _[_12_]_ _Let \(\mathscr{G}(\mathbf{x},\Gamma)\) be a binary Goppa code with a square-free Goppa polynomial \(\Gamma\) of degree \(r\). Then_
\[\mathscr{G}(\mathbf{x},\Gamma)=\mathscr{G}(\mathbf{x},\Gamma^{2})=\mathscr{A}_{2r}( \mathbf{x},\mathbf{y}),\]
_where \(y_{i}\stackrel{{\text{def}}}{{=}}\frac{1}{\Gamma(x_{i})^{2}}\) for all \(1\leq i\leq n\)._
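To make Definitions 1, 3 and 4 concrete, the following minimal Python sketch builds the multiplier \(\mathbf{y}=1/\Gamma(\mathbf{x})\) and the rows \(\mathbf{x}^{j}\mathbf{y}\), \(0\leq j<r\), which generate \(\mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})\) and hence form a parity-check matrix of the Goppa code over the extension field. The field, modulus and Goppa polynomial are our own illustrative choices; in particular \(\Gamma(z)=z^{2}+z+1\) is not irreducible over \(\mathbb{F}_{2^{4}}\), so its roots are simply excluded from the support, as Definition 4 allows.

```python
# A minimal sketch, assuming the toy field GF(2^4) with modulus x^4 + x + 1;
# elements are the integers 0..15, read as coefficient vectors over GF(2).
MOD, DEG = 0b10011, 4

def gf_mul(a, b):
    """Carry-less multiplication with on-the-fly reduction modulo the modulus."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & (1 << DEG):
            a ^= MOD
        b >>= 1
    return p

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 2**DEG - 2)   # Fermat: a^(q-2) = a^(-1) for nonzero a

def gf_eval(poly, a):
    """Horner evaluation; poly lists coefficients from degree 0 upwards."""
    acc = 0
    for c in reversed(poly):
        acc = gf_mul(acc, a) ^ c
    return acc

gamma = [1, 1, 1]                  # Gamma(z) = 1 + z + z^2, degree r = 2
support = [a for a in range(2**DEG) if gf_eval(gamma, a) != 0]
y = [gf_inv(gf_eval(gamma, a)) for a in support]   # y_i = 1 / Gamma(x_i)

# Rows x^j * y for j = 0..r-1 generate GRS_r(x, y) (Definition 1); by
# Definitions 3 and 4 they form a parity-check matrix of the Goppa code over
# the extension field, and a binary parity-check matrix follows by expanding
# every entry over GF(2)^m.
r = len(gamma) - 1
H = [[gf_mul(gf_pow(xi, j), yi) for xi, yi in zip(support, y)] for j in range(r)]
```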
### Product and square of codes
The notion of squares of codes is at the core of the high-rate distinguisher as presented in [14]. Given the _component-wise product_ of two vectors \(\mathbf{a},\mathbf{b}\in\mathbb{F}^{n}\)
\[\mathbf{a}\star\mathbf{b}\stackrel{{\text{def}}}{{=}}(a_{1}b_{1},\ldots, a_{n}b_{n}),\]
we define the component-wise (or Schur's) product of codes.
**Definition 6**.: _The component-wise product of codes \(\mathscr{C},\mathscr{D}\) over \(\mathbb{F}\) with the same length \(n\) is defined as_
\[\mathscr{C}\star\mathscr{D}\stackrel{{\text{def}}}{{=}}\left< \mathbf{c}\star\mathbf{d}\mid\mathbf{c}\in\mathscr{C},\mathbf{d}\in\mathscr{D}\right>_{ \mathbb{F}}.\]
_If \(\mathscr{C}=\mathscr{D}\), we call \(\mathscr{C}^{*2}\stackrel{{\text{def}}}{{=}}\mathscr{C}\star \mathscr{C}\) the square code of \(\mathscr{C}\)._
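Over \(\mathbb{F}_{2}\), the component-wise product of two codewords is just a bitwise AND, so \(\dim\mathscr{C}^{\star 2}\) can be computed directly from the \(\binom{k+1}{2}\) products of generators. The sketch below does this for a random binary code; the names, parameters and seed are ours. For a random code the dimension typically attains \(\min(n,\binom{k+1}{2})\), whereas duals of alternant/Goppa codes of suitable rate fall below this value, which is the essence of the high-rate distinguisher recalled above.

```python
from itertools import combinations_with_replacement
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of rows encoded as Python integer bit-masks."""
    pivots = {}                            # leading-bit position -> reduced row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = row
                break
            row ^= pivots[lead]
    return len(pivots)

def square_code_dim(gens):
    """dim of C*C: over GF(2) the Schur product of codewords is a bitwise AND,
    so the square code is spanned by the pairwise ANDs of the generators."""
    prods = [a & b for a, b in combinations_with_replacement(gens, 2)]
    return gf2_rank(prods)

rng = np.random.default_rng(3)
n, k = 60, 12
gens = [int("".join(map(str, rng.integers(0, 2, n))), 2) for _ in range(k)]
print(square_code_dim(gens))   # typically min(n, k(k+1)/2) = 60 for random codes
```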
### Extension of a code over a field extension
For some codes naturally defined over \(\mathbb{F}_{q}\), namely subfield subcodes, we will extensively consider their linear span over a field extension \(\mathbb{F}_{q^{m}}\). More formally, we define the extension of a code over a field extension (or extension of scalars) in the following way.
**Definition 7** (extension of a code over a field extension).: _Let \(\mathscr{C}\) be a linear code over \(\mathbb{F}_{q}\). We denote by \(\mathscr{C}_{\mathbb{F}_{q^{m}}}\) the \(\mathbb{F}_{q^{m}}\)-linear span of \(\mathscr{C}\) in \(\mathbb{F}_{q^{m}}^{n}\)._
If we apply this construction to the dual of an alternant code, we get
**Proposition 8**.: _[_1_]_ _Let \(\mathscr{A}_{r}(\mathbf{x},\mathbf{y})\) be an alternant code over \(\mathbb{F}_{q}\). Then \(\left(\mathscr{A}_{r}(\mathbf{x},\mathbf{y})^{\perp}\right)_{\mathbb{F}_{q^{m}}}= \sum_{j=0}^{m-1}\mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})^{(q^{j})}=\sum_{j=0}^{m-1} \mathbf{GRS}_{r}(\mathbf{x}^{q^{j}},\mathbf{y}^{q^{j}}).\)_
This is useful because it allows one to view the code generators as the component-wise evaluations of monomials, as is the case for the GRS codes \(\mathbf{GRS}_{r}(\mathbf{x}^{q^{j}},\mathbf{y}^{q^{j}})\). This perspective is motivated and made possible by the fact that the extension of scalars commutes with all the standard constructions in coding theory. We recall from [14] the properties that will be implicitly exploited in this work.
**Lemma 9** (from Lemma 2.22 and Lemma 2.23, [14]).: _Let \(\mathscr{C},\mathscr{C}^{\prime}\subseteq\mathbb{F}_{q}^{n}\) be two \(\mathbb{F}_{q}\)-linear codes. Then_
* _If_ \(\mathbf{G}\) _is a generator matrix for_ \(\mathscr{C}\) _over_ \(\mathbb{F}_{q}\)_, then it is also a generator matrix of_ \(\mathscr{C}_{\mathbb{F}_{q^{m}}}\) _over_ \(\mathbb{F}_{q^{m}}\)_._
* _If \(\mathbf{H}\) is a parity-check matrix for \(\mathscr{C}\) over \(\mathbb{F}_{q}\), then it is also a parity-check matrix of \(\mathscr{C}_{\mathbb{F}_{q^{m}}}\) over \(\mathbb{F}_{q^{m}}\); in particular, \((\mathscr{C}^{\perp})_{\mathbb{F}_{q^{m}}}=(\mathscr{C}_{\mathbb{F}_{q^{m}}})^{\perp}\subseteq\mathbb{F}_{q^{m}}^{n}\)._
* \(\mathscr{C}\subseteq\mathscr{C}^{\prime}\iff\mathscr{C}_{\mathbb{F}_{q^{m}}}\subseteq\mathscr{C}^{\prime}_{\mathbb{F}_{q^{m}}}\)_._
* \((\mathscr{C}+\mathscr{C}^{\prime})_{\mathbb{F}_{q^{m}}}=\mathscr{C}_{\mathbb{ F}_{q^{m}}}+\mathscr{C}_{\mathbb{F}_{q^{m}}}^{\prime}\)_._
* \((\mathscr{C}\cap\mathscr{C}^{\prime})_{\mathbb{F}_{q^{m}}}=\mathscr{C}_{ \mathbb{F}_{q^{m}}}\cap\mathscr{C}_{\mathbb{F}_{q^{m}}}^{\prime}\)_._
* \((\mathscr{C}\star\mathscr{C}^{\prime})_{\mathbb{F}_{q^{m}}}=\mathscr{C}_{ \mathbb{F}_{q^{m}}}\star\mathscr{C}_{\mathbb{F}_{q^{m}}}^{\prime}\)_._
### The matrix code of quadratic relationships
Exploiting Proposition 8, we define an ordered basis, which we call the **canonical basis**, \(\mathcal{A}\) of \(\left(\mathscr{A}_{r}(\mathbf{x},\mathbf{y})^{\perp}\right)_{\mathbb{F}_{q^{m}}}\):
\[\mathcal{A}\stackrel{{\rm def}}{{=}}(\mathbf{y},\mathbf{x}\mathbf{y},\dots, \mathbf{x}^{r-1}\mathbf{y},\dots,\mathbf{y}^{q^{m-1}},(\mathbf{x}\mathbf{y})^{q^{m-1}},\dots,(\mathbf{x }^{r-1}\mathbf{y})^{q^{m-1}}). \tag{1}\]
We remark that \(\mathcal{A}\) is an unknown basis from the attacker's point of view, as he/she does not have a priori access to a basis of a single GRS code \(\mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})^{q^{j}}\), and even less to the monomial basis. In the attack from [13], \(\mathcal{A}\) was indeed the secret basis to recover. We also denote by \(\mathcal{B}\) another (public) basis of \(\left(\mathscr{A}_{r}(\mathbf{x},\mathbf{y})^{\perp}\right)_{\mathbb{F}_{q^{m}}}\).
Observe that, for any \(0\leqslant a,b,c,d<r\) such that \(a+b=c+d\), we have \(\mathbf{x}^{a}\mathbf{y},\mathbf{x}^{b}\mathbf{y},\mathbf{x}^{c}\mathbf{y},\mathbf{x}^{d}\mathbf{y}\in \mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})\) and these codewords satisfy the following quadratic relationship:
\[(\mathbf{x}^{a}\mathbf{y})\star(\mathbf{x}^{b}\mathbf{y})=(\mathbf{x}^{c}\mathbf{y})\star(\mathbf{x}^{d} \mathbf{y}). \tag{2}\]
Proposition 8 explains why the existence of these quadratic relationships is preserved when considering subfield subcodes of GRS codes, i.e. alternant (including Goppa) codes. Moreover, other structured relations arise from the subfield subcode construction, because codewords from different \(\mathbf{GRS}_{r}(\mathbf{x},\mathbf{y})^{q^{j}}\)'s may be involved. The following definition captures the fact that in general quadratic relationships form a vector space.
**Definition 10** (Code of quadratic relationships, [14]).: _Let \(\mathscr{C}\) be an \([n,k]\) linear code over \(\mathbb{F}\) and let \(\mathcal{V}=\{\mathbf{v}_{1},\dots,\mathbf{v}_{k}\}\) be a basis of \(\mathscr{C}\). The **code of quadratic relationships between the Schur's products with respect to \(\mathcal{V}\)** is_
\[\mathscr{C}_{\mathrm{rel}}(\mathcal{V})\stackrel{{\text{def}}}{{= }}\{\mathbf{c}=(c_{i,j})_{1\leqslant i\leqslant j\leqslant k}\mid\sum_{i\leqslant j }c_{i,j}\mathbf{v}_{i}\star\mathbf{v}_{j}=0\}\subseteq\mathbb{F}^{\binom{k+1}{2}}.\]
By considering codewords corresponding to the identities (2), we can therefore predict the shape of low (Hamming) weight codewords in the code of quadratic relationships \(\mathscr{C}_{\mathrm{rel}}(\mathcal{A})\), where \(\mathcal{A}\) is given in (1). This fact has potential interest for cryptanalysis; however, the same does not happen in general for \(\mathscr{C}_{\mathrm{rel}}(\mathcal{B})\), where \(\mathcal{B}\) is another basis of \(\left(\mathscr{A}_{r}(\mathbf{x},\mathbf{y})^{\perp}\right)_{\mathbb{F}_{q^{m}}}\). Not only do we not know the shape of low-weight codewords in \(\mathscr{C}_{\mathrm{rel}}(\mathcal{B})\), but these are not even guaranteed to exist. This suggests that the Hamming distance is not the right metric to look at and that another perspective is required.
Note that any element \(\mathbf{c}=(c_{i,j})_{1\leqslant i\leqslant j\leqslant k}\in\mathscr{C}_{\mathrm{ rel}}(\mathcal{V})\) defines a quadratic form as
\[Q_{\mathbf{c}}(x_{1},\cdots,x_{k})=\sum_{i\leqslant j}c_{i,j}x_{i}x_{j}.\]
For instance, the relationship in (2) is associated with the low-rank quadratic form
\[x_{a+1}x_{b+1}-x_{c+1}x_{d+1}.\]
We can therefore represent the elements of \(\mathscr{C}_{\mathrm{rel}}(\mathcal{V})\) as matrices corresponding to the bilinear map given by the polar form of the quadratic form, i.e. the matrix \(\mathbf{M}_{\mathbf{c}}\) corresponding to \(\mathbf{c}\in\mathscr{C}_{\mathrm{rel}}(\mathcal{V})\) that satisfies for all \(\mathbf{w}\) and \(\mathbf{z}\) in \(\mathbb{F}_{q^{m}}^{k}\)
\[\mathbf{w}\mathbf{M}_{\mathbf{c}}\mathbf{z}^{\intercal}=Q_{\mathbf{c}}(\mathbf{w}+\mathbf{z})-Q_{\mathbf{c}}( \mathbf{w})-Q_{\mathbf{c}}(\mathbf{z}). \tag{3}\]
Note that \(\mathbf{M}_{\mathbf{c}}\) is symmetric in odd characteristic, whereas it is skew-symmetric in characteristic 2. This definition provides a matrix perspective on the space of quadratic relationships that fits in both the odd characteristic and characteristic 2 cases:
**Definition 11** (Matrix code of relationships, [14]).: _Let \(\mathscr{C}\) be an \([n,k]\) linear code over \(\mathbb{F}\) and let \(\mathcal{V}=\{\mathbf{v}_{1},\dots,\mathbf{v}_{k}\}\) be a basis of \(\mathscr{C}\). The **matrix code of relationships between the Schur's products with respect to \(\mathcal{V}\)** is_
\[\mathscr{C}_{\mathrm{mat}}(\mathcal{V})\stackrel{{\text{def}}}{{= }}\{\mathbf{M}_{\mathbf{c}}=(m_{i,j})_{\begin{subarray}{c}1\leqslant i\leqslant k\\ 1\leqslant j\leqslant k\end{subarray}}\mid\mathbf{c}=(c_{i,j})_{1\leqslant i \leqslant j\leqslant k}\in\mathscr{C}_{\mathrm{rel}}(\mathcal{V})\}\subseteq \mathbf{Sym}(k,\mathbb{F}),\]
_where \(\mathbf{M}_{\mathbf{c}}\) is defined as \(\begin{cases}m_{i,j}\stackrel{{\text{def}}}{{=}}m_{j,i} \stackrel{{\text{def}}}{{=}}c_{i,j},&1\leqslant i<j\leqslant k,\\ m_{i,i}\stackrel{{\text{def}}}{{=}}2c_{i,i},&1\leqslant i \leqslant k.\end{cases}\)_
The turning point of this approach is that, regardless of the chosen basis, the matrix code of relationships contains low-weight elements when computed with respect to the usual rank metric
\[d(\mathbf{M}_{1},\mathbf{M}_{2})\stackrel{{\text{def}}}{{=}}\mathbf{Rank }(\mathbf{M}_{1}-\mathbf{M}_{2}).\]
Indeed we have
**Proposition 12** ([14]).: _Let \(\mathcal{A}\) and \(\mathcal{B}\) be two bases of the same \([n,k]\) \(\mathbb{F}\)-linear code \(\mathscr{C}\). Then \(\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\) and \(\mathscr{C}_{\mathrm{mat}}(\mathcal{B})\) are congruent matrix codes, i.e. there exists \(\mathbf{P}\in\mathbf{GL}_{k}(\mathbb{F})\) such that_
\[\mathscr{C}_{\mathrm{mat}}(\mathcal{A})=\mathbf{P}^{\intercal}\mathscr{C}_{\mathrm{ mat}}(\mathcal{B})\mathbf{P}. \tag{4}\]
_The matrix \(\mathbf{P}\) coincides with the change of basis matrix between \(\mathcal{A}\) and \(\mathcal{B}\)._
The proposition above readily implies that the weight distributions of \(\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\) and \(\mathscr{C}_{\mathrm{mat}}(\mathcal{B})\) (again with respect to the rank distance) are the same. Another trivial invariant is the matrix code dimension:
**Proposition 13** ([13]).: _Let \(\mathscr{C}\subseteq\mathbb{F}^{n}\) be an \([n,k]\) linear code with ordered basis \(\mathcal{V}\). Then_
\[\dim_{\mathbb{F}}\mathscr{C}_{mat}(\mathcal{V})=\dim_{\mathbb{F}}\mathscr{C}_ {rel}(\mathcal{V})=\binom{k+1}{2}-\dim_{\mathbb{F}}\mathscr{C}^{\star 2}.\]
If \(\mathcal{V}\) is a basis of \(\mathscr{A}_{r}(\mathbf{x},\mathbf{y})_{\mathbb{F}_{q^{m}}}^{\perp}\), then \(\mathscr{C}_{\mathrm{mat}}(\mathcal{V})\) contains rank-3 matrices in odd characteristic and rank-2 matrices in characteristic 2, obtained from (2) by taking \(c=d\). These elements are categorized as Type 1 matrices in the following section.
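Proposition 13 also suggests a direct way to compute \(\dim\mathscr{C}_{\mathrm{rel}}(\mathcal{V})\): the relations are the kernel of the linear map sending coefficients \((c_{i,j})\) to \(\sum_{i\leqslant j}c_{i,j}\mathbf{v}_{i}\star\mathbf{v}_{j}\), whose image is \(\mathscr{C}^{\star 2}\). The following sketch, a companion to the earlier square-code sketch, checks this over \(\mathbb{F}_{2}\) for brevity (the paper works over \(\mathbb{F}_{q^{m}}\)); names, parameters and seed are ours.

```python
from itertools import combinations_with_replacement
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of rows encoded as Python integer bit-masks."""
    pivots = {}                            # leading-bit position -> reduced row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = row
                break
            row ^= pivots[lead]
    return len(pivots)

def relationship_code_dim(gens):
    """dim of C_rel(V) (Definition 10) over GF(2) via rank-nullity: relations
    are the kernel of c -> sum_{i<=j} c_{i,j} v_i * v_j, whose image is the
    square code, giving C(k+1, 2) - dim C^(*2) as in Proposition 13."""
    prods = [a & b for a, b in combinations_with_replacement(gens, 2)]
    return len(prods) - gf2_rank(prods)

rng = np.random.default_rng(4)
n, k = 40, 12                              # C(13, 2) = 78 generator products
gens = [int("".join(map(str, rng.integers(0, 2, n))), 2) for _ in range(k)]
print(relationship_code_dim(gens))         # typically 78 - 40 = 38 at random
```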
In [13], the question of whether matrices of such low rank are expected in \(\mathscr{C}_{\mathrm{mat}}(\mathcal{V})\), where \(\mathcal{V}\) is the basis of a random \([n,rm]\) \(\mathbb{F}_{q^{m}}\)-linear code, has been addressed. By computing the Gilbert-Varshamov distance with respect to the space of (skew-)symmetric matrices, it was shown in [13, Proposition 10] that matrices of rank \(\leqslant 3\) belong to \(\mathscr{C}_{\mathrm{mat}}(\mathcal{V})\) with non-negligible probability iff
\[n\leqslant 3rm-3. \tag{5}\]
Therefore, it is possible to set up an algebraic modeling for the (skew-)symmetric variant of the minrank problem with target rank 2 or 3.
**Problem 14** ((Skew-)Symmetric MinRank problem for rank \(r\)).: _Let \(\mathbf{M}_{1},\cdots,\mathbf{M}_{K}\) be (skew-)symmetric matrices in \(\mathbb{F}^{N\times N}\). Find an \(\mathbf{M}\in\langle\,\mathbf{M}_{1},\cdots,\mathbf{M}_{K}\,\rangle_{\mathbb{F}}\) of rank \(r\)._
We remark that:
* if one is able to prove that the minrank instance has no nonzero solution, then it means that the instance is not an alternant (or Goppa) code, which leads to a distinguisher;
* all the parameters used in Classic McEliece are such that \(n>3rm-3\).
From these two observations, a special minrank modeling for the case of characteristic 2 has been introduced in [13] and a complexity estimate of the corresponding distinguisher has been determined.
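As a quick sanity check of the second observation, the snippet below evaluates condition (5) on the Classic McEliece parameter sets; the parameter values \((n,m,r)\) are quoted from the Classic McEliece specification.

```python
# Parameter values (n, m, r) quoted from the Classic McEliece specification.
params = {"mceliece348864": (3488, 12, 64), "mceliece460896": (4608, 13, 96),
          "mceliece6688128": (6688, 13, 128), "mceliece6960119": (6960, 13, 119),
          "mceliece8192128": (8192, 13, 128)}
for name, (n, m, r) in params.items():
    # condition (5) asks for n <= 3rm - 3; it fails for every parameter set,
    # so rank <= 3 matrices are not expected in the random case
    print(name, n, "<=", 3 * r * m - 3, ":", n <= 3 * r * m - 3)
```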
In the rest of the paper, we deepen the study of the matrix code originated by a generic alternant code or a Goppa code, describing and categorizing the structured matrices lying in it. Thanks to this characterization, we also present an adaptation of a polynomial-time attack from [13] to binary Goppa codes of degree 2.
## 3. Description of structured matrices in the code of quadratic relationships
Recall that \(\mathcal{A}\) is the canonical basis of \(\left(\mathscr{A}_{r}(\mathbf{x},\mathbf{y})^{\perp}\right)_{\mathbb{F}_{q^{m}}}\). We will study the matrix code \(\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\), whose elements have a special structure and are easier to treat. This will be instrumental in describing a set of rank-2 matrices in the matrix code of quadratic relationships for a binary Goppa code of degree \(r=2\), whose dimension as a variety is 3:
**Proposition 15**.: _Let \(\mathscr{C}_{mat}\) be the matrix code of quadratic relationships corresponding to the dual of an \([n,n-2m]\) binary Goppa code in the extension field with a square-free Goppa polynomial of degree \(2\). Let \(\mathcal{P}_{2}^{+}(\boldsymbol{M})\) be the corresponding Pfaffian ideal. Then \(\dim\boldsymbol{V}(\mathcal{P}_{2}^{+}(\boldsymbol{M}))\geqslant 3\)._
This proposition will be better explained and proved afterwards.
We remark again that the following description is not limited to Goppa codes of degree \(r=2\), but also applies to alternant codes of higher order. Let us now consider the following block shape for an element \(\boldsymbol{A}\in\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\)
\[\boldsymbol{A}=\begin{bmatrix}\boldsymbol{A}_{0,0}&\boldsymbol{A}_{1,0}{}^{ \intercal}&\cdots&\boldsymbol{A}_{m-1,0}{}^{\intercal}\\ \boldsymbol{A}_{1,0}&\boldsymbol{A}_{1,1}&\cdots&\boldsymbol{A}_{m-1,1}{}^{ \intercal}\\ \vdots&\vdots&\ddots&\vdots\\ \boldsymbol{A}_{m-1,0}&\boldsymbol{A}_{m-1,1}&\cdots&\boldsymbol{A}_{m-1,m- 1}\end{bmatrix},\]
with \(\boldsymbol{A}_{l,l}\in\mathbf{Sym}(r,\mathbb{F}_{q^{m}})\), and give a description of the \(r\times r\) blocks \(\boldsymbol{A}_{i,j}\) below and on the main diagonal, i.e. for \(i\geqslant j\), for the matrix \(\boldsymbol{A}\) associated with a given algebraic relation. The blocks above the main diagonal can be easily obtained by transposition. In particular, we can identify identities that correspond to sparse and constant low-rank matrix codewords. We split such matrices into several types.
**Type 1.** Let \(a+b=2c\) be even and \(0\leqslant a<c<b\leqslant r-1\). The quadratic relation
\[(\boldsymbol{x}^{a}\boldsymbol{y})^{q^{l}}\star(\boldsymbol{x}^{b}\boldsymbol {y})^{q^{l}}=(\boldsymbol{x}^{c}\boldsymbol{y})^{q^{l}}\star(\boldsymbol{x}^{ c}\boldsymbol{y})^{q^{l}}\]
translates into \(\boldsymbol{a}_{a}^{q^{l}}\star\boldsymbol{a}_{b}^{q^{l}}-\boldsymbol{a}_{c}^{q^{l}}\star\boldsymbol{a}_{c}^{q^{l}}=0\) with respect to the basis \(\mathcal{A}\) and thus it is associated with the matrix \(\boldsymbol{A}\) such that
* in field characteristic \(\neq 2\): the restriction of \(\boldsymbol{A}_{l,l}\) to the rows and columns indexed by \(a,c,b\) is \[\begin{bmatrix}0&0&1\\ 0&-2&0\\ 1&0&0\end{bmatrix},\] all other entries of \(\boldsymbol{A}_{l,l}\) are zero, and \(\boldsymbol{A}_{i,j}=\boldsymbol{0}_{r\times r}\) for \(0\leqslant j\leqslant i\leqslant m-1,(i,j)\neq(l,l)\).
* in field characteristic \(2\): the restriction of \(\boldsymbol{A}_{l,l}\) to the rows and columns indexed by \(a,c,b\) is \[\begin{bmatrix}0&0&1\\ 0&0&0\\ 1&0&0\end{bmatrix},\] all other entries of \(\boldsymbol{A}_{l,l}\) are zero, and \(\boldsymbol{A}_{i,j}=\boldsymbol{0}_{r\times r}\) for \(0\leqslant j\leqslant i\leqslant m-1,(i,j)\neq(l,l)\).
Therefore \(\mathbf{Rank}(\boldsymbol{A})=\mathbf{Rank}(\boldsymbol{A}_{l,l})=\begin{cases} 3&\text{if field characteristic }\neq 2\\ 2&\text{if field characteristic }=2\end{cases}\).
**Type 2.** Let \(0\leqslant a<c<d<b\leqslant r-1\) and \(a+b=c+d\). The quadratic relation
\[(\boldsymbol{x}^{a}\boldsymbol{y})^{q^{l}}\star(\boldsymbol{x}^{b}\boldsymbol {y})^{q^{l}}=(\boldsymbol{x}^{c}\boldsymbol{y})^{q^{l}}\star(\boldsymbol{x} ^{d}\boldsymbol{y})^{q^{l}}\]
translates into \(\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{l}}-\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q^{l} }=0\) with respect to the basis \(\mathcal{A}\) and thus it is associated with the matrix \(\mathbf{A}\) such that
the restriction of \(\mathbf{A}_{l,l}\) to the rows and columns indexed by \(a,c,d,b\) is \[\begin{bmatrix}0&0&0&1\\ 0&0&-1&0\\ 0&-1&0&0\\ 1&0&0&0\end{bmatrix},\] all other entries of \(\mathbf{A}_{l,l}\) are zero, and \(\mathbf{A}_{i,j}=\mathbf{0}_{r\times r}\) for \(0\leqslant j\leqslant i\leqslant m-1,(i,j)\neq(l,l)\).
Therefore \(\mathbf{Rank}(\mathbf{A})=\mathbf{Rank}(\mathbf{A}_{l,l})=4\).
_Remark 16_.: For \(a+b=c+d\) even, Type 2 matrices can be derived as linear combinations of two Type 1 matrices thanks to the equality
\[\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{l}}-\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q^{l}}=(\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{l}}-\mathbf{a}_{\frac{a+b}{2}}^{q^{l}}\star\mathbf{a}_{\frac{a+b}{2}}^{q^{l}})-(\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q^{l}}-\mathbf{a}_{\frac{c+d}{2}}^{q^{l}}\star\mathbf{a}_{\frac{c+d}{2}}^{q^{l}})\]
that holds under the conditions above.
Moreover, we can obtain linear dependencies within the set of Type 2 matrices. Indeed let \(a+b=c+d=e+f\) with \(0\leqslant a<c<e<f<d<b\leqslant r-1\). Then the equality
\[\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{l}}-\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q^{l}}=(\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{l}}-\mathbf{a}_{e}^{q^{l}}\star\mathbf{a}_{f}^{q^{l}})-(\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q^{l}}-\mathbf{a}_{e}^{q^{l}}\star\mathbf{a}_{f}^{q^{l}})\]
induces a linear dependency on the matrices.
**Type 3.** Let \(q=2\) be the field size of a Goppa code with a square-free Goppa polynomial and \(0\leqslant a<b\leqslant r-1\). In the quadratic relation
\[(\mathbf{x}^{a}\mathbf{y})^{2^{l}}\star(\mathbf{x}^{b}\mathbf{y})^{2^{l}}=(\mathbf{x}^{a+b}\mathbf{y} ^{2})^{2^{l-1}}\star(\mathbf{x}^{a+b}\mathbf{y}^{2})^{2^{l-1}},\]
the term \((\mathbf{x}^{a}\mathbf{y})^{2^{l}}\star(\mathbf{x}^{b}\mathbf{y})^{2^{l}}\) translates into \(\mathbf{a}_{a}^{2^{l}}\star\mathbf{a}_{b}^{2^{l}}\) with respect to the basis \(\mathcal{A}\), while \((\mathbf{x}^{a+b}\mathbf{y}^{2})^{2^{l-1}}\star(\mathbf{x}^{a+b}\mathbf{y}^{2})^{2^{l-1}}\) is the square of a codeword in \(\mathscr{G}(\mathbf{x},\Gamma)^{\perp}_{\mathbb{F}_{q^{m}}}\). Indeed
\[(\mathbf{x}^{a+b}\mathbf{y}^{2})^{2^{l-1}}\in\mathbf{GRS}_{2r}(\mathbf{x},\mathbf{y}^{2})^{(2^{l-1})}\subseteq\mathscr{A}_{2r}(\mathbf{x},\mathbf{y}^{2})^{\perp}_{\mathbb{F}_{q^{m}}}=\mathscr{A}_{2r}(\mathbf{x},1/\Gamma(\mathbf{x})^{2})^{\perp}_{\mathbb{F}_{q^{m}}}=\mathscr{G}(\mathbf{x},\Gamma)^{\perp}_{\mathbb{F}_{q^{m}}}\]
with the last equality following from Theorem 5. Therefore, as shown in [13, Proposition 9], this quadratic relation is associated with the matrix \(\mathbf{A}\) such that
\[\mathbf{A}_{l,l}=\begin{bmatrix}\mathbf{0}&1\\ 1&\mathbf{0}\end{bmatrix}\begin{matrix}a\\ b\end{matrix},\qquad\text{and }\mathbf{A}_{i,j}=\mathbf{0}_{r\times r}\ \text{ for }\ 0\leqslant j\leqslant i\leqslant m-1,\ (i,j)\neq(l,l),\]
where the rows and columns displayed for \(\mathbf{A}_{l,l}\) are those indexed by \(a\) and \(b\).
Therefore \(\mathbf{Rank}(\mathbf{A})=\mathbf{Rank}(\mathbf{A}_{l,l})=2\).
_Remark 17_.: Type 1 and Type 3 matrices are the only ones described in [13] and are exactly those of rank \(\leqslant 3\). In the case of field characteristic 2, these are indeed the targets of the Pfaffian modeling. Note that, in the case of a square-free binary Goppa code, Type 1 matrices are included in Type 3 matrices and
correspond to the choice \(a+b\) even. Moreover, Type 2 matrices can be obtained as linear combinations of two Type 3 matrices thanks to the equality
\[\mathbf{a}_{a}^{2^{l}}\star\mathbf{a}_{b}^{2^{l}}-\mathbf{a}_{c}^{2^{l}}\star\mathbf{a}_{d}^{2^{l }}=(\mathbf{a}_{a}^{2^{l}}\star\mathbf{a}_{b}^{2^{l}}-\mathbf{a}_{a+b}^{2^{l-1}}\star\mathbf{a}_ {a+b}^{2^{l-1}})-(\mathbf{a}_{c}^{2^{l}}\star\mathbf{a}_{d}^{2^{l}}-\mathbf{a}_{c+d}^{2^{l- 1}}\star\mathbf{a}_{c+d}^{2^{l-1}})=0\]
that holds whenever \(a+b=c+d\).
**Type 4.** Let \(a<c\), \(b>d\), \((u-l)\mod m\leqslant e_{\mathscr{A}}\stackrel{{\mathrm{def}}}{{=}} \left\lfloor\log_{q}(r+1)\right\rfloor\) and \(aq^{l}+bq^{u}=cq^{l}+dq^{u}\). The quadratic relation
\[(\mathbf{x}^{a}\mathbf{y})^{q^{l}}\star(\mathbf{x}^{b}\mathbf{y})^{q^{u}}=(\mathbf{x}^{c}\mathbf{y})^ {q^{l}}\star(\mathbf{x}^{d}\mathbf{y})^{q^{u}}\]
translates into \(\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{u}}-\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q ^{u}}=0\) with respect to the basis \(\mathcal{A}\) and thus it is associated with the matrix \(\mathbf{A}\) such that
* if \(u>l\): the only nonzero entries of \(\mathbf{A}\) lie in the block \(\mathbf{A}_{u,l}\), namely \((\mathbf{A}_{u,l})_{b,a}=1\) and \((\mathbf{A}_{u,l})_{d,c}=-1\), and \(\mathbf{A}_{i,j}=\mathbf{0}_{r\times r}\ \text{ for }\ 0\leqslant j\leqslant i\leqslant m-1,\ (i,j)\neq(u,l)\).
* if \(l>u\): the only nonzero entries of \(\mathbf{A}\) lie in the block \(\mathbf{A}_{l,u}=\mathbf{A}_{u,l}{}^{\intercal}\), namely \((\mathbf{A}_{l,u})_{a,b}=1\) and \((\mathbf{A}_{l,u})_{c,d}=-1\), and \(\mathbf{A}_{i,j}=\mathbf{0}_{r\times r}\ \text{ for }\ 0\leqslant j\leqslant i\leqslant m-1,\ (i,j)\neq(l,u)\).
Therefore \(\mathbf{Rank}(\mathbf{A})=\mathbf{Rank}(\mathbf{A}_{u,l})+\mathbf{Rank}(\mathbf{A}_{u,l}^{\intercal})=4\). Unlike the previous types, these matrices are not block diagonal.
_Remark 18_.: We can obtain linear dependencies within the set of Type 4 matrices. Indeed let \(aq^{l}+bq^{u}=cq^{l}+dq^{u}=eq^{l}+fq^{u}\) with \(a<c<e,f<d<b\). Then the equality
\[\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{u}}-\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q ^{u}}=(\mathbf{a}_{a}^{q^{l}}\star\mathbf{a}_{b}^{q^{u}}-\mathbf{a}_{e}^{q^{l}}\star\mathbf{a}_ {f}^{q^{u}})-(\mathbf{a}_{c}^{q^{l}}\star\mathbf{a}_{d}^{q^{u}}-\mathbf{a}_{e}^{q^{l}}\star \mathbf{a}_{f}^{q^{u}})\]
induces a linear dependency on the matrices.
**Type 5.** Let \(q=2\) be the field size of a Goppa code with a square-free Goppa polynomial \(\Gamma(z)=\sum_{a=0}^{r}\gamma_{a}z^{a}\). Fix \(u,l\in\llbracket 0,m-1\rrbracket\) such that \(1\leqslant(u-l)\mod m\leqslant v\stackrel{{\mathrm{def}}}{{=}} \left\lfloor\log_{2}r\right\rfloor\) and let \(s\stackrel{{\mathrm{def}}}{{=}}2^{(u-l-1)\mod m}w\), for some \(w\in\llbracket 0,2r-2\rrbracket\). For all \(a\in\llbracket 0,r-1\rrbracket\), we define
\[B_{a}\stackrel{{\mathrm{def}}}{{=}}\left\llbracket\left\lceil\frac{a+s-r+1}{2^{(u-l)\bmod m}}\right\rceil,\left\lfloor\frac{a+s}{2^{(u-l)\bmod m}}\right\rfloor\right\rrbracket\cap\llbracket 0,r-1\rrbracket.\]
For all \(a\in\llbracket 0,r\rrbracket\), let us also define \(c_{(a,b)}=a+s-2^{(u-l)\mod m}b\) and \(\gamma_{c_{(a,b)},b}\in\mathbb{F}_{q^{m}}\), \(b\in B_{a}\), such that \(\sum_{b\in B_{a}}\gamma_{c_{(a,b)},b}=\gamma_{a}\). In the quadratic relation
\[\begin{aligned}\sum_{a=0}^{r}\left(\sum_{b\in B_{a}}\gamma_{c_{(a,b)},b}^{2^{l}}(\boldsymbol{x}^{c_{(a,b)}}\boldsymbol{y})^{2^{l}}\star(\boldsymbol{x}^{b}\boldsymbol{y})^{2^{u}}\right)&=\left(\sum_{a=0}^{r}\Big(\sum_{b\in B_{a}}\gamma_{c_{(a,b)},b}\Big)\boldsymbol{x}^{a}\right)^{2^{l}}\boldsymbol{x}^{2^{l}s}\,\boldsymbol{y}^{2^{u}+2^{l}}\\ &=\boldsymbol{x}^{2^{l}s}\,\boldsymbol{y}^{2^{u}}\\ &=(\boldsymbol{x}^{w}\boldsymbol{y}^{2})^{2^{(u-2)\bmod m}}\star(\boldsymbol{x}^{w}\boldsymbol{y}^{2})^{2^{(u-2)\bmod m}},\end{aligned}\]
the term \(\sum_{a=0}^{r}\left(\sum_{b\in B_{a}}\gamma_{c_{(a,b)},b}^{2^{l}}( \boldsymbol{x}^{c_{(a,b)}}\boldsymbol{y})^{2^{l}}\star(\boldsymbol{x}^{b} \boldsymbol{y})^{2^{u}}\right)\) translates into
\(\sum_{a=0}^{r}\left(\sum_{b\in B_{a}}\gamma_{c_{(a,b)},b}^{2^{l}}\boldsymbol{ a}_{c_{(a,b)}}^{2^{l}}\star\boldsymbol{a}_{b}^{2^{u}}\right)\) with respect to the basis \(\mathcal{A}\), while \((\boldsymbol{x}^{w}\boldsymbol{y}^{2})^{2^{(u-2)\mod m}}\star(\boldsymbol{x} ^{w}\boldsymbol{y}^{2})^{2^{(u-2)\mod m}}\) is the square of a codeword in \(\mathscr{G}(\boldsymbol{x},\Gamma)_{\mathbb{F}_{q^{m}}}^{\perp}\) because of Theorem 5. Therefore, analogously to Type 3 matrices, whenever \(\forall a\in\llbracket 0,r\rrbracket,B_{a}\neq\emptyset\), this quadratic relation is associated with the matrix \(\boldsymbol{A}\) such that
* if \(u>l\): the only nonzero block of \(\boldsymbol{A}\) is \(\boldsymbol{A}_{u,l}\), which has entry \(\gamma_{c_{(a,b)},b}^{2^{l}}\) in row \(b\) and column \(c_{(a,b)}\) for every \(a\in\llbracket 0,r\rrbracket\) and \(b\in B_{a}\), and zeroes elsewhere, i.e. \(\boldsymbol{A}_{i,j}=\boldsymbol{0}_{r\times r}\) for \(0\leqslant j\leqslant i\leqslant m-1\), \((i,j)\neq(u,l)\);
* if \(l>u\): the transposed statement holds, with the nonzero entries placed in the block \(\boldsymbol{A}_{l,u}\).
**Example 19**.: _Let \(r=6\), \(u=1\), \(l=0\) and let us take \(s=1\). The matrix \(\mathbf{A}\) such that_
\[\mathbf{A}_{1,0}=\begin{bmatrix}0&\gamma_{0}&\gamma_{1}&\gamma_{2}&\gamma_{3}& \gamma_{4}\\ 0&0&0&0&\gamma_{5}&\gamma_{6}\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{bmatrix},\qquad\text{and}\ \mathbf{A}_{i,j}=\mathbf{0}_{r\times r}\ \text{ for }\ 0\leq j\leq i\leq m-1,(i,j)\neq(u,l).\]
_belongs to \(\mathscr{C}_{\text{mat}}(\mathcal{A})\) and has rank 4._
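The interplay between the sets \(B_{a}\), the column indices \(c_{(a,b)}\), and the block entries can be reproduced mechanically. The short Python sketch below is our own illustration: it rebuilds the nonzero block of Example 19 from the Type 5 recipe, writing the entry \(\gamma_{a}\) as the string `ga` and putting, as in the example, all of \(\gamma_{a}\) on the first admissible \(b\).

```python
from math import ceil

# Example 19 parameters: r = 6, u = 1, l = 0, s = 1, so 2^{(u-l) mod m} = 2
# and the entries carry the exponent 2^l = 1.
r, s, shift = 6, 1, 2
B = {a: [b for b in range(ceil((a + s - r + 1) / shift), (a + s) // shift + 1)
         if 0 <= b <= r - 1]
     for a in range(r + 1)}
block = [[0] * r for _ in range(r)]
for a in range(r + 1):
    b = B[a][0]                        # put all of gamma_a on the first admissible b
    block[b][a + s - shift * b] = f"g{a}"
for row in block:
    print(row)                         # rows 0 and 1 match Example 19, the rest are zero
```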
**Example 20**.: _Let \(r=6\), \(u=3\), \(l=1\) and let us take \(s=2\cdot 4=8\). The matrix \(\mathbf{A}\) such that_
\[\mathbf{A}_{3,1}=\begin{bmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ \gamma_{0}^{2}&\gamma_{1}^{2}&\gamma_{2}^{2}&\gamma_{3}^{2}&0&0\\ \gamma_{4}^{2}&\gamma_{5}^{2}&\gamma_{6}^{2}&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\end{bmatrix},\qquad\text{and}\ \mathbf{A}_{i,j}=\mathbf{0}_{r\times r}\ \text{ for }\ 0\leq j\leq i\leq m-1,(i,j)\neq(u,l).\]
_belongs to \(\mathscr{C}_{\text{mat}}(\mathcal{A})\) and has rank 4._
**The Pfaffian variety with respect to a square-free Goppa polynomial of degree \(r=2\).** Let \(\mathbb{F}\) be a finite field of characteristic 2, \(\mathscr{C}\subset\mathbb{F}^{n}\) an \([n,s]\) linear code with basis \(\mathcal{V}\) and associated matrix code \(\mathscr{C}_{\text{mat}}(\mathcal{V})\), \(\mathbf{M}=(m_{i,j})\in\mathbf{Skew}(s,\mathbb{F})\) the generic skew-symmetric matrix (i.e. all \(\binom{s}{2}\) entries below the main diagonal are seen as independent variables) and \(\mathbf{m}\) the vector of length \(\binom{s}{2}\) containing all the independent variables corresponding to the entries of \(\mathbf{M}\). We recall here the definition of the generic Pfaffian ideal \(\mathcal{P}_{2}(\mathbf{M})\subseteq\mathbb{F}[\mathbf{m}]\) for rank 2.
**Definition 21** (Pfaffian ideal for rank 2).: _The Pfaffian ideal of rank 2 for \(\mathbf{M}\) in characteristic 2 is_
\[\mathcal{P}_{2}(\mathbf{M})\stackrel{{\text{def}}}{{=}}\langle m_{i,j}m_{k,l}+m_{i,k}m_{j,l}+m_{i,l}m_{j,k}\mid 1\leq i<j<k<l\leq s\rangle. \tag{6}\]
The Hilbert series of the quotient \(\mathbb{F}[\mathbf{m}]/\mathcal{P}_{2}(\mathbf{M})\) is known [10] and the dimension of the associated variety is \(2s-3\).
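For concreteness, the generators of \(\mathcal{P}_{2}(\boldsymbol{M})\) are easy to enumerate, and in characteristic 2 they vanish on the skew-symmetric matrices of rank \(\leqslant 2\). The Python sketch below is our own illustration: it lists the generator index sets for a small \(s\) and checks that every generator vanishes on a random rank-\(\leqslant 2\) matrix over \(\mathbb{F}_{2}\).

```python
from itertools import combinations
import random

s = 6
quads = list(combinations(range(s), 4))   # index sets {i<j<k<l} of the 4x4 Pfaffians

# Over F_2, M = u v^T + v u^T has rank <= 2 and zero diagonal, hence is
# skew-symmetric in characteristic 2; every Pfaffian generator must vanish on it.
u = [random.randint(0, 1) for _ in range(s)]
v = [random.randint(0, 1) for _ in range(s)]
m = [[(u[i] * v[j] + u[j] * v[i]) % 2 for j in range(s)] for i in range(s)]

for i, j, k, l in quads:
    assert (m[i][j] * m[k][l] + m[i][k] * m[j][l] + m[i][l] * m[j][k]) % 2 == 0
print(f"all {len(quads)} Pfaffian generators vanish on a rank-<=2 matrix")
```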
In [11], the Pfaffian ideal \(\mathcal{P}_{2}^{+}(\mathbf{M})\subseteq\mathbb{F}[\mathbf{m}]\) for rank 2 with respect to the matrix code \(\mathscr{C}_{\text{mat}}(\mathcal{V})\) has been defined. It can be written as
\[\mathcal{P}_{2}^{+}(\mathbf{M})=\mathcal{P}_{2}(\mathbf{M})+\langle L_{1}(\mathbf{m}), \ldots,L_{t}(\mathbf{m})\rangle,\]
where the \(t=\binom{s}{2}-\dim_{\mathbb{F}}\mathscr{C}_{\text{mat}}(\mathcal{V})\) linearly independent linear polynomials \(L_{i}\)'s in \(\mathbf{m}\) express the fact that \(\mathbf{M}\) belongs to the matrix subspace \(\mathscr{C}_{\text{mat}}(\mathcal{V})\). This ideal was introduced because, by evaluating the Hilbert function at a high enough degree, alternant and Goppa codes can be distinguished from random ones. In particular, if \(\mathscr{C}\subset\mathbb{F}_{q^{m}}^{n}\) is a random \([n,rm]\) code and \(n>3rm-3\), then with high probability the variety \(\mathbf{V}(\mathcal{P}_{2}^{+}(\mathbf{M}))=\{\mathbf{0}\}\). On the other hand, if \(\mathscr{C}=\mathscr{G}(\mathbf{x},\Gamma)^{\perp}_{\mathbb{F}_{q^{m}}}\), where \(\mathscr{G}(\mathbf{x},\Gamma)\) is an \([n,n-rm]\) binary Goppa code with a square-free Goppa polynomial defined from a field extension of degree \(m\) and \(r\geq 3\), then [11, Proposition 16] shows that
\[\dim\mathbf{V}(\mathcal{P}_{2}^{+}(\mathbf{M}))\geq 2r-3. \tag{7}\]
The reason why the result does not include the case \(r=2\) is strongly connected to the distinguishability of the GRS code \(\mathbf{GRS}_{r}(\boldsymbol{x},\boldsymbol{y})\). Indeed the smallest value for which two Schur's products of different pairs of codewords in \(\mathbf{GRS}_{r}(\boldsymbol{x},\boldsymbol{y})\) coincide is 3: \((\boldsymbol{x}^{2}\boldsymbol{y})\star(\boldsymbol{y})=(\boldsymbol{x} \boldsymbol{y})^{\star 2}\). Intuitively, this is reasonable: the vectors \(\boldsymbol{x}\) and \(\boldsymbol{y}\) are a compact representation of the GRS code, but any 2-dimensional code is determined by two linearly independent codewords. This fact is inherited when considering subfield subcodes of GRS codes: the alternant code \(\mathscr{A}_{2}(\boldsymbol{x},\boldsymbol{y})\) is not distinguishable from random in general. However, a binary Goppa code of degree 2 with a square-free Goppa polynomial remains in principle distinguishable and \(\boldsymbol{V}(\mathcal{P}_{2}^{+}(\boldsymbol{M}))\supsetneq\{0\}\). This is obvious from the existence of rank-2 matrices in \(\mathscr{C}_{\text{mat}}(\mathcal{A})\). Indeed, by fixing \(l\) and choosing \(a=0,b=1\) in the Type 3 construction, we obtain the matrix \(\boldsymbol{A}\in\mathscr{C}_{\text{mat}}(\mathcal{A})\) such that
\[\boldsymbol{A}_{l,l}=\begin{bmatrix}0&1\\ 1&0\end{bmatrix},\qquad\qquad\boldsymbol{A}_{i,j}=\boldsymbol{0}_{2\times 2 }\quad\text{ otherwise.}\]
This immediately implies that \(\dim\boldsymbol{V}(\mathcal{P}_{2}^{+}(\boldsymbol{M}))\geq 1\), recovering the lower bound in (7) for \(r=2\) as well. We now recall and finally prove Proposition 15, which shows that in this case the lower bound on the dimension of the variety is not tight.
**Proposition 15**.: _Let \(\mathscr{C}_{\text{mat}}\) be the matrix code of quadratic relationships corresponding to the dual of an \([n,n-2m]\) binary Goppa code in the extension field with a square-free Goppa polynomial of degree \(2\). Let \(\mathcal{P}_{2}^{+}(\boldsymbol{M})\) be the corresponding Pfaffian ideal. Then \(\dim\boldsymbol{V}(\mathcal{P}_{2}^{+}(\boldsymbol{M}))\geq 3\)._
Proof.: We consider a subspace of \(\mathscr{C}_{\text{mat}}(\mathcal{A})\) spanned by some of the matrices presented in Section 2. Let us fix some \(l\in\llbracket 0,m-1\rrbracket\) and take
* the Type 3 matrix obtained by taking \(a=0,b=1\) and \(l\);
* the Type 3 matrix obtained by taking \(a=0,b=1\) and \(l+1\);
* the Type 5 matrix obtained by taking \(s=0,u=l+1,\gamma_{c_{(0,0)},0}=\gamma_{0},\gamma_{c_{(1,0)},0}=\gamma_{1},\gamma_{c_{(2,1)},1}=\gamma_{2}\) and \(\gamma_{i,b}=0\) otherwise;
* the Type 5 matrix obtained by taking \(s=1,u=l+1,\gamma_{c_{(0,0)},0}=\gamma_{0},\gamma_{c_{(1,1)},1}=\gamma_{1},\gamma_{c_{(2,1)},1}=\gamma_{2}\) and \(\gamma_{i,b}=0\) otherwise.
We fix \(l=0\) for simplicity, but the same argument works for any other value in \(\llbracket 0,m-1\rrbracket\). Let \(\lambda=(\lambda_{1},\ldots,\lambda_{4})\in\mathbb{F}_{q^{m}}^{4}\) be the vector of coefficients in the linear combination of the 4 matrices above. We thus obtain the parametrized matrix
\[\boldsymbol{A}(\lambda)=\left[\begin{array}{ccccc}0&\lambda_{1}&\lambda_{3 }\gamma_{0}&\lambda_{3}\gamma_{2}+\lambda_{4}\gamma_{1}&\\ \lambda_{1}&0&\lambda_{3}\gamma_{1}+\lambda_{4}\gamma_{0}&\lambda_{4}\gamma_{ 2}&\\ \lambda_{3}\gamma_{0}&\lambda_{3}\gamma_{1}+\lambda_{4}\gamma_{0}&0&\lambda_{ 2}&\boldsymbol{0}_{4\times 2m-4}\\ \lambda_{3}\gamma_{2}+\lambda_{4}\gamma_{1}&\lambda_{4}\gamma_{2}&\lambda_{ 2}&0&\\ \boldsymbol{0}_{2m-4\times 4}&&\boldsymbol{0}_{2m-4\times 2m-4}\\ \end{array}\right].\]
The only possible nonzero \(4\times 4\) minor is the top left one. This is a principal minor, thus the submatrix is skew-symmetric and its determinant is the square of
the Pfaffian. A straightforward computation gives
\[\det\left(\left[\begin{array}{cccc}0&\lambda_{1}&\lambda_{3}\gamma_{0}& \lambda_{3}\gamma_{2}+\lambda_{4}\gamma_{1}\\ \lambda_{1}&0&\lambda_{3}\gamma_{1}+\lambda_{4}\gamma_{0}&\lambda_{4}\gamma_{2} \\ \lambda_{3}\gamma_{0}&\lambda_{3}\gamma_{1}+\lambda_{4}\gamma_{0}&0&\lambda_{2} \\ \lambda_{3}\gamma_{2}+\lambda_{4}\gamma_{1}&\lambda_{4}\gamma_{2}&\lambda_{2} &0\end{array}\right]\right)\] \[= pf^{2}\left(\left[\begin{array}{cccc}0&\lambda_{1}&\lambda_{3} \gamma_{0}&\lambda_{3}\gamma_{2}+\lambda_{4}\gamma_{1}\\ \lambda_{1}&0&\lambda_{3}\gamma_{1}+\lambda_{4}\gamma_{0}&\lambda_{4}\gamma_{2 }\\ \lambda_{3}\gamma_{0}&\lambda_{3}\gamma_{1}+\lambda_{4}\gamma_{0}&0&\lambda_{2 }\\ \lambda_{3}\gamma_{2}+\lambda_{4}\gamma_{1}&\lambda_{4}\gamma_{2}&\lambda_{2} &0\end{array}\right]\right)\] \[= (\lambda_{1}\lambda_{2}+\lambda_{3}\lambda_{4}\gamma_{0}\gamma_{2} +\lambda_{3}^{2}\gamma_{1}\gamma_{2}+\lambda_{3}\lambda_{4}\gamma_{1}^{2}+ \lambda_{3}\lambda_{4}\gamma_{0}\gamma_{2}+\lambda_{4}^{2}\gamma_{0}\gamma_{1} )^{2}\] \[= (\lambda_{1}\lambda_{2}+\lambda_{3}^{2}\gamma_{1}\gamma_{2}+ \lambda_{3}\lambda_{4}\gamma_{1}^{2}+\lambda_{4}^{2}\gamma_{0}\gamma_{1})^{2}.\]
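This identity is mechanical to double-check. The following sympy sketch is our own verification, not taken from the paper: it confirms that the determinant and the square of the claimed Pfaffian agree modulo 2, i.e. over \(\mathbb{F}_{2}[\lambda,\gamma]\).

```python
import sympy as sp

l1, l2, l3, l4 = sp.symbols("lambda1:5")
g0, g1, g2 = sp.symbols("gamma0:3")
A = sp.Matrix([
    [0,             l1,            l3*g0,          l3*g2 + l4*g1],
    [l1,            0,             l3*g1 + l4*g0,  l4*g2        ],
    [l3*g0,         l3*g1 + l4*g0, 0,              l2           ],
    [l3*g2 + l4*g1, l4*g2,         l2,             0            ],
])
pf = l1*l2 + l3**2*g1*g2 + l3*l4*g1**2 + l4**2*g0*g1
diff = sp.expand(A.det() - pf**2)
# In characteristic 2 the determinant equals the square of the Pfaffian:
# every coefficient of the difference (computed over the integers) is even.
assert all(c % 2 == 0 for c in sp.Poly(diff, l1, l2, l3, l4, g0, g1, g2).coeffs())
print("det = pf^2 over F_2[lambda, gamma]")
```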
We want to study for which \(\lambda\) the expression \(\lambda_{1}\lambda_{2}+\lambda_{3}^{2}\gamma_{1}\gamma_{2}+\lambda_{3}\lambda_ {4}\gamma_{1}^{2}+\lambda_{4}^{2}\gamma_{0}\gamma_{1}\) equals \(0\). For any free choice of \(\lambda_{3},\lambda_{4}\) two cases may occur:
* if \(\lambda_{3}^{2}\gamma_{1}\gamma_{2}+\lambda_{3}\lambda_{4}\gamma_{1}^{2}+\lambda_{4}^{2}\gamma_{0}\gamma_{1}=0\), then the equality is obtained by fixing one of \(\lambda_{1}\) and \(\lambda_{2}\) to \(0\), while the other can take any value over \(\mathbb{F}_{q^{m}}\);
* if \(\lambda_{3}^{2}\gamma_{1}\gamma_{2}+\lambda_{3}\lambda_{4}\gamma_{1}^{2}+ \lambda_{4}^{2}\gamma_{0}\gamma_{1}\neq 0\), then the equality is obtained by taking any \(\lambda_{1}\neq 0\) and \(\lambda_{2}=\lambda_{1}^{-1}\).
Therefore, there are \(3\) degrees of freedom in the choice of \(\lambda\) and since all the corresponding matrices belong to \(\mathscr{C}_{\text{mat}}(\mathcal{A})\), this implies that \(\dim\boldsymbol{V}(\mathcal{P}_{2}^{+}(\boldsymbol{M}))\geq 3\).
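The count of suitable \(\lambda\) can also be checked exhaustively over a small field. The sketch below is a toy verification of ours over \(\mathbb{F}_{2^{4}}\), with arbitrarily chosen nonzero coefficients \(\gamma_{0},\gamma_{1},\gamma_{2}\) (an assumption for the demo); it confirms that the number of solutions grows like \(q^{3}\), in line with the three degrees of freedom.

```python
M, POLY = 4, 0b10011                  # GF(2^4) = GF(2)[z]/(z^4 + z + 1)

def mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

q = 1 << M
g0, g1, g2 = 3, 7, 9                  # arbitrary nonzero Goppa coefficients (assumption)
count = 0
for l3 in range(q):
    for l4 in range(q):
        h = (mul(mul(l3, l3), mul(g1, g2))
             ^ mul(mul(l3, l4), mul(g1, g1))
             ^ mul(mul(l4, l4), mul(g0, g1)))
        count += sum(1 for l1 in range(q) for l2 in range(q) if mul(l1, l2) == h)
print(count, q**3)                    # count is of order q^3: three degrees of freedom
```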
The previous proposition shows that we can fix \(3\) variables in the Pfaffian system and still expect to have non-trivial solutions, i.e. rank-\(2\) matrices in \(\mathscr{C}_{\text{mat}}(\mathcal{B})\), with overwhelming probability in the specialized non-homogeneous system.
## 4. Retrieving the basis of a GRS code
Analogously to the previous section, we focus here on the case of a binary Goppa code with a square-free Goppa polynomial of degree \(2\) and adapt the attack from [13, Section 6] to this setting. We first recall the setting of the attack proposed in [13] for distinguishable parameters, with respect to the notion of distinguishability given in [12, 10]. In this framework, under the condition that \(3\leqslant r<q+1\) (\(3\leqslant r<q-1\) respectively), the matrix code \(\mathscr{C}_{\text{mat}}(\mathcal{A})\) originated by a generic alternant code (generic Goppa code respectively) contains only block diagonal elements (with \(m\) blocks of size \(r\times r\)). This very special shape made it possible to recover a basis of the alternant/Goppa code, by sampling almost full-rank matrices in the code and extracting, from their kernels, information about codewords lying in a single GRS code.
However, the conditions are not satisfied here for several reasons:
* the Goppa code is not distinguishable in the sense of [12, 10];
* the Goppa polynomial degree \(r=2<3\);
* for Goppa codes it was required that \(r<q-1\), but here \(r=q=2\).
We thus follow an alternative approach. Suppose we are able to sample rank-\(2\) matrices in \(\mathscr{C}_{\text{mat}}(\mathcal{B})\) by solving the Pfaffian system specialized into \(3\) variables. In this section, we describe how to recover a basis of one of the \(m\) codes \(\mathbf{GRS}_{2}(\boldsymbol{x}^{q^{j}},\boldsymbol{y}^{q^{j}})\). The algorithm is therefore an adaptation of [13, Section 6], where kernels are computed from low-rank matrices instead of almost full-rank ones.
A set of rank-2 matrices in \(\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\) has been exhibited in the previous section. Below the Gilbert-Varshamov distance computed with respect to the space of skew-symmetric matrices of size \(rm\), all the solutions of the Pfaffian system are expected to have the structure described in the proof of Proposition 15, or the analogous one for another fixed value of \(l\in\llbracket 0,m-1\rrbracket\), with very high probability. We will exploit that any rank-2 matrix in the code \(\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\) has this structure. We recall from [13, Proposition 10] that the parameters above the Gilbert-Varshamov distance correspond to those for which
\[n>3rm-3.\]
Therefore, the attack we will now present works for binary Goppa codes with a square-free Goppa polynomial of degree 2 and is based on the following standard assumption:
**Assumption 22**.: _Let \(\mathscr{G}(\boldsymbol{x},\Gamma)\) be a binary \([n,n-2m]\) Goppa code with a square-free Goppa polynomial \(\Gamma\) of degree \(2\) and let \(n>3rm-3\). Let \(\mathcal{A}\) be the canonical basis of \(\mathscr{G}(\boldsymbol{x},\Gamma)_{\mathbb{F}_{q^{m}}}^{\perp}\) as given in (1). Then, for \(n\to\infty\), a rank 2 matrix \(\boldsymbol{A}\in\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\) is such that_
\[\exists l\in\llbracket 0,m-1\rrbracket\text{ s.t. }\forall(u,v)\notin\{(l-1,l-1),(l,l-1),(l,l)\}\mod m,\quad\boldsymbol{A}_{u,v}=\boldsymbol{0}_{r\times r},\]
_i.e._
\[\boldsymbol{A}=\begin{bmatrix}\boldsymbol{0}_{r\times r}&&&&\boldsymbol{0}_{r \times r}\\ &\ddots&&&\iddots\\ &&\boldsymbol{A}_{l-1,l-1}&\boldsymbol{A}_{l,l-1}\boldsymbol{\mathsf{{}^{ \mathsf{T}}}}&&\\ &&\boldsymbol{A}_{l,l-1}&\boldsymbol{A}_{l,l}&&\\ &\iddots&&&\ddots&\\ \boldsymbol{0}_{r\times r}&&&&\boldsymbol{0}_{r\times r}\end{bmatrix}.\]
_with probability \(1-o(1)\)._
Algorithm 1 provides a sketch of the attack.
It succeeds if at some iteration of the repeat loop the code \(\mathbf{GRS}\) is one of the \(m\) GRS codes of which \(\mathscr{G}(\boldsymbol{x},\Gamma)\) is a subfield subcode. Indeed, in this case, the well-known Sidelnikov-Shestakov attack retrieves a valid pair of support and multiplier for \(\mathbf{GRS}\), which also defines an alternant code that coincides with \(\mathscr{G}(\boldsymbol{x},\Gamma)\).
Hence the correctness of Algorithm 1 follows immediately from the next proposition.
**Proposition 23**.: _Consider an iteration of the repeat/until loop of Algorithm 1. For \(n\to\infty\) and under Assumption 22, the equation_
\[(\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})^{\perp}\mathbf{H}_{\mathcal{B}}=\mathbf{GRS}_{2}(\mathbf{x},\mathbf{y})^{q^{l}}\]
_holds with probability at least \(\frac{1}{m}(1-o(1))\)._
In the next subsection, we provide a proof of Proposition 23 that is therefore a proof of correctness for Algorithm 1.
### Proof of correctness of Algorithm 1
Algorithm 1 starts by computing a basis \(\mathcal{B}\) of \(\mathscr{G}(\mathbf{x},\Gamma)_{\mathbb{F}_{q^{m}}}^{\perp}\) with a special shape:
\[\mathcal{B}=(\mathbf{b}_{1},\dots,\mathbf{b}_{r},\mathbf{b}_{1}^{q},\dots,\mathbf{b}_{r}^{q}, \dots,\mathbf{b}_{1}^{q^{m-1}},\dots,\mathbf{b}_{r}^{q^{m-1}}). \tag{8}\]
An efficient procedure to obtain a basis like this has already been explained in [13].
Let us define the right \(r\)-cyclic shift matrix \(\mathbf{S}\in\mathbf{GL}_{mr}(\mathbb{F}_{q^{m}})\) as
\[\mathbf{S}\stackrel{{\mathrm{def}}}{{=}}\begin{pmatrix}&\mathbf{I}_{r}&&\\ &&\ddots&\\ &&&\mathbf{I}_{r}\\ \mathbf{I}_{r}&&&\end{pmatrix}. \tag{9}\]
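For small parameters this block structure is easy to sanity-check numerically. The Python sketch below is our own illustration, with toy sizes \(r=2\), \(m=4\); it builds \(\boldsymbol{S}\) and verifies the properties used in the sequel.

```python
import numpy as np

r, m = 2, 4
I, Z = np.eye(r, dtype=int), np.zeros((r, r), dtype=int)
# I_r blocks on the superdiagonal plus an I_r block in the bottom-left corner.
S = np.block([[I if j == (i + 1) % m else Z for j in range(m)] for i in range(m)])
assert (S @ S.T == np.eye(r * m, dtype=int)).all()   # S^{-1} = S^T
v = np.arange(r * m)
print(v @ S)   # the m length-r blocks of v are shifted cyclically one block right
```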
Note that \(\mathbf{S}^{-1}=\mathbf{S}^{\intercal}\) is the left \(r\)-cyclic shift matrix. We first recall, without proving it, a preliminary result from [13] that also comes in handy here.
**Lemma 24**.: _Whenever a basis \(\mathcal{B}\) has the form given in (8), \(\mathscr{C}_{\mathrm{mat}}(\mathcal{B})\) is stable by the operation_
\[\mathbf{M}\longmapsto\mathbf{S}^{\intercal}\mathbf{M}^{(q)}\mathbf{S}.\]
The next results characterize the structure of the kernel of a rank 2 matrix \(\mathbf{A}\in\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\).
**Lemma 25**.: _Let \(\mathcal{A},\mathcal{B}\) be the two bases introduced before and \(\mathbf{P}\) the change of basis, i.e. \(\mathbf{H}_{\mathcal{B}}=\mathbf{P}\mathbf{H}_{\mathcal{A}}\). Let \(\mathbf{B}\in\mathscr{C}_{\mathrm{mat}}(\mathcal{B})\) be of rank 2 and \(n>3rm-3\). Then \(\exists\,l\in\llbracket 0,m-1\rrbracket\) such that_
\[\ker(\mathbf{B})(\mathbf{P}^{-1})^{\intercal}\mathbf{P}^{-1}\mathbf{H}_{\mathcal{B}}\supset \sum_{j\in\llbracket 0,m-1\rrbracket\setminus\{l-1\mod m,l\}}\mathbf{ GRS}_{2}(\mathbf{x},\mathbf{y})^{q^{j}}\]
_with probability \(1-o(1)\)._
Proof.: For better readability, we will assume in the following that \(l\in\llbracket 1,m-2\rrbracket\), but the same arguments work for \(l=0,m-1\) as well. Consider \(\mathbf{P}^{\intercal}\mathbf{B}\mathbf{P}\in\mathscr{C}_{\mathrm{mat}}(\mathcal{A})\). From Assumption 22, with overwhelming probability, we have
\[\mathbf{P}^{\intercal}\mathbf{B}\mathbf{P}=\begin{bmatrix}\mathbf{0}_{r\times r}&&&&\mathbf{0}_{r \times r}\\ &\ddots&&&\iddots\\ &&\mathbf{A}_{l-1,l-1}&\mathbf{A}_{l,l-1}{}^{\intercal}&&\\ &&\mathbf{A}_{l,l-1}&\mathbf{A}_{l,l}&&\\ &\iddots&&&\ddots&\\ \mathbf{0}_{r\times r}&&&&\mathbf{0}_{r\times r}\end{bmatrix}. \tag{10}\]
Let \(\mathbf{c}\in\sum_{j\in\llbracket 0,m-1\rrbracket\setminus\{l-1\mod m,l\}}\mathbf{GRS}_{2}( \mathbf{x},\mathbf{y})^{q^{j}}\). Then \(\mathbf{c}=\mathbf{d}\mathbf{H}_{\mathcal{A}}\), where
\[\mathbf{d}=(\mathbf{d}_{0},\ldots,\mathbf{d}_{l-2},\mathbf{0}_{r},\mathbf{0}_{r},\mathbf{d}_{l+1},\ldots,\mathbf{d}_{m-1}),\]
that is, \(\mathbf{d}\) vanishes exactly on the blocks \(l-1\) and \(l\).
This implies that
\[\mathbf{d}\in\ker(\mathbf{P}^{\intercal}\mathbf{B}\mathbf{P}),\]
which is equivalent to
\[\mathbf{d}\mathbf{P}^{\intercal}\in\ker(\mathbf{B}),\]
since \(\mathbf{P}\) is invertible. Hence, we obtain
\[\mathbf{c}=\mathbf{d}\mathbf{H}_{\mathcal{A}}=\mathbf{d}(\mathbf{P}^{\intercal}(\mathbf{P}^{-1})^{ \intercal})\mathbf{P}^{-1}\mathbf{H}_{\mathcal{B}}\in\ker(\mathbf{B})(\mathbf{P}^{-1})^{ \intercal}\mathbf{P}^{-1}\mathbf{H}_{\mathcal{B}}.\]
**Lemma 26**.: _Let \(\mathbf{B}\in\mathscr{C}_{\text{mat}}(\mathcal{B})\) and \(\mathscr{V}=\ker(\mathbf{B})\). Then \(\mathscr{V}^{(q)}\mathbf{S}=\ker(\mathbf{S}^{\intercal}\mathbf{B}^{(q)}\mathbf{S}).\)_
Proof.: This readily follows from the fact that \(\mathbf{S}\) is invertible with inverse \(\mathbf{S}^{\intercal}\). Indeed, \(\forall\mathbf{v}\in\mathscr{V}\),
\[0 =\mathbf{v}\mathbf{B}\] \[\Longleftrightarrow 0 =(\mathbf{v}\mathbf{B})^{q}\mathbf{S}=\mathbf{v}^{q}\mathbf{B}^{(q)}\mathbf{S}=(\mathbf{v}^{ q}\mathbf{S})\cdot\mathbf{S}^{\intercal}\mathbf{B}^{(q)}\mathbf{S}.\]
Let \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\) be the two matrices sampled at line 3 in Algorithm 1 and consider the case where \(\mathbf{P}^{\intercal}\mathbf{B}_{1}\mathbf{P}\) and \(\mathbf{P}^{\intercal}\mathbf{B}_{2}\mathbf{P}\) are as in Equation (10) for the same indexes \(l-1\mod m,l\). This will happen with probability \(1/m\). Then, since \(\dim_{\mathbb{F}_{q^{m}}}\ker(\mathbf{B}_{1})=\dim_{\mathbb{F}_{q^{m}}}\ker(\mathbf{B }_{2})=rm-2\) and \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\) are independent, we expect that
\(\dim_{\mathbb{F}_{q^{m}}}\ker(\mathbf{B}_{1})\cap\ker(\mathbf{B}_{2})=rm-4\) with high probability. If this is the case, the fact that
\[\dim_{\mathbb{F}_{q^{m}}}\sum_{j\in\llbracket 0,m-1\rrbracket\setminus\{l-1 \mod m,l\}}\mathbf{GRS}_{2}(\mathbf{x},\mathbf{y})^{q^{j}}=rm-4\]
in conjunction with Lemma 25 implies that
\[\mathscr{V}(\mathbf{P}^{-1})^{\intercal}\mathbf{P}^{-1}\mathbf{H}_{\mathcal{B}}=\sum_{j \in\llbracket 0,m-1\rrbracket\setminus\{l-1\mod m,l\}}\mathbf{GRS}_{2}(\mathbf{x}, \mathbf{y})^{q^{j}}, \tag{11}\]
where
\[\mathscr{V}\stackrel{{\rm def}}{{=}}(\ker(\mathbf{B}_{1})\cap\ker( \mathbf{B}_{2})).\]
We are finally ready to prove Proposition 23, thus showing how the space \(\mathscr{V}\) unveils a basis for a single GRS code.
Proof of Proposition 23.: From Assumption 22 and the argument shown above about matrices \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\), we have that Equation (11) holds with probability at least \(\frac{1}{m}(1-o(1))\). In this case, by Lemmas 25 and 26, we deduce that
\[\mathscr{V}^{(q)}\mathbf{S}(\mathbf{P}^{-1})^{\intercal}\mathbf{P}^{-1}\mathbf{H}_{\mathcal{B }}=\sum_{j\in\llbracket 0,m-1\rrbracket\setminus\{l,l+1\mod m\}}\mathbf{GRS}_{2}( \mathbf{x},\mathbf{y})^{q^{j}}.\]
Under the usual assumption that all the \(m\) GRS codes have trivial intersection, we obtain
\[(\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})(\mathbf{P}^{-1})^{\intercal}\mathbf{P}^{-1}\mathbf{ H}_{\mathcal{B}}=\sum_{j\in\llbracket 0,m-1\rrbracket\setminus\{l\}}\mathbf{GRS}_{2}( \mathbf{x},\mathbf{y})^{q^{j}}.\]
Let us now pick \(\mathbf{v}^{\perp}\in(\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})^{\perp}\). For any \(\mathbf{v}\in\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S}\), we can write
\[0=\langle\mathbf{v},\mathbf{v}^{\perp}\rangle=\langle\mathbf{v}\mathbf{I}_{rm},\mathbf{v}^{\perp} \rangle=\langle\mathbf{v}(\mathbf{P}^{\intercal})^{-1}\mathbf{P}^{\intercal},\mathbf{v}^{ \perp}\rangle=\langle\mathbf{v}(\mathbf{P}^{-1})^{\intercal},\mathbf{v}^{\perp}\mathbf{P}\rangle.\]
Therefore \(\mathbf{v}^{\perp}\mathbf{P}\) is zero outside the \(l\)-th block. Hence
\[(\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})^{\perp}\mathbf{H}_{\mathcal{B}}=((\mathscr{ V}+\mathscr{V}^{(q)}\mathbf{S})^{\perp}\mathbf{P})\mathbf{H}_{\mathcal{A}}\subseteq\mathbf{ GRS}_{2}(\mathbf{x},\mathbf{y})^{q^{l}},\]
and since \(\dim_{\mathbb{F}_{q^{m}}}((\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})^{\perp})=rm- \dim_{\mathbb{F}_{q^{m}}}(\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})=2\), we get
\[(\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})^{\perp}\mathbf{H}_{\mathcal{B}}=\mathbf{ GRS}_{2}(\mathbf{x},\mathbf{y})^{q^{l}}.\]
At this stage, it is enough to apply the Sidelnikov-Shestakov attack [10] on \(\mathbf{GRS}_{2}(\mathbf{x},\mathbf{y})^{q^{l}}=(\mathscr{V}+\mathscr{V}^{(q)}\mathbf{S})^ {\perp}\mathbf{H}_{\mathcal{B}}\). The support-multiplier pair output by this procedure is also a valid support-multiplier pair for the Goppa code \(\mathscr{G}(\mathbf{x},\Gamma)=\mathscr{A}_{2}(\mathbf{x},\mathbf{y})\).
_Remark 27_.: It is easy to see that a slightly refined version of Algorithm 1 guarantees its termination, under Assumption 22. Even if the sampled matrices \(\mathbf{B}_{1}\) and \(\mathbf{B}_{2}\) do not lead to the same shape of Equation (10), there must be \(l\in[\![0,m-1]\!]\) such that \(\mathbf{B}_{1}\) and \((\mathbf{S}^{\intercal})^{l}\mathbf{B}_{2}^{(q^{l})}\mathbf{S}^{l}\) do. Therefore, at most \(m\) iterations are needed in order to get the GRS code.
We conclude the section by giving in Table 1 the parameters of various TII challenges broken by this attack. We ran the experiments in MAGMA, using its online calculator. The variance of the attack timing over several runs is quite high, due to the randomness in the variable specialization, but on average all instances such that \(n>3rm-3\) take less than 10 seconds. When instead \(n\leqslant 3rm-3\), Assumption 22 no longer holds and rank-2 matrices other than those described in Proposition 15 are expected to belong to the matrix code. However, the attack is still expected to work if suitable matrices are sampled. Therefore, if \(3rm-3-n\) is a small natural number, the attack may still be practical, despite no longer having polynomial-time complexity. Table 1 provides a couple of such examples.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(q\) & \(r\) & claimed bit complexity \(\lambda\) & \(m\) & \(n\) & \(n>3rm-3\)? & average attack time \\ \hline\hline
2 & 2 & 22 & 5 & 32 & yes & \textless{}3s \\
2 & 2 & 39 & 5 & 28 & yes & \textless{}3s \\
2 & 2 & 41 & 5 & 27 & no (equal) & \textless{}10s \\
2 & 2 & 43 & 5 & 26 & no & \textless{}1min \\
2 & 2 & 44 & 6 & 61 & yes & \textless{}10s \\
2 & 2 & 48 & 6 & 60 & yes & \textless{}10s \\
2 & 2 & 58 & 6 & 57 & yes & \textless{}10s \\
2 & 2 & 63 & 6 & 55 & yes & \textless{}10s \\
2 & 2 & 65 & 6 & 54 & yes & \textless{}10s \\
2 & 2 & 68 & 6 & 53 & yes & \textless{}10s \\ \hline \end{tabular}
\end{table}
Table 1. TII challenges with Goppa polynomial degree 2
## 5. The Goppa code representation
From the previous section, we recovered a pair \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) of valid support and multiplier for the Goppa code \(\mathscr{G}(\mathbf{x},\Gamma)\), i.e. such that \(\mathscr{G}(\mathbf{x},\Gamma)=\mathscr{A}_{r}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\). In general, \(\mathbf{x}^{\prime}\) and \(\mathbf{y}^{\prime}\) do not coincide with \(\mathbf{x}\) and \(1/\Gamma(\mathbf{x})\) respectively, as the public code is not uniquely determined by a pair of support and multiplier and there is no way to recover the original ones. In addition, \(\mathbf{y}^{\prime}\) is not even guaranteed to be the inverse of the evaluation over \(\mathbf{x}^{\prime}\) of a degree-\(r\) polynomial \(\Gamma^{\prime}\). The next definition formalizes this concept.
**Definition 28** (Goppa code representation).: _Let \(\mathscr{G}(\mathbf{x},\Gamma)\subseteq\mathbb{F}_{q}^{n},\deg(\Gamma)=r\), be an \([n,n-rm]\) Goppa code obtained as the subfield subcode of a Reed-Solomon code defined over an extension field of degree \(m\). The pair \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) is said to be an \(r\)-Goppa code representation of \(\mathscr{G}(\mathbf{x},\Gamma)\) if \(\mathscr{G}(\mathbf{x},\Gamma)=\mathscr{A}_{r}(\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) and \(\mathbf{y}^{\prime}=\frac{1}{\Gamma^{\prime}(\mathbf{x}^{\prime})}\) for some \(\Gamma^{\prime}\in\mathbb{F}_{q^{m}}[z]\) of degree \(r\)._
_Remark 29_.: The definition above takes into account that Goppa polynomials of different degrees can be defined. For instance, any pair of support and multiplier \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) is such that \(\mathbf{y}^{\prime}=\frac{1}{\Gamma^{\prime}(\mathbf{x}^{\prime})}\) for some \(\Gamma^{\prime}\) of degree \(\leqslant n-1\), because of the interpolation theorem. Moreover, in the binary case, if \(\Gamma\) is square-free then Proposition 5 provides a new Goppa polynomial of degree \(2r\). In other words, the degree \(r\) required in the definition is the minimal one, i.e. that for which \(r=\frac{n-k}{m}\), where \(k\) is the Goppa code dimension.
While any valid pair of support and multiplier permits decoding a Goppa code, there is a crucial difference that makes Goppa code representations particularly relevant. Again, this distinction is witnessed in the binary square-free Goppa case. In principle, knowing a generic pair \((\mathbf{x},\mathbf{y})\) enables decoding up to \(\frac{r}{2}\) errors. Because of Proposition 5, a Goppa code representation readily allows one to see the Goppa code as an alternant code of degree \(2r\) and thus leads to an improved decoding capability of \(r\) errors. In the McEliece scheme, and more generally in code-based cryptography, the error weight is chosen to be close or equal to the maximum number of errors that a decoder can correct, so as to increase the difficulty of decrypting for an attacker. This means that a valid pair \((\mathbf{x},\mathbf{y})\) for a binary square-free Goppa code is not enough to efficiently decode errors of weight above \(r/2\).
Furthermore, the TII challenges accept solutions in the format 'support + Goppa polynomial'; thus, translating to our formalism, they implicitly require finding a Goppa code representation.
In the following, we will then show how to move from a generic pair \((\mathbf{x},\mathbf{y})\) to an equivalent one \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) that is a Goppa code representation. The argument is not limited to \(r=2\), but works for any Goppa polynomial degree. This problem has already been partially investigated in [11, Section 4.4.6] in relation to the parity-check subcode of a Goppa code, the filtration, and the Gröbner basis computation steps in the distinguisher-based attack [1]. The group of transformations that map a valid pair \((\mathbf{x},\mathbf{y})\) into another one \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) has been described in [10] for Cauchy codes and their subfield subcodes. Cauchy codes are the generalization of GRS codes over the projective line \(\mathbb{P}^{1}(\mathbb{F}_{q^{m}})\). However, the reconstruction of \((\mathbf{x},\mathbf{y})\) provided in the previous section guarantees that \(\mathbf{x}\in\mathbb{F}_{q^{m}}^{n}\subset\mathbb{P}^{1}(\mathbb{F}_{q^{m}})^{n}\). We recall from [11] that, when \(\mathbf{x}\in\mathbb{F}_{q^{m}}^{n}\) (and \(\mathbf{y}\in(\mathbb{F}_{q^{m}}^{*})^{n}\)), the restricted map from
[10] becomes
\[f\colon\quad\mathbb{F}_{q^{m}}^{n}\times(\mathbb{F}_{q^{m}}^{*})^{n}\to\mathbb{P}^{1}(\mathbb{F}_{q^{m}})^{n}\times(\mathbb{F}_{q^{m}}^{*})^{n},\qquad(\mathbf{x},\mathbf{y})\mapsto(\mathbf{x}^{\prime},\mathbf{y}^{\prime})=\left(\tfrac{a\mathbf{x}+b}{c\mathbf{x}+d},\,\lambda(c\mathbf{x}+d)^{r-1}\mathbf{y}\right) \tag{12}\]
for some \(a,b,c,d,\lambda\in\mathbb{F}_{q^{m}}\), such that \(ad-bc\neq 0\) and \(\lambda\neq 0\). We want to determine values of \(a,b,c,d,\lambda\) that map into admissible vectors \(\mathbf{x}^{\prime}\in\mathbb{F}_{q^{m}}^{n}\) and \(\mathbf{y}^{\prime}\in(\mathbb{F}_{q^{m}}^{*})^{n}\). These conditions are equivalent to saying that \(0\notin\{cx_{i}+d\ |\ i\in\llbracket 1,n\rrbracket\}\). Moreover, we want \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) to be a Goppa code representation. The property of being a Goppa code representation is preserved if and only if \(f\) is an affine map.
**Proposition 30**.: _Let \(\mathscr{G}(\mathbf{x},\Gamma)=\mathscr{A}_{r}(\mathbf{x},\mathbf{y})\) be an \([n,n-rm]\) Goppa code with \(\mathbf{y}=\frac{1}{\Gamma(\mathbf{x})}\) and \(\deg(\Gamma)=r\). Then \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})=f((\mathbf{x},\mathbf{y}))\) defined as in (12) is an \(r\)-Goppa representation if and only if \(f\) is an affine transformation._
Proof.: Inverting \(f\), we obtain the relation
\[\mathbf{x}=\frac{a^{\prime}\mathbf{x}^{\prime}+b^{\prime}}{c^{\prime}\mathbf{x}^{\prime}+ d^{\prime}},\]
where
\[\begin{bmatrix}a^{\prime}&b^{\prime}\\ c^{\prime}&d^{\prime}\end{bmatrix}=\begin{bmatrix}a&b\\ c&d\end{bmatrix}^{-1}=\begin{bmatrix}d&-b\\ -c&a\end{bmatrix},\]
thus leading to
\[\mathbf{x}=\frac{d\mathbf{x}^{\prime}-b}{-c\mathbf{x}^{\prime}+a}.\]
Note that
\[c\mathbf{x}+d=\frac{cd\mathbf{x}^{\prime}-bc}{-c\mathbf{x}^{\prime}+a}+d=\frac{ad-bc}{-c \mathbf{x}^{\prime}+a}.\]
The coordinates of the multiplier \(\mathbf{y}^{\prime}\) can thus be formulated as the evaluation of a rational function in the coordinates of \(\mathbf{x}^{\prime}\) as
\[\mathbf{y}^{\prime}= \lambda(c\mathbf{x}+d)^{r-1}\mathbf{y}=\frac{\lambda(c\mathbf{x}+d)^{r-1}}{ \Gamma\left(\frac{d\mathbf{x}^{\prime}-b}{-c\mathbf{x}^{\prime}+a}\right)}=\frac{ \lambda(c\mathbf{x}+d)^{r-1}}{\sum_{i=0}^{r}\gamma_{i}\left(\frac{d\mathbf{x}^{\prime} -b}{-c\mathbf{x}^{\prime}+a}\right)^{i}}=\frac{\lambda(c\mathbf{x}+d)^{r-1}(-c\mathbf{x}^ {\prime}+a)^{r}}{\sum_{i=0}^{r}\gamma_{i}(d\mathbf{x}^{\prime}-b)^{i}(-c\mathbf{x}^{ \prime}+a)^{r-i}}\] \[= \frac{\lambda(ad-bc)^{r-1}(-c\mathbf{x}^{\prime}+a)}{\sum_{i=0}^{r} \gamma_{i}(d\mathbf{x}^{\prime}-b)^{i}(-c\mathbf{x}^{\prime}+a)^{r-i}}.\]
In other words, the reduced form of such a rational function has in general a numerator of degree \(1\) and a denominator of degree \(r\), i.e.
\[\mathbf{y}^{\prime}=\frac{A\mathbf{x}^{\prime}+B}{\sum_{i=0}^{r}\gamma_{i}^{\prime}(\mathbf{x}^{\prime})^{i}},\]
with \(A=-\lambda(ad-bc)^{r-1}c\) and \(B=\lambda(ad-bc)^{r-1}a\). In particular, the necessary and sufficient condition for \((\mathbf{x}^{\prime},\mathbf{y}^{\prime})\) to be an \(r\)-Goppa code representation is \(A=0\), which holds if and only if \(c=0\). In this case
\[\mathbf{x}^{\prime}=\frac{a}{d}\mathbf{x}+\frac{b}{d}\quad\text{and}\quad\mathbf{y}^{ \prime}=\lambda d^{r-1}\mathbf{y}.\]
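The rational-function manipulations in this proof are easy to confirm with a computer algebra system. The following sympy sketch is our own check, instantiated at \(r=2\); it verifies the reduced expression for \(\boldsymbol{y}^{\prime}\) used above.

```python
import sympy as sp

a, b, c, d, lam, X = sp.symbols("a b c d lambda xprime")
g = sp.symbols("gamma0:3")
r = 2
x = (d*X - b) / (-c*X + a)                          # x as a function of x'
Gamma_x = sum(g[i] * x**i for i in range(r + 1))
y_prime = lam * (c*x + d)**(r - 1) / Gamma_x        # y' = lambda (c x + d)^{r-1} y
target = (lam * (a*d - b*c)**(r - 1) * (-c*X + a)
          / sum(g[i] * (d*X - b)**i * (-c*X + a)**(r - i) for i in range(r + 1)))
assert sp.cancel(y_prime - target) == 0
print("reduced form of y' verified for r = 2")
```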
With the previous proposition at hand, we thus distinguish two cases:
* The Goppa code is full support. In this case \(\{cx_{i}+d\mid i\in\llbracket 1,n\rrbracket\}=\{cz+d\mid z\in\mathbb{F}_{q^{m}}\}\) does not contain \(0\) iff \(c=0\) (and \(d\neq 0\)), i.e. \(f\) is an affine map. By the existence of a Goppa code representation and by Proposition 30, it follows that \((\boldsymbol{x},\boldsymbol{y})\) was already a Goppa code representation and nothing must be done.
* The Goppa code is not full-support. By computing the interpolation polynomial on the pairs \((x_{i},\frac{1}{y_{i}})\)'s we can check whether \((\boldsymbol{x},\boldsymbol{y})\) is an \(r\)-Goppa code representation. If not, Proposition 30 implies that a necessary condition for \((\boldsymbol{x}^{\prime},\boldsymbol{y}^{\prime})\) being an \(r\)-Goppa code representation is that \(c\neq 0\). Without loss of generality, we can thus assume \(c=1\), take the scalar factor \(\lambda=1\) and consider the candidate maps: \[(\boldsymbol{x},\boldsymbol{y})\mapsto(\boldsymbol{x}^{\prime},\boldsymbol{y}^{\prime})=(\frac{a\boldsymbol{x}+b}{\boldsymbol{x}+d},(\boldsymbol{x}+d)^{r-1}\boldsymbol{y}).\] A simple method to find a good map is to choose \((a,b,d)\) at random and compute the degree of the interpolation polynomial for the pairs \((x^{\prime}_{i},\frac{1}{y^{\prime}_{i}})\)'s until this is equal to \(r\) (see the sketch after this list). The degrees of freedom of the affine maps show that the probability of finding a sought map is \[\frac{(q^{m}-1)q^{m}}{(q^{m}-1)q^{2m}}=q^{-m},\] which is typically linear in the inverse of the length \(n\). By quotienting the maps with respect to the group of affine maps, the worst-case complexity becomes \(q^{m}\) times the cost of computing the interpolation polynomial, which can be upper bounded by \(n^{2}\). In practice, only a few more than \(r\) pairs need to be interpolated to determine whether the polynomial has degree equal to or larger than \(r\).
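To make the procedure concrete, here is a self-contained toy Python sketch (our own illustration, not the authors' code). It builds a degree-2 binary Goppa representation over \(\mathbb{F}_{2^{4}}\), scrambles it with a non-affine map of the form above, and then searches random \((a,b,d)\) until the interpolation test detects degree 2 again. The field size, support, Goppa polynomial, and scramble parameters are all arbitrary choices for the demo.

```python
import random

M, POLY = 4, 0b10011                      # GF(2^4) = GF(2)[z]/(z^4 + z + 1)
q_m = 1 << M

def mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def inv(a):
    r, e = 1, q_m - 2                     # a^(2^M - 2) = a^(-1)
    while e:
        if e & 1:
            r = mul(r, a)
        a = mul(a, a)
        e >>= 1
    return r

def interp_degree(xs, zs):
    # Degree of the Lagrange interpolation polynomial over GF(2^M).
    n = len(xs)
    coeffs = [0] * n
    for i in range(n):
        num, den = [1], 1                 # prod_{j != i} (X + xs[j]) and (xs[i] + xs[j])
        for j in range(n):
            if j == i:
                continue
            new = [0] * (len(num) + 1)
            for k, c in enumerate(num):
                new[k] ^= mul(c, xs[j])
                new[k + 1] ^= c
            num = new
            den = mul(den, xs[i] ^ xs[j])
        scale = mul(zs[i], inv(den))
        for k, c in enumerate(num):
            coeffs[k] ^= mul(scale, c)
    return max((k for k, c in enumerate(coeffs) if c), default=-1)

# A degree-2 representation: Gamma(z) = z^2 + z + g has derivative 1, so square-free.
r, g = 2, 6
gamma = lambda z: mul(z, z) ^ z ^ g
x = [z for z in range(q_m) if gamma(z) != 0][:10]   # not full support
y = [inv(gamma(xi)) for xi in x]                    # y = 1/Gamma(x)

# Scramble with a non-affine map (c = 1): still a valid pair, no longer a representation.
a1, b1, d1 = 3, 5, 12                     # need a1*d1 + b1 != 0 and d1 outside the support
assert mul(a1, d1) ^ b1 and d1 not in x
xs = [mul(mul(a1, xi) ^ b1, inv(xi ^ d1)) for xi in x]
ys = [mul(xi ^ d1, yi) for xi, yi in zip(x, y)]
assert interp_degree(xs, [inv(v) for v in ys]) > r

# Random search for (a, b, d) bringing the pair back to a representation.
tries = 0
while True:
    tries += 1
    a, b, d = (random.randrange(q_m) for _ in range(3))
    if mul(a, d) ^ b == 0 or any(xi ^ d == 0 for xi in xs):
        continue
    xp = [mul(mul(a, xi) ^ b, inv(xi ^ d)) for xi in xs]
    yp = [mul(xi ^ d, yi) for xi, yi in zip(xs, ys)]
    if interp_degree(xp, [inv(v) for v in yp]) == r:
        print(f"found (a, b, d) = ({a}, {b}, {d}) after {tries} tries")
        break
```

The search succeeds precisely when the composition of the scramble and the candidate map is affine, which here happens for \(d=a_{1}\), so roughly one candidate in \(q^{m}=16\) works, matching the \(q^{-m}\) estimate above.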
## 6. Conclusions
In this paper, we analyzed in detail the matrix code of quadratic relationships, introduced in [10], originated by a Goppa code. We described and categorized structured matrices with low rank, relating them to polynomial identities.
We extended the approach used in [10] to break instances of binary Goppa codes of degree \(2\), thus solving two TII challenges in a matter of a few seconds. To this aim, we first studied the variety associated with the Pfaffian ideal, proving that its dimension is at least \(3\). Then, from solutions of the Pfaffian system obtained by specializing \(3\) variables, we devised an efficient algorithm to reconstruct a GRS code that builds upon the strategy of [10]. This demonstrates the effectiveness of the Pfaffian modeling not only for distinguishing purposes but also for mounting key-recovery attacks.
Finally, we introduced the notion of Goppa code representation for Goppa codes and provided a procedure to get one of them from a generic pair of support and multiplier.
|
2309.12220 | De-authentication using Ambient Light Sensor | While user authentication happens before initiating or resuming a login
session, de-authentication detects the absence of a previously-authenticated
user to revoke her currently active login session. The absence of proper
de-authentication can lead to well-known lunchtime attacks, where a nearby
adversary takes over a carelessly departed user's running login session. The
existing solutions for automatic de-authentication have distinct practical
limitations, e.g., extraordinary deployment requirements or high initial cost
of external equipment.
In this paper, we propose "DE-authentication using Ambient Light sensor"
(DEAL), a novel, inexpensive, fast, and user-friendly de-authentication
approach. DEAL utilizes the built-in ambient light sensor of a modern computer
to determine if the user is leaving her work-desk. DEAL, by design, is
resilient to natural shifts in lighting conditions and can be configured to
handle abrupt changes in ambient illumination (e.g., due to toggling of room
lights). We collected data samples from 4800 sessions with 120 volunteers in 4
typical workplace settings and conducted a series of experiments to evaluate
the quality of our proposed approach thoroughly. Our results show that DEAL can
de-authenticate a departing user within 4 seconds with a hit rate of 89.15% and
a fall-out of 7.35%. Finally, bypassing DEAL to launch a lunchtime attack is
practically infeasible as it requires the attacker to either take the user's
position within a few seconds or manipulate the sensor readings sophisticatedly
in real-time. | Ankit Gangwal, Aashish Paliwal, Mauro Conti | 2023-09-21T16:18:51Z | http://arxiv.org/abs/2309.12220v2 | # De-authentication using Ambient Light Sensor
###### Abstract
While user authentication happens before initiating or resuming a login session, de-authentication detects the absence of a previously-authenticated user to revoke her currently active login session. The absence of proper de-authentication can lead to well-known _lunchtime_ attacks, where a nearby adversary takes over a carelessly departed user's running login session. The existing solutions for automatic de-authentication have distinct practical limitations, e.g., extraordinary deployment requirements or high initial cost of external equipment.
In this paper, we propose "DE-authentication using Ambient Light sensor" (DEAL), a novel, inexpensive, fast, and user-friendly de-authentication approach. DEAL utilizes the built-in ambient light sensor of a modern computer to determine if the user is leaving her work-desk. DEAL, by design, is resilient to natural shifts in lighting conditions and can be configured to handle abrupt changes in ambient illumination (e.g., due to toggling of room lights). We collected data samples from 4800 sessions with 120 volunteers in 4 typical workplace settings and conducted a series of experiments to evaluate the quality of our proposed approach thoroughly. Our results show that DEAL can de-authenticate a departing user within 4 seconds with a hit rate of 89.15% and a fall-out of 7.35%. Finally, bypassing DEAL to launch a _lunchtime_ attack is practically infeasible as it requires the attacker to either take the user's position within a few seconds or manipulate the sensor readings sophisticatedly in real-time.
Ambient light, De-authentication, Sensor, System security, Workplace.
## I Introduction
Computer users in different establishments (e.g., universities, workplaces) often share workspace. These users either work on shared computers (e.g., in a library) or have a dedicated computer1 (e.g., in an office). In either case, user authentication is critical to prevent any unauthorized access. Generally, the user authenticates via the secret PIN, password, or recently emerging biometrics-based techniques. However, such authentication typically happens only once while initiating the login session. After successful authentication, the user spends time to continuously use the computer and its services. If the user wants to leave her computer for whatever reason during this period, her currently active session must be locked/logged out; especially in shared workspace settings. Failure to do so can lead to _lunchtime_ attacks [1, 2], where an adversary (typically an insider) gains access to the user's running session and engages in potentially undesirable activities.
Footnote 1: We use the term ‘computer’ to equally represent a desktop and a laptop.
To prevent such unauthorized access, either the user must terminate the running session by explicitly locking/logging out, or the system must automatically revoke the previously-authenticated session, i.e., de-authenticate the user. Oftentimes, the users are apathetic or lazy (especially when taking short breaks) and avoid terminating the session because logging in again can be annoying. On another side, de-authenticating the user frequently with too-short inactivity timeouts can aggravate the user while choosing a too-long inactivity timeout leaves room for _lunchtime_ attacks [3].
Researchers from both academia and industry have put immense efforts into making the authentication techniques more robust, accurate, efficient, and convenient to use [4]. For instance, biometric-based authentication techniques impose less cognitive load on the users compared to the password-based approach. Nonetheless, password-based authentication still remains the most commonly used approach; mainly because it is intuitive and does not require any special hardware. But passwords have their demerits. First, recent technological advances are making passwords even more susceptible to cracking and potentially obsolete for use in the near future [5]. Second, passwords have no role in automatic user de-authentication, which means a separate mechanism is required. To this end, researchers have proposed different user de-authentication and continuous-authentication mechanisms. The state-of-the-art solutions (cf. Section II) require external equipment [1, 2, 6, 7, 8, 9, 10, 11], are relatively expensive [1, 2, 9, 11], need physical customization or specific installation [1, 2, 7, 8, 12], are complex to deploy [2, 6, 7], involve regular maintenance [2, 8, 9, 10], or sometimes cause inconvenience to the user [6, 9, 10, 11]. Such limitations hinder a broader adoption of the existing solutions. Therefore, a solution is needed that can address all of these issues while handling the automatic user de-authentication process efficiently.
On the other side, consumer devices (e.g., phones, tablets, computers) are becoming sensor-rich to provide different useful functionalities. Ambient Light Sensor (ALS) is one such sensor. ALS has been pervasively found on phones and tablets. Nonetheless, ALS has recently started to become common on consumer-grade laptops and desktops; primarily to comfort users' eyes by adapting the brightness and/or color tone of the screen in response to changing lighting conditions. ALS is generally mounted on a computer's display screen (e.g., as shown in Fig. 1). A generic ALS is both fast and efficient in capturing changes in lighting conditions.
In this paper, we propose "DE-authentication using Ambient Light sensor" (DEAL), a novel de-authentication technique that utilizes the built-in ALS of a computer to decide whether the user is leaving her work-desk. In particular, DEAL takes advantage of the fact that a user normally sits/stands closer (suggested between 16 to 30 inches [13]) to the computer while working. Thus, the user can affect the illumination perceived by the computer's ALS when she moves away. In the simplest case, the user directly blocks the Line-of-Sight (LoS) path between ALS and the light source. Nonetheless, the ambient lighting conditions around the computer's ALS can also be influenced due to partial blocking of its LoS, shadowing it, or even reflection of light towards it from the departing user's body (cf. Section IV). We design DEAL to analyze the changes in lighting conditions via ALS readings to decide whether the user is departing from her work-desk. DEAL intrinsically addresses the above-mentioned issues of the state-of-the-art works by its design, i.e., (1) no external equipment is required as the ALS is built into a modern computer, (2) the ALS is low-cost, and its cost is already included in the computer's price, (3) no physical installation of hardware is needed, (4) its software is simple to deploy, (5) no periodic maintenance is required as an ALS is generally long-lasting and is powered directly by the computer, and (6) more importantly, it is user-friendly as the user is not required to carry or wear any apparatus.
_Contribution:_ The contributions of our work are as follows:
1. We propose DEAL, a novel, unobtrusive, fast, and inexpensive de-authentication approach that is suitable for modern computers equipped with a built-in ALS.
2. We thoroughly evaluate the performance of our proposed approach using data samples collected from 4800 sessions with 120 volunteers in 4 typical workplace settings. DEAL can attain an overall hit rate of 89.15% and a fall-out of 7.35% to de-authenticate the user within 4 seconds.
3. Finally, we compare DEAL with the state-of-the-art de-authentication approaches and delineate their respective advantages and limitations. We argue that the said performance of DEAL comes without any extraordinary requirements, customization, or expensive equipment, which makes it suitable for practical adoption.
_Organization:_ The remainder of this paper is organized as follows. Section II presents a comparative summary of the related works. We elucidate our system and adversary models in Section III. We explain our proposed approach in Section IV and present its evaluation in Section V. Section VI elaborates on the salient features and potential limitations of our work. Finally, Section VII concludes the paper.
## II Related works
Researchers from both academia and industry have put extensive efforts over the decades to develop effective user authentication techniques. To verify a user's identity, a typical authentication procedure utilizes: (1) user's knowledge (e.g., password, pin) [5], (2) user's possession (e.g., token, key-card) [14], (3) user's physical attributes (e.g., biometrics) [15], (4) user's behavior (e.g., gestures, typing patterns, eye movements) [16], or (5) a combination of these to enable two-factor authentication [17, 18].
On another side, the need of user de-authentication arises after successful authentication of a user by the system. It is worth mentioning that a user's de-authentication by the system is independent of the authentication step. Therefore, the procedures for user de-authentication are distinct. One of the commonly used mechanisms for user de-authentication is the inactivity time-out approach. However, such an approach is ineffective because: (1) determining the optimal length of a static timeout interval is not straightforward, and (2) checking the user's presence/absence in front of the system is beyond its scope [3]. Given the significance of user de-authentication to prevent _lunchtime_ attacks, different mechanisms have been proposed that aim at continuously establishing the user's presence/absence near the system.
Kaczmarek et al. [2] propose _Assentication_ to profile user's sitting posture. In particular, _Assentication_ installs 16 pressure sensors in an office chair to capture a hybrid biometric trait by combining user's behavioral and physiological characteristics. Though _Assentication_ has low false positive and false negative rates, it has two key limitations. Firstly, it has low permanence, i.e., the hybrid biometric trait that it captures naturally changes over time for a given user. Secondly, the cost involved is not trivial, i.e., about $150 per chair. While eye movement tracking has been previously employed to authenticate users [16], Eberz et al. [1] use gaze tracking to prevent _lunchtime_ attacks. Their system continuously tracks the user's eye movements with high accuracy. Since gaze tracking requires its user to keep their sight in a particular direction, any head movement taking the sight away can generate false positives. Moreover, the cost of eye-tracking equipment hinders its large-scale adoption. Rasmussen et al. [6] propose a new biometric based on the human body's response to an electric pulse signal. Their approach involves applying a low-voltage pulse signal to one of the user's palms and measuring the body's response in the user's other palm. Apart from the cost of specialized hardware, engaging both hands of the users with pulse-response hardware restricts its general acceptability. Similarly, authors in the work [11] use ECGs to build continuous authentication systems that require end users to wear specialized hardware.
FADEWICH [7] measures the attenuation of wireless signals due to the human body for estimating the location of a user in a room, and the user is de-authenticated based on the user's estimated position. Their system uses 9 sensors in a fixed office setup to achieve very high accuracy. The major drawback of their approach is that the structure and setup of the office heavily affect the placement of sensors. Thus, each office requires customized positioning of sensors. Moreover, the presence and movements of other persons induce false positives. The keystroke dynamics technique [19] profiles a user's typing style. It is a simpler mechanism for continuous authentication, which is easily deployable and does not need specialized hardware. However, researchers [20] have shown that a brief training is sufficient to imitate the typing patterns of target users, even when their typing patterns are only
partially known. DEB [8] instruments an office chair with two Bluetooth low-energy beacons. An application running on the target system monitors the signal strength of the received Bluetooth beacons. A human body present in the line of sight of a beacon affects the strength of the received signal, which is interpreted to keep the user logged into the system. Apart from interference due to nearby beacons, the lifespan and appropriate installation of Bluetooth beacons are the key concerns here. BLUFADE [12] employs deep learning algorithms to continuously detect the user's face in a webcam feed. However, using a camera feed for de-authentication carries apparent privacy concerns [21]. Thus, the authors propose to obfuscate the webcam with a physical blurring layer (e.g., anti-reflective obfuscating film) and use blurred images for face detection. Such an approach hampers the normal usage of the webcam. More importantly, it does not address the possibility of reconstructing the user's facial traits from blurry images.
ZIA [10] proposes monitoring the proximity of the user via a physical token borne by the user. Such a token periodically exchanges information with the target system over a secure channel, and in the absence of such communication, the user is de-authenticated by the system. Similarly, ZEBRA [9] uses a wrist bracelet fitted with a gyroscope, an accelerometer, and a radio. When the user interacts with the system, the bracelet captures and shares the wrist movements with an application running on the system. The application correlates the wrist movements with strokes on the keyboard to establish the user's presence. The key limitation of these approaches is that the user is required to always bear the token/bracelet. Furthermore, the tokens/bracelets also require periodic recharging or replacement of batteries. Relevant to our work, researchers have used ALS for user authentication [22] and for tracking a user's activities [23, 24, 25, 26].
## III System and adversary models
In this section, we describe the system and adversary models we consider in our work. Section III-A presents the deployment scenario of the proposed de-authentication mechanism, and Section III-B elucidates the potential threat maneuvers of an adversary.
### _System model_
DEAL is designed for computers that come with a built-in ALS. The ALS data feed is processed in real-time by a simple application running in the background on the target computer. Since the primary goal of any de-authentication mechanism is to prevent _lunchtime_ attacks that are prevalent at workplaces [1, 2, 8], our proposed system is expected to be used in common workplace setups. DEAL is absolutely unobtrusive. The user arrives at her work-desk, settles in her chair, logs into her computer (a desktop or docked laptop) via a preset authentication mechanism, uses the computer, and finally gets up to leave her desk. While the user prepares to depart from her desk, the system should automatically lock her out to prevent any unauthorized access. DEAL uses the light-intensity data feed from ALS to de-authenticate a departing user in real-time.
Contrary to state-of-the-art de-/continuous-authentication mechanisms [1, 6, 9, 19], DEAL does not need the user to interact continuously with the system. In fact, there can be situations when the user is present at the work-desk, but not interacting with the system. For instance, the user may be using a smartphone, reading a document, or simply watching a photo on the system. In such scenarios, de-authenticating the user due to her inactivity is undesirable and can be annoying.
### _Adversary model_
We assume that the adversary has physical access to the user's office and, consequently, to her computer. An office colleague, a visitor to the office, a business customer, or a housekeeping person are representative examples of such an adversary who may be interested in getting access to her computer. Since the adversary does not know the login credentials required to log in to the user's computer, the adversary's goal is to gain access to the user's running/authenticated session.
An adversary can try the following to bypass DEAL: (1) take the user's position (and control the computer) before DEAL can de-authenticate the user, or (2) manipulate the light intensity perceived by her computer's ALS in such a way that DEAL does not de-authenticate the departing user at all. The former approach represents the typical _lunchtime_ attack strategy; it is straightforward, yet effective if DEAL takes too long to de-authenticate. So, DEAL should operate fast enough to render such an attempt ineffective. The latter may involve using sophisticated tools. For instance, the adversary may use a custom beam of light to compensate for the ALS readings affected by the departing user. Such a maneuver requires the adversary to know the exact light intensity levels observed by the target ALS when the user is departing, which may be possible by: (1) installing an ALS near the target's ALS (ineffective, as it will be visible to the user), (2) compromising the target machine to get such information (beyond the scope of the _lunchtime_ attack), or (3) physically approaching the desk to measure/compensate the readings (essentially the same as the first approach; a fast-operating DEAL will handle it).
Separately, a different type of adversary can focus on triggering false de-authentications, e.g., by turning the lights in the room on or off. Although such an action can annoy the user by incorrectly de-authenticating her, the adversary does not get access to the user's computer. Nonetheless, toggling room lights is a part of routine office activities. Such sudden changes in lighting conditions significantly affect the ALS readings and induce large outliers. Therefore, we can easily identify and adapt to new lighting conditions when such large outliers in the ALS readings are consistently present.
## IV Proposed Method
We now present the conceptual and intrinsic details of DEAL. The fundamental task of a meaningful de-authentication technique is to determine the user's presence in front of the computer. To this end, DEAL utilizes the data feed from the computer's ALS. The illumination perceived by the ALS can be affected by the user's movements. As a representative example, Fig. 1 demonstrates that a user's movement of getting up from or sitting down on her chair can directly affect the ambient lighting conditions around the computer's ALS. Naturally, the scale and duration of such an impact depend on a variety of factors, e.g., how much and for how long the user has intersected the LoS path between the ALS and the light source. We would like to highlight that though light sources are typically roof-mounted (or mounted high on the wall) in workplaces, the light source may not be in the direct LoS of the ALS (cf. Fig. 1). However, a user's movements can still affect the lighting conditions around the ALS, in particular due to partial blocking (see Footnote 2), shadowing, or even reflection of light from the departing user's body. By measuring the changes in ambient lighting conditions through ALS readings, DEAL determines whether the user is departing from her work-desk, and subsequently de-authenticates her when required.
Footnote 2: In full blocking, the user totally obstructs the illumination received by the ALS; the simplest example would be to cover the ALS with the hands. In partial blocking, the user partially hinders the light coming from a source, for instance, when a user intercepts the ALS's LoS partially. In partial blocking, the shadow of the user may or may not fall around the ALS; we call the former scenario shadowing. The reflection of light is a natural phenomenon.
We make the following two reasonable assumptions in the implementation of DEAL: (1) the user will continue to work in the same position (standing or sitting) as she was in while initializing the current login session, and (2) if the user was sitting, she will get up before leaving. It is worth mentioning that if the user is standing while working at her work-desk, she will likely be blocking the ALS's LoS; such a case is simpler for DEAL to handle. For the sake of brevity, the rest of the paper proceeds with the scenario in which the user is sitting while using her computer.
The data feed from the ALS can be modeled as a univariate time series of the observed light intensity. Thus, DEAL adopts an amended sliding-window-average-based approach for monitoring changes in lighting conditions. In particular, each reading (\(R\), in lux) from the ALS is compared against the average of the running _window_ as described in Eq. 1:

\[|\mu(window)-R|>\mu(window)\cdot\Delta/100, \tag{1}\]
where \(\Delta\) (a natural number) is a predefined threshold. Such sliding-window-average-based methods are typically designed to identify an outlier outside of the current trend in a time series. However, a single outlier may not be sufficient in our case to correctly distinguish the user's movements, because an ALS can provide several readings - according to its operating frequency (\(f\), in Hz) - within a fraction of a second. Moreover, different user activities can last for different amounts of time, e.g., the act of getting up and moving away from the computer can take up to a few seconds. So, it is intuitive to say that if a user's movement intercepts the ALS's LoS for a longer period of time, then it will affect more ALS readings. We design DEAL to incorporate the duration of the impact on the ALS to distinguish user movements. To this end, we define a parameter \(\eta\) (in seconds). While \(\Delta\) defines the minimum distance between an outlier and the average of the running _window_ (cf. Eq. 1), \(\eta\) specifies the minimum duration of time during which each consecutive \(R\) (see Footnote 3) should be an outlier for recognizing the user to be departing and subsequently de-authenticating her.
Footnote 3: Our current implementation requires each consecutive \(R\) within the \(\eta\) duration of time to be an outlier. We are aware that such a design decision can result in false negatives even when just one of the values is not an outlier. However, such stricter control helps us evaluate the minimum performance of our system. We can certainly optimize such checks to improve the system. Currently, \(\eta\) works together with the parameter \(\ell\) to provide some relaxation to the system.
Our system has two more parameters, \(\omega\) (in seconds) and \(\ell\) (in seconds). \(\omega\) defines the size of the sliding window. \(\ell\) is a tuning parameter that defines the maximum duration of time, counted from the occurrence of the first outlier in a wave of outliers, within which the required consecutive outliers should occur for user de-authentication. By virtue of their respective definitions, \(\eta\leq\ell\); the system will not work if \(\eta>\ell\), because it is impossible to have \(\eta\) (say, 5 seconds) of consecutive outliers within a shorter \(\ell\) (say, 2 seconds). Put simply, \(\ell\) separates waves of outliers. A larger value of \(\ell\) enables us to process more values of \(R\) to satisfy the constraint on \(\eta\); however, a larger \(\ell\) will also delay resetting and recovering from a (short) wave of outliers. Conversely, a larger value of \(\eta\) prevents false alarms due to subtle user movements. Algorithm 1 exhibits the pseudocode for the core logic of DEAL.
Since each ALS can operate at a different \(f\), we begin by aligning \(\omega\) and \(\eta\) to a given ALS via its \(f\) (lines 1-3). We next initialize the _window_ and the temporary variables (lines 4-6). We compare each reading from the ALS with the mean of the _window_ (lines 7-9). If an outlier is found, a counter is incremented (line 10), and the time of the first outlier is recorded (lines 11-12). If \(R\) is not an outlier, the counter for outliers is reset (line 15), and \(R\) is adjusted in the _window_ (lines 16-17). **It is noteworthy that the running _window_ average directly handles the natural shifts in lighting conditions.** The user is de-authenticated if the required number of outliers (\(\eta^{\prime}\)) is found within \(\ell\) seconds (lines 19-20). If the time elapsed since the first outlier of the current wave exceeds \(\ell\), we reset the _window_ and the temporary variables (lines 22-23).

Fig. 1: A representative depiction of how a user's movements affect the illumination perceived by the ALS.
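Algorithm 1 itself is not reproduced here. For concreteness, the following is a minimal Python sketch of the core logic just described, assuming a plain iterable of lux readings sampled at \(f\) Hz; the function name, the stream interface, and the return convention are illustrative assumptions, not the authors' actual implementation.

```python
from collections import deque

def deal_core(readings, f, omega=3.0, eta=1.5, ell=4.0, delta=5):
    """Sketch of DEAL's core logic: sliding-window outlier detection.

    readings: iterable of ALS values R (in lux) sampled at f Hz.
    Returns True as soon as the user should be de-authenticated.
    """
    win_len = int(omega * f)             # align window size to the sensor rate
    eta_req = int(eta * f)               # consecutive outliers required
    window = deque(maxlen=win_len)       # running window of non-outlier readings
    n_outliers, first_t = 0, None        # temporary variables

    for i, r in enumerate(readings):
        t = i / f                        # timestamp of this reading
        if len(window) < win_len:        # warm-up: fill the window first
            window.append(r)
            continue
        mu = sum(window) / len(window)
        if abs(mu - r) > mu * delta / 100.0:       # outlier test, Eq. (1)
            n_outliers += 1
            if first_t is None:
                first_t = t              # start of the current outlier wave
            if n_outliers >= eta_req and (t - first_t) <= ell:
                return True              # de-authenticate the user
        else:
            n_outliers, first_t = 0, None          # reset the outlier counter
            window.append(r)             # window adapts to new lighting conditions
        if first_t is not None and (t - first_t) > ell:
            window.clear()               # a short wave ended: reset and recover
            n_outliers, first_t = 0, None
    return False
```

For instance, with `f=10` and the default parameters, `deal_core` requires 15 consecutive outliers (\(\eta=1.5\) s at 10 Hz) to occur within a 4 s wave before de-authenticating.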
## V Evaluation
We describe our evaluation setup in Section V-A and data collection method in Section V-B. We discuss our experimental results in Section V-C.
### _Evaluation setup_
We evaluate DEAL in a typical office setup. To this end, we created an office space illuminated with both natural and artificial lights. As shown in Fig. 2, our office setup has two ceiling-mounted white light sources that we keep on and a standard transparent window that allows natural light to come in. Though the half-glass door was kept closed during the experiments, its transparent glass portion in the upper half remained unobstructed.
To emulate typical work-desk positions with respect to the lighting conditions, we set up four work-desks at different locations in the room (cf. Fig. 2). In particular, position _P1_ emulates a position with lower lighting since it is far from the light sources; moreover, a user working in position _P1_ may further block the illumination perceived by the computer's ALS. Being closer to light source 2, positions _P2_ and _P4_ represent normally illuminated positions. Lastly, position _P3_ has copious lighting. In our experiments, we used a Lenovo ThinkPad Yoga 370 laptop, which comes with a built-in ALS. We modified _iio-sensor-proxy_ [27] to capture readings from the ALS. It is important to highlight that we periodically checked the health of our computer's ALS using a phone-based ALS to avoid any bias or error in our ALS readings.
### _Data collection_
To collect ALS data for our experiments, we invited student volunteers to participate in our study. A total of 120 students volunteered for our study over a period of 90 days. Since the volunteers belong to the student body of a large university, the majority of them were naturally in the 18-24 age group. Fig. 3 shows the distribution of the self-declared age groups, sex categories, and height classes (see Footnote 4) of the volunteers.
Footnote 4: The volunteers declared their height classes based on the distribution of adult human heights [28].
Before beginning each instance of our data collection activity, we asked the volunteer to settle into a comfortable sitting/working posture at a designated desk, after which we started recording the data from the computer's ALS. At the same time, we asked the volunteer to use the computer normally for about a minute. Next, the volunteer was asked to get up and move away from the chair. It is important to highlight that, to prevent any interference due to our operational activities, we remotely operated our computer to capture the data from its ALS. We also documented the time when the volunteer was instructed to get up, mainly for post-processing and analysis. Each volunteer repeated the entire activity ten times at each of the four work-desk positions (i.e., _P1_, _P2_, _P3_, and _P4_). Therefore, our dataset contains a total of 4800 data samples, i.e., 1200 data samples for each position.

Fig. 3: The distribution of age, sex, and height of the volunteers.

Fig. 2: The top view of our office setup.

Fig. 4 depicts a random set of data samples collected from different positions during our data collection activity. As discussed in Section V-A, the four work-desks experience different lighting conditions. This phenomenon is also evident from the light intensity scales shown in Fig. 4 (a)-(d). We now briefly describe each plot shown in Fig. 4.
As depicted in Fig. 4(a), the ALS readings remain nearly constant as long as the user remains seated in _P1_. This is because the body of the user blocks the illumination coming from the distant light sources. The illumination perceived by the ALS drops further when the user gets up. Intuitively, the ALS receives a much higher illumination once the user has completely moved away from the computer.
In the case of _P2_ and _P4_, the light sources are located on the right side and the left side of the computer, respectively. The ALS on our computer is located towards the top-right of the screen. Thus, the chances of a sitting user shadowing the LoS between the ALS and the light sources are lower in _P2_ than in _P4_. As illustrated in Fig. 4(b) and Fig. 4(d), both _P2_ and _P4_ observe similar levels of light intensity. However, the ALS readings in _P2_ before and after the user moves away are at the same level, which implies that in this particular case, the user was not shadowing the ALS. In contrast, the ALS readings in _P4_ after the user moves away reach levels similar to those of _P2_, which implies that in this particular case, the user was marginally shadowing the ALS.
Unsurprisingly, the light intensity levels are the highest in _P3_. When the user moves away from _P3_ in the particular case shown in Fig. 4(c), the ALS readings drop even lower than the levels when the user was sitting. A possible explanation is that, while the user was sitting, the light coming from the sources in front of the user was being reflected by the user towards the ALS. Nevertheless, the ALS readings still clearly capture the movements of the departing user.
### _Experimental results_
We empirically assess the quality of our proposed approach with real-world data. As explained in Section V-B, our dataset contains a total of 4800 data samples (i.e., 1200 data samples for each position) collected from 120 volunteers. We designed a series of experiments for a thorough analysis. We begin by investigating the general performance of DEAL; here, we vary its input parameters to find a set of suitable configurations. Next, we study the effect of the different positions (i.e., lighting conditions) considered in our work. Finally, we examine the impact of the users' height. For a de-authentication system, false negatives are more severe than false positives; at the same time, true positives are also critical. Therefore, we report the hit rate (see Footnote 5) for each of our experiments.
Footnote 5: \(Recall=HitRate=\frac{TP}{TP+FN}=1-MissRate\)
An analysis of our data samples indicates that the volunteers took roughly two to four seconds to get up and move away from a work-desk. Thus, we set \(\ell\) between 2 and 4 seconds to approximately cover the entire user movement. We observe that a portion of the ALS readings affected by a user's movement can be treated, depending on the value of \(\Delta\), as non-outliers. This is especially the case for the values corresponding to the start and end of the movement; such values can still be within the threshold because the _window_ is not updated during a wave of consecutive outliers (cf. lines 9, 14-17 in Algorithm 1). Therefore, we choose \(\eta\) between 1 and 2 seconds, which is about half the time the volunteers took to move. The value of \(\omega\) is fixed at 3 seconds, while \(\Delta\) is chosen between 5 and 20 based on preliminary experiments.
We now discuss the generic performance of DEAL. TABLE I shows the hit rate of our system over the entire dataset of 4800 samples for different values of \(\eta\), \(\ell\), and \(\Delta\). An increasing value of \(\Delta\) corresponds to the requirement that a user must affect the ALS readings substantially for the system to recognize an outlier. Thus, DEAL becomes more resistant with increasing values of \(\Delta\). Such behavior is evident in each row of TABLE I. The performance of our system is affected by \(\eta\) in a similar way: a larger value of \(\eta\) requires a longer duration of outliers, which becomes even more challenging to attain under our stringent requirement that the outliers be consecutive. A comparison of the values corresponding to increasing \(\eta\) over fixed \(\ell\) and \(\Delta\) reflects the same. On the other hand, a larger value of \(\ell\) enables us to process more values of \(R\); the performance of DEAL improves with increasing \(\ell\) for a given pair of \(\eta\) and \(\Delta\). From these experiments, we find \(\Delta=5\) and \(\ell=4s\) to be suitable parameter values for DEAL. Since a larger \(\eta\) helps us avoid subtle user movements, we prefer \(\eta=1.5s\) over \(\eta=1.0s\) for our chosen values of \(\Delta\) and \(\ell\). With these values of \(\Delta\), \(\ell\), and \(\eta\), we observed a fall-out (see Footnote 9) of only 7.35%.
Footnote 9: \(Fall_{Out}=FalsePositive_{Rate}=\frac{FP}{FP+TN}\)
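For completeness, the two metrics reported in this section follow directly from confusion-matrix counts, as in the short sketch below; the counts in the usage line are placeholders, not the paper's raw numbers.

```python
def hit_rate(tp: int, fn: int) -> float:
    """Recall = HitRate = TP / (TP + FN) = 1 - MissRate (Footnote 5)."""
    return tp / (tp + fn)

def fall_out(fp: int, tn: int) -> float:
    """FallOut = FalsePositiveRate = FP / (FP + TN) (Footnote 9)."""
    return fp / (fp + tn)

# placeholder counts, for illustration only
print(f"hit rate = {hit_rate(95, 5):.2%}, fall-out = {fall_out(7, 93):.2%}")
```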
Fig. 4: A random set of ALS data samples collected from different positions.
To understand the effect of different lighting conditions, we organize our dataset according to the different positions (i.e., 1200 samples per position) considered in our study. TABLE II shows the hit rate of our system over the different positions for \(\eta=1.5s\), \(\ell=4s\). Our results indicate that DEAL performs best in _P1_, where most volunteers blocked the illumination observed by the ALS while working at the computer. We see the steepest decline in the hit rate at _P3_: since _P3_ has copious lighting, affecting the ALS readings substantially enough for higher values of \(\Delta\) is difficult. _P2_ and _P4_, which represent normally illuminated positions and have similar light intensity levels, obtain comparable results. Overall, our system performs competently for \(\Delta=5\) across the different positions.
Next, we consider the users' height in our study. Due to the disparity in the number of volunteers per height class, we take 250 randomly chosen samples (roughly half of the taller class's samples) from each height class. Fig. 5 depicts the hit rate of DEAL over the different height classes for \(\eta=1.5s\), \(\ell=4s\). While our results for \(\Delta=5\) are alike across the height classes, DEAL favors taller users for increasing values of \(\Delta\). The rationale for such behavior is that a taller user likely remains in the LoS path of the ALS while working, and when such a user moves away, the ALS readings are affected sufficiently for DEAL to operate properly.
To conclude, we argue that DEAL yields an overall effective performance. In particular, our system attains such scores without any extraordinary requirements or customization, as seen in the case of state-of-the-art solutions (cf. Section I). DEAL can de-authenticate the user within two to (more realistically) four seconds (i.e., based on the value of \(\ell\)). In a real-world deployment, an enrollment step at the end user's work-desk can help tune the system to function even better.
## VI Discussion
We specify the key attributes of our work in this section. Section VI-A compares DEAL with state-of-the-art user de-authentication schemes and highlights its salient features. Section VI-B discusses the potential limitations of DEAL.
### _Comparison with existing de-authentication schemes_
For a rigorous comparison among the key de-authentication solutions, we assess each one of them on a dozen crucial dimensions. TABLE III summarizes our comparison and underlines the prominent features and limitations of the key existing solutions.
One of the fundamental requirements for any consumer technology is its user-friendliness. In our context, it is directly related to the unobtrusiveness (cf. col. 1) of a given de-authentication solution and to whether it compels the user to carry, wear, or bear anything extra (cf. col. 2). We find that ZEBRA, pulse-response, ZIA, and 1DMRLBP can cause inconvenience to the user by requiring them to bear a bracelet, a pair of electrodes, a token, and an ECG apparatus, respectively. The existing solutions can be further classified as biometric or non-biometric (cf. col. 3) and continuous (see Footnote 10) or non-continuous solutions (cf. col. 4). Biometric-based solutions (i.e., gaze tracking, _Assentication_, pulse-response, keystroke dynamics, BLUFADE, 1DMRLBP) are certainly difficult to evade (cf. col. 5), as imitating someone else's biometry or behavioral patterns is highly complex. On the other hand, some continuous solutions can be subverted; for instance, the authors of [29] have shown that an attacker can evade ZEBRA via opportunistic observations. Both the biometric and the continuous solutions are accurate. However, the performance of both categories of solutions comes at the cost of: (1) a user enrollment phase (cf. col. 5) that can be laborious and time-consuming for the end user, and (2) the non-trivial cost of the equipment required to capture their respective features. Only a few solutions are enrollment-free. Regarding the difficulty of evasion, FADEWICH is not suitable for a densely occupied workspace, while the classic timeout approach fails to sense the user's absence.
Footnote 10: The user is re-authenticated throughout the session, and de-authentication happens whenever she cannot prove her identity.
A user may not interact continuously with her computer (e.g., while attending a phone call). Thus, another key attribute of a user-centric de-authentication scheme is its support for a user's inactivity (cf. col. 7). Timeout, ZEBRA, gaze tracking, and keystroke dynamics depend on user interactions, and thus they violate this objective. The main limitation of the majority of existing schemes is their dependence on external equipment for operation (cf. col. 3). Such a dependence not only hinders their widespread adoption, but can also spawn several related concerns, i.e., maintenance, physical customization, deployment complexity, and price. Only the timeout approach, keystroke dynamics, BLUFADE, and our proposal do not depend on external hardware; thus, they are not generally affected by the consequent concerns mentioned before.

Fig. 5: Hit rate (%) over different height classes for \(\eta=1.5s\), \(\ell=4s\).
ZEBRA, DEB, and ZIA demand periodic recharging or replacement of batteries, while _Assentication_ requires maintenance of the wires that supply power to the chair. The external hardware in the other such solutions is powered directly by the target computer. Finally, all these solutions also involve the risk of physical damage to the external hardware, which may require a replacement (cf. col. 3).
Some of the solutions that use external hardware require a particular installation of the equipment (i.e., gaze tracking, DEB) or even customization of the workplace infrastructure (i.e., _Assentication_, FADEWICH). In the other such solutions (i.e., ZEBRA, ZIA, 1DMRLBP, pulse-response), the user simply holds/wears the external apparatus. As discussed in Section II, BLUFADE requires affixing a particular physical barrier on the webcam (cf. col. 3). Regarding deployment complexity (cf. col. 3), _Assentication_ and FADEWICH are not simple to deploy in practice as they require alterations to infrastructure. Similarly, pulse-response involves complex handling of multiple apparatuses (arbitrary waveform generator, oscilloscope, brass hand-electrode, etc.). All the remaining solutions are simple to deploy even when they need particular placement of hardware (e.g., gaze tracking, DEB, BLUFADE). As far as the price is concerned (cf. col. 3), gaze tracking employs an expensive eye-tracking device. Though the price of FADEWICH and pulse-response is unknown, we suppose they are somewhat costlier as they use several sensors and apparatuses. The cost of the remaining schemes is low to medium. Finally, the number of subjects in the user/validation study could indicate the robustness of the evaluation results, which is the highest in our case (cf. col. 3).
In light of our analysis, we find that the state-of-the-art solutions lack a few or several vital characteristics of an effective and practical de-authentication scheme. On the other hand, DEAL is the only solution that possesses all these characteristics. Therefore, we believe it is the most useful and practical de-authentication scheme.
### _Limitations_
We now ponder upon the potential limitations of DEAL.
1. _ALS' presence:_ Our proposed de-authentication approach relies on an ALS. While ALS has been present on smartphones and tablets for a long time, it has only recently started to become available on laptops (e.g., MacBooks) and desktops (e.g., iMacs). Therefore, DEAL is futuristic and suitable for newer generations of computers. Nevertheless, one can attach a USB-powered ALS to use DEAL in the absence of a built-in ALS. One related issue could be the physical placement of ALS on the computer. ALS (like other user-centric sensors, e.g., webcam) is generally mounted on the front side of the display screen. Our approach will work as long as ALS faces the users. For the sake of readers' convenience, Fig. 1 and Fig. 6 conceptualize DEAL on a laptop and a desktop, respectively.
However, any unusual sensor placement (e.g., behind the screen panel) will render our system unusable. It is worth noting that such an unusual sensor placement could be suitable for portable devices, but not for computers that can be docked near a wall.
2. _False alarms due to passersby:_ A common phenomenon in any workplace setting is the movements of passersby (e.g., colleagues). We set up a separate experiment to investigate such a scenario. Fig. 7 shows different user positions, where 1 represents the legitimate user's standing position, 2 shows a passerby crossing too close to the target user, and 3 depicts a passerby away from the target user.
We find that our system remains largely unaffected as long as a passerby (cf. 3) walks at some distance (about 2-3 ft) from the user. In particular, any wave of outliers, if induced, is sparse and short. On the other hand, our system de-authenticates the user when a passerby (cf. 2) comes too close to the user, as this affects the ALS readings. This can be seen as a false alarm. Nevertheless, such de-authentications can protect the user's privacy from shoulder surfers and onlookers.
3. _Violation of our assumptions:_ Our system will create a false alarm if the user changes her working posture (e.g., from sitting to standing). Similarly, it may fail to de-authenticate the user if she moves away from her work-desk without getting up (e.g., by dragging the chair). Violating the assumptions or requirements of any given scheme will affect its functioning, and our work is no different. Nevertheless, one may see this as a limitation of our work.
## VII Conclusion and future work
Both user authentication and de-authentication are essential operations for the security of a computer system. It is even more critical to de-authenticate a user in a shared workspace setting, because an insider can gain access to the user's active session through _lunchtime_ attacks. The research community has proposed different de-authentication and continuous-authentication techniques to improve over the inactivity-timeout-based method. The existing works unfortunately have various limitations, e.g., complex installation procedures and the requirement of external hardware to assert user presence. In this paper, we propose a novel approach, called DEAL, that uses the ALS present on a computer to de-authenticate the user. We assessed the quality of our proposed approach empirically in a real-world setting. While being effective and fast, DEAL is also unobtrusive.
In the future, we would like to test DEAL in unconventional workplace settings, such as in a cafe or under different colored lighting. We will explore the possibility of assisting DEAL with machine learning-based classification techniques to further improve its performance. We will also investigate the effect of personalized tuning (e.g., via an enrollment stage for the end user) on its performance.
## IRB approval
We obtained prior approval for our experiments from the Institutional Review Board (IRB) of the institute, where the experiments were carried out. The level of review recommendation was: _Exempt_. All participants were volunteers, who were informed of the actual use of the collected data, and their informed consent was obtained before starting the data collection process. No sensitive data was collected. In particular, no participant names, contact numbers, or other Personally Identifying Information (PII) was collected. The minimal identifying information retained was also anonymized. All the data was (and is) stored in an encrypted form.
|
2309.14889 | Electronic and optical properties of core-shell InAlN nanorods: a
comparative study via LDA, LDA-1/2, mBJ and $G_0W_0$ methods | Currently, self-induced InAlN core-shell nanorods enjoy an advanced stage of
accumulation of experimental data from their growth and characterization as
well as a comprehensive understanding of their formation mechanism by the ab
initio modeling based on Synthetic Growth Concept. However, their electronic
and optical properties, on which most of their foreseen applications are
expected to depend, have not been investigated comprehensively. $G_0W_0$ is
currently regarded as a gold-standard methodology with quasi-particle
corrections to calculate electronic properties of materials in general. It is
also the starting point for higher-order methods that study excitonic effects,
such as those based on the Bethe-Salpeter equation. One major drawback of
$G_0W_0$, however, is its computational cost, much higher than
density-functional theory (DFT). Therefore, in many applications, it is highly
desirable to answer the question of how well approaches based on DFT, such as
e. g. LDA, LDA-1/2, and mBJ, can approximately reproduce $G_0W_0$ results with
respect to the electronic and optical properties. Thus, the purpose of the
present paper is to investigate how the DFT-based methodologies LDA, LDA-1/2,
and mBJ can be used as tools to approximate $G_0W_0$ in studies of the
electronic and optical properties of scaled down models of core-shell InAlN
nanorods. For these systems, we observed that band gaps, density of states,
dielectric functions, refractive indexes, absorption and reflectance
coefficients are reasonably well described by LDA-1/2 and mBJ when compared to
$G_0W_0$, however, at a much more favorable computational cost. | Ronaldo Rodrigues Pela, Ching-Lien Hsiao, Lars Hultman, Jens Birch, Gueorgui Kostov Gueorguiev | 2023-09-26T12:41:00Z | http://arxiv.org/abs/2309.14889v1 | Electronic and optical properties of core-shell InAlN nanorods: a comparative study via LDA, LDA-1/2, mBJ and \(G_{0}W_{0}\) methods
###### Abstract
Currently, self-induced InAlN core-shell nanorods enjoy an advanced stage of accumulation of experimental data from their growth and characterization as well as a comprehensive understanding of their formation mechanism by the _ab initio_ modeling based on Synthetic Growth Concept. However, their electronic and optical properties, on which most of their foreseen applications are expected to depend, have not been investigated comprehensively. \(G_{0}W_{0}\) is currently regarded as a gold-standard methodology with quasi-particle corrections to calculate electronic properties of materials in general. It is also the starting point for higher-order methods that study excitonic effects, such as those based on the Bethe-Salpeter equation. One major drawback of \(G_{0}W_{0}\), however, is its computational cost, much higher than density-functional theory (DFT). Therefore, in many applications, it is highly desirable to answer the question of how well approaches based on DFT, such as _e. g._ LDA, LDA-1/2, and mBJ, can approximately reproduce \(G_{0}W_{0}\) results with respect to the electronic and optical properties. Thus, the purpose of the present paper is to investigate how the DFT-based methodologies LDA, LDA-1/2, and mBJ can be used as tools to approximate \(G_{0}W_{0}\) in studies of the electronic and optical properties of scaled down models of core-shell InAlN nanorods. For these systems, we observed that band gaps, density of states, dielectric functions, refractive indexes, absorption and reflectance coefficients are reasonably well described by LDA-1/2 and mBJ when compared to \(G_{0}W_{0}\), however, at a much more favorable computational cost.
## I Introduction
Wurtzite InAlN semiconductor alloys have a direct band gap that spans a wide spectral range, from 0.65 eV (InN) to 6.25 eV (AlN).[1; 2; 3] Therefore, many optoelectronic devices can potentially be fabricated from InAlN alloys, applicable in a wide wavelength range covering deep-ultraviolet (DUV) to near-infrared (NIR), such as light-emitting diodes, laser diodes, solar cells, and photodetectors.[4; 5; 6; 7; 8] However, InAlN thin films often contain a large number of structural defects and compositional inhomogeneities owing to the wide-range composition immiscibility of In\({}_{x}\)Al\({}_{1-x}\)N (\(0.1<x<0.9\)), the low dissociation temperature of InN (\(\sim 550\,^{\circ}\)C), and mismatches in lattice and coefficient of thermal expansion with common substrates.[9; 10; 11] Alternatively, InAlN grown in the form of low-dimensional nanostructures provides an opportunity to overcome the effects of lattice mismatch, such as threading-dislocation formation and substrate-film strain.
In the context of the InAlN low-dimensional nanostructures, self-induced core-shell InAlN nanorods (NRs) have been successfully synthesized by reactive magnetron sputter epitaxy (MSE) while their formation mechanism was elucidated by modeling the relevant precursor prevalence and their corresponding energetics using the DFT-based synthetic growth concept (SGC).[8] SGC is an extensive approach designed for modeling of nanostructures with complex morphology and accounting for the role of the precursors in their formation when employing a wide spectrum of vapor-phase deposition techniques.[12; 13; 14; 15]
Very high-crystal-quality nanomaterials can be grown on various substrates, including metals, metal nitrides, oxides, and Si,[16; 17; 18] which opens the possibility of integration with the mature device-fabrication technology of the integrated-circuit industry. Furthermore, the nanostructured form enables the fabrication of nanodevices with high performance, benefiting from the reduced geometry. For instance, InAlN nanospirals with tailored chirality have been demonstrated to reflect circularly polarized light with the corresponding handedness, through tuning the internal compositional distribution and the external geometry, which is very promising for fabricating high-performance optical elements.[19; 20] High-sensitivity photodetectors based on InAlN nanophotonic structures are applicable from the DUV to the NIR region.[5; 7; 21] By controlling the composition of InAlN with an In content of \(\sim 0.17\), strain-free multilayer InAlN/GaN distributed Bragg reflectors (DBRs) with a high peak reflectivity can be grown directly onto nanodevice structures for fabricating vertical-cavity surface-emitting lasers (VCSELs).[22; 23]
To aid the development of nanodevices based on core-shell InAlN NRs, it is crucial to have a theoretical tool to test different design scenarios and to help the interpretation of the electronic properties of as-synthesized core-shell InAlN NRs. Reliable simulation of their optical properties provides a strategic tool for tuning the core-shell InAlN NRs to potential electronic and optoelectronic applications. In this sense, a methodology is desirable that accurately describes the excitations of such nanostructures across a wide energy range, especially around the band-gap energy region. The solution to this problem is given by the \(G_{0}W_{0}\) approximation within many-body perturbation theory, which is considered the state of the art in _ab initio_ calculations of single-particle excitations.[24; 25; 26; 27; 28] \(G_{0}W_{0}\) can provide accurate quasi-particle corrections to (generalized) Kohn-Sham eigenvalues, yielding electronic structures in excellent agreement with experiments and with higher-order methods.[26; 29; 30; 31; 32] However, a major drawback of \(G_{0}W_{0}\) is its high computational cost, which can complicate its application to complex systems with hundreds or thousands of atoms.[33]
For this purpose, it is interesting to find approaches based on DFT that can reproduce \(G_{0}W_{0}\) results with reasonable accuracy while being much less computationally involved. Among the various possibilities, in this paper, we explore two: LDA-1/2 and the modified Becke-Johnson (mBJ) functional.
The LDA-1/2 approach has proven to be an efficient alternative for obtaining approximate quasi-particle corrections at low computational cost.[34; 35; 36; 37; 38; 39; 40] In particular, electronic properties of systems based on III-V semiconductors are well described by LDA-1/2.[33; 40; 41; 42] For this class of materials, LDA-1/2 also provides accurate one-particle energies and wavefunctions to solve the Bethe-Salpeter equation and obtain optical properties.[43] Regarding nanowires, LDA-1/2 calculations for Si, GaN, and GaP have been shown to describe the band gap with an accuracy comparable to \(G_{0}W_{0}\)[44; 45] and in good agreement with experiments.[46] These facts make LDA-1/2 an attractive _ab initio_ framework to study core-shell InAlN NRs. To what extent this is possible, however, has not yet been addressed.
Another promising choice is the mBJ potential,[47; 48] a semilocal meta-GGA functional shown to be quite accurate for band-gap calculations. It is competitive with \(G_{0}W_{0}\) and hybrid functionals in terms of accuracy, at a much lower computational cost.[47; 49; 50; 51; 52] Interestingly, band gaps of III-V semiconductors calculated with mBJ show good agreement with experiments.[53; 50] Apart from band gaps, optical properties of several materials have been obtained with mBJ,[54; 55; 56; 57; 58; 59] including III-V semiconductors,[59; 60] and mBJ at least improves over PBE when compared with experiment.[59] Studies employing mBJ for nanowires have been conducted as well,[61; 62; 63; 64] some of which have reported good agreement with measurements.[63; 64] It is, thus, important to verify how mBJ performs for studying core-shell InAlN NRs.
In this work, for the case of core-shell InAlN NRs, we conduct _ab initio_ calculations to analyze how LDA-1/2 and mBJ improve over LDA and how well they can approximate \(G_{0}W_{0}\) for the following electronic and optical properties: density of states (DOS), band gaps, dielectric function, refraction index, extinction and absorption coefficients, and the reflectivity. Nanostructures of similar structural and chemical complexity, together with their electronic and optical properties relevant to electronic applications, have been successfully studied previously using different flavors of GGA at the DFT level of theory[65] as well as the \(G_{0}W_{0}\) method.[66] Here, to keep the computational cost of the \(G_{0}W_{0}\) calculations moderate, we select as prototypes NRs with a diameter of 14 Å and with In compositions of 0, 12.5, and 25% within their core.
The paper is divided as follows: in section II, we introduce the theoretical aspects of this work; section III describes the computational methods employed; in section IV, we present and discuss our results; and lastly, in section V, we summarize the paper.
## II Theoretical framework
### The LDA-1/2 method
LDA-1/2[34; 35; 67] is inspired by Slater's half-occupation scheme, which relates the ionization potential \(I\) of a KS eigenstate labeled \(i\) to its eigenvalue \(E_{i}\) evaluated at half occupation:
\[I=-E_{i}(f_{i}=1/2), \tag{1}\]
where \(f_{i}\) is the occupation of the KS state \(i\).
In LDA-1/2, instead of dealing with half-occupations, KS equations are modified as:
\[\left[-\frac{1}{2}\nabla^{2}+V_{H}(\mathbf{r})+V_{XC}(\mathbf{r})+V_{S}( \mathbf{r})\right]\phi_{\mathbf{k}}(\mathbf{r})=E_{\mathbf{k}}\phi_{\mathbf{k }}(\mathbf{r}). \tag{2}\]
Here, we consider electrons in a solid with wavevector given by \(\mathbf{k}\). \(\phi_{\mathbf{k}}\) is the corresponding KS wavefunction. The KS potential, \(V_{KS}(\mathbf{r})=V_{H}(\mathbf{r})+V_{XC}(\mathbf{r})\), written as the sum of the Hartree, \(V_{H}(\mathbf{r})\), and the exchange-correlation (XC), \(V_{XC}(\mathbf{r})\), potentials has been adjusted to include \(V_{S}(\mathbf{r})\), the so-called self-energy potential.[34] The XC potential employed here is LDA.[68] For each atom in the solid, \(V_{S}(\mathbf{r})\) is obtained from two calculations with the isolated atom as
\[V_{S}(\mathbf{r})=\Theta(\mathbf{r})[V_{KS,attom}(\mathbf{r})_{f_{i}=1/2}-V_{ KS,atom}(\mathbf{r})_{f_{i}=1}], \tag{3}\]
in which we add an extra label \(f_{i}\) to \(V_{KS}\) to denote the occupation. \(\Theta(\mathbf{r})\) is a trimming function that avoids the divergence due to the \(1/(2r)\) tail arising from the difference of the two KS potentials in Eq. (3). Historically, \(\Theta(\mathbf{r})\) has been chosen as
\[\Theta(\mathbf{r})=\left\{\begin{array}{cc}\left[1-\left(\frac{r}{R_{CUT}} \right)^{8}\right]^{3},&r\leq R_{CUT},\\ 0,&r>R_{CUT},\end{array}\right. \tag{4}\]
where \(R_{CUT}\) is the cutoff radius, which is determined variationally[34] and has proven to be transferable among different systems.[35]
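As a concrete illustration of Eqs. (3) and (4), the trimming function and the variational choice of \(R_{CUT}\) can be sketched as follows; the `gap_of` callable stands in for a full self-consistent band-gap calculation, and all names here are assumptions of the sketch rather than part of any established code.

```python
import numpy as np

def theta(r, r_cut):
    """Trimming function of Eq. (4)."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    inside = r <= r_cut
    out[inside] = (1.0 - (r[inside] / r_cut) ** 8) ** 3
    return out

def self_energy_potential(r, v_ks_half, v_ks_full, r_cut):
    """V_S(r) of Eq. (3): trimmed difference of the atomic KS potentials
    computed with occupations f_i = 1/2 (v_ks_half) and f_i = 1 (v_ks_full)."""
    return theta(r, r_cut) * (v_ks_half - v_ks_full)

def optimal_r_cut(gap_of, candidates):
    """Variational choice of R_CUT: the candidate that extremizes the gap."""
    return max(candidates, key=gap_of)
```

A typical use would scan `candidates = np.arange(2.0, 5.0, 0.25)` (in bohr) and pick the \(R_{CUT}\) that maximizes the band gap, in line with the variational criterion of Ref. [34].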
### mBJ
The mBJ potential keeps the correlation potential the same as in LDA and replaces the exchange potential with:[47; 48]
\[v_{x,\sigma}^{mBJ}(\mathbf{r})=c\,v_{x,\sigma}^{BR}(\mathbf{r})+(3c-2)\frac{1}{\pi}\sqrt{\frac{5}{6}}\sqrt{\frac{t_{\sigma}(\mathbf{r})}{\rho_{\sigma}(\mathbf{r})}}, \tag{5}\]
where \(\rho_{\sigma}(\mathbf{r})\) is the density of electrons with spin \(\sigma\), \(t_{\sigma}(\mathbf{r})\) is the corresponding kinetic-energy density, and \(v_{x,\sigma}^{BR}(\mathbf{r})\) is the Becke-Roussel potential[69]. The factor \(c\) in (5) is evaluated as[48]
\[c=\alpha+\beta\sqrt{\frac{1}{2\Omega}\int_{\Omega}\mathrm{d}\mathbf{r}\left[ \frac{|\nabla\rho_{\uparrow}(\mathbf{r})|}{\rho_{\uparrow}(\mathbf{r})}+\frac{ |\nabla\rho_{\downarrow}(\mathbf{r})|}{\rho_{\downarrow}(\mathbf{r})}\right]}, \tag{6}\]
where \(\alpha=-0.012\) and \(\beta=1.023\) bohr\({}^{1/2}\), and \(\Omega\) is the volume of a unit cell.
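A minimal numerical sketch of Eq. (6), assuming the spin densities are available on a uniform real-space grid in atomic units; the grid spacing and density arrays are assumed inputs.

```python
import numpy as np

def mbj_c(rho_up, rho_dn, spacing, alpha=-0.012, beta=1.023):
    """Evaluate c of Eq. (6) from spin densities on a uniform 3D grid.

    rho_up, rho_dn: 3D arrays (bohr^-3); spacing: (hx, hy, hz) in bohr.
    """
    integrand = np.zeros_like(rho_up, dtype=float)
    for rho in (rho_up, rho_dn):
        gx, gy, gz = np.gradient(rho, *spacing)     # components of grad(rho)
        integrand += np.sqrt(gx**2 + gy**2 + gz**2) / rho
    # (1/2*Omega) times the cell integral reduces to half the grid average
    return alpha + beta * np.sqrt(0.5 * integrand.mean())
```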
### \(G_{0}W_{0}\) approach
Taking KS eigenvalues and wavefunctions as reference, quasiparticle-corrected eigenvalues \(E_{\mathbf{k}}^{QP}\) can be calculated in the \(G_{0}W_{0}\) approximation as:[25; 27; 28; 70]
\[E_{\mathbf{k}}^{QP}=E_{\mathbf{k}}+Z_{\mathbf{k}}\{\text{Re}[\Sigma_{\mathbf{k}}(E_{\mathbf{k}})]-V _{XC,\mathbf{k}}\}, \tag{7}\]
where \(Z_{\mathbf{k}}\) is the quasiparticle renormalization factor, and \(\Sigma_{\mathbf{k}}(\omega)\) and \(V_{XC,\mathbf{k}}\) are matrix elements of the self-energy (\(\Sigma(\mathbf{r},\mathbf{r}^{\prime},\omega)\)) and the exchange-correlation potential:
\[\Sigma_{\mathbf{k}}(\omega)=\int\text{d}\mathbf{r}\text{d}\mathbf{r}^{\prime}\phi _{\mathbf{k}}^{*}(\mathbf{r})\Sigma(\mathbf{r},\mathbf{r}^{\prime},\omega)\phi_{ \mathbf{k}}(\mathbf{r}^{\prime}), \tag{8}\]
\[V_{XC,\mathbf{k}}=\int\text{d}\mathbf{r}V_{XC}(\mathbf{r})|\phi_{\mathbf{k}}(\mathbf{ r})|^{2}. \tag{9}\]
Within the \(G_{0}W_{0}\) approximation, the self-energy \(\Sigma(\mathbf{r},\mathbf{r}^{\prime},\omega)\) is given, in the time domain, as the product of the imaginary unit, the single-particle Green's function, \(G_{0}(\mathbf{r},\mathbf{r}^{\prime},t)\), and the screened Coulomb interaction, \(W_{0}(\mathbf{r},\mathbf{r}^{\prime},t)\), evaluated in the random-phase approximation.[28; 71]
### Optical properties
Neglecting excitonic effects and considering an electric field applied along the \(\hat{\mathbf{e}}_{\alpha}\) direction, the tensorial component \(\alpha\alpha\) of the dielectric function \(\varepsilon\), at a given frequency \(\omega\), has an imaginary part given by[72]:
\[\text{Im}[\varepsilon_{\alpha\alpha}(\omega)]=\frac{8\pi^{2}}{\Omega N_{\mathbf{k}}}\sum_{cv\mathbf{k}}\frac{|\langle\phi_{c\mathbf{k}}|-\text{i}\hat{\mathbf{e}}_{\alpha}\cdot\nabla|\phi_{v\mathbf{k}}\rangle|^{2}}{\omega^{2}}\delta(\omega-\omega_{cv\mathbf{k}}), \tag{10}\]

where \(N_{\mathbf{k}}\) is the number of \(\mathbf{k}\)-points, \(c\) and \(v\) are labels for the conduction and valence states, respectively, and \(\phi_{c\mathbf{k}}\) and \(\phi_{v\mathbf{k}}\) are the corresponding KS wavefunctions. The transition energies, \(\omega_{cv\mathbf{k}}\), are expressed in terms of the KS eigenvalues as:

\[\omega_{cv\mathbf{k}}=E_{c\mathbf{k}}-E_{v\mathbf{k}}. \tag{11}\]
If the imaginary part is known, the real part can be obtained using the Kramers-Kronig relations[73]:
\[\text{Re}[\varepsilon_{\alpha\alpha}(\omega)]=1+\frac{2}{\pi}\int_{0}^{\infty }\text{d}\omega^{\prime}\frac{\omega^{\prime}\text{Im}[\varepsilon_{\alpha \alpha}(\omega^{\prime})]}{\omega^{\prime 2}-\omega^{2}}. \tag{12}\]
With \(\varepsilon_{\alpha\alpha}(\omega)\), it is possible to obtain other optical properties, such as the refraction index \(\tilde{n}\), the extinction coefficient \(\kappa\), the optical absorption \(\mathcal{A}\), and the reflectivity \(\mathcal{R}\)[74]:
\[\tilde{n}(\omega)=\sqrt{\frac{|\varepsilon(\omega)|+\text{Re}[\varepsilon(\omega)]}{2}},\quad\kappa(\omega)=\sqrt{\frac{|\varepsilon(\omega)|-\text{Re}[\varepsilon(\omega)]}{2}}, \tag{13}\]
\[\mathcal{A}(\omega)=\frac{2\omega\kappa}{v_{light}},\quad\mathcal{R}(\omega) =\frac{(1-\tilde{n})^{2}+\kappa^{2}}{(1+\tilde{n})^{2}+\kappa^{2}}, \tag{14}\]
where \(v_{light}\) is the speed of light in vacuum. For simplicity, we dropped the double indices \(\alpha\alpha\) in Eqs. (13) and (14).
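For reference, given \(\varepsilon(\omega)\) sampled on a frequency grid, Eqs. (12)-(14) can be evaluated numerically as sketched below; the crude principal-value handling of the Kramers-Kronig transform and the atomic-unit value of the light speed are assumptions of this sketch.

```python
import numpy as np

def kk_real_part(omega, im_eps):
    """Re[eps](w) from Im[eps](w') via Eq. (12); the singular point
    w' = w is simply excluded as a crude principal-value treatment."""
    omega = np.asarray(omega, dtype=float)
    im_eps = np.asarray(im_eps, dtype=float)
    re_eps = np.ones_like(omega)
    for i, w in enumerate(omega):
        denom = omega**2 - w**2
        denom[i] = np.inf                      # skip the w' = w point
        integrand = omega * im_eps / denom
        # trapezoidal rule for the frequency integral
        re_eps[i] += (2.0 / np.pi) * np.sum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega))
    return re_eps

def derived_optics(omega, re_eps, im_eps, v_light=137.036):
    """n, kappa, absorption A, and reflectivity R via Eqs. (13)-(14).
    v_light defaults to the speed of light in Hartree atomic units."""
    mod_eps = np.hypot(re_eps, im_eps)         # |eps(w)|
    n = np.sqrt((mod_eps + re_eps) / 2.0)
    kappa = np.sqrt((mod_eps - re_eps) / 2.0)
    absorption = 2.0 * omega * kappa / v_light
    reflect = ((1 - n)**2 + kappa**2) / ((1 + n)**2 + kappa**2)
    return n, kappa, absorption, reflect
```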
## III Computational methods
In all DFT calculations, we employ the Quantum Espresso code[75; 76; 77] with optimized norm-conserving Vanderbilt pseudopotentials[78] and a planewave cutoff of 100 Ry. For the \(G_{0}W_{0}\) calculations, we make use of BerkeleyGW[79; 80], taking LDA as the starting-point. We take advantage of the static remainder approach[81] to speed up convergence with respect to the unoccupied states. To reduce the computational cost, we use the plasmon-pole approximation.[82; 80]
We start the study with bulk AlN and InN in the wurtzite phase. We employ the experimental lattice parameters,[83] relaxing the ionic positions with LDA. Then, the same relaxed geometry is used for all other methods. We use a k-grid of \(16\times 16\times 10\) for LDA, LDA-1/2, and mBJ. For the \(G_{0}W_{0}\) calculations, we consider an extrapolation scheme, as described in Appendix A.1: we use k-grids of \(4\times 4\times 3\) and \(8\times 8\times 6\), and vary the cutoff for the dielectric function from 30 to 60 Ry in steps of 10 Ry, and the number of KS states from 100 to 450 in steps of 50.
Then, we proceed to the core-shell InAlN NRs. We take a diameter of \(d=14\) Å, as illustrated in Fig. 1. Even though much larger cells may be required to study realistic NRs,[8] our goal here is to evaluate the accuracy of LDA-1/2 and mBJ in approximating \(G_{0}W_{0}\) for these systems. Keeping the computational cost of \(G_{0}W_{0}\) in mind, we selected these NRs with a relatively small diameter as prototypes for our benchmark. We rationalize that, with these NRs, one is still able to draw meaningful conclusions. Further studies can then profit from our analysis and employ LDA-1/2 or mBJ to investigate NRs with more realistic sizes.
To avoid dangling bonds, which lead to spurious states at the Fermi energy, we use H passivation, and so the chemical formula of the NR becomes \(\text{In}_{n}\text{Al}_{38-n}\text{N}_{38}\text{H}_{40}\). We study three different In concentrations: \(n=0\), 2, and 4. In all cases, we consider an unrelaxed geometry with bond lengths determined from the AlN experimental lattice parameters.[83] In Fig. 1, we also show a possible split between core and shell regions, leading to the compositions \(\text{In}_{n}\text{Al}_{16-n}\text{N}_{16}\) for the core and \(\text{Al}_{22}\text{N}_{22}\text{H}_{40}\) for the shell. According to this choice, the cases \(n=2\) and 4 correspond to In compositions of 12.5 and 25% in the core. To isolate neighboring NRs, we employ a supercell with dimensions 44 Bohr \(\times\) 31.1 Bohr (23.3 Å \(\times\) 20.2 Å).
For \(G_{0}W_{0}\), we employ a k-grid of \(1\times 1\times 16\) to obtain the reference density with LDA, and then \(1\times 1\times 6\) to generate the reference KS wavefunctions and eigenvalues. To enable a fair comparison, the same procedure is adopted for LDA, LDA-1/2 and mBJ. For the DOS and the optical properties, we take for all methods 300, 320, and 400 KS states into account, which is sufficient to cover transitions in the energy range \(0-20\) eV. In the \(G_{0}W_{0}\) calculations, we include 900, 920, 940 bands in the summation used to build the dielectric function, with a cutoff of 20 Ry. To speed up the \(G_{0}W_{0}\) convergence with respect to the vacuum size, we employ a Coulomb truncation for nanowires.[84]
## IV Results
### Binaries: AlN and InN
The purely binary compositions, AlN and InN, can be seen as benchmarks in relation to the ternary InAlN compounds, and their properties are relevant to the present first-principles comparative study of the InAlN core-shell NRs.
#### iv.1.1 DOS and band gaps
In Table 1, the calculated band gaps of AlN and InN are compared with the experimental ones. As usual, the LDA band gaps are underestimated for both AlN and InN. With LDA-1/2, although the band gap of InN is overestimated by 0.60 eV, the band gap of AlN agrees with the experimental value to within 0.04 eV. With mBJ, in contrast, the band gap of InN deviates from experiment by 0.22 eV, while this error is 0.55 eV for AlN. Band gaps obtained with \(G_{0}W_{0}\) agree with experiment to within 0.04 eV for AlN and 0.50 eV for InN. Overall, LDA-1/2, mBJ, and \(G_{0}W_{0}\) show a similar degree of agreement with experiment.
Figure 2 depicts the DOS of AlN and InN. For an easier comparison of the approaches, we plot the DOS of valence and conduction bands separately, on the left and on the right, respectively, placing in each case the band edges at zero. It is apparent then that LDA and \(G_{0}W_{0}\) have the best agreement, confirming for AlN and InN the common belief that \(G_{0}W_{0}\) approximately shifts states rigidly. DOS obtained with LDA-1/2 and mBJ agree well with each other and are also very close to \(G_{0}W_{0}\).
#### iv.2.2 Dielectric function
We present the \(xx\) component of the dielectric function in Fig. 3.

For AlN, the dielectric functions computed with LDA, LDA-1/2, and mBJ are red-shifted when compared to \(G_{0}W_{0}\), with LDA showing the largest deviation and LDA-1/2 presenting a slightly better agreement than mBJ. For InN, the negative gap obtained with LDA causes a qualitatively wrong behavior of \(\varepsilon\) at small frequencies. LDA-1/2 and mBJ show similar results, with LDA-1/2 closer to \(G_{0}W_{0}\).
### Core-shell InAlN NRs
#### iv.2.1 DOS
Fig. 4 displays the DOS of core-shell InAlN NRs passivated with hydrogen. In each case, the zero energy has been defined as follows:
Figure 1: Cell used to accommodate the passivated core-shell InAlN NRs (In\({}_{n}\)Al\({}_{38-n}\)N\({}_{38}\)H\({}_{40}\)). The shown diameter, \(d\), is 14 Å. We employ \(L=44\) Bohr (23.3 Å). We depict here the case of an AlN NR (\(n=0\)). In the case \(n=2\), the Al atoms at the sites labeled 1 and 2 are replaced by In atoms; in the case \(n=4\), all 4 labeled sites are replaced by In atoms.
Figure 3: \(xx\) component of the dielectric function: left, the real part, and right, the imaginary part.
Table 1: Calculated band gaps (in eV), compared with experimental gaps taken from Ref. [83].

| | AlN | InN |
| --- | --- | --- |
| \(G_{0}W_{0}\) | 6.29 | 0.28 |
| LDA | 4.24 | \(-\)0.23 |
| LDA-1/2 | 6.21 | 1.38 |
| mBJ | 5.70 | 0.56 |
| exp. | 6.25 | 0.78 |
Figure 2: DOS of AlN and InN. States belonging to valence and conduction bands are plotted on the left and on the right, respectively. In each case, band edges are placed at zero.
1. projecting the DOS onto core atoms;
2. identifying the valence state with the highest energy \(E_{v}\);
3. taking \(E_{v}\) as reference and referring all other energies with respect to it.
The identification of \(E_{v}\) is illustrated for \(G_{0}W_{0}\) DOS in the top panel of Fig. 4 with the dashed line on the left. Similarly, we can define \(E_{c}\) by projecting the DOS onto core atoms, and taking it as the energy of the conduction band edge. This is shown for \(G_{0}W_{0}\) as the dashed line on the right in Fig. 4 (subplot on the top).
The isolated peaks observed for energies between 0-5 eV come from states belonging to shell atoms. The agreement of LDA, LDA-1/2, and mBJ with \(G_{0}W_{0}\) for valence states with \(E<0\) is evident for the 3 NRs. For the conduction states with \(E>E_{c}\), LDA-1/2 and mBJ match \(G_{0}W_{0}\) better than LDA. It is also apparent that the peaks in the energy range 0-5 eV are more pronounced in LDA-1/2 than in the other methods.
The definition of \(E_{v}\) and \(E_{c}\) allows us to compute \(\Delta E\) as
\[\Delta E=E_{c}-E_{v}. \tag{15}\]
\(\Delta E\) can be identified as a kind of band gap for the core region of the NR, since it is obtained from band edges of states that belong to core atoms.
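The band-edge extraction behind Eq. (15) can be sketched as follows, assuming the DOS projected onto core atoms is available on an energy grid; the reference energy separating occupied from empty states and the threshold deciding whether the projected DOS is nonzero are both assumptions of the sketch.

```python
import numpy as np

def core_gap(energies, core_pdos, e_ref, thresh=1e-3):
    """E_v, E_c, and Delta E = E_c - E_v (Eq. 15) from the DOS
    projected onto core atoms.

    energies: 1D energy grid (eV); core_pdos: projected DOS on that grid;
    e_ref: a reference energy inside the gap (e.g., mid-gap).
    """
    occ = (energies <= e_ref) & (core_pdos > thresh)
    emp = (energies > e_ref) & (core_pdos > thresh)
    e_v = energies[occ].max()    # highest occupied core-projected state
    e_c = energies[emp].min()    # lowest empty core-projected state
    return e_v, e_c, e_c - e_v
```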
Table 2 presents \(\Delta E\) for each NR. The best agreement with \(G_{0}W_{0}\) is given by LDA-1/2, with \(\Delta E\) approximately 1.2 eV smaller. mBJ comes next, predicting \(\Delta E\) 1.6-1.8 eV smaller than \(G_{0}W_{0}\). LDA is the last one with \(\Delta E\) 3.4-3.7 eV smaller than \(G_{0}W_{0}\).
When compared to bulk AlN, the NRs are expected to have a larger \(\Delta E\) due to quantum confinement effects. Indeed, the passivated AlN NR has \(\Delta E\) larger than bulk AlN by 2.15, 0.45, 1.05, and 0.96 eV when calculated with \(G_{0}W_{0}\), LDA, LDA-1/2, and mBJ, respectively. As estimated in Appendix B, an enlargement of 1.9-2.5 eV is expected due to quantum confinement effects. \(G_{0}W_{0}\) best matches this expectation, followed by LDA-1/2, mBJ, and LDA.
#### iv.2.2 Optical properties
Figure 5 displays the \(xx\) component of the dielectric function. LDA-1/2 and mBJ agree well with each other and are
red-shifted by 1.8 and 2.0 eV, respectively, in comparison with \(G_{0}W_{0}\). For LDA, this amounts to 3.5 eV. By blue-shifting all \(\text{Im}[\varepsilon_{xx}]\), a good agreement with \(G_{0}W_{0}\) can be observed, as shown in Appendix C. Although these shifts do not reproduce exactly the differences in \(\Delta E\), they are comparable.
Next, we consider the contribution of In atoms present in the core region to the dielectric function of the NRs. In Fig. 5, it is evident that the presence of In introduces peaks in \(\text{Im}[\varepsilon_{xx}]\), which are red-shifted with respect to the main peak observed for AlN NRs without In. These peaks, highlighted with arrows in Fig. 5, become evident when we compare \(\text{Im}[\varepsilon_{xx}]\) for NRs with In against \(\text{Im}[\varepsilon_{xx}]\) for AlN NRs (not shown here). Table 3 shows the positions of these peaks due to In.
Figure 4: DOS of passivated core-shell InAlN NRs for different In compositions. The dashed line on the top panel shows how band edges have been evaluated to obtain band gaps shown in Table 2.
Figure 5: \(xx\) component of the dielectric function, with its real and imaginary parts. The label \(n\) refers to the amount of In atoms in the NR cell according to \(\text{In}_{n}\text{Al}_{38-n}\text{N}_{38}\text{H}_{40}\). For NRs with \(n>0\), the arrows point to peaks coming from In contributions.
Peaks within \(G_{0}W_{0}\) appear blue-shifted in comparison with the other methods. Peaks obtained with LDA-1/2 exhibit the best agreement with \(G_{0}W_{0}\), with a difference of 1.2-1.3 eV. These numbers are 2.0 and 3.1-3.4 eV for mBJ and LDA, respectively. Also interesting is that the red-shift of the peaks observed upon increasing \(n=2\) to \(n=4\) is approximately the same in \(G_{0}W_{0}\) (0.74 eV), LDA-1/2 (0.60 eV) and mBJ (0.72 eV).
In Fig. 6, we depict the refraction index \(\tilde{n}\) and the extinction coefficient \(\kappa\) for the energy range of 0 to 20 eV.
The similarity between LDA-1/2 and mBJ is apparent. Regarding the refraction index, for the energy range 10-20 eV, LDA-1/2 and mBJ show an excellent agreement with \(G_{0}W_{0}\). Although this statement does not hold for the extinction coefficient, LDA-1/2 and mBJ still approximate \(G_{0}W_{0}\) notably better than LDA.
Table 4 presents the static refractive index. The best agreement with respect to \(G_{0}W_{0}\) is observed for mBJ (difference of 12-13%), closely followed by LDA-1/2 (14%) and, then, by LDA (26-27%).
Figure 7 depicts the absorbance and the reflectance of the NRs. The curves for LDA-1/2 and mBJ are very similar, and both present a better agreement with \(G_{0}W_{0}\) than LDA.
## V Conclusion
We have studied electronic and optical properties of core-shell InAlN NRs with LDA, LDA-1/2, mBJ and \(G_{0}W_{0}\). For all studied properties (DOS, dielectric function, refractive index, extinction coefficient, absorption coefficient, and reflectance), results with LDA-1/2 and mBJ are similar and agree better with \(G_{0}W_{0}\) than those obtained with LDA. For band gaps and peaks in Im\([\varepsilon]\) coming from In contributions, LDA-1/2 agrees better with \(G_{0}W_{0}\) than mBJ. Overall, LDA-1/2 and mBJ can be used as tools to replace \(G_{0}W_{0}\) with reasonable accuracy at much lower computational cost.
The authors have no conflicts to disclose. The data that support the findings of this study are available from the corresponding author upon reasonable request.
###### Acknowledgements.
The authors gratefully acknowledge the computing time granted by the Resource Allocation Board and provided on the supercomputer Lise and Emmy at NHR@ZIB and NHR@Gottingen as part of the NHR infrastructure. They also acknowledge resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at the National Supercomputer Center (NSC) in Linkoping (NAISS 2023/5-116 and NAISS 2023/23-161) partially funded by the Swedish Research Council through grant agreement no.
\begin{table}
\begin{tabular}{c c c c} \hline & \multicolumn{3}{c}{\(n\)} \\ & 0 & 2 & 4 \\ \hline \(G_{0}W_{0}\) & 1.59 & 1.61 & 1.63 \\ LDA & 2.38 & 2.40 & 2.43 \\ LDA-1/2 & 2.03 & 2.05 & 2.07 \\ mBJ & 1.97 & 1.99 & 2.02 \\ \hline \end{tabular}
\end{table}
Table 4: Refractive index \(\tilde{n}\) at zero frequency of core-shell In\({}_{n}\)Al\({}_{38-n}\)N\({}_{38}\)H\({}_{40}\) NRs.
2018-05973. G. K. G., J. B., and L. H. acknowledge support by the Swedish Government Strategic Research Area in Materials Science on Advanced Functional Materials (AFM) at Linkoping University (Faculty Grant SFO-Mat-LiU No. 2009-00971). C.-L.H. acknowledges support by the Swedish Research Council (Vetenskapsradet) through grant number 2018-04198 and the Swedish Energy Agency (Energimyn-digheten) through grant number 46658-1.
|
2309.14651 | Adsorption and Vibrational Spectroscopy of CO on the Surface of MgO from
Periodic Local Coupled-Cluster Theory | The adsorption of CO on the surface of MgO has long been a model problem in
surface chemistry. Here, we report periodic Gaussian-based calculations for
this problem using second-order perturbation theory (MP2) and coupled-cluster
theory with single and double excitations (CCSD) and perturbative triple
excitations [CCSD(T)], with the latter two performed using a recently developed
extension of the local natural orbital approximation to problems with periodic
boundary conditions. The low cost of periodic local correlation calculations
allows us to calculate the full CCSD(T) binding curve of CO approaching the
surface of MgO (and thus the adsorption energy) and the two-dimensional
potential energy surface (PES) as a function of the distance from the surface
and the CO stretching coordinate. From the PES, we obtain the fundamental
vibrational frequency of CO on MgO, whose shift from the gas phase value is a
common experimental probe of surface adsorption. We find that CCSD(T) correctly
predicts a positive frequency shift upon adsorption of
$+14.7~\textrm{cm}^{-1}$, in excellent agreement with the experimental shift of
$+14.3~\textrm{cm}^{-1}$. We use our CCSD(T) results to assess the accuracy of
MP2, CCSD, and several density functional theory (DFT) approximations,
including exchange correlation functionals and dispersion corrections. We find
that MP2 and CCSD yield reasonable binding energies and frequency shifts,
whereas many DFT calculations overestimate the magnitude of the adsorption
energy by $5$ -- $15$~kJ/mol and predict a negative frequency shift of about
$-20~\textrm{cm}^{-1}$, which we attribute to self-interaction-induced
delocalization errors that are mildly ameliorated with hybrid functionals. Our
findings highlight the accuracy and computational efficiency of the periodic
local correlation for the simulation of surface chemistry with accurate
wavefunction methods. | Hong-Zhou Ye, Timothy C. Berkelbach | 2023-09-26T04:05:17Z | http://arxiv.org/abs/2309.14651v3 | CO Adsorption on the Surface of MgO from Periodic Coupled-Cluster Theory with Local Natural Orbitals: Adding to the consensus
###### Abstract
Accurate determination of the adsorption energy of CO on the MgO (001) surface has been a challenge for both computations and experiments over the past three decades. A recent computational study by Shi and co-workers (10.26434/chemrxiv-2023-h4czl) reported good agreement within 11 meV (1 kJ/mol) between two popular theoretical methods: coupled-cluster with singles, doubles, and perturbative triples [CCSD(T)] and diffusion Monte Carlo. In this short note, we report results on the same problem from periodic Gaussian-based MP2, CCSD, and CCSD(T), with the latter two performed using a recently developed extension of the local natural orbital (LNO) approximation to problems with periodic boundary conditions. Our final periodic LNO-CCSD(T) adsorption energy (\(-198\pm 11\) meV) is in quantitative agreement with the embedded cluster-based LNO-CCSD(T) result (\(-199\pm 11\) meV) by Shi and co-workers. The computational cost of our periodic LNO-CCSD(T) calculations is comparable to that of the embedded cluster-based LNO-CCSD(T) and is 10 times less expensive than the plane-wave-based periodic canonical CCSD(T) or 50 times less expensive than the DMC calculations reported by Shi and co-workers. Our findings highlight the accuracy and computational efficiency of the periodic LNO-based approach for the simulation of surface chemistry with correlated wavefunction methods.
## I Introduction
A recent preprint by Shi and co-workers computationally studied the adsorption of a single CO molecule on the MgO (001) surface.[1] This is an intriguing problem due to an enduring challenge of achieving a consensus between theory and experiment regarding the adsorption energy, \(E_{\text{ads}}\), over the past three decades.[2] In ref 1, the authors reported a remarkable agreement within 11 meV or 1 kJ/mol between calculations using different theoretical methods, including coupled-cluster theory with single, double, and perturbative triple excitations[3] [CCSD(T)], commonly known as the "gold standard" of quantum chemistry, and diffusion Monte Carlo[4] (DMC). Moreover, these results were obtained using different computational frameworks, including periodic calculations with plane wave basis functions[5] and an embedded cluster calculation with Gaussian basis functions.[6] Their best theoretical estimate of \(E_{\text{ads}}\) is \(-199\pm 11\) meV, which was obtained by cluster-based local natural orbital (LNO)-CCSD(T)[6; 7; 8; 9] and which agrees reasonably well with early experiments.[10; 11]
In this short note, we show that results in quantitative agreement with those in ref 1 can be obtained using a periodic Gaussian-based approach. Our results demonstrate how recent developments in density fitting,[12; 13; 14] correlation-consistent Gaussian basis sets,[15] and periodic local correlation theories[16; 17] enable fast and reliable convergence along all computational axes necessary for simulating the electronic ground state of condensed-phase systems with up to 100 atoms per unit cell.[14; 16] Our main results are summarized in TABLE I. The final adsorption energy from our periodic LNO-CCSD(T)[16; 17] calculations is \(-198\pm 11\) meV and agrees almost perfectly with the cluster-based LNO-CCSD(T) result from ref 1. Importantly, the computational cost of our periodic LNO-CCSD(T) calculations is comparable to the cluster-based LNO-CCSD(T) but is about 10 times less expensive than the plane-wave-based periodic canonical CCSD(T) and about 50 times less expensive than the DMC calculations reported in ref 1. Comparing our numbers with other independent estimates of \(E_{\text{ads}}\) at a similar level of theory from recent literature highlights the challenges of fully converging the calculations even for methods like MP2, which is often considered computationally inexpensive.
## II Computational Details
Following ref 1, we calculate the adsorption energy,
\[E_{\text{ads}}=E_{\text{int}}+\Delta_{\text{geom}} \tag{1}\]
where \(E_{\text{int}}\) is the (adiabatic) interaction energy between CO and MgO calculated with their respective geometries fixed to be those in the MgO+CO composite system, and \(\Delta_{\text{geom}}\) is the geometry relaxation energy. We obtained equilibrium geometries for CO, MgO, and MgO+CO using density functional theory[20] (DFT) with the Perdew-Burke-Ernzerhof (PBE) functional[21] and the D3 dispersion correction[22] as implemented in Quantum Espresso.[23; 24] As shown in TABLE II, PBE+D3 gives a lattice constant of bulk MgO and a relaxation energy \(\Delta_{\text{geom}}\) that agree well with ref 1, which used a slightly different DFT protocol, revPBE+D4.[25; 26] However, PBE+D3 predicts an Mg-C distance that is slightly shorter, and so we adjust it to 2.460 Å to be consistent with ref 1. The PBE+D3 interaction energy \(E_{\rm int}\) is too large by nearly 100 meV compared to both revPBE+D4 and other more accurate methods in ref 1.
In what follows, we use our DFT-determined \(\Delta_{\rm geom}=11.4\pm 10\) meV, where the uncertainty is taken from ref 1 (which is based on variations in \(\Delta_{\rm geom}\) calculated using different DFT methods), and we calculate the interaction energy \(E_{\rm int}\) using a series of wavefunction methods as implemented in PySCF.[27; 28] The GTH-cc-pVXZ basis sets,[15] augmented by diffuse functions for the CO molecule and nearby surface
atoms, are employed with the GTH pseudopotential optimized for HF.[29; 30; 31] All interaction energies are corrected for basis set superposition error. The two-electron integrals are treated by the range-separated density fitting algorithm.[12; 13] The integrable divergence of the HF exchange is treated using a Madelung constant correction.[32; 33; 34] We first use slab models of two atomic layers (2L) to establish the protocol for converging \(E_{\rm int}(A,X)\) with both the surface size \(A\) and the basis set size \(X\) to their respective limits: the infinite surface (IS) limit, \(A_{\infty}\), and the complete basis set (CBS) limit, \(X_{\infty}\). We then apply the protocol to thicker slab models to obtain our final estimate of \(E_{\rm int}\) and hence \(E_{\rm ads}\) at both the CBS limit and the thermodynamic limit (TDL). The results are summarized in TABLES 1 and 3.
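For reference, the counterpoise correction for basis set superposition error reduces to the bookkeeping sketched below. The energies are placeholder values only; in practice each term comes from a separate periodic calculation with ghost basis functions placed on the partner fragment.

```python
HARTREE_TO_MEV = 27_211.386

# Placeholder total energies in hartree (illustrative, not our data).
e_co_on_mgo = -1385.0420  # CO + MgO slab, full combined basis
e_co_ghost  = -21.2105    # CO alone, ghost functions at the MgO sites
e_mgo_ghost = -1363.8265  # MgO slab alone, ghost functions at the CO sites

e_int = (e_co_on_mgo - e_co_ghost - e_mgo_ghost) * HARTREE_TO_MEV
print(f"counterpoise-corrected E_int = {e_int:.1f} meV")
```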
### HF
We start with the HF interaction energy \(E_{\rm int}^{\rm HF}\). FIG. 1 shows the fast convergence of \(E_{\rm int}^{\rm HF}(A,X)\) with both parameters for 2L slabs: using a \(4\times 4\) surface with the QZ basis set essentially reaches both the CBS limit and the IS limit, giving \(E_{\rm int}^{\rm HF}(A_{\infty},X_{\infty})\approx 18.6\pm 1\) meV for the 2L slab, where half the difference between TZ and QZ results is taken as a (conservative) estimate of the error bar. While this protocol (\(4\times 4\)/QZ) can be applied to thicker slabs to probe \(E_{\rm int}^{\rm HF}\) in the TDL, it is computationally expensive. Given the small finite-size error of \(A=3\times 3\) and the nearly perfect parallelism between TZ and QZ data for all surface sizes,
\begin{table}
\begin{tabular}{l c c c c} & lat. const.\({}^{a}\) & \(d\)(Mg-C)\({}^{a}\) & \(\Delta_{\rm geom}\)\({}^{b}\) & \(E_{\rm ads}\)\({}^{b}\) \\ \hline PBE+D3 & 4.222 & 2.377 & 11.4 & \(-\)306 \\ revPBE+D4 & 4.220 & 2.460 & 8 & \(-\)207 \\ \hline \end{tabular}
\({}^{a}\) In Å. \({}^{b}\) In meV.
\end{table}
Table 2: Comparison of DFT results obtained using PBE+D3 (this work) and revPBE+D4 (ref 1).
\begin{table}
\begin{tabular}{l l c c c c} Method & Comput. details & \(E_{\rm int}\)/meV & \(E_{\rm ads}\)/meV & Cost & Reference \\ \hline HF & periodic & \(+17\pm 2\) & \(+28\pm 11\) & \(<0.1\) & \\ MP2 & periodic & \(-198\pm 3\) & \(-187\pm 11\) & 0.1 & this work \\ CCSD & periodic, LNO & \(-171\pm 4\) & \(-160\pm 11\) & 11 & \\ CCSD(T) & periodic, LNO & \(-209\pm 4\) & \(-198\pm 11\) & 18 & \\ \hline MP2 & cluster embedding & \(-200\pm 5\) & \(-192\pm 11\) & & \\ CCSD(T) & cluster embedding, LNO & \(-207\pm 6\) & \(-199\pm 11\) & 20 & \\ CCSD & periodic, FNO & \(-153\pm 22\) & \(-145\pm 24\) & & ref 1 (2023) \\ CCSD(T) & periodic, FNO & \(-201\pm 22\) & \(-193\pm 24\) & 200 & \\ DMC & periodic & \(-196\pm 24\) & \(-188\pm 26\) & 1000 & \\ \hline MP2 & periodic, DMET & \(-431\pm\)? & & & ref 18 (2022) \\ CCSD & periodic, DMET & \(-398\pm\)? &? & & \\ \hline MP2 & cluster embedding & \(+60\pm\)? & & & \\ CCSD & cluster embedding & \(+90\pm\)? & & & ref 19 (2016) \\ CCSD(T) & cluster embedding & \(+70\pm\)? & & & \\ \end{tabular}
\end{table}
Table 1: Our final interaction energy (\(E_{\rm int}\)) and adsorption energy (\(E_{\rm ads}\)) for CO on MgO obtained through various wavefunction methods compared to recent results in literature. Like the cluster-based approach in ref 1, the main source of uncertainties in our \(E_{\rm ads}\) arises from the DFT-determined geometry relaxation energy \(\Delta_{\rm geom}\) [eqn (1)]. Computational cost (measured in kCPU hours) is also shown when data are available.
\begin{table}
\begin{tabular}{l c c c} & 2L & 3L & 4L \\ \hline \(E_{\rm int}^{\rm HF}(3\times 3,\,{\rm TZ})\) & +23.1 & +20.2 & +20.9 \\ \(\Delta_{\rm CBS}^{\rm HF}\) & \(-\)1.9 & \(-\)0.8 & \(-\)0.4 \\ \(\Delta_{\rm IS}^{\rm HF}\) & \(-\)3.6 & & \\ \(E_{\rm int}^{\rm HF}(A_{\infty},X_{\infty})\) & +17.6 & +15.8 & +16.9 \\ \hline \(E_{\rm int}^{\rm MP2(FC),corr}(A_{2\times 2,3\times 3},X_{\rm DZ,TZ})\) & \(-\)206.5 & \(-\)206.7 & \(-\)207.4 \\ \(\Delta_{\rm CBS}^{\rm MP2(FC),corr}\) & \(-\)4.1 & \(-\)4.3 & \(-\)3.9 \\ \(\Delta_{\rm IS}^{\rm MP2(FC),corr}\) & +7.1 & & \\ \(\Delta_{\rm FC}^{\rm MP2,corr}\) & \(-\)12.3 & \(-\)12.3 & \(-\)12.6 \\ \(E_{\rm int}^{\rm MP2}(A_{\infty},X_{\infty})\) & \(-\)198.2 & \(-\)200.5 & \(-\)199.9 \\ \hline \(\Delta_{\rm CC}^{\rm LNO\mbox{-}CCSD(FC)}\) & +26.9 & & \\ \(E_{\rm int}^{\rm CCSD}(A_{\infty},X_{\infty})\) & \(-\)171.3 & & \\ \hline \(\Delta_{\rm CC}^{\rm LNO\mbox{-}CCSD(T)(FC)}\) & \(-\)10.9 & & \\ \(E_{\rm int}^{\rm CCSD(T)}(A_{\infty},X_{\infty})\) & \(-\)209.1 & & \\ \end{tabular}
\end{table}
Table 3: The interaction energy (1) for CO on MgO in both the IS limit (\(A_{\infty}\)) and the CBS limit (\(X_{\infty}\)) obtained using HF, MP2, CCSD, and CCSD(T) with the respective protocols (2), (5), and (7) for 2L – 4L slabs. The IS corrections \(\Delta_{\rm IS}^{\rm HF}\) and \(\Delta_{\rm IS}^{\rm MP2(FC),corr}\) evaluated for 2L are used for all slab sizes.
we propose a computationally efficient alternative
\[E_{\rm int}^{\rm HF}(A_{\infty},X_{\infty})\approx E_{\rm int}^{\rm HF }(3\times 3,{\rm TZ})+\Delta_{\rm CBS}^{\rm HF}+\Delta_{\rm IS}^{\rm HF} \tag{2a}\] \[\Delta_{\rm CBS}^{\rm HF}=E_{\rm int}^{\rm HF}(2\times 2,{\rm QZ})-E_{ \rm int}^{\rm HF}(2\times 2,{\rm TZ})\] (2b) \[\Delta_{\rm IS}^{\rm HF}=E_{\rm int}^{\rm HF}(4\times 4,{\rm DZ})-E_{ \rm int}^{\rm HF}(3\times 3,{\rm DZ}) \tag{2c}\]
For the 2L slab, eqn (2) gives \(E_{\rm int}^{\rm HF}(A_{\infty},X_{\infty})=17.6\) meV, which differs from the result above by only 1 meV. We take this difference to be the error bar of the protocol (2). TABLE 3 lists \(E_{\rm int}^{\rm HF}(A_{\infty},X_{\infty})\) obtained using protocol (2) for slabs of 2L to 4L, which are seen to all agree with each other within our error bar of about 2 meV. We thus obtain our final HF interaction energy, \(E_{\rm int}^{\rm HF}\approx 17\pm 2\) meV.
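The composite assembly in protocol (2) is a plain sum of three pieces, which the following Python snippet makes explicit; plugging in the 2L-4L entries of TABLE 3 reproduces the quoted limits.

```python
def hf_composite(e_3x3_tz, d_cbs, d_is=-3.6):
    """Protocol (2): E(3x3, TZ) plus CBS and IS corrections, in meV.
    The IS correction is evaluated once for the 2L slab and reused."""
    return e_3x3_tz + d_cbs + d_is

for label, e, d_cbs in [("2L", 23.1, -1.9), ("3L", 20.2, -0.8), ("4L", 20.9, -0.4)]:
    print(label, round(hf_composite(e, d_cbs), 1))  # 17.6, 15.8, 16.9
```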
### MP2
Transitioning to correlated methods, we first obtain the converged interaction energy for MP2. FIG. 2 shows the correlation part of the MP2 interaction energy evaluated with electrons in the [Ne] core of Mg being frozen (FC), \(E_{\rm int}^{\rm MP2(FC),corr}(A,X)\), for 2L slabs. The convergence of \(E_{\rm int}^{\rm MP2(FC),corr}(A,X)\) with both parameters is slower than that of HF, but reliable extrapolations to both limits can be performed based on the asymptotic behaviors
\[E_{\rm int}^{\rm MP2(FC),corr}(A,X)\approx E_{\rm int}^{\rm MP2( FC),corr}(A_{\infty},X)+c_{1}A^{-1} \tag{3a}\] \[E_{\rm int}^{\rm MP2(FC),corr}(A,X)\approx E_{\rm int}^{\rm MP2( FC),corr}(A,X_{\infty})+c_{2}X^{-3} \tag{3b}\]
A good estimate of \(E_{\rm int}^{\rm MP2(FC),corr}(A_{\infty},X_{\infty})\) is obtained by extrapolating the surface size with \(A=3\times 3\) and \(4\times 4\) and the basis set size with TZ (\(X=3\)) and QZ (\(X=4\)), which gives \(E_{\rm int}^{\rm MP2(FC),corr}(A_{3\times 3,4\times 4},X_{\rm TZ,QZ})=-202.1\) meV. An error bar of 2 meV is chosen as half the difference between the (DZ,TZ) and the (TZ,QZ) extrapolated CBS results.
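Both limits in eqn (3) follow from a standard two-point extrapolation, sketched below. The input energies are illustrative, and parameterizing the surface size by the number of surface cells (9 for 3x3, 16 for 4x4) is our labeling assumption.

```python
def two_point_extrap(x1, e1, x2, e2, n):
    """Extrapolate E(x) = E_inf + c * x**(-n) to x -> infinity
    from the two data points (x1, e1) and (x2, e2)."""
    return (x2**n * e2 - x1**n * e1) / (x2**n - x1**n)

# CBS limit via eqn (3b): X = 3 (TZ) and 4 (QZ), decay X**(-3).
e_cbs = two_point_extrap(3, -195.0, 4, -199.0, n=3)
# IS limit via eqn (3a): A = 9 (3x3) and 16 (4x4) cells, decay A**(-1).
e_is = two_point_extrap(9, -200.0, 16, -201.5, n=1)
print(round(e_cbs, 1), round(e_is, 1))
```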
Like for the HF case, the nearly perfect parallelism between results of different basis sets justifies a computationally efficient alternative to the protocol above
\[E_{\rm int}^{\rm MP2(FC),corr}(A_{\infty},X_{\infty})\approx E_{ \rm int}^{\rm MP2(FC),corr}(A_{2\times 2,3\times 3},X_{\rm DZ,TZ})\] \[+\Delta_{\rm CBS}^{\rm MP2(FC),corr}+\Delta_{\rm IS}^{\rm MP2( FC),corr} \tag{4a}\] \[\Delta_{\rm CBS}^{\rm MP2(FC),corr}=E_{\rm int}^{\rm MP2(FC),corr}( 2\times 2,X_{\rm TZ,QZ})\] \[-E_{\rm int}^{\rm MP2(FC),corr}(2\times 2,X_{\rm DZ,TZ})\] (4b) \[\Delta_{\rm IS}^{\rm MP2(FC),corr}=E_{\rm int}^{\rm MP2(FC),corr}( A_{3\times 3,4\times 4},X_{\rm DZ,TZ})\] \[-E_{\rm int}^{\rm MP2(FC),corr}(A_{2\times 2,3\times 3},X_{\rm DZ,TZ}) \tag{4c}\]
For the 2L slab, protocol (4) gives \(E_{\rm int}^{\rm MP2(FC),corr}(A_{\infty},X_{\infty})\approx-203.5\) meV, which differs from the result above by less than 2 meV. We take this difference as the uncertainty of the protocol (4). Finally, we account for the error of freezing the \([2s^{2}2p^{6}]\) semicore electrons of Mg by a composite correction
\[E_{\rm int}^{\rm MP2,corr}(A_{\infty},X_{\infty})\approx E_{ \rm int}^{\rm MP2(FC),corr}(A_{\infty},X_{\infty})+\Delta_{\rm FC}^{\rm MP2,corr} \tag{5a}\] \[\Delta_{\rm FC}^{\rm MP2,corr}=E_{\rm int}^{\rm MP2,corr}(2 \times 2,X_{\rm DZ,TZ})-E_{\rm int}^{\rm MP2(FC),corr}(2\times 2,X_{\rm DZ,TZ}) \tag{5b}\]
TABLE 3 lists the final MP2 interaction energy, \(E_{\rm int}^{\rm MP2}\), calculated using protocol (5) for slabs from 2L to 4L. Like in the HF case, the TDL of \(E_{\rm int}^{\rm MP2}\) is essentially reached by the 2L model, and our final estimate of \(E_{\rm int}^{\rm MP2}\) is \(-200\pm 3\) meV (the uncertainty from \(E_{\rm int}^{\rm HF}\) is included), which differs from many previous periodic or cluster-based MP2 calculations but agrees quantitatively with the cluster embedding result from ref 1.
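As a cross-check, chaining protocols (4) and (5) with the 2L column of TABLE 3 and the HF result from above reproduces the tabulated total.

```python
# 2L entries of TABLE 3, all in meV.
e_fc_corr = -206.5            # E_int^MP2(FC),corr at (A_{2x2,3x3}, X_{DZ,TZ})
d_cbs, d_is, d_fc = -4.1, +7.1, -12.3
e_hf = 17.6

e_corr = e_fc_corr + d_cbs + d_is   # protocol (4): -203.5 meV
e_mp2 = e_corr + d_fc + e_hf        # frozen-core correction plus HF: -198.2 meV
print(e_corr, e_mp2)
```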
Figure 1: Convergence of HF interaction energy, \(E_{\rm int}^{\rm HF}\), with surface and basis set sizes for CO on 2L MgO slabs.
Figure 2: Convergence of the correlation part of the frozen-core MP2 interaction energy, \(E_{\rm int}^{\rm MP2(FC),corr}\), with surface and basis set sizes for CO on 2L MgO slabs. (D,T) denotes CBS extrapolation using DZ and QZ results [similar for (T,Q)].
### CCSD and CCSD(T)
Finally, we calculate \(E_{\text{int}}\) at the more expensive CCSD and CCSD(T) levels employing our recently developed periodic extension of the LNO approximation, which we have used elsewhere to study the dissociation of water on the surface of Al\({}_{2}\)O\({}_{3}\) and TiO\({}_{2}\),[16] and whose details will be described in a separate manuscript.[17] Clearly, our periodic LNO-CC method parallels the cluster-based LNO-CCSD(T) method used in ref 1, but is in some respects simpler because it does not require the definition of a cluster and embedding protocol, which can be highly non-trivial for systems like metal oxides.[6] Moreover, periodic LNO-CC is free of the extra errors introduced at the artificial cluster boundary. The accuracy of the LNO approximation can be systematically improved by adjusting a single parameter: the threshold \(\eta\) used to truncate the LNOs by their occupation numbers[7] (we use a threshold for unoccupied orbitals that is fixed to be 10 times smaller than that of the occupied orbitals; all reported thresholds henceforth are for unoccupied orbitals). As \(\eta\to 0\), the LNO-CCSD/CCSD(T) results converge to the result of canonical CCSD/CCSD(T) calculations. To expedite this convergence, we use a standard correction based on an MP2 calculation at the same value of \(\eta\).[7]
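The MP2-based acceleration of the \(\eta\)-convergence is the usual composite correction; a minimal sketch, with hypothetical correlation energies, is

```python
def lno_composite(e_lno_cc, e_lno_mp2, e_full_mp2):
    """Correct an LNO-CC correlation energy for truncation error using
    MP2 evaluated with and without the same LNO truncation."""
    return e_lno_cc + (e_full_mp2 - e_lno_mp2)

# Hypothetical correlation energies (meV) at a loose threshold eta = 1e-6.
print(lno_composite(e_lno_cc=-215.0, e_lno_mp2=-195.0, e_full_mp2=-203.5))
```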
FIG. 3 shows the \(\eta\)-convergence of the correlation part of the frozen-core LNO-CCSD and LNO-CCSD(T) interaction energy, \(E_{\text{int}}^{\text{LNO-CC(FC),corr}}\), extrapolated to the CBS limit based on DZ and TZ results for CO on a 2L, \(2\times 2\) MgO slab. The system is small enough (about 650 orbitals in the TZ basis set) to perform canonical CC calculations, results of which are also shown in FIG. 3 for comparison. We see that the error of LNO-CCSD and LNO-CCSD(T) (hollow circles) is about 12 and 6 meV with a modest threshold of \(10^{-6}\), but an error of 2 meV requires a tight threshold of \(10^{-7}\) and \(10^{-8}\). While LNO-CC calculations with a tight threshold of \(10^{-8}\) are feasible for larger systems, a more efficient alternative is to correct the bare LNO-CC results by the difference between LNO-CC and canonical CC in the DZ basis set, which we denoted as
\[\begin{split}\Delta_{\text{cano}}^{\text{LNO-CC(FC)}}(\eta)& =E_{\text{int}}^{\text{CC(FC),corr}}(2\times 2,\text{DZ})\\ &\quad-E_{\text{int}}^{\text{LNO-CC(FC),corr}}(2\times 2,\text{DZ},\eta)\end{split} \tag{6}\]
The corrected LNO-CC results are shown as filled circles in FIG. 3 and seen to converge much faster than the uncorrected ones.
Our final estimate of the CC interaction energy can be obtained by first extrapolating \(E_{\text{int}}^{\text{LNO-CC(FC),corr}}(A,X)\) to the IS and the CBS limits in an approximate manner, chosen to be \(A_{2\times 2,3\times 3}\) and \(X_{\text{DZ,TZ}}\), and then correcting the remaining finite-size and basis set incompleteness error (including the frozen-core approximation) by a composite correction using the fully converged MP2 result from above. This can be equivalently formulated as correcting the fully converged MP2 energy with a CC correction term
\[E_{\text{int}}^{\text{LNO-CC,corr}}(A_{\infty},X_{\infty},\eta) \approx E_{\text{int}}^{\text{MP2,corr}}(A_{\infty},X_{\infty})\] \[\qquad\qquad\qquad\qquad\qquad+\Delta_{\text{CC}}^{\text{LNO-CC( FC)}}(\eta) \tag{7a}\] \[\Delta_{\text{CC}}^{\text{LNO-CC(FC)}}(\eta) =E_{\text{int}}^{\text{LNO-CC(FC),corr}}(A_{2\times 2,3\times 3},X_{ \text{DZ,TZ}},\eta)\] \[\qquad\qquad\qquad-E_{\text{int}}^{\text{MP2(FC),corr}}(A_{2 \times 2,3\times 3},X_{\text{DZ,TZ}})\] \[\qquad\qquad\qquad\qquad+\Delta_{\text{cano}}^{\text{LNO-CC(FC)}}(\eta) \tag{7b}\]
The \(\eta\)-convergence of protocol (7) is illustrated in FIG. 4. Our best estimate of the CCSD and CCSD(T) interaction energy (including \(E_{\text{int}}^{\text{HF}}\)), obtained from \(\eta=10^{-7}\), is \(-171.3\) meV and
Figure 3: Convergence of the correlation part of the frozen-core LNO-CCSD and LNO-CCSD(T) interaction energy, \(E_{\text{int}}^{\text{LNO-CC(FC),corr}}\), extrapolated to the CBS limit based on DZ and TZ results for CO on a 2L, \(2\times 2\) MgO slab with respect to the LNO truncation threshold. The hollow and filled circles denote results obtained before and after applying the \(\Delta_{\text{cano}}\) correction (6). The canonical results are shown as green horizontal lines, with the green shaded area indicating \(\pm 2\) meV.
Figure 4: Convergence of the final estimate of the CCSD and CCSD(T) interaction energy obtained using protocol (7) with respect to the LNO truncation threshold. The best estimate results from the tightest threshold and is highlighted by the solid horizontal lines, with the shaded area indicating \(\pm 2\) meV.
\(-209.1\) meV, also listed in TABLE 3. The error bar for these numbers, estimated using the difference between the two calculations with the tightest thresholds shown in FIG. 4, is about 2 meV. Combined with the error bars of \(E_{\text{int}}^{\text{MP2}}\) (3 meV) and of \(\Delta_{\text{cano}}^{\text{LNO-CC(FC)}}\) (2 meV) used in protocol (7), the final error bar for our CCSD and CCSD(T) numbers is about 4 meV.
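Numerically, eqn (7) amounts to the composite below; combining the three stated error sources in quadrature (our assumption about how they were merged) recovers the quoted 4 meV.

```python
import math

e_mp2 = -198.2                    # converged MP2 total from TABLE 3, meV
d_ccsd, d_ccsd_t = +26.9, -10.9   # LNO-CC corrections at eta = 1e-7

print(e_mp2 + d_ccsd)             # -171.3 meV, CCSD
print(e_mp2 + d_ccsd_t)           # -209.1 meV, CCSD(T)
print(round(math.sqrt(2**2 + 3**2 + 2**2), 1))  # ~4.1 meV error bar
```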
## III Conclusion
To summarize, we have presented the converged calculation of the adsorption energy of CO on the surface of MgO using periodic calculations with Gaussian orbitals. CCSD and CCSD(T) results were obtained using our recently developed periodic LNO-CC code, which is complementary to both the canonical periodic plane-wave method and the cluster embedding LNO-CC method used in ref 1, and the numerical agreement is excellent. Just as for molecules, LNO-CC calculations for periodic solids can be performed at a fraction of the cost of canonical periodic CC calculations, thus significantly expanding the scope of applicability for CC-based methods in periodic simulations.
The results presented here were generated within a few days from scratch, after learning of ref 1, suggesting that periodic Gaussian-based correlated theories are on their way to becoming routine in the accurate study of surface chemistry and other condensed-phase phenomena. We strongly echo the conclusion of ref 1, that the ability to achieve quantitative agreement between different methods using different codes and different basis sets, for a nontrivial problem in surface science, marks an important milestone for the community.
|
2309.04324 | Graded Modal Types for Integrity and Confidentiality | Graded type systems, such as the one underlying the Granule programming
language, allow various different properties of a program's behaviour to be
tracked via annotating types with additional information, which we call grades.
One example of such a property, often used as a case study in prior work on
graded types, is information flow control, in which types are graded by a
lattice of security levels allowing noninterference properties to be
automatically verified and enforced. These typically focus on one particular
aspect of security, however, known as confidentiality; public outputs are
prohibited from depending on private inputs. Integrity, a property specifying
that trusted outputs must not depend on untrusted inputs, has not been examined
in this context.
This short paper aims to remedy this omission. It is well-known that
confidentiality and integrity are in some sense dual properties, but simply
reversing the ordering of the security lattice turns out to be unsatisfactory
for the purpose of combining both kinds of property in a single system, at
least in our setting. We analogize the situation to recent work on embedding
both linear and uniqueness types in a graded framework, and use this framing to
demonstrate that we can enforce both integrity and confidentiality alongside
one another. The main idea is to add an additional flavour of modality
annotated for integrity, such that the existing graded comonad for tracking
confidentiality now also acts as a relative monad over the new modality, with
rules allowing information to flow from trusted to public to private. | Daniel Marshall, Dominic Orchard | 2023-09-08T13:40:52Z | http://arxiv.org/abs/2309.04324v1 | # Graded Modal Types for
###### Abstract.
Graded type systems, such as the one underlying the Granule programming language, allow various different properties of a program's behaviour to be tracked via annotating types with additional information, which we call _grades_. One example of such a property, often used as a case study in prior work on graded types, is _information flow control_, in which types are graded by a lattice of security levels allowing noninterference properties to be automatically verified and enforced. These typically focus on one particular aspect of security, however, known as _confidentiality_; public outputs are prohibited from depending on private inputs. _Integrity_, a property specifying that trusted outputs must not depend on untrusted inputs, has not been examined in this context.
This short paper aims to remedy this omission. It is well-known that confidentiality and integrity are in some sense dual properties, but simply reversing the ordering of the security lattice turns out to be unsatisfactory for the purpose of combining both kinds of property in a single system, at least in our setting. We analogize the situation to recent work on embedding both linear and uniqueness types in a graded framework, and use this framing to demonstrate that we can enforce both integrity and confidentiality alongside one another. The main idea is to add an additional flavour of modality annotated for integrity, such that the existing graded comonad for tracking confidentiality now also acts as a _relative monad_ over the new modality, with rules allowing information to flow from trusted to public to private.
## 1. Introduction and Motivation
Information flow control aims to track the flow of information through a program when it is executed, to make sure that the program handles that information in a secure way (Krishnan, 2015). Secure information flow (discussed in the literature since the 1970s (Bowden, 1970; Datta and Goyal, 1971)) encompasses multiple aspects, with two of the most essential being _confidentiality_ and _integrity_. Pfleeger's textbook _Security in Computing_ (Krishnan, 2015) describes these two properties as follows. Confidentiality "ensures that assets are accessed only by authorised parties", or in other words that private information is never accessed by a program which only has public clearance. Meanwhile, integrity "means that assets can be modified only by authorised parties or only in authorised ways", such that a trusted program never depends on information from an untrusted source. The strictest desirable property in both cases is _noninterference_, which only holds if public or trusted outputs may _never_ depend on private or untrusted inputs respectively (Krishnan, 2015), though this is often considered difficult to achieve in practical systems.
Much of the prior work on using graded type systems for information flow control aims to track and restrict the outputs that can be produced by a given program (Datta and Goyal, 1971; Datta and Goyal, 1971; Datta and Goyal, 1971). Implementations of such ideas also exist for more widely-used functional languages such as Haskell (Haskell, 1971). More recently, graded type systems have been designed which enforce properties based on _coeffects_, where the inputs that can be passed into a program are the focus. Such systems (with the type system underlying Granule being the one we will focus on here (Krishnan, 2015)) often make use of information flow security as a case study in how annotating types with grades can allow more properties of a program to be verified (Bowden, 1970; Datta and Goyal, 1971; Datta and Goyal, 1971; Datta and Goyal, 1971). However, these tend to only focus on confidentiality properties, which omits a host of additional properties that could be guaranteed if it were also possible to enforce integrity. Consider the following definition of a data type in Granule.1
Footnote 1: The latest release of Granule is always available to download and install from [https://github.com/granule-project/granule/releases](https://github.com/granule-project/granule/releases).
data Patient where
  Patient
    (Int [Private]) -- Patient ID
    (String [Private]) -- Patient name
    (Int [Public]) -- Patient age
The type a [r] means that we have a value of type a wrapped inside the \(\square\) modality which must be used according to the restriction described by the grade \(r\). Here, the patient's ID and name have the grade Private, while their age is Public. Now, consider a function with the type:
meanAge : List Patient \(\rightarrow\) Int [Public]
that calculates the mean age of a database (here simplified to a list) of patients and returns the result at the public security level. Since ages are also Public, this is fine, but if we made a mistake while writing the function and used private information such as IDs instead, this would be rejected by the type checker due to the security annotations, so no information can be leaked. Similar properties also apply when storing secret data such as passwords or credit card numbers.
However, imagine a case where we are constructing a patient to add to the database, as follows:
addPatient : List Patient \(\rightarrow\) String [Private]
\(\rightarrow\) Int [Public] \(\rightarrow\) List Patient
Here, we have security level grades restricting who will be able to view various details once the database has been updated, but we have no way to stop some compromised code from passing in an attempt at SQL injection using a string such as "Alice'); DROP TABLE patients; --", for example. If this input is treated as trustworthy, we might well encounter dramatic problems later in our program's execution.2
Footnote 2: [https://xkcd.com/327/](https://xkcd.com/327/)
In order to avoid this, we would want some kind of grade that carries information about the _provenance_ of our data (Hanan et al., 2016), so we could declare that we can only safely add patient information into our database if the string verifiably comes from a trusted location. This would also be useful if, for instance, we were encrypting private data and wanted a way to ensure that our random numbers used for encryption were reliable and had not been tainted by an untrusted source.
It is well understood that this kind of integrity property is dual in some sense to the confidentiality properties which Granule can already express (Granule, 2016) (though it is also known that this duality is not sufficient to cover every facet of the concept of integrity, with more complex mechanisms than a lattice model being required for some applications (Granule, 2016; Granule, 2016)). It turns out that this duality is closely comparable to a similar duality between _linear_ types (forming the basis of Granule's graded type system) and _uniqueness_ types, which has been more clearly elucidated in recent work (Granule, 2016).
It turns out that while linear types are a restriction on what may happen in the _future_ (a linear value must never be duplicated or discarded), uniqueness types are a guarantee about what has happened in the _past_ (a unique value must never _have been_ duplicated). We will now show how this understanding also allows us to express integrity properties in Granule, through an additional flavour of graded modality.
## 2. Theory and Implementation
Our approach here builds on the type system described in the original Granule paper (Granule, 2016), with some extra rules for the new modality carrying integrity information.
The crucial insight here is that in order to combine confidentiality (public and private) and integrity (trusted and untrusted) in a single system, we can treat "public" and "untrusted" as the _same state_, both represented by the Public grade; these both carry the same information, telling us that there is no restriction on how the data may be used in the future (it is not restricted to private usage) and we have no guarantee about how it was used in the past (it did not necessarily come from a trusted source). The Private grade behaves exactly as described in the original work (Granule, 2016).
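A minimal Python model of the combined lattice makes this identification concrete; this is our illustrative sketch, not Granule's implementation, and it only records the permitted flow directions.

```python
# Grades ordered by how far information may still flow.
ORDER = {"Trusted": 0, "Public": 1, "Private": 2}

def may_flow(src: str, dst: str) -> bool:
    """Information flows only down the chain Trusted -> Public -> Private."""
    return ORDER[src] <= ORDER[dst]

assert may_flow("Trusted", "Public")      # what the Reveal rule permits
assert not may_flow("Public", "Trusted")  # requires an explicit endorse
assert not may_flow("Private", "Public")  # would leak confidential data
```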
We introduce a new \(*\) modality (whose syntax is borrowed from the corresponding modality for uniqueness types (Granule, 2016), as mentioned above) to carry the Trusted grade, with the important rules for this modality's behaviour given below.
\[\frac{\Gamma\vdash t:*_{\text{Trusted}}A}{\Gamma\vdash\text{reveal}\ t:\Box_{\text{Public}}A}\ \textsc{Reveal}\qquad\frac{\varnothing\vdash t:A}{\varnothing\vdash*t:*_{\text{Trusted}}A}\ \textsc{Nec}\qquad\frac{\Gamma_{1}\vdash t_{1}:\Box_{\text{Public}}A\qquad\Gamma_{2},x:*_{\text{Trusted}}A\vdash t_{2}:\Box_{\text{Public}}B}{\Gamma_{1}+\Gamma_{2}\vdash\text{endorse}\ t_{1}\text{ as }x\text{ in }t_{2}:\Box_{\text{Public}}B}\ \textsc{Endorse}\]
The Reveal rule maps a trusted value to a public (untrusted) one, allowing the information flow for integrity to behave as expected. The Endorse rule allows a public value to be temporarily used as trusted within the context of a larger computation; this mimics the common integrity pattern of _endorsement_, where a value is examined and declared to be trusted to whatever extent is necessary for a particular usage (Granule, 2016). The output, however, is required to be public, so that we cannot leverage our temporary integrity to 'smuggle out' a trusted value outside the context of the endorsement. These rules are accompanied by a necessitation rule, allowing values to be trusted by default if they have no dependencies.
Note that the naming and pattern of the rules suggests a structural relationship between the two modalities. The \(\Box\) modality previously acted as a _graded comonad_, but now when graded by Public also acts as a _relative monad_(Bord
* Extending Granule's capacity to track confidentiality and integrity further, going beyond lattices containing only two security levels in order to enforce more complex and fine-grained properties.
* Considering additional aspects of information flow security such as _availability_, where information which should always be available cannot depend on information that may be unavailable; the direction of information flow here is the same as for integrity, so this should be possible using the new flavour of modality.
* Borrowing the ideas currently being developed for extending Granule's uniqueness types to a more complex _ownership_ model via fractional grading on the \(*\) modality [5, 15], which here may allow us to express integrity properties relating to _separation of duties_[14]; this would be a starting point for capturing further aspects of integrity that go beyond what can be described by the current lattice-based model [8].
* It would be interesting to explore how this idea connects to recent work on bridging the gap between monadic and comonadic approaches to information flow analyses [6]. Note that Granule's current confidentiality tracking is primarily comonadic in nature, but also incorporates a touch of monadic flavour through use of a flatten operation, which allows transformations such as \(\Box_{\mathsf{Public}}(\Box_{\mathsf{Private}}\ A)\to\Box_{\mathsf{Private}}\ A\); this operation is derivable from Granule's rules for pattern matching on nested modalities [20].
This paper also forms part of a larger body of work that involves uncovering a general algebraic structure underlying _global guarantees_ about a program, which are best represented using a graded comonad that also acts as a relative monad over some functor. This work is itself ongoing.
|
2309.13880 | A Note On Simultaneous Estimation of Order Restricted Location
Parameters of a General Bivariate Symmetric Distribution Under a General Loss
Function | The problem of simultaneous estimation of order restricted location
parameters $\theta_1$ and $\theta_2$ ($-\infty<\theta_1\leq \theta_2<\infty$)
of a bivariate location symmetric distribution, under a general loss function,
is being considered. In the literature, many authors have studied this problem
for specific probability models and specific loss functions. In this paper, we
unify these results by considering a general bivariate symmetric model and a
quite general loss function. We use the Stein and the Kubokawa (or IERD)
techniques to derive improved estimators over any location equivariant
estimator under a general loss function. We see that the improved Stein type
estimator is robust with respect to the choice of a bivariate symmetric
distribution and the loss function, as it only requires the loss function to
satisfy some generic conditions. A simulation study is carried out to validate
the findings of the paper. A real-life data analysis is also provided. | Naresh Garg, Neeraj Misra | 2023-09-25T05:19:45Z | http://arxiv.org/abs/2309.13880v1 | A Note On Simultaneous Estimation of Order Restricted Location Parameters of a General Bivariate Symmetric Distribution Under a General Loss Function
###### Abstract
The problem of simultaneous estimation of order restricted location parameters \(\theta_{1}\) and \(\theta_{2}\) (\(-\infty<\theta_{1}\leq\theta_{2}<\infty\)) of a bivariate location symmetric distribution, under a general loss function, is being considered. In the literature, many authors have studied this problem for specific probability models and specific loss functions. In this paper, we unify these results by considering a general bivariate symmetric model and a quite general loss function. We use the Stein and the Kubokawa (or IERD) techniques to derive improved estimators over any location equivariant estimator under a general loss function. We see that the improved Stein type estimator is robust with respect to the choice of a bivariate symmetric distribution and the loss function, as it only requires the loss function to satisfy some generic conditions. A simulation study is carried out to validate the findings of the paper. A real-life data analysis is also provided.
KEYWORDS: Improved estimator; Inadmissibility; Location equivariant estimator; Restricted MLE; Restricted Parameter Space
## 1 Introduction
The problem of estimating order restricted location parameters \(\theta_{1}\) and \(\theta_{2}\) (\(-\infty<\theta_{1}\leq\theta_{2}<\infty\)) of two distributions is of interest in many real-life situations. For example, in an engine efficiency measurement experiment where estimating the average efficiency of an internal combustion engine (IC engine) and an external combustion engine (EC engine) is of interest, it can be assumed that the average efficiency of an IC engine is higher than the average efficiency of an EC engine. For an account of such applications and relevant literature, one may refer to Barlow et al. (1972), Robertson et al. (1988), and van Eeden (2006).
Early studies in this area were focused on studying isotonic regression and/or restricted maximum likelihood estimators of order restricted parameters. Afterwards, the problem was studied using a decision-theoretic approach, with a focus on obtaining estimators improving over the unrestricted best location equivariant estimators (BLEE) and/or unrestricted maximum likelihood estimators (MLE). Several of these studies
are centered around specific distributions and specific loss functions, barring a few studies that are carried out for general probability models and general loss function. In this paper, we will obtain some unified results for simultaneous estimation of order restricted location parameters of a general bivariate symmetric distribution under a general loss function.
It is worth mentioning that Stein (1964) proposed a technique to improve the best affine equivariant estimator of the variance of a normal distribution. The Stein technique provides shrinkage-type non-smooth dominating estimators. This technique was generalized by Brewster and Zidek (1974), who also proposed another technique to improve the best equivariant estimators. The Brewster and Zidek (1974) technique produces smooth dominating estimators (generally the generalized Bayes estimator with respect to a non-informative prior). Kubokawa (1994) unified the two techniques of Stein (1964) and Brewster and Zidek (1974) by using a representation of the difference of risk functions in terms of a definite integral. He named this unified technique the integral expression risk difference (IERD) method.
Using the techniques of Stein (1964), Brewster and Zidek (1974) and Kubokawa (1994), several authors have considered the problem of improving equivariant estimators for specific probability models (mostly, having independent marginals) and specific loss functions. Kumar and Sharma (1988) dealt with simultaneous estimation of ordered means of two normal distributions, having a known common variance, under the sum of squared error loss function and obtained a sufficient condition that ensures the inadmissibility of any location equivariant estimator. Patra and Kumar (2017) further extended the results of Kumar and Sharma (1988) to a bivariate normal distribution having known variances and a known correlation coefficient. Tsukuma and Kubokawa (2008) considered simultaneous estimation of \(p\) (\(\geq 2\)) means of a \(p\)-dimensional multivariate normal distribution with the covariance matrix as the identity matrix, when it was known a priori that the means were restricted to a polyhedral convex cone. Under the sum of the squared errors loss function, they have obtained the generalized Bayes estimator against the uniform prior distribution over the polyhedral convex cone and shown that this estimator is minimax. Further, for \(p=2\) and when means are order restricted, they have shown that the generalized Bayes estimator is admissible. For a general framework, Hamura and Kubokawa (2022) considered component-wise estimation of order restricted location parameters of two independent log-concave or log-convex probability models. They have considered the Stein type truncated estimator and shown that this estimator dominates the usual unrestricted estimator, under the squared error loss function.
In this paper, we aim to unify various results in the literature by considering a general bivariate symmetric location model and a general loss function. We consider simultaneous estimation of order restricted location parameters \(\theta_{1}\) and \(\theta_{2}\) (\(\theta_{1}\leq\theta_{2}\)), under a general loss function. We use the Stein (1964) technique to derive a sufficient condition for the inadmissibility of a location equivariant estimator and obtain an improved estimator. In this case, the improved Stein (1964) type estimator is robust, as the form of the improved estimator does not depend on the choice of the bivariate symmetric distribution and the loss function, apart from some generic conditions. Further, we use the Kubokawa (1994) technique to obtain a class of improved estimators over the best location equivariant estimators (BLEE). To illustrate the usefulness of our results, we consider a bivariate normal distribution with unknown order restricted means, a known common variance, and a known correlation coefficient. We obtain
the Stein (1964) type improved estimator over the unrestricted BLEE/MLE under a general loss function. We see that this improved estimator is the restricted MLE. We also obtain the Brewster-Zidek (1974) type improved estimators over the unrestricted BLEE/MLE under the squared error loss and the absolute error loss. These improved estimators are also generalized Bayes estimators under the squared error loss and the absolute error loss, respectively.
The rest of the paper is organised as follows: In Section 2, we consider the estimation of location parameters of a general bivariate symmetric model. In Section 2.1, we use the Stein (1964) technique to show the inadmissibility of location equivariant estimators satisfying a sufficient condition, and in Section 2.2, we use the Kubokawa (1994) technique to obtain a class of estimators improving upon the BLEE, under a general loss function. In Section 3, we demonstrate applications of our general result to a bivariate normal distribution and report a simulation study to validate our findings. In Section 4, we present a real-life application of the findings of this paper. In Section 5, we provide concluding remarks for this paper.
## 2 Improved estimators for order restricted location parameters
Let \(\mathbf{X}=(X_{1},X_{2})\) be a random vector having the Lebesgue probability density function (p.d.f.)
\[f_{\boldsymbol{\theta}}(x_{1},x_{2})=f(x_{1}-\theta_{1},x_{2}-\theta_{2}),\;( x_{1},x_{2})\in\Re^{2},\;\boldsymbol{\theta}=(\theta_{1},\theta_{2})\in \Theta_{0}, \tag{2.1}\]
where \(f(\cdot,\cdot)\) is a specified bivariate Lebesgue p.d.f. on \(\Re^{2}=(-\infty,\infty)\times(-\infty,\infty)\), \(\boldsymbol{\theta}=(\theta_{1},\theta_{2})\) (\(\in\Theta_{0}\)) is an unknown parameter and \(\Theta_{0}\) is the parameter space. Generally, \(\mathbf{X}=(X_{1},X_{2})\) would be a minimal-sufficient statistic for \(\boldsymbol{\theta}\in\Theta_{0}\), based on a bivariate random sample or two independent random samples, as the case may be. Throughout, we make the following assumption about the probability model (2.1):
**Assumption D1:** The parameter space of interest is the restricted space \(\Theta_{0}=\{(x,y):-\infty<x\leq y<\infty\}\). Moreover, \(f(z_{1},z_{2})=f(z_{2},z_{1})=f(-z_{1},-z_{2}),\;\forall\;(z_{1},z_{2})\in \Re^{2}\).
Consider simultaneous estimation of order restricted location parameters \(\theta_{1}\) and \(\theta_{2}\) (\((\theta_{1},\theta_{2})\in\Theta_{0}\)) under the loss function
\[L(\boldsymbol{\theta},\mathbf{a})=W(a_{1}-\theta_{1})+W(a_{2}-\theta_{2}),\; \boldsymbol{\theta}=(\theta_{1},\theta_{2})\in\Theta_{0},\;\mathbf{a}=(a_{1}, a_{2})\in\mathcal{A}=\Re^{2}, \tag{2.2}\]
where \(W:\Re\rightarrow[0,\infty)\) satisfies the following assumption:
**Assumption D2:**\(W(0)=0\), \(W(t)=W(-t),\;t\in\Re\), \(W(t)\) is strictly decreasing in \(t\in(-\infty,0)\) and strictly increasing in \(t\in(0,\infty)\). Also \(W^{{}^{\prime}}(t)\) is increasing, almost everywhere.
The above estimation problem is invariant under the group \(\mathcal{G}=\{g_{c}:\;c\in\Re\}\) of transformations, where \(g_{c}(x_{1},x_{2})=(x_{1}+c,x_{2}+c),\;(x_{1},x_{2})\in\Re^{2},\;c\in\Re\). The induced group of transformations on the parameter space \(\Theta_{0}\) and the action space \(\mathcal{A}\) are \(\overline{\mathcal{G}}=\{\overline{g}_{c}:c\in\Re\}\) and \(\tilde{\mathcal{G}}=\{\tilde{g}_{c}:c\in\Re\}\), respectively, where \(\overline{g}_{c}(\theta_{1},\theta_{2})=(\theta_{1}+c,\theta_{2}+c),\;( \theta_{1},\theta_{2})\in\Theta_{0}\), \(\tilde{g}_{c}(a_{1},a_{2})=(a_{1}+c,a_{2}+c),\;(a_{1},a_{2})\in\mathcal{A}= \Re^{2},\;c\in\Re\).
Any location equivariant estimator of \(\mathbf{\theta}\) is of the form
\[\mathbf{\delta}_{\psi}(\mathbf{X})=(X_{1}-\psi_{1}(D),X_{2}-\psi_{2}(D)), \tag{2.3}\]
for some functions \(\psi_{i}:\,\Re\to\Re,\;i=1,2\), where \(D=X_{2}-X_{1}\).
Let \(Z_{i}=X_{i}-\theta_{i},\;i=1,2\). Since \(f(z_{1},z_{2})=f(z_{2},z_{1}),\;\forall\;(z_{1},z_{2})\in\Re^{2}\) and \(f(z_{1},z_{2})=f(-z_{1},-z_{2}),\;\forall\;(z_{1},z_{2})\in\Re^{2}\), we have \((Z_{1},Z_{2})\stackrel{{\mathrm{d}}}{{=}}(Z_{2},Z_{1})\stackrel{{ \mathrm{d}}}{{=}}(-Z_{2},-Z_{1})\), where \(\stackrel{{\mathrm{d}}}{{=}}\) stands for equality in the distribution. Evidently, the problem of simultaneously estimating the order restricted location parameters \(\theta_{1}\) and \(\theta_{2}\) (\(\mathbf{\theta}\in\Theta_{0}\)), under the loss function (2.2), is also invariant under the group of transformations \(\mathcal{H}=\{h_{1},h_{2}\}\), where \(h_{1}(x_{1},x_{2})=(x_{1},x_{2}),\;h_{2}(x_{1},x_{2})=(-x_{2},-x_{1}),\;(x_{1},x_{2})\in\Re^{2}\). The induced group of transformations on the parameter space \(\Theta_{0}\) and the action space \(\mathcal{A}\) are \(\overline{H}\) and \(\tilde{H}\), respectively, where \(\overline{H}=\{\tilde{h}_{1},\overline{h}_{2}\},\;\tilde{H}=\{\tilde{h}_{1}, \tilde{h}_{2}\}\), \(\overline{h}_{1}(\theta_{1},\theta_{2})=(\theta_{1},\theta_{2}),\;\tilde{h}_{ 2}(\theta_{1},\theta_{2})=(-\theta_{2},-\theta_{1}),\;(\theta_{1},\theta_{2}) \in\Theta_{0},\;\tilde{h}_{1}(a_{1},a_{2})=(a_{1},a_{2})\) and \(\tilde{h}_{2}(a_{1},a_{2})=(-a_{2},-a_{1}),\;(a_{1},a_{2})\in\mathcal{A}\). An estimator \(\delta_{\psi}(\mathbf{X})=(X_{1}-\psi_{1}(D),X_{2}-\psi_{2}(D))\) is invariant under \(\mathcal{H}\) if, and only if,
\[(X_{1}-\psi_{1}(D),X_{2}-\psi_{2}(D))=(-(-X_{1}-\psi_{2}(D)),-(-X_{2}-\psi_{1 }(D)))\]
i.e., \(\psi_{2}(D)=-\psi_{1}(D)\).
Thus, the form of any estimator that is equivariant under \(\mathcal{G}\) as well as under \(\mathcal{H}\) is
\[\mathbf{\delta}_{\psi}(\mathbf{X})=(X_{1}-\psi(D),X_{2}+\psi(D)), \tag{2.4}\]
for some function \(\psi:\Re\to\Re\).
### Improvements Over a Location Equivariant Estimator \((X_{1}-\psi(D),X_{2}+\psi(D))\)
In this section, we use the Stein (1964) technique to obtain improved estimators over any arbitrary location equivariant estimator \(\mathbf{\delta}_{\psi}(\mathbf{X})=(X_{1}-\psi(D),X_{2}+\psi(D))\). Using \(f(z_{1},z_{2})=f(-z_{2},-z_{1}),\;(z_{1},z_{2})\in\Re^{2}\) (i.e., \((Z_{1},Z_{2})\stackrel{{\mathrm{d}}}{{=}}(-Z_{2},-Z_{1})\)), and \(W(t)=W(-t),\;t\in\Re\), for \(\mathbf{\theta}=(\theta_{1},\theta_{2})\in\Theta_{0}\), the risk function of the estimator \(\delta_{\psi}(\mathbf{X})\), defined by (2.4), is obtained as
\[R(\mathbf{\theta},\delta_{\psi}) =E_{\mathbf{\theta}}[W(X_{1}-\psi(D)-\theta_{1})+W(X_{2}+\psi(D)- \theta_{2})]\] \[=2\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}W(s-\psi(t ))f(s,s+t-\lambda)ds\right]dt\] \[=2\int_{-\infty}^{\infty}r_{\lambda}(\psi(t),t)\,dt,\]
where \(\lambda=\theta_{2}-\theta_{1}\;(\geq\;0)\) and
\[r_{\lambda}(c,t)=\int_{-\infty}^{\infty}W(s-c)f(s,s+t-\lambda)ds,\;\;c\in\Re,\; t\in\Re.\]
The following lemma will be useful in proving the main result of the paper.
**Lemma 2.1.1**.: Suppose that the assumptions (D1) and (D2) hold. For any \(t\in\Re\), \(\lambda\geq 0\), and \(c\in\Re\),
\[r_{\lambda}(c,t)\geq r_{\lambda}\left(\frac{\lambda-t}{2},t\right).\]
Proof.: Let \(t\in\Re\), \(\lambda\geq 0\), and \(c\in\Re\) be fixed. Then
\[r_{\lambda}(c,t)=\int_{-\infty}^{\infty}W(s-d)f\left(s+\frac{ \lambda-t}{2},s-\frac{\lambda-t}{2}\right)ds=r_{\lambda}^{*}(d,t),\text{ say},\]
where \(d=c-\frac{\lambda-t}{2}\). We have
\[r_{\lambda}^{*}(d,t) =\int_{-\infty}^{\infty}W(s-d)f\left(s+\frac{\lambda-t}{2},s- \frac{\lambda-t}{2}\right)ds\] \[=\int_{-\infty}^{\infty}W(-s+d)f\left(s+\frac{\lambda-t}{2},s- \frac{\lambda-t}{2}\right)ds\qquad\text{(as }W(x)=W(-x),\ \forall\ x\in\Re)\] \[=\int_{-\infty}^{\infty}W(s+d)f\left(-s+\frac{\lambda-t}{2},-s- \frac{\lambda-t}{2}\right)ds\] \[=\int_{-\infty}^{\infty}W(s+d)f\left(s+\frac{\lambda-t}{2},s- \frac{\lambda-t}{2}\right)ds\qquad\text{(as }f(z_{1},z_{2})=f(-z_{2},-z_{1}),\ \forall\ (z_{1},z_{2})\in\Re^{2})\] \[=r_{\lambda}^{*}(-d,t).\]
The assumption (D1) ensures that \(r_{\lambda}^{*}(d,t)\) is a strictly convex function of \(d\). Thus
\[r_{\lambda}^{*}(0,t) =r_{\lambda}^{*}\left(\frac{d+(-d)}{2},t\right)\] \[\leq\frac{r_{\lambda}^{*}(d,t)+r_{\lambda}^{*}(-d,t)}{2}=r_{ \lambda}^{*}(d,t)\] \[\implies r_{\lambda}\left(\frac{\lambda-t}{2},t\right) \leq r_{\lambda}(c,t).\]
The following theorem provides a sufficient condition under which a location equivariant estimator of \((\theta_{1},\theta_{2})\) is inadmissible. In such cases, the theorem also provides dominating estimators.
**Theorem 2.1.1**.: Under the assumptions (D1) and (D2), let \(\mathbf{\delta}_{\psi}(\mathbf{X})=(X_{1}-\psi(D),X_{2}+\psi(D))\) be a location equivariant estimator of \((\theta_{1},\theta_{2})\in\Theta_{0}\), where \(\psi:\Re\to\Re\). Suppose that \(P_{\mathbf{\theta}}\left(\psi(D)<\frac{-D}{2}\right)>0,\ \forall\ \mathbf{\theta}\in\Theta_{0}\). Define \(\psi^{*}(t)=\max\{\frac{-t}{2},\psi(t)\},\ t\in\Re\). Then
\[R(\mathbf{\theta},\mathbf{\delta}_{\psi^{*}})\leq R(\mathbf{\theta},\mathbf{\delta}_{\psi}),\ \forall\ \mathbf{\theta}\in\Theta_{0},\]
where \(\mathbf{\delta}_{\psi^{*}}(\mathbf{X})=(X_{1}-\psi^{*}(D),X_{2}+\psi^{*}(D))\).
Proof.: Define \(A=\{t:\psi(t)<\frac{-t}{2}\}\) and \(B=\{t:\psi(t)\geq\frac{-t}{2}\}\), so that \(\psi^{*}(t)=\frac{-t}{2}\), if \(t\in A\), and \(\psi^{*}(t)=\psi(t)\), if \(t\in B\). The Lemma 2.1.1 and the assumption (D1) imply that, for any fixed \(t\in\Re\) and \(\lambda\geq 0\),
\[r_{\lambda}(c,t)=\int_{-\infty}^{\infty}W(s-c)f(s,s+t-\lambda)ds\]
is decreasing in \(c\in(-\infty,\frac{\lambda-t}{2}]\), increasing in \(c\in[\frac{\lambda-t}{2},\infty)\), with unique minimum at \(c\equiv\frac{\lambda-t}{2}\). Since, for any fixed \(\lambda\geq 0\) and any \(t\), \(\frac{-t}{2}\leq\frac{\lambda-t}{2}<\infty\), it follows that, for any \(\lambda\geq 0\), \(r_{\lambda}(c,t)\) is decreasing in \(c\in(-\infty,\frac{-t}{2}]\). Consequently, for any \(\lambda\geq 0\), \(r_{\lambda}(\psi(t),t)\geq r_{\lambda}(\frac{-t}{2},t)\), for \(t\in A\). Therefore,
\[R(\boldsymbol{\theta},\boldsymbol{\delta}_{\psi}) =2\int_{-\infty}^{\infty}r_{\lambda}(\psi(t),t)\,dt\] \[=2\left[\int_{A}r_{\lambda}(\psi(t),t)\,dt+\int_{B}r_{\lambda}( \psi(t),t)\,dt\right]\] \[\geq 2\left[\int_{A}r_{\lambda}\left(\frac{-t}{2},t\right)\,dt+ \int_{B}r_{\lambda}(\psi(t),t)\,dt\right]\] \[=2\int_{-\infty}^{\infty}r_{\lambda}(\psi^{*}(t),t)\,dt\] \[=R(\boldsymbol{\theta},\boldsymbol{\delta}_{\psi^{*}}),\ \ \boldsymbol{\theta}\in\Theta_{0}.\]
The proof of the following Corollary is contained in the proof of Theorem 2.1.1, and hence omitted.
**Corollary 2.1.1**.: Suppose that the assumptions (D1) and (D2) hold. Let \(\boldsymbol{\delta}_{\psi}(\mathbf{X})=(X_{1}-\psi(D),X_{2}+\psi(D))\) be a location equivariant estimator of \(\boldsymbol{\theta}\), where \(\psi:\Re\to\Re\) is such that \(P_{\boldsymbol{\theta}}\left(\psi(D)<\frac{-D}{2}\right)>0,\ \forall\ \boldsymbol{\theta}\in\Theta_{0}\). Let \(\psi_{0}:\Re\to\Re\) be such that \(\psi(t)\leq\psi_{0}(t)<\frac{-t}{2}\), whenever \(\psi(t)<\frac{-t}{2}\), and \(\psi_{0}(t)=\psi(t)\), whenever \(\psi(t)\geq\frac{-t}{2}\). Then, \(R(\boldsymbol{\theta},\boldsymbol{\delta}_{\psi_{0}})\leq R(\boldsymbol{ \theta},\boldsymbol{\delta}_{\psi}),\ \forall\ \boldsymbol{\theta}\in\Theta_{0}\), where \(\boldsymbol{\delta}_{\psi_{0}}(\mathbf{X})=(X_{1}-\psi_{0}(D),X_{2}+\psi_{0}( D))\).
Under the unrestricted parameter space \(\Theta=\Re^{2}\), it is easy to verify that the unrestricted best location equivariant estimator (BLEE) of \(\boldsymbol{\theta}\) is \(\boldsymbol{\delta}_{0}(\mathbf{X})=(X_{1},X_{2})\). Using Theorem 2.1.1, we conclude that the unrestricted BLEE \(\delta_{0}(\mathbf{X})\) is inadmissible for estimating \(\boldsymbol{\theta}\) and is dominated by the estimator
\[\boldsymbol{\delta}_{\psi_{0}^{*}}(\mathbf{X})=(X_{1}-\psi_{0}^{*}(D),X_{2}+ \psi_{0}^{*}(D))=\begin{cases}(X_{1},X_{2}),&\text{if }X_{1}\leq X_{2}\\ \left(\frac{X_{1}+X_{2}}{2},\frac{X_{1}+X_{2}}{2}\right),&\text{if }X_{1}>X_{2} \end{cases},\]
where \(\psi_{0}^{*}(D)=\max\left\{\frac{-D}{2},0\right\}\).
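The Stein-type improvement above is elementary to put into practice. The following is a minimal numerical sketch (our own illustration, not part of the original development; it uses the convention \(D=X_{2}-X_{1}\), consistent with the case distinction displayed above).

```python
def stein_improved(x1, x2):
    """Stein-type dominating estimator delta_{psi_0^*}: keep (X1, X2)
    when X1 <= X2, otherwise pool both coordinates to the common mean."""
    d = x2 - x1
    psi_star = max(-d / 2.0, 0.0)  # psi_0^*(D) = max{-D/2, 0}
    return x1 - psi_star, x2 + psi_star

print(stein_improved(1.0, 3.0))  # order respected -> (1.0, 3.0)
print(stein_improved(3.0, 1.0))  # order violated  -> (2.0, 2.0)
```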
Now we consider isotonic regression estimators (or mixed estimators) based on the unrestricted BLEE \(\boldsymbol{\delta}_{0}(\mathbf{X})=(X_{1},X_{2})\). Let \(\mathcal{D}=\{\boldsymbol{\delta}_{\alpha}:-\infty<\alpha<\infty\}\) be the class of isotonic regression estimators of \((\theta_{1},\theta_{2})\) based on the unrestricted BLEE
\((X_{1},X_{2})\), where
\[\boldsymbol{\delta}_{\alpha}(\mathbf{X}) =(\delta_{1,\alpha}(\mathbf{X}),\delta_{2,\alpha}(\mathbf{X}))\] \[=\begin{cases}(X_{1},X_{2}),&\text{if }X_{1}\leq X_{2}\\ (\alpha X_{1}+(1-\alpha)X_{2},(1-\alpha)X_{1}+\alpha X_{2}),&\text{if }X_{1}>X_{2} \end{cases}\] \[=(X_{1}-\psi_{\alpha}(D),X_{2}+\psi_{\alpha}(D)),\]
where \(\psi_{\alpha}(t)=\begin{cases}0,&\text{if }t\geq 0\\ -(1-\alpha)t,&\text{if }t<0\end{cases}.\)
Note that \(P_{\boldsymbol{\theta}}\left(\psi_{\alpha}(D)<\frac{-D}{2}\right)>0,\;\forall \;\boldsymbol{\theta}\in\Theta_{0}\), if, and only if, \(\alpha>\frac{1}{2}\) (a short verification follows below). Using Theorem 2.1.1 and Corollary 2.1.1, we conclude that, for \(\frac{1}{2}\leq\alpha_{0}<\alpha_{1}<\infty\), the estimator \(\boldsymbol{\delta}_{\alpha_{0}}\) dominates the estimator \(\boldsymbol{\delta}_{\alpha_{1}}\). For the independent normal and the bivariate normal probability models and the sum of squared errors loss function, the above consequences of Theorem 2.1.1 and Corollary 2.1.1 are obtained in Kumar and Sharma (1988) and Patra and Kumar (2017).
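For the reader's convenience, the verification of the threshold \(\alpha>\frac{1}{2}\) (our addition) is as follows. For \(t<0\),

\[\psi_{\alpha}(t)<\frac{-t}{2}\;\Longleftrightarrow\;(1-\alpha)|t|<\frac{|t|}{2}\;\Longleftrightarrow\;\alpha>\frac{1}{2},\]

while \(\psi_{\alpha}(t)=0\geqslant\frac{-t}{2}\) for \(t\geq 0\). Hence the event \(\{\psi_{\alpha}(D)<\frac{-D}{2}\}\) coincides with \(\{D<0\}\) when \(\alpha>\frac{1}{2}\) and is empty otherwise; whenever the density \(f\) is positive on \(\Re^{2}\) (as in the bivariate normal application of Section 3), \(P_{\boldsymbol{\theta}}(D<0)>0\) for every \(\boldsymbol{\theta}\in\Theta_{0}\).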
### A Class of Improved Estimators Over the BLEE \((X_{1},X_{2})\)
In this section, we apply the Kubokawa (1994) technique to obtain a class of estimators improving over the BLEE \((X_{1},X_{2})\). Further, we obtain the Brewster-Zidek (1974) type and the Stein (1964) type improved estimators over the BLEE.
Consider estimation of \((\theta_{1},\theta_{2})\) under the loss function (2.2), when it is known a priori that \(\boldsymbol{\theta}\in\Theta_{0}\). Throughout this section, we will assume that the function \(W(\cdot)\) is absolutely continuous and satisfies the assumption (D2).
The following lemma will be useful in proving the main results of this section. The proof of the lemma is straightforward and hence omitted.
**Lemma 2.2.1**.: Let \(s_{0}\in\Re\) and let \(M:\Re\to\Re\) be such that \(M(s)\leq 0,\;\forall\;s<s_{0}\), and \(M(s)\geq 0,\;\forall\;s>s_{0}\). Let \(M_{i}:\Re\to[0,\infty),\;i=1,2\), be non-negative functions such that \(M_{1}(s)M_{2}(s_{0})\geq(\leq)\,M_{1}(s_{0})M_{2}(s),\;\forall\;s<s_{0}\), and \(M_{1}(s)M_{2}(s_{0})\leq(\geq)\;M_{1}(s_{0})M_{2}(s),\;\forall\;s>s_{0}\). Then,
\[M_{2}(s_{0})\int\limits_{-\infty}^{\infty}M(s)\,M_{1}(s)ds\leq\;(\geq)\;M_{1} (s_{0})\int\limits_{-\infty}^{\infty}M(s)\,M_{2}(s)ds.\]
The facts stated in the following lemma are well known in the theory of stochastic orders (see Shaked and Shanthikumar (2007)). The proof of the lemma is straightforward, and hence omitted.
**Lemma 2.2.2**.: If, for any fixed \(\Delta\geq 0\) and \(t\in\Re\), \(\frac{f(s,s+t-\Delta)}{f(s,s+t)}\) is increasing (decreasing) in \(s\), then \(\frac{\int_{-\infty}^{t-\Delta}f(s,s+y)dy}{\int_{-\infty}^{t}f(s,s+y)dy}\) is increasing (decreasing) in \(s\) and \(\frac{f(s,s+t)}{\int_{-\infty}^{t}f(s,s+y)dy}\) is decreasing (increasing) in \(s\).
In the following theorem, we provide a class of estimators that improve upon the
BLEE \(\boldsymbol{\delta}_{0}(\boldsymbol{X})=(X_{1},X_{2})\).
**Theorem 2.2.1**.: Suppose that the assumptions (D1) and (D2) hold. Let \(\boldsymbol{\delta}_{\psi}(\boldsymbol{X})=(X_{1}-\psi(D),X_{2}+\psi(D))\) be a location equivariant estimator of \((\theta_{1},\theta_{2})\) such that \(\psi(t)\) is decreasing (increasing) in \(t\), \(\lim_{t\to\infty}\psi(t)=0\) and \(\int_{-\infty}^{\infty}\int_{-\infty}^{t}W^{{}^{\prime}}(s-\psi(t))\;f(s,s+y) \,dy\,ds\,\geq\,(\leq)\,0,\;\forall\;t\). Then
\[R(\boldsymbol{\theta},\boldsymbol{\delta}_{\psi})\leq R(\boldsymbol{\theta}, \boldsymbol{\delta}_{0}),\;\;\;\forall\;\;\boldsymbol{\theta}\in\Theta_{0}.\]
Proof.: Let us fix \(\boldsymbol{\theta}\in\Theta_{0}\) and let \(\lambda=\theta_{2}-\theta_{1}\), so that \(\lambda\geq 0\). Let \(Z_{i}=X_{i}-\theta_{i},\;i=1,2\), and \(Z=Z_{2}-Z_{1}\). Consider the risk difference
\[\Delta(\lambda) =R(\boldsymbol{\theta},\boldsymbol{\delta}_{0})-R(\boldsymbol{ \theta},\boldsymbol{\delta}_{\psi})\] \[=2E_{\boldsymbol{\theta}}[W(Z_{1})-W(Z_{1}-\psi(Z+\lambda))]\] \[=2E_{\boldsymbol{\theta}}\left[\int_{Z+\lambda}^{\infty}\Big{\{} \frac{d}{dt}W(Z_{1}-\psi(t))\Big{\}}\;dt\right]\] \[=2E_{\boldsymbol{\theta}}\left[\int_{Z}^{\infty}(-\psi^{{}^{ \prime}}(t+\lambda))W^{{}^{\prime}}(Z_{1}-\psi(t+\lambda))\;dt\right]\] \[=-2\int_{-\infty}^{\infty}\psi^{{}^{\prime}}(t+\lambda)E_{ \boldsymbol{\theta}}[W^{{}^{\prime}}(Z_{1}-\psi(t+\lambda))\;I_{(-\infty,t]}(Z )\;]\,dt,\]
where, for any set \(A\), \(I_{A}(\cdot)\) denotes its indicator function. Since \(\psi(t)\) is a decreasing (increasing) function of \(t\), it suffices to show that, for every \(t\) and \(\lambda\geq 0\),
\[E_{\boldsymbol{\theta}}[W^{{}^{\prime}}(Z_{1}-\psi(t+\lambda))\;I_{(-\infty,t ]}(Z)\;]\geq\;(\leq)\,0. \tag{2.5}\]
Since \(W^{{}^{\prime}}(t)\) is an increasing function of \(t\) and \(\psi(t)\) is a decreasing (increasing) function of \(t\), for \(\lambda\geq 0\), we have
\[E_{\boldsymbol{\theta}}[W^{{}^{\prime}}(Z_{1}-\psi(t+\lambda)) \;I_{(-\infty,t]}(Z)\;] \geq\,(\leq)\,E_{\boldsymbol{\theta}}[W^{{}^{\prime}}(Z_{1}-\psi(t ))\;I_{(-\infty,t]}(Z)\;]\] \[=\int_{-\infty}^{\infty}\int_{-\infty}^{t}W^{{}^{\prime}}(s-\psi (t))\;f(s,s+y)\,dy\,ds\]
which, in turn, implies (2.5).
The following corollary to the above theorem provides the Brewster-Zidek (1974) type (B-Z type) improvement over the BLEE \(\boldsymbol{\delta}_{0}(\mathbf{X})=(X_{1},X_{2})\).
**Corollary 2.2.1**.: **(i)** Suppose that, for any fixed \(\Delta\geq 0\) and \(t\), \(\frac{\int_{-\infty}^{t-\Delta}f(s,s+y)dy}{\int_{-\infty}^{t}f(s,s+y)dy}\) is increasing (decreasing) in \(s\). Further suppose that, for every fixed \(t\), the equation
\[k_{1}(c|t)=\int_{-\infty}^{\infty}\int_{-\infty}^{t}\;W^{{}^{\prime}}(s-c)\;f( s,s+y)\,dy\,ds=0\]
has the unique solution \(c\equiv\psi_{0,1}(t)\). Then
\[R(\boldsymbol{\theta},\boldsymbol{\delta}_{\psi_{0,1}})\leq R(\boldsymbol{ \theta},\boldsymbol{\delta}_{0}),\;\;\;\forall\;\;\boldsymbol{\theta}\in\Theta _{0},\]
where \(\boldsymbol{\delta}_{\psi_{0,1}}(\boldsymbol{X})=(X_{1}-\psi_{0,1}(D),X_{2}+\psi_{0, 1}(D))\).
**(ii)** In addition to the assumptions of (i) above, suppose that \(\psi_{1,1}:\Re\to\Re\) is such that \(\psi_{1,1}(t)\leq\,(\geq)\,\psi_{0,1}(t),\ \forall\ t,\ \psi_{1,1}(t)\) is decreasing (increasing) in \(t\) and \(\lim_{t\to\infty}\,\psi_{1,1}(t)=0\). Then
\[R(\boldsymbol{\theta},\boldsymbol{\delta}_{\psi_{1,1}})\leq R(\boldsymbol{ \theta},\boldsymbol{\delta}_{0}),\ \ \forall\ \boldsymbol{\theta}\in\Theta_{0},\]
where \(\boldsymbol{\delta}_{\psi_{1,1}}(\boldsymbol{X})=(X_{1}-\psi_{1,1}(D),X_{2}+ \psi_{1,1}(D))\).
Proof.: It suffices to show that \(\psi_{0,1}(t)\) satisfies the conditions of Theorem 2.2.1. Note that the hypotheses of the corollary ensure that \(\lim_{t\to\infty}\psi_{0,1}(t)=0\). To show that \(\psi_{0,1}(t)\) is a decreasing (increasing) function of \(t\), suppose that there exist numbers \(t_{1}\) and \(t_{2}\) such that \(t_{1}<t_{2}\) and \(\psi_{0,1}(t_{1})\neq\psi_{0,1}(t_{2})\). We have \(k_{1}(\psi_{0,1}(t_{1})|t_{1})=0\). Also, using the hypotheses of the corollary and the assumption (D1), it follows that \(\psi_{0,1}(t_{2})\) is the unique solution of \(k_{1}(c|t_{2})=0\) and that \(k_{1}(c|t_{2})\) is a decreasing function of \(c\). Let \(s_{0}=\psi_{0,1}(t_{1}),\ M(s)=W^{{}^{\prime}}(s-s_{0}),\ M_{1}(s)=\int_{- \infty}^{t_{2}}f(s,s+y)dy\) and \(M_{2}(s)=\int_{-\infty}^{t_{1}}f(s,s+y)dy\). Then, under assumption (D1), using Lemma 2.2.1, we get
\[\int_{-\infty}^{t_{1}}f(\psi_{0,1}(t_{1}),\psi_{0,1}(t_{1})+y)\,dy\,\left(\int _{-\infty}^{\infty}\,W^{{}^{\prime}}(s-\psi_{0,1}(t_{1}))\,\int_{-\infty}^{t_ {2}}f(s,s+y)\,dy\,ds\right)\]
\[\leq\,(\geq)\,\int_{-\infty}^{t_{2}}f(\psi_{0,1}(t_{1}),\psi_{0,1}(t_{1})+y) dy\,\left(\int_{-\infty}^{\infty}\,W^{{}^{\prime}}(s-\psi_{0,1}(t_{1}))\, \int_{-\infty}^{t_{1}}f(s,s+y)\,dy\,ds\right)=0.\]
\[\Longrightarrow\quad k_{1}(\psi_{0,1}(t_{1})|t_{2})=\int_{-\infty}^{\infty} \int_{-\infty}^{t_{2}}\,W^{{}^{\prime}}(s-\psi_{0,1}(t_{1}))f(s,s+y)\,dy\,ds \leq\ (\geq)\ 0.\]
This implies that \(k_{1}(\psi_{0,1}(t_{1})|t_{2})<\,(>)\,0\), as \(k_{1}(c|t_{2})=0\) has the unique solution \(c\equiv\psi_{0,1}(t_{2})\) and \(\psi_{0,1}(t_{1})\neq\psi_{0,1}(t_{2})\). Since \(k_{1}(c|t_{2})\) is a decreasing function of \(c,k_{1}(\psi_{0,1}(t_{2})|t_{2})=0\) and \(k_{1}(\psi_{0,1}(t_{1})|t_{2})<\,(>)\,0\), it follows that \(\psi_{0,1}(t_{1})>(<)\psi_{0,1}(t_{2})\).
The proof of part (ii) is an immediate by-product of Theorem 2.2.1 using the fact that, for any \(t\), \(k_{1}(c|t)\) is a decreasing function of \(c\in\Re\).
**Remark 2.2.1**.: It is straightforward to see that the Brewster-Zidek (1974) type estimator \(\delta_{\psi_{0,1}}(\cdot)\), derived in Corollary 2.2.1 (i), is the generalized Bayes estimator with respect to the non-informative prior \(\pi(\theta_{1},\theta_{2})=1,\ (\theta_{1},\theta_{2})\in\Theta_{0}\).
In the following section, we will provide an application of results derived in Sections 2.1-2.2 and validate the results through a simulation study.
## 3 An Application and a Simulation Study: Bivariate Normal Distribution
Let \(\mathbf{X}=(X_{1},X_{2})\sim BVN(\theta_{1},\theta_{2},\sigma^{2},\sigma^{2},\rho)\), where \((\theta_{1},\theta_{2})\in\Theta_{0}\) is the vector of unknown means, \(\sigma>0\) is the common known standard deviation and \(\rho\in(-1,1)\) is the known correlation coefficient. The joint pdf of \((Z_{1},Z_{2})=(X_{1}-\theta_{1},X_{2}-\theta_{2})\) is
\[f(z_{1},z_{2})=\frac{1}{2\pi\sigma^{2}\sqrt{1-\rho^{2}}}e^{-\frac{1}{2(1-\rho^{ 2})\sigma^{2}}[z_{1}^{2}-2\rho\,z_{1}z_{2}+z_{2}^{2}]},\ \ \ \mathbf{z}=(z_{1},z_{2})\in\Re^{2}.\]
Consider estimation of \((\theta_{1},\theta_{2})\) under the general loss function
\[L(\boldsymbol{\theta},\mathbf{a})=W(a_{1}-\theta_{1})+W(a_{2}-\theta_{2}),\ \boldsymbol{\theta}=(\theta_{1},\theta_{2})\in\Theta_{0},\ \mathbf{a}=(a_{1},a_{2})\in\Re^{2}, \tag{3.1}\]
where \(W:\Re\to\Re\) is such that the assumption (D1) holds.
Let \(\boldsymbol{\delta}_{\psi}(\mathbf{X})=(X_{1}-\psi(D),X_{2}+\psi(D))\) be a location equivariant estimator of \(\boldsymbol{\theta}\) and let \(\psi^{*}(t)=\max\{\psi(t),\frac{-t}{2}\},\ t\in\Re,\) be as defined in Theorem 2.1.1. Using Theorem 2.1.1, it follows that, if \(P_{\boldsymbol{\theta}}\left[\psi(D)<\frac{-D}{2}\right]>0,\ \forall\ \boldsymbol{\theta}\in \Theta_{0},\) then the estimator \(\boldsymbol{\delta}_{\psi}(\mathbf{X})\) is inadmissible for estimating \(\boldsymbol{\theta}\) and is dominated by \(\boldsymbol{\delta}_{\psi^{*}}(\mathbf{X})=(X_{1}-\psi^{*}(D),X_{2}+\psi^{*}(D)).\)
The unrestricted BLEE of \(\boldsymbol{\theta}\) is \(\boldsymbol{\delta}_{0}(\mathbf{X})=(X_{1},X_{2}).\) In particular, the BLEE \((X_{1},X_{2})\) is improved upon by
\[\boldsymbol{\delta}_{RMLE}(\mathbf{X}) =\left(X_{1}-\max\Big{\{}0,\frac{-D}{2}\Big{\}},X_{2}+\max\Big{\{} 0,\frac{-D}{2}\Big{\}}\right)\] \[=\left(\min\Big{\{}X_{1},\frac{X_{1}+X_{2}}{2}\Big{\}},\max\Big{\{} X_{2},\frac{X_{1}+X_{2}}{2}\Big{\}}\right). \tag{3.2}\]
It is easy to verify that \(\boldsymbol{\delta}_{RMLE}\) is the restricted maximum likelihood estimator (MLE) of \(\boldsymbol{\theta}\) under the restricted parameter space \(\Theta_{0}\) (see Kumar and Sharma (1988) and Patra and Kumar (2017)).
When \(W(t)=t^{2},\ t\in\Re,\) using Corollary 2.2.1, under the loss function (3.1), the Brewster-Zidek (1974) type (B-Z type) improvement over the BLEE \((X_{1},X_{2})\) is
\[\boldsymbol{\delta}_{\psi_{0,1}}(\boldsymbol{X})=\left(X_{1}-\frac{\tau}{2} \ \frac{\phi\left(\frac{D}{\tau}\right)}{\Phi\left(\frac{D}{\tau}\right)},X_{2}+ \frac{\tau}{2}\ \frac{\phi\left(\frac{D}{\tau}\right)}{\Phi\left(\frac{D}{\tau}\right)} \right), \tag{3.3}\]
where \(\tau=\sigma\sqrt{2(1-\rho)}\), and \(\phi(\cdot)\) and \(\Phi(\cdot)\) are the p.d.f. and the d.f. of the standard normal distribution, respectively. When \(W(t)=|t|,\ t\in\Re,\) using Corollary 2.2.1, under the loss function (3.1), the Brewster-Zidek (1974) type (B-Z type) improvement over the BLEE \((X_{1},X_{2})\) is
\[\boldsymbol{\delta}_{\psi_{0,1}}(\boldsymbol{X})=\left(X_{1}-C(D),X_{2}+C(D) \right), \tag{3.4}\]
where \(C\equiv C(D)\) is the solution of the following equation
\[\int_{-\infty}^{C}\Phi\left(\frac{D+s(1-\rho)}{\sigma\sqrt{1-\rho^{2}}}\right) \phi\left(\frac{s}{\sigma}\right)ds=\frac{\sigma}{2}\Phi\left(\frac{D}{\tau} \right). \tag{3.5}\]
Note that the estimators, given by (3.3) and (3.4), are the generalized Bayes estimators with respect to the non-informative prior density (the Lebesgue measure) on \(\Theta_{0}\) and the loss function (3.1), with \(W(t)=t^{2},\ t\in\Re,\) and \(W(t)=|t|,\ t\in\Re,\) respectively.
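The estimator (3.3) is explicit, while (3.4) requires solving equation (3.5) numerically for each observed value of \(D\). The following computational sketch (our own illustration, assuming NumPy/SciPy are available; the root bracket is heuristic and not taken from the paper) implements both estimators.

```python
import numpy as np
from scipy import integrate, optimize
from scipy.stats import norm

def bz_squared(x1, x2, sigma, rho):
    """B-Z type estimator (3.3) under the squared error loss."""
    d = x2 - x1
    tau = sigma * np.sqrt(2.0 * (1.0 - rho))
    c = 0.5 * tau * norm.pdf(d / tau) / norm.cdf(d / tau)
    return x1 - c, x2 + c

def bz_absolute(x1, x2, sigma, rho):
    """B-Z type estimator (3.4); C(D) is the root of equation (3.5)."""
    d = x2 - x1
    tau = sigma * np.sqrt(2.0 * (1.0 - rho))
    rhs = 0.5 * sigma * norm.cdf(d / tau)
    integrand = lambda s: (norm.cdf((d + s * (1.0 - rho))
                                    / (sigma * np.sqrt(1.0 - rho ** 2)))
                           * norm.pdf(s / sigma))
    eq = lambda c: integrate.quad(integrand, -np.inf, c)[0] - rhs
    c = optimize.brentq(eq, -20.0 * sigma, 20.0 * sigma)  # heuristic bracket
    return x1 - c, x2 + c

print(bz_squared(1.2, 1.0, sigma=1.0, rho=0.5))
print(bz_absolute(1.2, 1.0, sigma=1.0, rho=0.5))
```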
**Simulation Study:**
For the above bivariate normal model, we considered estimation of the vector \(\boldsymbol{\theta}=(\theta_{1},\theta_{2})\) of means when it is known a priori that they satisfy the order restriction \(\theta_{1}\leq\theta_{2}\).
For estimation of \(\boldsymbol{\theta}\), we obtained estimators (given by (3.2), (3.3) and (3.4)) improving on the BLEE \((X_{1},X_{2})\). The improved estimator (3.2) is the same as the restricted maximum likelihood estimator (MLE), and the improved estimators (3.3) and (3.4) are the generalized Bayes (GB) estimators with respect to the squared error loss and the absolute error loss, respectively. To further evaluate the performances of the improved estimators, in this section we compare the risk performances of the BLEE \((X_{1},X_{2})\), the restricted MLE (as defined in (3.2)) and the generalized Bayes (GB) estimators (as defined in (3.3) and (3.4)) numerically, through Monte Carlo simulations; a schematic sketch of such a comparison is given below. For the simulation study, we take \(W(t)=t^{2},\ t\in\Re\) (i.e., sum of squared error losses) and \(W(t)=|t|,\ t\in\Re\) (i.e., sum of absolute error losses). The simulated risks of the BLEE, the restricted MLE and the GB estimators have been computed.
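A minimal sketch of such a Monte Carlo comparison (our own illustration, assuming NumPy/SciPy; the values of \(\sigma\), \(\rho\) and \(\lambda=\theta_{2}-\theta_{1}\) are arbitrary) is the following.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulated_risks(theta1, theta2, sigma=1.0, rho=0.5, n=10_000):
    """Monte Carlo risks, under the sum of squared error losses, of the
    BLEE, the restricted MLE (3.2) and the GB estimator (3.3)."""
    cov = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])
    x1, x2 = rng.multivariate_normal([theta1, theta2], cov, size=n).T
    risk = lambda e1, e2: np.mean((e1 - theta1) ** 2 + (e2 - theta2) ** 2)

    m = 0.5 * (x1 + x2)                       # pooled mean
    d = x2 - x1
    tau = sigma * np.sqrt(2.0 * (1.0 - rho))
    c = 0.5 * tau * norm.pdf(d / tau) / norm.cdf(d / tau)

    return (risk(x1, x2),                                # BLEE
            risk(np.minimum(x1, m), np.maximum(x2, m)),  # restricted MLE
            risk(x1 - c, x2 + c))                        # GB (3.3)

for lam in (0.0, 0.5, 1.0, 2.0):
    print(lam, simulated_risks(0.0, lam))
```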
For simulations, 10,000 random samples of size 1 were generated from the relevant bivariate normal distribution. The simulated values of the risks of the BLEE, the restricted MLE and the GB estimator under the sum of squared error loss functions and the sum of absolute error loss functions are plotted in Figure 1 and Figure 2, respectively. The following observations are evident from Figure 1 and Figure 2:
(i) The restricted MLE and the GB estimator perform better than the BLEE, which is in conformity with our theoretical findings;
(ii) The performance of the restricted MLE is significantly better when \(\theta_{1}\) and \(\theta_{2}\) (\(\theta_{1}\leq\theta_{2}\)) are close; otherwise, the GB estimator performs better;
(iii) There is no clear-cut winner between the restricted MLE and the GB estimator;
(iv) Also, note that the performance of both the GB estimators (given by (3.3) and (3.4)) remains the same, relative to the other two estimators, under both loss functions, the squared error loss function and the absolute error loss function.
Figure 1: Risk plots of estimators of location parameter \(\mathbf{\theta}\) against values of \(\theta_{2}-\theta_{1}\): when \(W(t)=t^{2},\;t\in\Re\).
Figure 2: Risk plots of estimators of location parameter \(\mathbf{\theta}\) against values of \(\theta_{2}-\theta_{1}\): when \(W(t)=|t|,\;t\in\Re\).
## 4 Real Life Data Analysis
We consider "the dental study data", discussed by Potthoff and Roy (1964), that is presented in Table 1. This study was conducted at the University of North Carolina Dental School. In this study, the size (in millimeters) of the pituitary fissure was measured in children of different ages. To test the bivariate normality of this data set, we applied the Henze-Zirkler and Anderson-Darling multivariate normality tests and observed p-values of 0.474 and 0.288, respectively, suggesting that there is no significant departure from normality. Here, it is reasonable to assume that the size of the pituitary fissure increases with age. We performed the paired t-test and got the p-value \(=0.008\) in favour of the assumption that the pituitary fissure increases with age. We also performed the variance equality test of the data with respect to 8 year and 10 year and got the p-value \(=0.542\). As a result, we can say that the variances of both datasets are the same.
Now, to illustrate the findings of our paper, suppose that the data of 5 girls and 8 boys presented in Table 2 are reported (for reference, see p. 2 of Robertson et al. (1988)). Let \(X_{1}\) and \(X_{2}\) be random variables representing the average size of the pituitary fissure of 8 year old and 10 year old children, respectively. Then \((X_{1},X_{2})\) follows a bivariate normal distribution with means \(\theta_{1}\) and \(\theta_{2}\), common variance \(\sigma^{2}\) and correlation coefficient \(\rho\). We use the common sample variance and the sample correlation of the data of Table 1 as the plug-in values \(\sigma^{2}=\frac{5.435}{13}=0.418\) and \(\rho=0.626\). We know that \(\theta_{1}\leq\theta_{2}\). By exploiting this information, we obtain improvements over the unrestricted estimators (based on the data in Table 2) of \(\theta_{1}\) and \(\theta_{2}\).
At the beginning of this section, we saw that, under the general loss function (3.1), the improved BLEE (the Stein-type estimator (3.2)) dominates the unrestricted BLEE \((X_{1},X_{2})\). The improved BLEE (the restricted MLE) of \((\theta_{1},\theta_{2})\) is
\[\biggl{(}\min\Big{\{}X_{1},\frac{X_{1}+X_{2}}{2}\Big{\}},\max\Big{\{}X_{2}, \frac{X_{1}+X_{2}}{2}\Big{\}}\biggr{)}=(22.86,22.86).\]
Based on the theoretical results and the simulation studies, we infer that \(22.86\) should be taken as the common estimated value of \(\theta_{1}\) and \(\theta_{2}\), rather than \(23.077\) and \(22.654\), respectively. Also, the Brewster-Zidek (1974) type improved estimated values (as defined by (3.3) and (3.4)) under the squared error loss and the absolute error loss are \((22.77,22.96)\) and \((22.71,23.03)\), respectively.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Girl/Boy & 8 year & 10 year \\ \hline \hline Girl & 21 & 20 \\ Girl & 21 & 21.5 \\ Girl & 20.5 & 24 \\ Girl & 23.5 & 24.5 \\ Girl & 21.5 & 23 \\ Girl & 20 & 21 \\ Girl & 21.5 & 22.5 \\ Girl & 23 & 23 \\ Girl & 20 & 21 \\ Girl & 16.5 & 19 \\ Girl & 24.5 & 25 \\ Boy & 26 & 25 \\ Boy & 21.5 & 22.5 \\ Boy & 23 & 22.5 \\ Boy & 25.5 & 27.5 \\ Boy & 20 & 23.5 \\ Boy & 24.5 & 25.5 \\ Boy & 22 & 22 \\ Boy & 24 & 21.5 \\ Boy & 23 & 20.5 \\ Boy & 27.5 & 28 \\ Boy & 23 & 23 \\ Boy & 21.5 & 23.5 \\ Boy & 17 & 24.5 \\ Boy & 22.5 & 25.5 \\ Boy & 23 & 24.5 \\ Boy & 22 & 21.5 \\ \hline \hline Mean & 22.185 & 23.167 \\ \hline \end{tabular}
\end{table}
Table 1: The size of pituitary fissure of children at different ages.
## 5 Conclusions
In this paper, we considered simultaneous estimation of the order restricted location parameters of a general bivariate symmetric model, under a general loss function. We used the Stein technique to obtain truncated estimators and the Kubokawa (or IERD) technique to obtain a class of smooth estimators, which dominate the BLEE. Additionally, we obtained the Brewster-Zidek smooth estimator, which is also the generalized Bayes estimator. Our findings demonstrate that the Stein-type estimator is robust, as it does not depend on the choice of the probability model or the loss function. We conducted a simulation study to confirm the findings of the paper, and provided a real-life application of the results obtained in the paper.
One may consider extending the results of the paper from the general bivariate distribution to a general multivariate distribution. This seems to be a challenging problem, and it may be taken up in our future research.
#### Financial disclosure
This work was supported by the Council of Scientific and Industrial Research (CSIR) under Grant [number 09/092(0986)/2018].
#### Conflict of interest
The authors declare no conflict of interest.
|
2309.15229 | Fourier type operators on Orlicz spaces and the role of Orlicz Lebesgue
exponents | We deduce continuity and (global) wave-front properties of classes of Fourier
multipliers, pseudo-differential, and Fourier integral operators when acting on
Orlicz spaces, or more generally, on Orlicz-Sobolev type spaces. In particular,
we extend H{\"o}rmander's improvement of Mihlin's Fourier multiplier theorem to
the framework of Orlicz spaces. We also show how Young functions $\Phi$ of the
Orlicz spaces are linked to properties of certain Lebesgue exponents $p_\Phi$
and $q_\Phi$ emerged from $\Phi$. | Matteo Bonino, Sandro Coriasco, Albin Petersson, Joachim Toft | 2023-09-26T19:45:33Z | http://arxiv.org/abs/2309.15229v2 | # Fourier type operators on Orlicz spaces and the role of Orlicz Lebesgue exponents
###### Abstract.
We deduce continuity properties of classes of Fourier multipliers, pseudo-differential and Fourier integral operators when acting on Orlicz spaces. Especially we show that classical results, like Hörmander's improvement of Mihlin's Fourier multiplier theorem, are extendable to the framework of Orlicz spaces. We also show how some properties of the Young functions \(\Phi\) of the Orlicz spaces are linked to properties of certain Lebesgue exponents \(p_{\Phi}\) and \(q_{\Phi}\) emerging from \(\Phi\).
Key words and phrases: Orlicz, quasi-Banach, quasi-Young functionals. 2010 Mathematics Subject Classification: primary 35S05, 46E30, 46A16, 42B35; secondary 46F10.
## 0. Introduction
Orlicz spaces, introduced by W. Orlicz in 1932 [12], are Banach spaces which generalize the classical \(L^{p}\) spaces (see Section 1 for notations). Orlicz spaces are denoted by \(L^{\Phi}\), where \(\Phi\) is a Young function, and we obtain the usual \(L^{p}\) spaces, \(1\leqslant p<\infty\), by choosing \(\Phi(t)=t^{p}\). For more facts on Orlicz spaces, see [14].
An advantage of Orlicz spaces is that they are suitable when solving certain problems where \(L^{p}\) spaces are insufficient. As an example, consider the entropy of a probability density function \(f\) given by
\[E(f)=-\int f(\xi)\log f(\xi)\,d\xi.\]
In this case, it may be more suitable to work with an Orlicz norm estimate, for instance with \(\Phi(t)=t\log(1+t)\), as opposed to \(L^{1}\) norm estimates.
The literature on Orlicz spaces is rich, see e.g. [1, 4, 8, 9, 11, 13] and the references therein. Recent investigations also put pseudo-differential operators in the framework of Orlicz modulation spaces (cf [19], see also [15, 20] for further properties on Orlicz modulation spaces). In this paper, we deal with pseudo-differential operators as well as Fourier multipliers in Orlicz spaces.
Results pertaining to continuity properties on \(L^{p}\)-spaces are well-established. Our approach is to utilize a Marcinkiewicz interpolation-type theorem by Liu and Wang in [7] to extend such continuity properties to also hold on Orlicz spaces. As an initial example, the methods described in the subsequent sections allow us to obtain the following extension of Mihlin's Fourier multiplier theorem (see [10] for the original theorem).
**Theorem 0.1** (Mihlin).: _Let \(\Phi\) be a strict Young function and \(a\in L^{\infty}(\mathbf{R}^{d}\setminus\{0\})\) be such that_
\[\sup_{\xi\neq 0}\left(|\xi|^{|\alpha|}\,|\partial^{\alpha}a(\xi)|\right)\]
_is finite for every \(\alpha\in\mathbf{N}^{d}\) with \(|\alpha|\leqslant[\frac{d}{2}]+1\). Then \(a(D)\) is continuous on \(L^{\Phi}(\mathbf{R}^{d})\)._
In fact, we also obtain Hörmander's improvement of Mihlin's Fourier multiplier theorem (cf [5]) in the context of Orlicz spaces. This result can be found in Section 3 (Theorem 3.4). In a similar manner, we obtain continuity results for pseudo-differential operators of order \(0\) in Orlicz spaces as well, see Theorem 3.3. Finally, we show a continuity result for a broad class of Fourier integral operators, under a condition on the order of the amplitude (that is, a loss of derivatives and decay), see Theorem 3.5.
Sections 1 and 2 also include investigations of the Lebesgue exponents \(p_{\Phi}\) and \(q_{\Phi}\) constructed from the Young function \(\Phi\), which are important for the interpolation theorem. These parameters were described in [7], where it was claimed that
\[p_{\Phi}<\infty\iff\Phi\text{ fulfills the }\Delta_{2}\text{ condition} \tag{0.1}\]
and
\[q_{\Phi}>1\iff\Phi\text{ is strictly convex}. \tag{0.2}\]
In Section 2, we confirm that (0.1) is correct, but that neither logical implication of (0.2) is correct. Instead, other conditions on \(\Phi\) are found which characterize \(q_{\Phi}>1\) (see Proposition 2.1). At the same time, we deduce a weaker form of the equivalence (0.2) and show that if \(q_{\Phi}>1\), then there is an equivalent Young function to \(\Phi\) which is strictly convex (see Proposition 2.4).
## 1. Preliminaries
In this section we recall some facts on Orlicz spaces and pseudo-differential operators. Especially we recall Lebesgue exponents given in e. g. [7] and explain some of their features.
### Orlicz Spaces
In this subsection we provide an overview of some basic definitions and state some technical results that will be needed. First, we recall the definition of weak \(L^{p}\) spaces.
**Definition 1.1**.: Let \(p\in(0,\infty]\). The _weak \(L^{p}\) space_\(wL^{p}(\mathbf{R}^{d})\) consists of all Lebesgue measurable functions \(f:\mathbf{R}^{d}\to\mathbf{C}\) for which
\[\|f\|_{wL^{p}}\equiv\sup_{t>0}t\left(\mu_{f}(t)\right)^{\frac{1}{p}} \tag{1.1}\]
is finite. Here \(\mu_{f}(t)\) is the Lebesgue measure of the set \(\{\,x\in\mathbf{R}^{d}\,;\,|f(x)|>t\,\}\).
_Remark 1.2_.: Notice that the \(wL^{p}\)-norm is not a true norm, since the triangular inequality fails. Nevertheless, one has that \(\|f\|_{wL^{p}}\leqslant\|f\|_{L^{p}}\). In particular, \(L^{p}(\mathbf{R}^{d})\) is continuously embedded in \(wL^{p}(\mathbf{R}^{d})\).
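A standard example showing that the embedding is strict (added here for illustration): for \(0<p<\infty\), the function \(f(x)=|x|^{-d/p}\) satisfies

\[\mu_{f}(t)=\big{|}\{\,x\in\mathbf{R}^{d}\,;\,|x|<t^{-p/d}\,\}\big{|}=v_{d}\,t^{-p},\qquad t>0,\]

where \(v_{d}\) is the volume of the unit ball in \(\mathbf{R}^{d}\). Hence \(\|f\|_{wL^{p}}=v_{d}^{1/p}<\infty\), while \(\int|f(x)|^{p}\,dx=\int|x|^{-d}\,dx=\infty\), so that \(f\in wL^{p}(\mathbf{R}^{d})\setminus L^{p}(\mathbf{R}^{d})\).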
Next, we recall some facts concerning Young functions and Orlicz spaces. (See [4, 14].)
**Definition 1.3**.: A function \(\Phi:\mathbf{R}\to\mathbf{R}\cup\{\infty\}\) is called _convex_ if
\[\Phi(s_{1}t_{1}+s_{2}t_{2})\leqslant s_{1}\Phi(t_{1})+s_{2}\Phi(t_{2})\]
when \(s_{j},t_{j}\in\mathbf{R}\) satisfy \(s_{j}\geqslant 0\) and \(s_{1}+s_{2}=1,\ j=1,2\).
We observe that \(\Phi\) might not be continuous, because we permit \(\infty\) as function value. For example,
\[\Phi(t)=\begin{cases}c,&\text{when }t\leqslant a\\ \infty,&\text{when }t>a\end{cases}\]
is convex but discontinuous at \(t=a\).
**Definition 1.4**.: Let \(\Phi\) be a function from \([0,\infty)\) to \([0,\infty]\). Then \(\Phi\) is called a _Young function_ if
1. \(\Phi\) is convex,
2. \(\Phi(0)=0\),
3. \(\lim_{t\to\infty}\Phi(t)=+\infty\).
It is clear that \(\Phi\) in Definition 1.4 is non-decreasing, because if \(0\leqslant t_{1}\leqslant t_{2}\) and \(s\in[0,1]\) is chosen such that \(t_{1}=st_{2}\), then
\[\Phi(t_{1})=\Phi(st_{2}+(1-s)0)\leqslant s\Phi(t_{2})+(1-s)\Phi(0)\leqslant \Phi(t_{2}),\]
since \(\Phi(0)=0\) and \(s\in[0,1]\).
The Young functions \(\Phi_{1}\) and \(\Phi_{2}\) are called _equivalent_, if there is a constant \(C\geqslant 1\) such that
\[C^{-1}\Phi_{2}(t)\leqslant\Phi_{1}(t)\leqslant C\Phi_{2}(t),\qquad t\in[0, \infty].\]
We recall that a Young function is said to fulfill the _\(\Delta_{2}\)-condition_ if there is a constant \(C\geqslant 1\) such that
\[\Phi(2t)\leqslant C\Phi(t),\qquad\qquad\qquad t\in[0,\infty].\]
We also introduce the following condition. A Young function is said to fulfill the _\(\Lambda\)-condition_ if there is a \(p>1\) such that
\[\Phi(ct)\leqslant c^{p}\Phi(t),\qquad\qquad\qquad t\in[0,\infty],\ c\in(0,1]. \tag{1.2}\]
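Two standard examples may serve as orientation (added here for illustration): \(\Phi(t)=t^{p}\) with \(p>1\) satisfies both conditions, since

\[\Phi(2t)=2^{p}\Phi(t)\qquad\text{and}\qquad\Phi(ct)=c^{p}\Phi(t),\quad c\in(0,1],\]

whereas \(\Phi(t)=e^{t}-1\) fails the \(\Delta_{2}\)-condition, since \(\Phi(2t)/\Phi(t)=e^{t}+1\to\infty\) as \(t\to\infty\).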
The following characterization of Young functions fulfilling the \(\Delta_{2}\)-condition follows from the fact that any Young function is increasing. The verifications are left for the reader.
**Proposition 1.5**.: _Let \(\Phi\) be a Young function. Then the following conditions are equivalent:_
1. \(\Phi\) _satisfies the_ \(\Delta_{2}\)_-condition;_
2. _for every constant_ \(c>0\)_, the Young function_ \(t\mapsto\Phi(ct)\) _is equivalent to_ \(\Phi\)_;_
3. _for some constant_ \(c>0\) _with_ \(c\neq 1\)_, the Young function_ \(t\mapsto\Phi(ct)\) _is equivalent to_ \(\Phi\)
The _upper_ and _lower Lebesgue exponents_ for a Young function \(\Phi\) are defined by
\[p_{\Phi}\equiv\sup_{t>0}\left(\frac{t\Phi_{+}^{\prime}(t)}{\Phi(t)}\right)=\sup_{t >0}\left(\frac{t\Phi_{-}^{\prime}(t)}{\Phi(t)}\right) \tag{1.3}\]
and
\[q_{\Phi}\equiv\inf_{t>0}\left(\frac{t\Phi_{+}^{\prime}(t)}{\Phi(t)}\right)=\inf _{t>0}\left(\frac{t\Phi_{-}^{\prime}(t)}{\Phi(t)}\right), \tag{1.4}\]
respectively. We recall that these exponents are essential in the analysis in [7]. We observe that for any \(r_{1},r_{2}>0\),
\[t^{p_{\Phi}}\lesssim\Phi(t)\lesssim t^{q_{\Phi}}\quad\text{when}\quad t\leqslant r _{1} \tag{1.5}\]
and
\[t^{q_{\Phi}}\lesssim\Phi(t)\lesssim t^{p_{\Phi}}\quad\text{when}\quad t \geqslant r_{2}. \tag{1.6}\]
In order to shed some light on this, as well as to demonstrate arguments used in the next section, we show these relations here.
By (1.3) we obtain
\[\frac{t\Phi_{+}^{\prime}(t)}{\Phi(t)}-p_{\Phi}\leqslant 0\quad\Leftrightarrow \quad\left(\frac{\Phi(t)}{t^{p_{\Phi}}}\right)_{+}^{\prime}\leqslant 0.\]
Hence \(\Phi(t)=t^{p_{\Phi}}h(t)\) for some decreasing function \(h(t)>0\). This gives
\[\Phi(t)=t^{p_{\Phi}}h(t)\geqslant t^{p_{\Phi}}h(r_{1})\gtrsim t^{p_{\Phi}}\]
for \(t\leqslant r_{1}\) and
\[\Phi(t)=t^{p_{\Phi}}h(t)\leqslant t^{p_{\Phi}}h(r_{2})\lesssim t^{p_{\Phi}}\]
for \(t\geqslant r_{2}\). This shows the relations between \(t^{p_{\Phi}}\) and \(\Phi(t)\) in (1.5) and (1.6). The remaining relations follow in similar ways.
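As a quick numerical illustration (our own sketch, assuming NumPy), the exponents \(p_{\Phi}\) and \(q_{\Phi}\) can be approximated by evaluating the ratio \(t\Phi^{\prime}(t)/\Phi(t)\) on a logarithmic grid.

```python
import numpy as np

def lebesgue_exponents(Phi, tmin=1e-8, tmax=1e8, n=200_000):
    """Approximate p_Phi = sup t*Phi'(t)/Phi(t) and q_Phi = inf of the
    same ratio, with Phi'(t) estimated by central differences."""
    t = np.geomspace(tmin, tmax, n)
    h = 1e-6 * t
    ratio = t * (Phi(t + h) - Phi(t - h)) / (2.0 * h) / Phi(t)
    return ratio.max(), ratio.min()

print(lebesgue_exponents(lambda t: t ** 3))           # ~ (3, 3)
# For Phi(t) = t*log(1+t) the ratio decreases from 2 towards 1, so the
# grid minimum only approaches q_Phi = 1 as tmax grows.
print(lebesgue_exponents(lambda t: t * np.log1p(t)))  # ~ (2, 1.05)
```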
In our investigations we need to assume that our Young functions are _strict_ in the following sense.
**Definition 1.6**.: The Young function \(\Phi\) from \([0,\infty)\) to \([0,\infty]\) is called _strict_ or a _strict Young function_, if
1. \(\Phi(t)<\infty\) for every \(t\in[0,\infty)\),
2. \(\Phi\) satisfies the \(\Delta_{2}\)-condition,
3. \(\Phi\) satisfies the \(\Lambda\)-condition.
In Section 2 we give various kinds of characterizations of the conditions (2) and (3) in Definition 1.6. In particular we show that (2) and (3) in Definition 1.6 are equivalent to \(p_{\Phi}<\infty\) and \(q_{\Phi}>1\), respectively. (See Proposition 2.3.)
It will also be useful to rely on regular Young functions, which is possible due to the following proposition.
**Proposition 1.7**.: _Let \(\Phi\) be a Young function which satisfies the \(\Delta_{2}\) condition. Then there is a Young function \(\Psi\) such that the following is true:_
1. \(\Psi\) _is equivalent to_ \(\Phi\) _and_ \(\Psi\leqslant\Phi\)_;_
2. \(\Psi\) _is smooth on_ \(\mathbf{R}_{+}\)_;_
3. \(\Psi_{+}^{\prime}(0)=\Phi_{+}^{\prime}(0)\)
Proof.: Let \(\phi\in C_{0}^{\infty}[0,1]\) be such that \(\phi\geqslant 0\) and \(\int_{0}^{1}\phi(s)\,ds=1\). Put
\[\Psi(t)=\int_{0}^{1}\Phi(t-\tfrac{1}{2}st)\phi(s)\,ds.\]
Then using this formula and
\[\Psi(t)=\frac{2}{t}\int_{t/2}^{t}\Phi(u)\,\phi\Big{(}2-\frac{2u}{t}\Big{)}\,du,\]
we reach the result.
It follows that \(\Psi\) in Proposition 1.7 fulfills the \(\Delta_{2}\) condition, because \(\Phi\) satisfies that condition and \(\Psi\) is equivalent to \(\Phi\).
**Definition 1.8**.: Let \(\Phi\) be a Young function. The _Orlicz space_\(L^{\Phi}(\mathbf{R}^{d})\) consists of all Lebesgue measurable functions \(f:\mathbf{R}^{d}\to\mathbf{C}\) such that
\[\|f\|_{L^{\Phi}}\equiv\inf\bigg{\{}\;\lambda>0\,;\;\int_{\mathbf{R}^{d}}\Phi \left(\frac{|f(x)|}{\lambda}\right)dx\leqslant 1\,\bigg{\}}\]
is finite.
**Definition 1.9**.: Let \(\Phi\) be a Young function. The _weak Orlicz space_\(wL^{\Phi}(\mathbf{R}^{d})\) consists of all Lebesgue measurable functions \(f:\mathbf{R}^{d}\to\mathbf{C}\) such that
\[\|f\|_{wL^{\Phi}}\equiv\inf\bigg{\{}\;\lambda>0\,;\;\sup_{t>0}\left(\Phi\left( \frac{t}{\lambda}\right)\mu_{f}(t)\right)\leqslant 1\,\bigg{\}}\]
is finite. Here \(\mu_{f}(t)\) is the Lebesgue measure of the set \(\{\,x\in\mathbf{R}^{d}\,;\,|f(x)|>t\,\}\).
In accordance with the usual Lebesgue spaces, \(f,g\in wL^{\Phi}(\mathbf{R}^{d})\) are equivalent whenever \(f=g\) a. e.
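On the computational side, the Luxemburg norm in Definition 1.8 can be approximated by bisection, since \(\lambda\mapsto\int\Phi(|f(x)|/\lambda)\,dx\) is non-increasing in \(\lambda\). The following sketch (our own illustration, assuming NumPy, a uniform grid and a sampled \(f\)) demonstrates this; for \(\Phi(t)=t^{2}\) it recovers the \(L^{2}\) norm.

```python
import numpy as np

def luxemburg_norm(f, dx, Phi, tol=1e-10):
    """Bisection for inf{lambda > 0 : sum Phi(|f|/lambda)*dx <= 1}."""
    g = lambda lam: np.sum(Phi(np.abs(f) / lam)) * dx
    lo, hi = 1e-12, 1.0
    while g(hi) > 1.0:          # enlarge until feasible
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 1.0 else (lo, mid)
    return hi

x = np.linspace(-10.0, 10.0, 20_001)
dx = x[1] - x[0]
f = np.exp(-x ** 2)
print(luxemburg_norm(f, dx, lambda t: t ** 2))  # Luxemburg norm for t^2
print(np.sqrt(np.sum(f ** 2) * dx))             # discrete L^2 norm: same
```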
### Pseudo-differential operators
Let \(\mathbf{M}(d,\Omega)\) be the set of all \(d\times d\)-matrices with entries in the set \(\Omega\), and let \(a\in\mathscr{S}(\mathbf{R}^{2d})\) and \(A\in\mathbf{M}(d,\mathbf{R})\) be fixed. Then the pseudo-differential operator \(\operatorname{Op}_{A}(a)\) is the linear and continuous operator on \(\mathscr{S}(\mathbf{R}^{d})\), given by
\[(\operatorname{Op}_{A}(a)f)(x)=(2\pi)^{-d}\iint a(x-A(x-y),\xi)f(y)e^{i\langle x -y,\xi\rangle}\,dyd\xi, \tag{1.7}\]
when \(f\in\mathscr{S}(\mathbf{R}^{d})\). For general \(a\in\mathscr{S}^{\prime}(\mathbf{R}^{2d})\), the pseudo-differential operator \(\operatorname{Op}_{A}(a)\) is defined as the linear and continuous operator from \(\mathscr{S}(\mathbf{R}^{d})\) to \(\mathscr{S}^{\prime}(\mathbf{R}^{d})\) with distribution kernel given by
\[K_{a,A}(x,y)=(2\pi)^{-d/2}(\mathscr{F}_{2}^{-1}a)(x-A(x-y),x-y). \tag{1.8}\]
Here \(\mathscr{F}_{2}F\) is the partial Fourier transform of \(F(x,y)\in\mathscr{S}^{\prime}(\mathbf{R}^{2d})\) with respect to the \(y\) variable. This definition makes sense, since the mappings
\[\mathscr{F}_{2}\quad\text{and}\quad F(x,y)\mapsto F(x-A(x-y),x-y) \tag{1.9}\]
are homeomorphisms on \(\mathscr{S}^{\prime}(\mathbf{R}^{2d})\). In particular, the map \(a\mapsto K_{a,A}\) is a homeomorphism on \(\mathscr{S}^{\prime}(\mathbf{R}^{2d})\).
An important special case appears when \(A=t\cdot I\), with \(t\in\mathbf{R}\). Here and in what follows, \(I\in\mathbf{M}(d,\mathbf{R})\) denotes the \(d\times d\) identity matrix. In this case we set
\[\operatorname{Op}_{t}(a)=\operatorname{Op}_{t\cdot I}(a).\]
The normal or Kohn-Nirenberg representation, \(a(x,D)\), is obtained when \(t=0\), and the Weyl quantization, \(\operatorname{Op}^{w}(a)\), is obtained when \(t=\frac{1}{2}\). That is,
\[a(x,D)=\operatorname{Op}_{0}(a)\quad\text{and}\quad\operatorname{Op}^{w}(a)= \operatorname{Op}_{1/2}(a).\]
For any \(K\in\mathscr{S}^{\prime}(\mathbf{R}^{d_{1}+d_{2}})\), we let \(T_{K}\) be the linear and continuous mapping from \(\mathscr{S}(\mathbf{R}^{d_{1}})\) to \(\mathscr{S}^{\prime}(\mathbf{R}^{d_{2}})\), defined by the formula
\[(T_{K}f,g)_{L^{2}(\mathbf{R}^{d_{2}})}=(K,g\otimes\overline{f})_{L^{2}( \mathbf{R}^{d_{1}+d_{2}})}. \tag{1.10}\]
It is well-known that if \(A\in\mathbf{M}(d,\mathbf{R})\), then it follows from the Schwartz kernel theorem that \(K\mapsto T_{K}\) and \(a\mapsto\operatorname{Op}_{A}(a)\) are bijective mappings from \(\mathscr{S}^{\prime}(\mathbf{R}^{2d})\) to the set of linear and continuous mappings from \(\mathscr{S}(\mathbf{R}^{d})\) to \(\mathscr{S}^{\prime}(\mathbf{R}^{d})\) (cf. e. g. [6]).
In particular, for every \(a_{1}\in\mathscr{S}^{\prime}(\mathbf{R}^{2d})\) and \(A_{1},A_{2}\in\mathbf{M}(d,\mathbf{R})\), there is a unique \(a_{2}\in\mathscr{S}^{\prime}(\mathbf{R}^{2d})\) such that \(\operatorname{Op}_{A_{1}}(a_{1})=\operatorname{Op}_{A_{2}}(a_{2})\). The following result explains the relations between \(a_{1}\) and \(a_{2}\).
**Proposition 1.10**.: _Let \(a_{1},a_{2}\in\mathscr{S}^{\prime}(\mathbf{R}^{2d})\) and \(A_{1},A_{2}\in\mathbf{M}(d,\mathbf{R})\). Then_
\[\operatorname{Op}_{A_{1}}(a_{1})=\operatorname{Op}_{A_{2}}(a_{2})\quad \Leftrightarrow\quad e^{i(A_{2}D_{\xi},D_{x})}a_{2}(x,\xi)=e^{i(A_{1}D_{\xi}, D_{x})}a_{1}(x,\xi).\]
In [18], a proof of the previous proposition is given, which is similar to the proof of the case \(A=t\cdot I\) in [6, 17, 21].
Let \(r,\rho,\delta\in\mathbf{R}\) be such that \(0\leqslant\delta\leqslant\rho\leqslant 1\) and \(\delta<1\). Then we recall that the Hörmander class \(S^{r}_{\rho,\delta}(\mathbf{R}^{2d})\) consists of all \(a\in C^{\infty}(\mathbf{R}^{2d})\) such that
\[\sum_{|\alpha|,|\beta|\leqslant N}\sup_{x,\xi\in\mathbf{R}^{d}}\Big{(}\langle \xi\rangle^{-r+\rho|\alpha|-\delta|\beta|}|D^{\alpha}_{\xi}D^{\beta}_{x}a(x, \xi)|\Big{)}\]
is finite for every integer \(N\geqslant 0\).
We recall the following continuity property for pseudo-differential operators acting on \(L^{p}\)-spaces (see e. g. [22]).
**Proposition 1.11**.: _Let \(p\in(1,\infty)\), \(A\in\mathbf{M}(d,\mathbf{R})\) and \(a\in S^{0}_{1,0}(\mathbf{R}^{2d})\). Then \(\operatorname{Op}_{A}(a)\) is continuous on \(L^{p}(\mathbf{R}^{d})\)._
In the next proposition we essentially recall Hörmander's improvement of Mihlin's Fourier multiplier theorem.
**Proposition 1.12**.: _Let \(p\in(1,\infty)\) and \(a\in L^{\infty}(\mathbf{R}^{d}\setminus 0)\) be such that_
\[\sup_{R>0}\left(R^{-d+2|\alpha|}\int_{A_{R}}|\partial^{\alpha}a(\xi)|^{2}\,d\xi\right) \tag{1.11}\]
_is finite for every \(\alpha\in\mathbf{N}^{d}\) with \(|\alpha|\leqslant[\frac{d}{2}]+1\), where \(A_{R}\) is the annulus \(\{\,\xi\in\mathbf{R}^{d}\,;\,R<|\xi|<2R\,\}\). Then \(a(D)\) is continuous on \(L^{p}(\mathbf{R}^{d})\)._
### Fourier integral operators of \(SG\) type
We recall that the so-called \(SG\)-symbol class \(S^{m,\mu}(\mathbf{R}^{2d})\), \(m,\mu\in\mathbf{R}\), consists of all \(a\in C^{\infty}(\mathbf{R}^{2d})\) such that
\[\sum_{|\alpha|,|\beta|\leqslant N}\sup_{x,\xi\in\mathbf{R}^{d}}\Big{(}\langle x \rangle^{-m+|\alpha|}\langle\xi\rangle^{-\mu+|\beta|}|D^{\alpha}_{x}D^{\beta} _{\xi}a(x,\xi)|\Big{)}\]
is finite for every integer \(N\geqslant 0\). Following [3], we say that \(\varphi\in C^{\infty}(\mathbf{R}^{d}\times(\mathbf{R}^{d}\setminus 0))\) is a phase-function if it is real-valued, positively \(1\)-homogeneous
with respect to \(\xi\), that is, \(\varphi(x,\tau\xi)=\tau\varphi(x,\xi)\) for all \(\tau>0\), \(x,\xi\in{\bf R}^{d}\), \(\xi\neq 0\), and satisfies, for all \(x,\xi\in{\bf R}^{d}\), \(\xi\neq 0\),
\[\begin{split}|\det\partial_{x}\partial_{\xi}\varphi(x,\xi)|\geq C >0,\quad\partial_{x}^{\alpha}\varphi(x,\xi)\prec\langle x\rangle^{1-|\alpha|} |\xi|\text{ for all }\alpha\in{\bf N}^{d},\\ \langle\varphi_{\xi}^{\prime}(x,\xi)\rangle\sim\langle x\rangle, \ \langle\varphi_{x}^{\prime}(x,\xi)\rangle\sim\langle\xi\rangle.\end{split} \tag{1.12}\]
In the sequel, we will denote the set of all such phase-functions by \(\mathfrak{P}_{r}^{\rm hom}\).
For any \(a\in S^{m,\mu}({\bf R}^{2d})\) and \(\varphi\in\mathfrak{P}_{r}^{\rm hom}\), the Fourier integral operator \({\rm Op}_{\varphi}(a)\) is the linear and continuous operator from \(\mathscr{S}({\bf R}^{d})\) to \(\mathscr{S}^{\prime}({\bf R}^{d})\), given by
\[({\rm Op}_{\varphi}(a)f)(x)=\int_{{\bf R}^{d}}e^{i\varphi(x,\xi)}a(x,\xi) \widehat{f}(\xi)\,d\xi,\quad f\in\mathscr{S}({\bf R}^{d}). \tag{1.13}\]
We recall the following (global on \({\bf R}^{d}\)) \(L^{p}\)-boundedness result, proved in [3].
**Theorem 1.13**.: _Let \(p\in(1,\infty)\), \(m,\mu\in{\bf R}\) be such that_
\[m\leq-(d-1)\left|\frac{1}{p}-\frac{1}{2}\right|\text{ and }\mu\leq-(d-1) \left|\frac{1}{p}-\frac{1}{2}\right|, \tag{1.14}\]
_and suppose that \(a\in S^{m,\mu}({\bf R}^{2d})\) is such that \(|\xi|\geq\varepsilon\), for some \(\varepsilon>0\), on the support of \(a\). Then \({\rm Op}_{\varphi}(a)\) from \(\mathscr{S}({\bf R}^{d})\) to \(\mathscr{S}^{\prime}({\bf R}^{d})\) extends uniquely to a continuous operator on \(L^{p}({\bf R}^{d})\)._
_Remark 1.14_.: As it is well-known, in view of the presence of a phase function \(\varphi\in\mathfrak{P}_{r}^{\rm hom}\), assumed different from \(\varphi(x,\xi)=x\cdot\xi\) (for which (1.13) actually becomes a pseudo-differential operator), the uniform boundedness of the amplitude \(a\) is, in general, not enough to guarantee that \({\rm Op}_{\varphi}(a)\) continuously maps \(L^{p}\) into itself, even if the support of \(f\) is compact (see the celebrated paper [16]), except when \(p=2\). This is, of course, in strong contrast with Proposition 1.11. Notice, in (1.14), the _loss of decay_ (that is, the condition on the \(x\)-order \(m\) of the amplitude), together with the well-known _loss of smoothness_ (that is, the condition on the \(\xi\)-order \(\mu\) of the amplitude). Notice also that no condition of compactness of the support of \(f\) is needed in Theorem 1.13 (see [3] and the references quoted therein for more details).
## 2. The role of upper and lower Lebesgue exponents for Young functions
In this section we investigate the Orlicz Lebesgue exponents \(p_{\Phi}\) and \(q_{\Phi}\) and link conditions on these exponents to various properties of their Young functions \(\Phi\). Especially we show that both implications in (0.2) involving \(q_{\Phi}\) are wrong (see Proposition 2.4). Instead we deduce other conditions on \(\Phi\) which characterize \(q_{\Phi}>1\) (see Propositions 2.1 and 2.3).
In the following proposition we list some basic properties of relations between Young functions and their upper and lower Lebesgue exponents.
**Proposition 2.1**.: _Let \(\Phi\) be a Young function which is non-zero outside the origin, and let \(q_{\Phi}\) and \(p_{\Phi}\) be as in (1.4) and (1.3). Then the following is true:_
1. \(1\leqslant q_{\Phi}\leqslant p_{\Phi}\)_;_
2. \(p_{\Phi}=1\)_, if and only if_ \(\Phi\) _is a linear map;_
_._
3. \(p_{\Phi}<\infty\)_, if and only if_ \(\Phi\) _fulfills the_ \(\Delta_{2}\)_-condition;_
4. \(q_{\Phi}>1\)_, if and only if there is a_ \(p>1\) _such that_ \(\frac{\Phi(t)}{t^{p}}\) _increases._
_Remark 2.2_.: Taking into account that \(\Phi\) in Proposition 2.1 is a Young function, we find that (4) is equivalent to
1. \(q_{\Phi}>1\)_, if and only if there is a_ \(p>1\) _such that_ \(\frac{\Phi(t)}{t^{p}}\) _increases,_ \[\lim_{t\to 0+}\frac{\Phi(t)}{t^{p}}=0\quad\text{and}\quad\lim_{t\to \infty}\frac{\Phi(t)}{t^{p}}=\infty.\]
Most of Proposition 2.1 and Remark 2.2 are well-known. In order to be self-contained we here present a proof.
Proof of Proposition 2.1.: Since \(\Phi\) and its left and right derivatives are increasing, the mean-value theorem gives that for some \(c=c_{t}\in[0,1]\), we have
\[\Phi(t)=\Phi(t)-\Phi(0)\leqslant t\Phi_{+}^{\prime}(ct)\leqslant t\Phi_{+}^{\prime}(t).\]
This gives (1).
If \(\Phi\) is linear, then \(\frac{t\Phi^{\prime}(t)}{\Phi(t)}=1\), giving that \(q_{\Phi}=p_{\Phi}=1\). Suppose instead that \(p_{\Phi}=1\). Then
\[\frac{t\Phi^{\prime}(t)}{\Phi(t)}=1,\]
in view of (1) and its proof. This implies that \(\Phi(t)=Ct\) for some constant \(C\), and (2) follows.
In order to prove (3), we first suppose that \(p_{\Phi}<\infty\). Then
\[\frac{t\Phi^{\prime}_{+}(t)}{\Phi(t)}\leqslant R\quad\Leftrightarrow\quad t \Phi^{\prime}_{+}(t)-R\Phi(t)\leqslant 0,\]
for some \(R>0\). Since \(\Phi(0)=0\), we obtain
\[\Phi(t)=t^{R}h(t),\quad t>0,\]
for some positive decreasing function \(h(t)\), \(t>0\). This gives
\[\Phi(2t)=(2t)^{R}h(2t)\leqslant 2^{R}t^{R}h(t)=2^{R}\Phi(t),\]
and it follows that \(\Phi\) satisfies the \(\Delta_{2}\)-condition when \(p_{\Phi}<\infty\).
Suppose instead that \(\Phi\) satisfies the \(\Delta_{2}\)-condition. By the mean-value theorem and the fact that \(\Phi^{\prime}_{+}(t)\) is increasing we obtain
\[\Phi^{\prime}_{+}(t)t\leqslant\Phi(2t)-\Phi(t)\leqslant\Phi(2t)\leqslant C \Phi(t),\]
for some constant \(C>0\). Here the last inequality follows from the fact that \(\Phi\) satisfies the \(\Delta_{2}\)-condition. This gives
\[\frac{t\Phi^{\prime}_{+}(t)}{\Phi(t)}\leqslant C,\]
giving that \(p_{\Phi}\leqslant C<\infty\), and we have proved (3).
Next we prove (4). Suppose that \(q_{\Phi}>1\). Then there is a \(p>1\) such that
\[\frac{t\Phi^{\prime}_{\pm}(t)}{\Phi(t)}>p\]
for all \(t>0\), which gives
\[t\Phi^{\prime}_{\pm}(t)-p\Phi(t)>0.\]
Hence
\[\frac{t^{p}\Phi^{\prime}_{\pm}(t)-pt^{p-1}\Phi(t)}{t^{2p}}>0,\]
or equivalently
\[\left(\frac{\Phi(t)}{t^{p}}\right)^{\prime}_{\pm}>0.\]
Hence, the result now holds. If we instead suppose that \(\frac{\Phi(t)}{t^{p}}\) is increasing for some \(p>1\), then applying the arguments above in reverse order now yields \(q_{\Phi}\geqslant p>1\).
For the equivalence in (4) of Proposition 2.1 we note further.
**Proposition 2.3**.: _Let \(\Phi\) be a Young function which is non-zero outside the origin, and let \(q_{\Phi}\) be as in (1.4). Then the following conditions are equivalent:_
1. \(q_{\Phi}>1\)_;_
2. _there is a_ \(p>1\) _such that_ \(\frac{\Phi(t)}{t^{p}}\) _increases;_
3. _there are_ \(p,q>1\) _such that_ \(\frac{\Phi(t)}{t^{p}}\) _increases near the origin and_ \(\frac{\Phi(t)}{t^{q}}\) _increases at infinity;_
4. _there is a_ \(p>1\) _such that for every_ \(t>0\) _and every_ \(c\in(0,1]\)_,_ \(\Phi(ct)\leqslant c^{p}\Phi(t)\)_._
Proof.: The equivalence of (1) and (2) was established in Proposition 2.1. Trivially, (2) implies (3). Moreover, \(\frac{\Phi(t)}{t^{p}}\) increases if and only if for any \(t>0\) and any \(c\in(0,1]\),
\[\frac{\Phi(ct)}{(ct)^{p}}\leqslant\frac{\Phi(t)}{t^{p}}\]
which is equivalent to (4), hence (2) is equivalent to (4). We will now show that (3) implies (1), yielding the result.
Suppose that (3) holds. Then, by assumption, there are \(p,q>1\) and \(R_{1},R_{2}>0\) such that \(\frac{\Phi(t)}{t^{p}}\) is increasing on \((0,R_{1})\) and \(\frac{\Phi(t)}{t^{q}}\) is increasing on \((R_{2},\infty)\), so that
\[q_{1}=\inf_{t\in(0,R_{1})}\left(\frac{t\Phi^{\prime}_{+}(t)}{\Phi(t)}\right) \geqslant p>1\quad\text{and}\quad q_{3}=\inf_{t\in(R_{2},\infty)}\left(\frac{ t\Phi^{\prime}_{+}(t)}{\Phi(t)}\right)\geqslant q>1.\]
Let \(q_{2}=\inf_{t\in[R_{1},R_{2}]}\frac{t\Phi^{\prime}_{+}(t)}{\Phi(t)}.\) We want to show that \(q_{2}>1\), which will in turn yield \(q_{\Phi}=\inf\{q_{1},q_{2},q_{3}\}>1\), completing the proof.
Let \(\varphi_{1}(t)=k_{1}t-m_{1}\) and \(\varphi_{2}(t)=k_{2}t-m_{2}\), with \(k_{j}=\Phi^{\prime}_{+}(R_{j})\) and \(m_{j}\) chosen so that \(\varphi_{j}(R_{j})=\Phi(R_{j})\), \(j=1,2\). Given that \(\Phi\) is a Young function, is convex, and fulfills (3), it is clear that \(k_{1}\leqslant k_{2}\), \(m_{1}\leqslant m_{2}\) and \(m_{j}>0\) for \(j=1,2\).
We now approximate \(\Phi(t)\) with linear segments forming polygonal chains for \(R_{1}\leqslant t\leqslant R_{2}\). Pick points \(R_{1}=t_{0}<t_{1}<\cdots<t_{n}=R_{2}\) and define functions \(f_{j}(t)=a_{j}t-b_{j}\) such that \(f_{j}(t_{j})=\Phi(t_{j})\) and \(f_{j}(t_{j+1})=\Phi(t_{j+1})\). Let \(\Phi_{n}(t)\) be the polygonal chain on \([R_{1},R_{2}]\) formed by connecting the functions \(f_{j}\), meaning \(\Phi_{n}(t)=f_{j}(t)\) whenever \(t\in[t_{j},t_{j+1}]\).
Since \(\Phi\) is convex and increasing, we have \(k_{1}\leqslant a_{j}\leqslant k_{2}\) and \(m_{1}\leqslant b_{j}\leqslant m_{2}\) for all \(j=1,\ldots,n\). Hence, for any \(j=1,\ldots,n\),
\[\inf_{t\in[t_{j},t_{j+1}]}\left(\frac{t(f_{j})^{\prime}_{+}(t)}{f_{j}(t)}\right)=\inf_{t\in[t_{j},t_{j+1}]}\left(1+\frac{b_{j}}{a_{j}t-b_{j}}\right)=1+\frac{b_{j}}{a_{j}t_{j+1}-b_{j}}\geqslant 1+\frac{m_{1}}{\Phi(R_{2})},\]
where the last inequality follows from the fact that \(b_{j}\geqslant m_{1}\) and \(a_{j}t_{j+1}-b_{j}=f_{j}(t_{j+1})=\Phi(t_{j+1})\leqslant\Phi(R_{2})\). From this, it is clear that
\[q_{\Phi_{n}}=\inf_{t\in[R_{1},R_{2}]}\left(\frac{t(\Phi_{n})_{+}^{\prime}(t)}{\Phi_{n}(t)}\right)\geqslant 1+\frac{m_{1}}{\Phi(R_{2})}\]
independent of the choice of \(n\) and the points \(t_{j}\), \(j=1,\ldots n-1\), and therefore
\[q_{2}=\lim_{n\to\infty}q_{\Phi_{n}}\geqslant 1+\frac{m_{1}}{\Phi(R_{2})}>1.\]
This gives (1), completing the proof.
The following proposition shows that the condition \(q_{\Phi}>1\) cannot be linked to strict convexity for the Young function \(\Phi\).
**Proposition 2.4**.: _Let \(\Phi\) and \(\Psi\) be Young functions which are non-zero outside the origin, and let \(q_{\Phi}\) be as in (1.4). Then the following is true:_
1. _if_ \(q_{\Phi}>1\)_, then there is an equivalent Young function to_ \(\Phi\) _which is strictly convex;_
2. \(\Phi\) _can be chosen such that_ \(q_{\Phi}>1\) _but_ \(\Phi\) _is not strictly convex;_
3. \(\Phi\) _can be chosen such that_ \(q_{\Phi}=1\) _but_ \(\Phi\) _is strictly convex._
_Remark 2.5_.: In [7] it is stated that (1) in Proposition 2.4 can be replaced by
1. \(q_{\Phi}>1\), if and only if \(\Phi\) is strictly convex.
This is equivalent to the following two conditions holding:
1. if \(q_{\Phi}>1\), then \(\Phi\) is strictly convex;
2. if \(\Phi\) is strictly convex, then \(q_{\Phi}>1\).
(See remark after (1.1) in [7].) Evidently, the assertion in [7] is (strictly) stronger than Proposition 2.4 (1). On the other hand, Proposition 2.4 (2) shows that \((2)^{\prime}\) cannot be true and Proposition 2.4 (3) shows that \((3)^{\prime}\) cannot be true. Consequently, both implications in \((1)^{\prime}\) are false.
Proof of Proposition 2.4.: We begin by proving (1). Therefore assume that \(q_{\Phi}>1\). Suppose that \(\Phi\) fails to be strictly convex on \((0,\varepsilon)\), for some \(\varepsilon>0\). Then \(\Phi^{\prime\prime}(t)=0\) when \(t\in(0,\varepsilon)\). This implies that \(\Phi(t)=ct\) when \(t\in(0,\varepsilon)\), for some \(c\geqslant 0\), which in turn gives \(q_{\Phi}=1\), violating the condition \(q_{\Phi}>1\). Hence \(\Phi\) must be strictly convex on \((0,\varepsilon)\), for some choice of \(\varepsilon>0\).
Let
\[\Psi(t)=\int_{0}^{t}\Phi(t-s)e^{-s}\,ds.\]
Then
\[\Psi^{\prime\prime}(t)=\Phi^{\prime}(0)e^{-t}+\int_{0}^{t}\Phi^{\prime\prime}(t-s)e^{-s}\,ds\geq\int_{t-\varepsilon}^{t}\Phi^{\prime\prime}(t-s)e^{-s}\,ds>0,\]
since \(\Phi^{\prime\prime}(t-s)>0\) when \(s\in(t-\varepsilon,t)\). This shows that \(\Psi\) is a strictly convex Young function.
Since \(\Phi\) is increasing we also have
\[\Psi(t)\leqslant\Phi(t),\]
because
\[\Psi(t)=\int_{0}^{t}\Phi(t-s)e^{-s}\,ds\leq\Phi(t)\int_{0}^{t}e^{-s}\,ds\leq \Phi(t)\int_{0}^{\infty}e^{-s}\,ds=\Phi(t).\]
This implies that
\[\Phi_{1}(t)\equiv\Phi(t)+\Psi(t)\]
is equivalent to \(\Phi(t)\). Since \(\Psi\) is strictly convex, it follows that \(\Phi_{1}\) is strictly convex as well. Consequently, \(\Phi_{1}\) fulfills the required conditions for the sought Young function, and (1) follows.
In order to prove (2), we choose
\[\Phi(t)=\begin{cases}2t^{2},&\text{when }t\leqslant 1\\ 4t-2,&\text{when }1\leqslant t\leqslant 2\\ t^{2}+2,&\text{when }t\geqslant 2\end{cases}\]
which is not strictly convex. Then
\[q_{\Phi} =\inf_{t>0}\left(\frac{t\Phi^{\prime}(t)}{\Phi(t)}\right)\] \[=\min\left\{\inf_{t\leqslant 1}\left(\frac{4t^{2}}{2t^{2}} \right),\inf_{1\leqslant t\leqslant 2}\left(\frac{4t}{4t-2}\right),\inf_{t \geqslant 2}\left(\frac{2t^{2}}{t^{2}+2}\right)\right\}=\frac{4}{3}>1,\]
which shows that \(\Phi\) satisfies all the searched properties. This gives (2).
Next we prove (3). Let
\[\Phi(t)=t\ln(1+t),\quad t\geqslant 0.\]
Then \(\Phi\) is a Young function, and it follows by straightforward computations (sketched after the proof) that \(q_{\Phi}=1\). We also have \(\Phi^{\prime\prime}(t)>0\), giving that \(\Phi\) is strictly convex. Consequently, \(\Phi\) satisfies all the sought properties, and (3) follows.
This gives the result.
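For completeness, here is the computation behind \(q_{\Phi}=1\) (and \(p_{\Phi}=2\)) for \(\Phi(t)=t\ln(1+t)\):

\[\frac{t\Phi^{\prime}(t)}{\Phi(t)}=\frac{t\ln(1+t)+\frac{t^{2}}{1+t}}{t\ln(1+t)}=1+\frac{t}{(1+t)\ln(1+t)},\]

and since \(0<t\leqslant(1+t)\ln(1+t)\) for \(t>0\), the ratio lies in \((1,2]\), tends to \(2\) as \(t\to 0+\) and to \(1\) as \(t\to\infty\). Hence \(q_{\Phi}=1\) and \(p_{\Phi}=2\).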
## 3. Continuity for pseudo-differential operators, Fourier multipliers, and Fourier integral operators on Orlicz spaces
In this section we extend properties of \(L^{p}\) continuity for various types of Fourier type operators into continuity on Orlicz spaces. Especially we perform such extensions for Hörmander's improvement of Mihlin's Fourier multiplier theorem (see Theorem 3.4). We also deduce Orlicz space continuity for suitable classes of pseudo-differential and Fourier integral operators (see Theorems 3.3 and 3.5). Our investigations are based on a special case of a Marcinkiewicz type interpolation theorem for Orlicz spaces, deduced in [7].
We now recall the following interpolation theorem on Orlicz spaces, which is a special case of [7, Theorem 5.1].
**Proposition 3.1**.: _Let \(\Phi\) be a strict Young function and let \(p_{0},p_{1}\in(0,\infty]\) be such that \(p_{0}<q_{\Phi}\leqslant p_{\Phi}<p_{1}\), where \(q_{\Phi}\) and \(p_{\Phi}\) are defined in (1.4) and (1.3). Also let_
\[T:L^{p_{0}}(\mathbf{R}^{d})+L^{p_{1}}(\mathbf{R}^{d})\to wL^{p_{0}}( \mathbf{R}^{d})+wL^{p_{1}}(\mathbf{R}^{d}) \tag{3.1}\]
_be a linear and continuous map which restricts to linear and continuous mappings_
\[T:L^{p_{0}}(\mathbf{R}^{d})\to\,wL^{p_{0}}(\mathbf{R}^{d})\qquad\text{and} \qquad T:\;\;L^{p_{1}}(\mathbf{R}^{d})\to wL^{p_{1}}(\mathbf{R}^{d}).\]
_Then (3.1) restricts to linear and continuous mappings_
\[T:\,L^{\Phi}(\mathbf{R}^{d})\to L^{\Phi}(\mathbf{R}^{d})\qquad\quad\text{ and}\qquad T:\,wL^{\Phi}(\mathbf{R}^{d})\to wL^{\Phi}(\mathbf{R}^{d}). \tag{3.2}\]
_Remark 3.2_.: Let \(\Phi\) and \(T\) be the same as in Proposition 3.1. Then the continuity of the mappings in (3.2) means
\[\|Tf\|_{L^{\Phi}}\lesssim\|f\|_{L^{\Phi}},\quad\ f\in L^{\Phi}(\mathbf{R}^{d})\]
and
\[\|Tf\|_{wL^{\Phi}}\lesssim\|f\|_{wL^{\Phi}},\ \ f\in wL^{\Phi}(\mathbf{R}^{d}).\]
A combination of Propositions 1.11 and 3.1 gives the following result on continuity properties for pseudo-differential operators on \(L^{\Phi}\)-spaces.
**Theorem 3.3**.: _Let \(\Phi\) be a strict Young function, \(A\in\mathbf{M}(d,\mathbf{R})\) and \(a\in S^{0}_{1,0}(\mathbf{R}^{2d})\). Then_
\[\operatorname{Op}_{A}(a):L^{\Phi}(\mathbf{R}^{d})\to L^{\Phi}(\mathbf{R}^{d}) \quad\text{and}\quad\operatorname{Op}_{A}(a):wL^{\Phi}(\mathbf{R}^{d})\to wL^ {\Phi}(\mathbf{R}^{d})\]
_are continuous._
Proof.: By Proposition 2.1 it follows that \(q_{\Phi}>1\) and \(p_{\Phi}<\infty\). Choose \(p_{0},p_{1}\in(1,\infty)\) such that \(p_{0}<q_{\Phi}\) and \(p_{1}>p_{\Phi}\). In view of Remark 1.2 and Proposition 1.11,
\[\|\operatorname{Op}_{A}(a)f\|_{wL^{p_{j}}}\leqslant\|\operatorname{Op}_{A}(a)f\|_{L^{p_{j}}}\leqslant C\|f\|_{L^{p_{j}}},\quad f\in L^{p_{j}}(\mathbf{R}^{d}),\ j=0,1. \tag{3.3}\]
Then it follows that \(\operatorname{Op}_{A}(a)\) extends uniquely to a continuous map from \(L^{p_{0}}(\mathbf{R}^{d})+L^{p_{1}}(\mathbf{R}^{d})\) to \(wL^{p_{0}}(\mathbf{R}^{d})+wL^{p_{1}}(\mathbf{R}^{d})\) (see e.g. [2]). Hence the conditions of Proposition 3.1 are fulfilled and the result follows.
By using Proposition 1.12 instead of Proposition 1.11 in the previous proof we obtain the following extension of Hörmander's improvement of Mihlin's Fourier multiplier theorem. The details are left for the reader.
**Theorem 3.4**.: _Let \(\Phi\) be a strict Young function and \(a\in L^{\infty}(\mathbf{R}^{d}\setminus 0)\) be such that_
\[\sup_{R>0}\left(R^{-d+2|\alpha|}\int_{A_{R}}|\partial^{\alpha}a(\xi)|^{2}\,d\xi\right) \tag{3.4}\]
_is finite for every \(\alpha\in\mathbf{N}^{d}\) with \(|\alpha|\leqslant[\frac{d}{2}]+1\), where \(A_{R}\) is the annulus \(\{\,\xi\in\mathbf{R}^{d}\,;\,R<|\xi|<2R\,\}\). Then \(a(D)\) is continuous on \(L^{\Phi}(\mathbf{R}^{d})\) and on \(wL^{\Phi}(\mathbf{R}^{d})\)._
Finally, employing Theorem 1.13, we prove the following continuity result for Fourier integral operators on \(L^{\Phi}\)-spaces.
**Theorem 3.5**.: _Let \(\Phi\) be a strict Young function, \(\varphi\in\mathfrak{P}^{\mathrm{hom}}_{r}\) a phase function, \(a\in S^{m,\mu}(\mathbf{R}^{2d})\) an amplitude function such that_
\[m<\mathfrak{T}_{d,\Phi}\ \text{and}\ \mu<\mathfrak{T}_{d,\Phi}, \tag{3.5}\]
_where_
\[\mathfrak{T}_{d,\Phi}=-(d-1)\max\left\{\left|\frac{1}{p_{\Phi}}-\frac{1}{2} \right|,\left|\frac{1}{q_{\Phi}}-\frac{1}{2}\right|\right\}.\]
_Moreover, assume that \(|\xi|\geq\varepsilon\) on the support of \(a\), for some \(\varepsilon>0\). Then,_
\[\operatorname{Op}_{\varphi}(a)\colon L^{\Phi}(\mathbf{R}^{d})\to L^{\Phi}( \mathbf{R}^{d})\quad\text{and}\quad\operatorname{Op}_{\varphi}(a):wL^{\Phi}( \mathbf{R}^{d})\to wL^{\Phi}(\mathbf{R}^{d})\]
_are continuous._
_Remark 3.6_.: Notice the strict inequality in (3.5), in contrast to condition (1.14) in Theorem 1.13 for the \(L^{p}\)-boundedness of the Fourier integral operators in (1.13). The sharpness of condition (3.5) will be investigated elsewhere.
Proof.: As above, by Proposition 2.1 it follows that \(q_{\Phi}>1\) and \(p_{\Phi}<\infty\). Choose \(p_{0},p_{1}\in(1,\infty)\) such that \(p_{0}<q_{\Phi}\) and \(p_{1}>p_{\Phi}\), and, as is possible by continuity and the hypothesis (3.5), such that
\[m<-(d-1)\left|\frac{1}{p_{j}}-\frac{1}{2}\right|\text{ and }\mu<-(d-1)\left| \frac{1}{p_{j}}-\frac{1}{2}\right|,\quad j=0,1.\]
In view of Remark 1.2 and Theorem 1.13,
\[\|\mathrm{Op}_{\varphi}(a)f\|_{wL^{p_{j}}}\leqslant\|\mathrm{Op}_{\varphi}(a )f\|_{L^{p_{j}}}\leqslant C\|f\|_{L^{p_{j}}},\quad f\in L^{p_{j}}(\mathbf{R}^{ d}),\ j=0,1. \tag{3.6}\]
By Proposition 3.1, the claim follows, arguing as in the final step of the proof of Theorem 3.3.
|
2309.00142 | Block occurrences in the binary expansion | The binary sum-of-digits function $\mathsf{s}$ returns the number of ones in
the binary expansion of a nonnegative integer. Cusick's Hamming weight
conjecture states that, for all integers $t\geq 0$, the set of nonnegative
integers $n$ such that $\mathsf{s}(n+t)\geq \mathsf{s}(n)$ has asymptotic
density strictly larger than $1/2$. We are concerned with the block-additive
function $\mathsf{r}$ returning the number of (overlapping) occurrences of the
block $\mathtt{11}$ in the binary expansion of $n$. The main result of this
paper is a central limit-type theorem for the difference
$\mathsf{r}(n+t)-\mathsf{r}(n)$: the corresponding probability function is
uniformly close to a Gaussian, where the uniform error tends to $0$ as the
number of blocks of ones in the binary expansion of $t$ tends to $\infty$. | Bartosz Sobolewski, Lukas Spiegelhofer | 2023-08-31T21:27:44Z | http://arxiv.org/abs/2309.00142v1 | # Block occurrences in the binary expansion
###### Abstract
The binary sum-of-digits function \(\mathsf{s}\) returns the number of ones in the binary expansion of a nonnegative integer. Cusick's Hamming weight conjecture states that, for all integers \(t\geq 0\), the set of nonnegative integers \(n\) such that \(\mathsf{s}(n+t)\geq\mathsf{s}(n)\) has asymptotic density strictly larger than \(1/2\).
We are concerned with the block-additive function \(\mathsf{r}\) returning the number of (overlapping) occurrences of the block 11 in the binary expansion of \(n\). The main result of this paper is a central limit-type theorem for the difference \(\mathsf{r}(n+t)-\mathsf{r}(n)\): the corresponding probability function is uniformly close to a Gaussian, where the uniform error tends to \(0\) as the number of blocks of ones in the binary expansion of \(t\) tends to \(\infty\).
+
Footnote †: Bartosz Sobolewski was supported by the grant of the National Science Centre (NCN), Poland, no. UMO-2020/37/N/ST1/02655.
Lukas Spiegelhofer acknowledges support by the FWF–ANR project ArithRand (grant numbers I4945-N and ANR-20-CE91-0006), and by the FWF project P36137-N.
## 1 Introduction
Every nonnegative integer \(n\) admits a unique representation
\[n=\sum_{j\geq 0}\delta_{j}2^{j}, \tag{1.1}\]
where \(\delta_{j}\in\{0,1\}\), which is called the _binary expansion_ of \(n\). Each digit \(\delta_{j}\) is therefore a function \(\delta_{j}(\cdot)\) of \(n\). The central question we ask is the following:
How does the binary expansion behave under addition? (1.2)
As a first step towards a possible answer to the question, we consider the _binary sum-of-digits function_\(\mathsf{s}\) of a nonnegative integer \(n\), defined by
\[\mathsf{s}(n)\coloneqq\sum_{j\geq 0}\delta_{j}(n),\]
and the differences
\[d_{\mathsf{s}}(t,n)\coloneqq\mathsf{s}(n+t)-\mathsf{s}(n).\]
The sum-of-digits function \(\mathsf{s}\) appears when the \(2\)-valuation of binomial coefficients is considered. We have the identity
\[\mathsf{s}(n+t)-\mathsf{s}(n)=\mathsf{s}(t)-\nu_{2}\biggl{(}\binom{n+t}{t} \biggr{)}, \tag{1.3}\]
where \(\nu_{2}(a)=\max\{k\in\mathbb{N}:2^{k}\mid a\}\), which follows from Legendre's formula. The \(2\)-valuation of \(\binom{n+t}{t}\) is also the number of _carries_ that appears when adding \(n\) and \(t\) in binary (Kummer [13]). It appears that both sides in (1.3) are nonnegative more than half of the time -- more precisely, T. W. Cusick's Hamming weight Conjecture [7] states that for each integer \(t\geq 0\), we have
\[\sum_{j\geq 0}\sigma_{\mathsf{s}}(t,j)>1/2, \tag{1.4}\]
where
\[\sigma_{\mathsf{s}}(t,j)\coloneqq\operatorname{dens}\bigl{\{}n\in\mathbb{N}: d_{\mathsf{s}}(t,n)=j\bigr{\}}, \tag{1.5}\]
and \(\operatorname{dens}A\) is the asymptotic density of a set \(A\subseteq\mathbb{N}\). The asymptotic density exists in this case, as the sets in (1.5) are unions of arithmetic progressions (Bésineau [3]), and
\[\sum_{j\in\mathbb{Z}}\sigma_{\mathsf{s}}(t,j)=1\]
for all \(t\in\mathbb{N}\). We have the recurrence [7]
\[\sigma_{\mathsf{s}}(1,j) =\begin{cases}0,&j>1,\\ 2^{j-2},&j\leq 1,\end{cases} \tag{1.6}\] \[\sigma_{\mathsf{s}}(2t,j) =\sigma_{\mathsf{s}}(t,j),\] \[\sigma_{\mathsf{s}}(2t+1,j) =\frac{1}{2}\sigma_{\mathsf{s}}(t,j-1)+\frac{1}{2}\sigma_{ \mathsf{s}}(t+1,j+1),\]
valid for all integers \(t\geq 1\) and \(j\). Making essential use of this recurrence, the second author and Wallner [18] proved an _almost-solution_ to Cusick's conjecture.
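The recurrence (1.6) also makes the densities easy to evaluate numerically. The sketch below is illustrative only: the base distribution \(\sigma_{\mathsf{s}}(1,\cdot)\) is truncated at \(j=-60\) (an arbitrary cutoff; the neglected mass is below \(2^{-60}\)), and the printed sums should all exceed \(1/2\), in accordance with (1.4).

```python
from functools import lru_cache

J_MIN = -60  # arbitrary truncation of the left tail of sigma_s(1, .)

@lru_cache(maxsize=None)
def sigma_s(t):
    """Densities {j: sigma_s(t, j)} for t >= 1, computed via the recurrence (1.6)."""
    if t == 1:
        return {j: 2.0 ** (j - 2) for j in range(J_MIN, 2)}
    if t % 2 == 0:
        return sigma_s(t // 2)
    out = {}
    for j, p in sigma_s(t // 2).items():      # contributes (1/2) sigma_s(t', j - 1)
        out[j + 1] = out.get(j + 1, 0.0) + p / 2
    for j, p in sigma_s(t // 2 + 1).items():  # contributes (1/2) sigma_s(t' + 1, j + 1)
        out[j - 1] = out.get(j - 1, 0.0) + p / 2
    return out

for t in [1, 2, 3, 5, 21, 1013]:
    print(t, sum(p for j, p in sigma_s(t).items() if j >= 0))
```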
**Theorem A**.: _Under the hypothesis that \(\mathtt{0}\mathtt{1}\) occurs at least \(N_{0}\) times in the binary expansion of \(t\), where \(N_{0}\) can be made explicit, the statement (1.4) holds._
T. W. Cusick remarked upon learning about this result (private communication) that the "hard cases" of his conjecture remain open!
In the same paper [18, Theorem 1.2] a central limit-type result is proved.
**Theorem B**.: _For integers \(t\geq 1\), let us define_
\[\kappa(1)=2,\qquad\kappa(2t)=\kappa(t),\qquad\kappa(2t+1)=\frac{\kappa(t)+ \kappa(t+1)}{2}+1.\]
_Assume that \(\mathtt{0}\mathtt{1}\) appears \(N\) times in the binary expansion of the positive integer \(t\), and \(N\) is larger than some constant \(N_{0}\). Then the estimate_
\[\sigma_{\mathsf{s}}(t,j)=\frac{1}{\sqrt{2\pi\kappa(t)}}\exp\left(-\frac{j^{2} }{2\kappa(t)}\right)+\mathcal{O}\bigl{(}N^{-1}(\log N)^{4}\bigr{)}\]
_holds for all integers \(j\). The multiplicative constant of the error term can be made explicit._
This theorem sharpens the main result in [9], see also [10].
The value \(\kappa(t)\) is the variance of the probability distribution given by the densities \(\sigma_{\mathsf{s}}(t,j)\) (where \(j\in\mathbb{Z}\)). It equals the second moment, as the mean is zero:
\[\sum_{j\in\mathbb{Z}}j\sigma_{\mathsf{s}}(t,j)=0. \tag{1.7}\]
Note that the function \(\frac{1}{2}\kappa\) appears in another context too: it is the _discrepancy of the van der Corput sequence_[8].
Returning to Cusick's conjecture (1.4), we note that other partial results are known [7, 15, 16]. We also wish to draw attention to the related conjecture by Tu and Deng [19, 20], coming from cryptography. This conjecture implies Cusick's conjecture, and holds _almost surely_[17]. Partial results exist [4, 5, 6, 11, 12, 14], but the general case is wide open. Cusick's conjecture arose while T. W. Cusick was working on the Tu-Deng conjecture [5], and thus the present paper traces back to cryptography.
### Notation
For a finite word \(\omega\) over \(\{\mathtt{0},\mathtt{1}\}\) containing \(\mathtt{1}\), let \(|n|_{\omega}\) denote the number of (overlapping) occurrences of the word \(\omega\) in the binary expansion of \(n\), padded with suitably many \(\mathtt{0}\mathtt{s}\) to the left. Note that in the case \(\omega=\mathtt{0}\mathtt{1}\), the integer \(|n|_{\omega}\) is the number of maximal blocks of \(\mathtt{1}\mathtt{s}\) in the binary expansion of \(n\), where a "block" is a contiguous finite subsequence. This is the case as each occurrence of \(\mathtt{0}\mathtt{1}\) marks the beginning of such a block.
For real \(\vartheta\), we will use the notation \(\mathrm{e}(\vartheta)=\exp(i\vartheta)\). Moreover, in this paper, we stick to the convention that \(0\in\mathbb{N}\).
## 2 Main result
In the present paper, we are going to establish a central limit-type result in the spirit of Theorem B, where the sum-of-digits function \(\mathsf{s}\) is replaced by a factor-counting function \(|\cdot|_{\omega}\). More precisely, we establish a result analogous to Theorem B, for \(\omega=\mathtt{1}\mathtt{1}\).
Let us define
\[\mathsf{r}(n)\coloneqq|n|_{\mathtt{1}\mathtt{1}}=\#\big{\{}j\geq 0:\delta_{j+1}( n)=\delta_{j}(n)=\mathtt{1}\big{\}}.\]
This sequence is A014081 in Sloane's OEIS1, and starts with the values
Footnote 1: The Online Encyclopedia of Integer Sequences, [https://oeis.org](https://oeis.org)
\[(\mathsf{r}(n))_{0\leq n<32}=(0,0,0,1,0,0,1,2,0,0,0,1,1,1,2,3,0,0,0,1,0,0,1,2, 1,1,1,2,2,2,3,4).\]
For example, \(31=(\mathtt{1}\mathtt{1}\mathtt{1}\mathtt{1}\mathtt{1}\mathtt{1})_{2}\) has four (overlapping) blocks \(\mathtt{1}\mathtt{1}\) in binary. Also note that \((\mathsf{r}(n)\bmod 2)_{n\in\mathbb{N}}\) is the famous Golay-Rudin-Shapiro sequence.
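In code, \(\mathsf{r}(n)\) is simply the population count of \(n\,\&\,(n\gg 1)\); a minimal sketch reproducing the list of values above:

```python
def r(n):
    """r(n) = |n|_11: bit j of n & (n >> 1) is set iff digits j and j+1 of n are both 1."""
    return bin(n & (n >> 1)).count("1")

print([r(n) for n in range(32)])
```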
The object of interest will be the difference
\[d(t,n)\coloneqq\mathsf{r}(n+t)-\mathsf{r}(n). \tag{2.1}\]
As we will show, for each \(t\in\mathbb{N}\) and \(k\in\mathbb{Z}\) the set
\[C_{t}(k)\coloneqq\{n\in\mathbb{N}:d(t,n)=k\}\]
is a finite union of arithmetic progressions (see Proposition 3.2 below). Consequently, the densities
\[c_{t}(k)\coloneqq\operatorname{dens}C_{t}(k)\]
exist and induce a family of probability distributions on \(\mathbb{Z}\) with probability mass function \(c_{t}\). In the sequel we will identify these notions and say "distribution \(c_{t}\)" in short.
We also define the sequence \((v_{t})_{t\in\mathbb{N}}\) by \(v_{0}=0\), \(v_{1}=3/2\), and
\[v_{4t} =v_{2t}, \tag{2.2}\] \[v_{4t+2} =v_{2t+1}+1,\] \[v_{2t+1} =\frac{v_{t}+v_{t+1}}{2}+\frac{3}{4}.\]
As we will see (in Proposition 3.5 below), \(v_{t}\) is the variance of the associated probability distribution.
_Remark_.: From the above relations it follows that \((v_{t})_{t\in\mathbb{N}}\) is a \(2\)-regular sequence [1], see [2, Theorem 6].
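The relations (2.2) translate directly into a short memoized recursion; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def v(t):
    """The variance v_t defined by (2.2)."""
    if t <= 1:
        return 1.5 * t                    # v_0 = 0, v_1 = 3/2
    if t % 2 == 1:
        return (v(t // 2) + v(t // 2 + 1)) / 2 + 0.75
    if t % 4 == 0:
        return v(t // 2)
    return v(t // 2) + 1.0                # t = 4u + 2

print(v(1), v(2), v(3))                   # 1.5 2.5 2.75
```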
Our main result says that when \(|t|_{\texttt{01}}\) is large, the distribution \(c_{t}\) is close to a Gaussian distribution with mean \(0\) and variance \(v_{t}\).
**Theorem 2.1**.: _There exist effective absolute constants \(C\), \(N_{0}\) such that the following holds. If the nonnegative integer \(t\) satisfies \(|t|_{\texttt{01}}\geq N_{0}\), we have_
\[\left|c_{t}(k)-\big{(}2\pi v_{t}\big{)}^{-1/2}\exp\!\left(-\frac{k^{2}}{2v_{t} }\right)\right|\leq C\frac{(\log N)^{2}}{N}, \tag{2.3}\]
_where \(N=|t|_{\texttt{01}}\)._
_Remarks_.:
* In analogy to the discussion in [18] after Theorem 1.2, we see that the main term is dominant (for large \(N\)) if \(|k|\leq C_{1}\sqrt{N\log N}\), and \(C_{1}\) is any constant in \((0,\sqrt{3}/2)\). For this, we need both the lower and the upper bound for \(v_{t}\), that is, \(3N/4\leq v_{t}\leq 5N\), proved in Proposition 3.11 further down.
* The statement of the theorem remains true for all \(N\) if we choose a larger value for \(C\). Using our method, this necessitates a much larger value, while no mathematical content is gained.
* In analogy to [18, Corollary 1.3], we obtain the corollary \[\sum_{k\geq 0}c_{t}(k)\geq 1/2-C_{2}N^{-1/2}\big{(}\log N\big{)}^{5},\] where \(N=|t|_{\texttt{01}}\), and \(C_{2}\) is another absolute constant.
* Is it true that \[\sum_{k\geq 0}c_{t}(k)>1/2\] (2.4) for all integers \(t\geq 0\)? This fundamental question is an analogue of Cusick's conjecture (1.4) for \(\mathsf{r}\) in place of \(\mathsf{s}\), and forms part of the guiding question (1.2). Just like Cusick's original conjecture, this question has to remain open for the moment. By numerical computation, (2.4) holds for \(t<2^{20}\). Among such \(t\), the minimal value of the sum is attained for \(t=1013693=(11110111011110111101)_{2}\), and equals approximately \(0.535\).
* Adapting our proof of Theorem 2.1 below to the original situation concerning \(\mathfrak{s}\), it should be possible to improve the error term in Theorem B to \(\mathcal{O}\big{(}N^{-1}(\log N)^{2}\big{)}\).
## 3 Proof of the main result
We first outline the general idea of the proof. Let \(\gamma_{t}\) be the characteristic function of the distribution \(c_{t}\), i.e.,
\[\gamma_{t}(\vartheta)\coloneqq\sum_{k\in\mathbb{Z}}c_{t}(k)\operatorname{e}(k \vartheta).\]
To approximate \(c_{t}(k)\) we will use the identity
\[c_{t}(k)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\gamma_{t}(\vartheta)\operatorname{e }(-k\vartheta)\,d\vartheta.\]
We want to show that for \(\vartheta\) in a small interval \(I=[-\vartheta_{0},\vartheta_{0}]\) around \(0\), the function \(\gamma_{t}\) is well approximated by the characteristic function of Gaussian distribution with mean \(0\) and variance \(v_{t}\). This is done in Proposition 3.9. Evaluating the integral over \(I\), where \(\gamma_{t}\) is replaced with said characteristic function, yields (roughly) the main term in (2.3), while the error term comes from the approximation. On the other hand, the contribution for \(\vartheta\not\in I\) does not exceed said error term due to a strong upper bound on \(|\gamma_{t}(\vartheta)|\), given in Proposition 3.10. As discussed in Section 2, we also establish an upper and lower bound on the variance \(v_{t}\) (given in Proposition 3.11) in order to show that the error term in (2.3) is indeed small compared to the main term.
### Basic properties
We first show that the functions \(c_{t}\) are indeed well-defined and describe probability distributions, and establish some of their basic properties. Our starting point is a set of recurrence relations satisfied by the values \(d(t,n)\).
**Lemma 3.1**.: _For all \(t,n\in\mathbb{N}\), we have \(d(0,n)=0\) and_
\[\begin{array}{llll}d(4t+0,4n+0)=d(2t+0,2n+0),&d(4t+2,4n+0)=d(2t+1,2n+0),\\ d(4t+0,4n+1)=d(2t+0,2n+0),&d(4t+2,4n+1)=d(2t+1,2n+0)+1,\\ d(4t+0,4n+2)=d(2t+0,2n+1),&d(4t+2,4n+2)=d(2t+1,2n+1),\\ d(4t+0,4n+3)=d(2t+0,2n+1),&d(4t+2,4n+3)=d(2t+1,2n+1)-1,\\ \end{array}\]
\[\begin{array}{llll}d(4t+1,4n+0)=d(2t+0,2n+0),&d(4t+3,4n+0)=d(2t+1,2n+0)+1, \\ d(4t+1,4n+1)=d(2t+1,2n+0),&d(4t+3,4n+1)=d(2t+2,2n+0),\\ d(4t+1,4n+2)=d(2t+0,2n+1)+1,&d(4t+3,4n+2)=d(2t+1,2n+1),\\ d(4t+1,4n+3)=d(2t+1,2n+1)-1,&d(4t+3,4n+3)=d(2t+2,2n+1)-1.\\ \end{array}\]
Proof.: All equalities can be quickly derived from \(\mathsf{r}(0)=0\) and the following relations:
\[\mathsf{r}(2n) =\mathsf{r}(n),\] \[\mathsf{r}(4n+1) =\mathsf{r}(n),\] \[\mathsf{r}(4n+3) =\mathsf{r}(2n+1)+1.\qed\]
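All sixteen relations are easy to confirm by brute force; a small sketch checking two representative ones:

```python
def r(n):
    return bin(n & (n >> 1)).count("1")

def d(t, n):
    return r(n + t) - r(n)

assert all(d(4*t + 1, 4*n + 2) == d(2*t, 2*n + 1) + 1
           for t in range(64) for n in range(64))
assert all(d(4*t + 3, 4*n + 3) == d(2*t + 2, 2*n + 1) - 1
           for t in range(64) for n in range(64))
```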
Note that the relations all involve \(d(\cdot,2n)\) or \(d(\cdot,2n+1)\) on the right-hand side (though some can be "merged"). This makes it tricky to directly describe the sets \(C_{t}(k)\) by a collection of recurrence relations, since they have \(d(t,n)\) in their definition. Instead, we consider their "odd" and "even" components:
\[\begin{array}{l}A_{t}(k)\coloneqq\big{\{}n\in\mathbb{N}:d(t,2n)=k\big{\}}, \\ B_{t}(k)\coloneqq\big{\{}n\in\mathbb{N}:d(t,2n+1)=k\big{\}},\end{array}\]
so that
\[C_{t}(k)=2A_{t}(k)\cup(2B_{t}(k)+1).\]
As we will see in Proposition 3.2 below, the densities of sets \(A_{t}(k)\) and \(B_{t}(k)\) exist. We denote
\[a_{t}(k) \coloneqq\operatorname{dens}\bigl{\{}n\in\mathbb{N}:d(t,2n)=k \bigr{\}},\] \[b_{t}(k) \coloneqq\operatorname{dens}\bigl{\{}n\in\mathbb{N}:d(t,2n+1)=k \bigr{\}},\]
which yields
\[c_{t}(k)=\frac{a_{t}(k)+b_{t}(k)}{2}. \tag{3.1}\]
**Proposition 3.2**.: _For all \(t\in\mathbb{N}\) and \(k\in\mathbb{Z}\) the sets \(A_{t}(k),B_{t}(k)\) (and thus also \(C_{t}(k)\)) are finite unions of arithmetic progressions. Their densities \(a_{t}(k)\) and \(b_{t}(k)\) satisfy the following relations:_
\[a_{4t}(k) =\frac{1}{2}\bigl{(}a_{2t}(k)+b_{2t}(k)\bigr{)}, b_{4t}(k) =\frac{1}{2}\bigl{(}a_{2t}(k)+b_{2t}(k)\bigr{)},\] \[a_{4t+1}(k) =\frac{1}{2}\bigl{(}a_{2t}(k)+b_{2t}(k-1)\bigr{)}, b_{4t+1}(k) =\frac{1}{2}\bigl{(}a_{2t+1}(k)+b_{2t+1}(k+1)\bigr{)},\] \[a_{4t+2}(k) =\frac{1}{2}\bigl{(}a_{2t+1}(k)+b_{2t+1}(k)\bigr{)}, b_{4t+2}(k) =\frac{1}{2}\bigl{(}a_{2t+1}(k-1)+b_{2t+1}(k+1)\bigr{)},\] \[a_{4t+3}(k) =\frac{1}{2}\bigl{(}a_{2t+1}(k-1)+b_{2t+1}(k)\bigr{)}, b_{4t+3}(k) =\frac{1}{2}\bigl{(}a_{2t+2}(k)+b_{2t+2}(k+1)\bigr{)},\]
_with initial conditions_
\[a_{0}(k)=b_{0}(k)=\begin{cases}1&\text{if }k=0,\\ 0&\text{if }k\neq 0,\end{cases}\qquad a_{1}(k)=\begin{cases}\frac{1}{2}&\text{if }k=0,1,\\ 0&\text{otherwise},\end{cases}\qquad b_{1}(k)=\begin{cases}0&\text{if }k>1,\\ \frac{1}{4}&\text{if }k=1,\\ 3\cdot 2^{k-3}&\text{if }k<1.\end{cases}\]
Proof.: We first deal with the initial conditions. Trivially, we have \(A_{0}(0)=B_{0}(0)=\mathbb{N}\) and \(A_{0}(k)=B_{0}(k)=\varnothing\) for \(k\neq 0\). It is also easy to check that \(A_{1}(0)=2\mathbb{N}\), \(A_{1}(1)=2\mathbb{N}+1\), and \(A_{1}(k)=\varnothing\) for \(k\neq 0,1\). Furthermore, we have \(B_{1}(1)=4\mathbb{N}+2\) and \(B_{1}(k)=\varnothing\) for \(k>1\). Finally, for each \(k\leq 0\) the set \(B_{1}(k)\) consists of \(n\in\mathbb{N}\) such that the binary expansion of \(2n+1\) ends with \(\texttt{001}^{|k|+1}\) or \(\texttt{101}^{|k|+2}\). Hence, \(b_{1}(k)=2^{-|k|-2}+2^{-|k|-3}=3\cdot 2^{k-3}\).
To simplify the notation, for \(t\in\mathbb{N}\) and \(k_{A},k_{B}\in\mathbb{Z}\) let
\[E_{t}(k_{A},k_{B})\coloneqq 2A_{t}(k_{A})\cup(2B_{t}(k_{B})+1).\]
Then the identities for \(a_{t}(k)\) and \(b_{t}(k)\) follow straight from corresponding relations for the sets \(A_{t}(k)\) and \(B_{t}(k)\):
\[A_{4t}(k) =E_{2t}(k,k), B_{4t}(k) =E_{2t}(k,k),\] \[A_{4t+1}(k) =E_{2t}(k,k-1), B_{4t+1}(k) =E_{2t+1}(k,k+1),\] \[A_{4t+2}(k) =E_{2t+1}(k,k), B_{4t+2}(k) =E_{2t+1}(k-1,k+1),\] \[A_{4t+3}(k) =E_{2t+1}(k-1,k) B_{4t+3}(k) =E_{2t+2}(k,k+1).\]
Since all these relations are proved similarly, we verify only the one for \(B_{4t+2}(k)\) and leave the
rest to the reader. We have
\[B_{4t+2}(k) =\{n:d(4t+2,2n+1)=k\}\] \[=\{2n:d(4t+2,4n+1)=k\}\cup\{2n+1:d(4t+2,4n+3)=k\}\] \[=2\{n:d(2t+1,2n)+1=k\}\cup(2\{n:d(2t+1,2n+1)-1=k\}+1)\] \[=2A_{2t+1}(k-1)\cup(2B_{2t+1}(k+1)+1)\] \[=E_{2t+1}(k-1,k+1).\qed\]
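The recurrences of Proposition 3.2 can be iterated to compute the distributions, exactly up to a truncation of the infinite left tail of \(b_{1}\). A sketch (the helper names are ours), with the arbitrary cutoff `K_MIN = -60`:

```python
from functools import lru_cache

K_MIN = -60  # truncation of the left tail of b_1; the neglected mass is negligible

def half_sum(d0, d1, s0=0, s1=0):
    """The density k -> (d0(k - s0) + d1(k - s1)) / 2."""
    out = {}
    for k, p in d0.items():
        out[k + s0] = out.get(k + s0, 0.0) + p / 2
    for k, p in d1.items():
        out[k + s1] = out.get(k + s1, 0.0) + p / 2
    return out

@lru_cache(maxsize=None)
def ab(t):
    """The pair of densities (a_t, b_t), each as a dict k -> density."""
    if t == 0:
        return {0: 1.0}, {0: 1.0}
    if t == 1:
        a = {0: 0.5, 1: 0.5}
        b = {1: 0.25}
        b.update({k: 3.0 * 2.0 ** (k - 3) for k in range(K_MIN, 1)})
        return a, b
    if t % 4 == 0:
        a2, b2 = ab(t // 2)            # index 2u
        d = half_sum(a2, b2)
        return d, dict(d)
    if t % 4 == 1:
        a0, b0 = ab((t - 1) // 2)      # index 2u
        a1, b1 = ab((t + 1) // 2)      # index 2u + 1
        return half_sum(a0, b0, 0, 1), half_sum(a1, b1, 0, -1)
    if t % 4 == 2:
        a1, b1 = ab(t // 2)            # index 2u + 1
        return half_sum(a1, b1), half_sum(a1, b1, 1, -1)
    a1, b1 = ab((t - 1) // 2)          # index 2u + 1
    a2, b2 = ab((t + 1) // 2)          # index 2u + 2
    return half_sum(a1, b1, 1, 0), half_sum(a2, b2, 0, -1)

def c(t):
    return half_sum(*ab(t))

print(sum(p for k, p in c(1013693).items() if k >= 0))   # ~ 0.535, cf. Section 2
```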
Note that a bound for the differences of the arithmetic progressions which constitute \(A_{t}(k)\) and \(B_{t}(k)\) can be derived easily from this proof. These differences are always powers of two, and a rough upper bound is given by \(2^{|k|+2\ell(t)+1}\) where \(\ell(t)\) is the length of the binary expansion of \(t\).
We now define the characteristic functions of the probability distributions \(a_{t}\) and \(b_{t}\):
\[\alpha_{t}(\vartheta) :=\sum_{k\in\mathbb{Z}}a_{t}(k)\operatorname{e}(k\vartheta),\] \[\beta_{t}(\vartheta) :=\sum_{k\in\mathbb{Z}}b_{t}(k)\operatorname{e}(k\vartheta).\]
Clearly, our function of interest \(\gamma_{t}\) satisfies
\[\gamma_{t}(\vartheta)=\frac{\alpha_{t}(\vartheta)+\beta_{t}(\vartheta)}{2}.\]
The identities in Proposition 3.2 translate to relations for the characteristic functions \(\alpha_{t}\) and \(\beta_{t}\), which we can write concisely using matrix notation. We arrange them into a column vector \(S_{t}(\vartheta)\in\mathbb{C}^{6}\), defined by
\[S_{t}(\vartheta)=\left(\alpha_{2t+0}(\vartheta)\quad\beta_{2t+0}(\vartheta) \quad\alpha_{2t+1}(\vartheta)\quad\beta_{2t+1}(\vartheta)\quad\alpha_{2t+2}( \vartheta)\quad\beta_{2t+2}(\vartheta)\right)^{T}.\]
We also define \(6\times 6\) matrices \(D_{0}(\vartheta),D_{1}(\vartheta)\) by
\[D_{0}(\vartheta)=\frac{1}{2}\begin{pmatrix}1&1&0&0&0&0\\ 1&1&0&0&0&0\\ 1&\operatorname{e}(\vartheta)&0&0&0&0\\ 0&0&1&\operatorname{e}(-\vartheta)&0&0\\ 0&0&1&1&0&0\\ 0&0&\operatorname{e}(\vartheta)&\operatorname{e}(-\vartheta)&0&0\end{pmatrix},\quad D_{1}(\vartheta)=\frac{1}{2}\begin{pmatrix}0&0&1&1&0&0\\ 0&0&\operatorname{e}(\vartheta)&\operatorname{e}(-\vartheta)&0&0\\ 0&0&\operatorname{e}(\vartheta)&1&0&0\\ 0&0&0&0&1&\operatorname{e}(-\vartheta)\\ 0&0&0&0&1&1\\ 0&0&0&0&1&1\end{pmatrix}.\]
We have the following proposition.
**Proposition 3.3**.: _For all \(t\in\mathbb{N}\) we have the recurrence relations_
\[S_{2t}(\vartheta) =D_{0}(\vartheta)S_{t}(\vartheta),\] \[S_{2t+1}(\vartheta) =D_{1}(\vartheta)S_{t}(\vartheta),\]
_with initial conditions_
\[S_{0}(\vartheta)=\begin{pmatrix}1&1&\frac{\operatorname{e}(\vartheta)+1}{2} &\frac{\operatorname{e}(\vartheta)+1}{2(2-\operatorname{e}(-\vartheta))}& \frac{3\operatorname{e}(\vartheta)+2-\operatorname{e}(-\vartheta)}{4(2- \operatorname{e}(-\vartheta))}&\frac{2\operatorname{e}(2\vartheta)+ \operatorname{e}(\vartheta)+\operatorname{e}(-\vartheta)}{4(2-\operatorname{e }(-\vartheta))}\end{pmatrix}^{T}.\]
_In particular, we have_
\[\alpha_{8t}=\alpha_{4t}=\beta_{8t}=\beta_{4t}.\]
Proof.: Recurrence relations for \(S_{t}\) as well as the initial values \(\alpha_{0},\beta_{0},\alpha_{1},\beta_{1}\) follow immediately from Proposition 3.2. The last two components of \(S_{0}\), namely \(\alpha_{2},\beta_{2}\), are obtained by an application of the identity \(S_{0}=D_{0}S_{0}\) (they only depend on \(\alpha_{1},\beta_{1}\)).
Furthermore, we have \(\alpha_{4t}=(\alpha_{2t}+\beta_{2t})/2=\beta_{4t}\) by the relation \(S_{2t}=D_{0}S_{t}\). This also implies \(\alpha_{8t}=(\alpha_{4t}+\beta_{4t})/2=\alpha_{4t}\) and similarly \(\beta_{8t}=\beta_{4t}\).
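The initial vector and the fixed-point identity \(S_{0}=D_{0}S_{0}\) can be verified numerically; a self-contained sketch:

```python
import numpy as np

def e(x):                                # the paper's convention e(x) = exp(ix)
    return np.exp(1j * x)

def S0(th):
    return np.array([
        1.0,
        1.0,
        (e(th) + 1) / 2,
        (e(th) + 1) / (2 * (2 - e(-th))),
        (3 * e(th) + 2 - e(-th)) / (4 * (2 - e(-th))),
        (2 * e(2 * th) + e(th) + e(-th)) / (4 * (2 - e(-th))),
    ])

def D0(th):
    return 0.5 * np.array([
        [1, 1,     0,     0,      0, 0],
        [1, 1,     0,     0,      0, 0],
        [1, e(th), 0,     0,      0, 0],
        [0, 0,     1,     e(-th), 0, 0],
        [0, 0,     1,     1,      0, 0],
        [0, 0,     e(th), e(-th), 0, 0],
    ], dtype=complex)

for th in np.linspace(-3.0, 3.0, 25):
    assert np.allclose(D0(th) @ S0(th), S0(th))
```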
We move on to give a recursion for the mean and variance of \(a_{t}\) and \(b_{t}\). We use the notation
\[m_{t}^{\alpha}=\sum_{k\in\mathbb{Z}}ka_{t}(k),\qquad m_{t}^{\beta}=\sum_{k\in \mathbb{Z}}kb_{t}(k)\]
for the means, and
\[v_{t}^{\alpha}=\sum_{k\in\mathbb{Z}}\bigl{(}k-m_{t}^{\alpha}\bigr{)}^{2}a_{t} (k),\qquad v_{t}^{\beta}=\sum_{k\in\mathbb{Z}}\bigl{(}k-m_{t}^{\beta}\bigr{)}^{ 2}b_{t}(k)\]
for the variances. As with the characteristic functions, we arrange them in the same way into column vectors
\[M_{t} =\left(m_{2t+0}^{\alpha}\quad m_{2t+0}^{\beta}\quad m_{2t+1}^{ \alpha}\quad m_{2t+1}^{\beta}\quad m_{2t+2}^{\alpha}\quad m_{2t+2}^{\beta} \right)^{T},\] \[V_{t} =\left(v_{2t+0}^{\alpha}\quad v_{2t+0}^{\beta}\quad v_{2t+1}^{ \alpha}\quad v_{2t+1}^{\beta}\quad v_{2t+2}^{\alpha}\quad v_{2t+2}^{\beta} \right)^{T}.\]
Using the recursion in Proposition 3.3, we can easily obtain relations for \(M_{t}\) and \(V_{t}\). In particular, it turns out that \(M_{t}\) is constant.
**Proposition 3.4**.: _For all \(t\in\mathbb{N}\) we have_
\[M_{t}=\left(0\quad 0\quad\frac{1}{2}\quad-\frac{1}{2}\quad 0\quad 0\right)^{T}\]
_and_
\[V_{2t}=\frac{1}{2}\begin{pmatrix}1&1&0&0&0&0\\ 1&1&0&0&0&0\\ 1&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&1&1&0&0\\ 0&0&1&1&0&0\end{pmatrix}V_{t}+\frac{1}{4}\begin{pmatrix}0\\ 0\\ 1\\ 4\\ 1\\ 9\end{pmatrix},\qquad V_{2t+1}=\frac{1}{2}\begin{pmatrix}0&0&1&1&0&0\\ 0&0&1&1&0&0\\ 0&0&1&1&0&0\\ 0&0&0&0&1&1\\ 0&0&0&0&1&1\\ 0&0&0&0&1&1\end{pmatrix}V_{t}+\frac{1}{4}\begin{pmatrix}1\\ 9\\ 4\\ 1\\ 0\\ 0\end{pmatrix},\]
_with initial conditions_
\[V_{0}=\frac{1}{4}\left(0\quad 0\quad 1\quad 9\quad 6\quad 14\right)^{T}.\]
Proof.: We prove the claim for \(M_{t}\) by induction on \(t\). The base case \(t=0\) is easily verified. Now, by differentiating the first relation in Proposition 3.3, for any \(t\geq 1\) we have
\[M_{2t}=-iS_{2t}^{\prime}(0)=-iD_{0}^{\prime}(0)\mathbf{1}+D_{0}(0)M_{t},\]
where \(\mathbf{1}\) is the column vector of \(1\)s of length \(6\). Using the inductive assumption for \(M_{t}\), after a simple calculation we obtain the claimed value of \(M_{2t}\). A similar computation also works for \(M_{2t+1}\).
Moving on to the variances, for \(j=0,1\) we have
\[S^{\prime\prime}_{2t+j}(0)=D^{\prime\prime}_{j}(0)\mathbf{1}+2iD^{\prime}_{j}(0)M_{t}+D_{j}(0)S^{\prime\prime}_{t}(0).\]
Plugging in \(S^{\prime\prime}_{t}(0)=-V_{t}-\frac{1}{4}(0,0,1,1,0,0)^{T}\) and the analogous expression for \(S^{\prime\prime}_{2t+j}(0)\), a short calculation gives the desired relations.
We can now show that \(v_{t}\), defined by (2.2), is indeed the variance of the distribution \(c_{t}\).
**Proposition 3.5**.: _For all \(t\) in \(\mathbb{N}\) the distribution \(c_{t}\) has mean \(0\) and variance \(v_{t}\)._
Proof.: The mean of \(c_{t}\) is \((m^{\alpha}_{t}+m^{\beta}_{t})/2\), which is equal to \(0\) by Proposition 3.4.
Let us momentarily denote the variance by \(\widetilde{v}_{t}\). It satisfies
\[\widetilde{v}_{t}=\frac{1}{2}\big{(}v^{\alpha}_{t}+(m^{\alpha}_{t})^{2}+v^{ \beta}_{t}+(m^{\beta}_{t})^{2}\big{)}=\frac{v^{\alpha}_{t}+v^{\beta}_{t}}{2}+ \begin{cases}0&\text{if $t$ is even},\\ \frac{1}{4}&\text{if $t$ is odd}.\end{cases}\]
Proposition 3.4 implies that \(\widetilde{v}_{0}=0=v_{0},\widetilde{v}_{1}=3/2=v_{1}\), and \(\widetilde{v}_{t}\) satisfies relations (2.2) defining \(v_{t}\), hence we must have \(\widetilde{v}_{t}=v_{t}\) for all \(t\in\mathbb{N}\).
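As a numerical cross-check (reusing `c` from the sketch after Proposition 3.2 and `v` from the sketch after the remark on (2.2)), the empirical mean and variance of \(c_{t}\) match:

```python
for t in [1, 2, 3, 10, 21, 77, 2023]:
    dist = c(t)
    mean = sum(k * p for k, p in dist.items())
    var = sum(k * k * p for k, p in dist.items()) - mean ** 2
    assert abs(mean) < 1e-9 and abs(var - v(t)) < 1e-7
```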
### Approximation of the characteristic function
The first main ingredient that we need for the central limit-type result is analogous to [18, Proposition 3.1]. We roughly follow the proof of Proposition 2.5 in that paper. We approximate \(\gamma_{t}\) by the characteristic function \(\gamma_{t}^{*}\) of the Gaussian distribution with the same mean (equal to \(0\)) and variance \(v_{t}\), namely
\[\gamma_{t}^{*}(\vartheta)=\exp\left(-\frac{v_{t}}{2}\vartheta^{2}\right).\]
We are interested in bounding the error of approximation
\[\widetilde{\gamma}_{t}(\vartheta)=\gamma_{t}(\vartheta)-\gamma_{t}^{*}( \vartheta).\]
By definition we have \(\widetilde{\gamma}_{t}(\vartheta)=\mathcal{O}(\vartheta^{3})\) in the sense that its power series expansion only has terms of order \(\geq 3\). Indeed, \(\log\gamma_{t}^{*}\) agrees with \(\log\gamma_{t}\), the cumulant generating function, up to terms of order \(2\). Hence, after exponentiating both functions still agree up to terms of order \(2\). This means that for \(|\vartheta|\leq\pi\) we have a bound of the form
\[|\widetilde{\gamma}_{t}(\vartheta)|\leq K_{t}|\vartheta|^{3},\]
where the constant \(K_{t}\) depends on \(t\), and we will need to make this dependence more explicit.
In order to do this, we define normal approximations \(\alpha_{t}^{*}\) and \(\beta_{t}^{*}\) to the characteristic functions \(\alpha_{t}\) and \(\beta_{t}\), as well as the errors \(\widetilde{\alpha}_{t}\) and \(\widetilde{\beta}_{t}\) appearing in these approximations. Let
\[\alpha_{t}^{*}(\vartheta) \coloneqq\exp\biggl{(}m^{\alpha}_{t}i\vartheta-\frac{1}{2}v^{ \alpha}_{t}\vartheta^{2}\biggr{)} \beta_{t}^{*}(\vartheta) \coloneqq\exp\biggl{(}m^{\beta}_{t}i\vartheta-\frac{1}{2}v^{ \beta}_{t}\vartheta^{2}\biggr{)},\] \[\widetilde{\alpha}_{t}(\vartheta) \coloneqq\alpha_{t}(\vartheta)-\alpha_{t}^{*}(\vartheta), \widetilde{\beta}_{t}(\vartheta) \coloneqq\beta_{t}(\vartheta)-\beta_{t}^{*}(\vartheta)\]
(recall that \(m^{\alpha}_{2t}=m^{\beta}_{2t}=0\) and \(m^{\alpha}_{2t+1}=-m^{\beta}_{2t+1}=1/2\)). Set also
\[S_{t}^{*}(\vartheta) \coloneqq \bigl{(}\,\alpha^{*}_{2t+0}(\vartheta)\quad\beta^{*}_{2t+0}( \vartheta)\quad\alpha^{*}_{2t+1}(\vartheta)\quad\beta^{*}_{2t+1}(\vartheta) \quad\alpha^{*}_{2t+2}(\vartheta)\quad\beta^{*}_{2t+2}(\vartheta)\,\,\bigr{)} ^{T},\] \[\widetilde{S}_{t}(\vartheta) \coloneqq \bigl{(}\,\widetilde{\alpha}_{2t+0}(\vartheta)\quad\widetilde{ \beta}_{2t+0}(\vartheta)\quad\widetilde{\alpha}_{2t+1}(\vartheta)\quad \widetilde{\beta}_{2t+1}(\vartheta)\quad\widetilde{\alpha}_{2t+2}(\vartheta) \quad\widetilde{\beta}_{2t+2}(\vartheta)\,\,\bigr{)}^{T},\]
so that \(\widetilde{S}_{t}(\vartheta)=S_{t}(\vartheta)-S_{t}^{*}(\vartheta)\).
By Proposition 3.3 we get the relations
\[\widetilde{S}_{2t} =D_{0}\widetilde{S}_{t}-X_{2t},\] \[\widetilde{S}_{2t+1} =D_{1}\widetilde{S}_{t}-X_{2t+1},\]
where
\[X_{2t} =S_{2t}^{*}-D_{0}S_{t}^{*},\] \[X_{2t+1} =S_{2t+1}^{*}-D_{1}S_{t}^{*}.\]
Roughly speaking, \(\|X_{t}(\vartheta)\|_{\infty}\) measures how far the vector of approximations \(S_{t}^{*}(\vartheta)\) is from \(S_{t}(\vartheta)\) after a single application of the recursion. Before we give an upper bound on this quantity, we need an auxiliary lemma.
**Lemma 3.6**.: _For all \(t\in\mathbb{N}\) we have_
\[|v_{t}^{\alpha}-v_{t}^{\beta}|\leq 48.\]
Proof.: We first show by induction on \(t\) that
\[|v_{t+1}^{\alpha}-v_{t}^{\alpha}| \leq 6,\] \[|v_{t+1}^{\beta}-v_{t}^{\beta}| \leq 6.\]
This is easily verified for the base case \(t=0\). Let us denote
\[w_{t}^{\alpha}=v_{t+1}^{\alpha}-v_{t}^{\alpha},\qquad w_{t}^{\beta}=v_{t+1}^{ \beta}-v_{t}^{\beta}.\]
By Proposition 3.4, for \(t\in\mathbb{N}\) we have
\[w_{4t}^{\alpha} =\frac{1}{4}, w_{4t}^{\beta} =\frac{1}{2}\big{(}w_{2t}^{\alpha}+w_{2t}^{\beta}\big{)}+1,\] \[w_{4t+1}^{\alpha} =\frac{1}{2}\big{(}w_{2t}^{\alpha}+w_{2t}^{\beta}\big{)}, w_{4t+1}^{\beta} =\frac{5}{4},\] \[w_{4t+2}^{\alpha} =\frac{3}{4}, w_{4t+2}^{\beta} =\frac{1}{2}\big{(}w_{2t+1}^{\alpha}+w_{2t+1}^{\beta}\big{)}-2,\] \[w_{4t+3}^{\alpha} =\frac{1}{2}\big{(}w_{2t+1}^{\alpha}+w_{2t+1}^{\beta}\big{)}-1, w_{4t+3}^{\beta} =-\frac{1}{4}.\]
This already implies that \(|w_{2t}^{\alpha}|\leq 3/4\) and \(|w_{2t+1}^{\beta}|\leq 5/4\). Applying these inequalities combined with the inductive assumption to the remaining identities, we obtain the claim. For example, we have
\[|w_{4t+2}^{\beta}|\leq\frac{1}{2}\left(|w_{2t+1}^{\alpha}|+|w_{2t+1}^{\beta}| \right)+2\leq\frac{1}{2}\left(6+\frac{5}{4}\right)+2<6.\]
Moving on to the proof of our statement, by Proposition 3.3 (or Proposition 3.4) we have \(v_{4t}^{\alpha}=v_{4t}^{\beta}.\) This implies
\[|v_{4t+1}^{\alpha}-v_{4t+1}^{\beta}|\leq|v_{4t+1}^{\alpha}-v_{4t}^{\alpha}|+|v _{4t+1}^{\beta}-v_{4t}^{\beta}|\leq 12.\]
In a similar fashion we can bound \(|v_{4t+j}^{\alpha}-v_{4t+j}^{\beta}|\) for \(j=2,3\)
_Remark_.: With additional effort it should be possible to prove that \(|v_{t}^{\alpha}-v_{t}^{\beta}|\leq 2\). However, for the purpose of our proof we only need to know that the difference is bounded uniformly in \(t\).
**Lemma 3.7**.: _There exists an absolute constant \(C\) such that for all \(t\in\mathbb{N}\) and \(|\vartheta|\leq\pi\) we have_
\[\|X_{t}(\vartheta)\|_{\infty}\leq C|\vartheta|^{3}.\]
Proof.: First, observe that each component of \(X_{t}(\vartheta)\), written as a power series, is \(\mathcal{O}(\vartheta^{3})\) because this is the case for \(\widetilde{S}_{t},\widetilde{S}_{2t+j}\).
Using Proposition 3.4 together with
\[\log S_{t}^{*}(\vartheta)=iM_{t}\vartheta-\frac{1}{2}V_{t}\vartheta^{2},\]
(where the logarithm is applied component-wise) we obtain the following relations:
\[\log S_{2t}^{*}(\vartheta) =D_{0}(0)\log S_{t}^{*}(\vartheta)+\frac{i}{2}\begin{pmatrix}0&0&1&-1&0&0\end{pmatrix}^{T}\vartheta-\frac{1}{8}\begin{pmatrix}0&0&1&4&1&9\end{pmatrix}^{T}\vartheta^{2}, \tag{3.2}\] \[\log S_{2t+1}^{*}(\vartheta) =D_{1}(0)\log S_{t}^{*}(\vartheta)+\frac{i}{2}\begin{pmatrix}0&0&1&-1&0&0\end{pmatrix}^{T}\vartheta-\frac{1}{8}\begin{pmatrix}1&9&4&1&0&0\end{pmatrix}^{T}\vartheta^{2}.\]
We now bound individual components of \(X_{2t}\) and \(X_{2t+1}\). Since the procedure is very similar in each case, we perform it for only one component. For example let \(\xi(\vartheta)\) denote the fourth component of \(X_{2t}(\vartheta)\), namely
\[\xi(\vartheta)=\beta_{4t+1}^{*}(\vartheta)-\frac{1}{2}(\alpha_{2t+1}^{*}( \vartheta)+\mathrm{e}(-\vartheta)\beta_{2t+1}^{*}(\vartheta)).\]
Extracting the fourth component of (3.2) and exponentiating, we get
\[\beta_{4t+1}^{*}(\vartheta)=(\alpha_{2t+1}^{*}(\vartheta)\beta_{2t+1}^{*}( \vartheta))^{1/2}\exp\left(-\frac{1}{2}i\vartheta-\frac{1}{2}\vartheta^{2} \right),\]
where we take the principal value of the square root. This yields
\[\xi(\vartheta)=\frac{\alpha_{2t+1}^{*}(\vartheta)}{2}\left[2\left(\frac{\beta _{2t+1}^{*}(\vartheta)}{\alpha_{2t+1}^{*}(\vartheta)}\right)^{1/2}\exp\left(- \frac{1}{2}i\vartheta-\frac{1}{2}\vartheta^{2}\right)-1-\exp(-i\vartheta) \frac{\beta_{2t+1}^{*}(\vartheta)}{\alpha_{2t+1}^{*}(\vartheta)}\right].\]
We have \(|\alpha_{2t+1}^{*}(\vartheta)|\leq 1\). Also, because \(\xi(\vartheta)=\mathcal{O}(\vartheta^{3})\) and \(\alpha_{2t+1}^{*}(\vartheta)=1+\mathcal{O}(\vartheta)\), we get
\[2\left(\frac{\beta_{2t+1}^{*}(\vartheta)}{\alpha_{2t+1}^{*}(\vartheta)} \right)^{1/2}\exp\left(-\frac{1}{2}i\vartheta-\frac{1}{2}\vartheta^{2}\right) -1-\exp(-i\vartheta)\frac{\beta_{2t+1}^{*}(\vartheta)}{\alpha_{2t+1}^{*}( \vartheta)}=O(\vartheta^{3}).\]
We now consider the terms of order \(\geq 3\) of each summand, since the terms of order \(\leq 2\) cancel out. First, we have
\[\left(\frac{\beta_{2t+1}^{*}}{\alpha_{2t+1}^{*}}\right)^{1/2} \exp\left(-\frac{1}{2}i\vartheta-\frac{1}{2}\vartheta^{2}\right) =\exp\left(-i\vartheta-\frac{1}{4}(v_{2t+1}^{\beta}-v_{2t+1}^{ \alpha}+2)\vartheta^{2}\right)\] \[=\sum_{k=0}^{\infty}\frac{1}{k!}\left(-i\vartheta-\frac{1}{4}(v_ {2t+1}^{\beta}-v_{2t+1}^{\alpha}+2)\vartheta^{2}\right)^{k}.\]
Because \(|\vartheta|\leq\pi\), and \(|v_{2t+1}^{\beta}-v_{2t+1}^{\alpha}|\leq 48\) as per Lemma 3.6, we get
\[\left|i\vartheta+\frac{1}{4}(v_{2t+1}^{\beta}-v_{2t+1}^{\alpha}+2)\vartheta^{2} \right|\leq K|\vartheta|\]
for some absolute constant \(K\) (independent of \(t\)). As a result, the contribution of the terms of order \(\geq 3\) can be bounded by
\[\frac{1}{2}\left|\frac{i}{2}(v_{2t+1}^{\beta}-v_{2t+1}^{\alpha}+2)\vartheta^{ 3}+\left(\frac{1}{4}(v_{2t+1}^{\beta}-v_{2t+1}^{\alpha}+2)\vartheta^{2}\right) ^{2}\right|+\sum_{k=3}^{\infty}\frac{(K|\vartheta|)^{k}}{k!}\leq\]
\[\frac{25}{2}|\vartheta|^{3}+\frac{25^{2}}{8}|\vartheta|^{4}+\exp(K\pi)| \vartheta|^{3}\leq C_{1}|\vartheta|^{3}\]
for a suitable absolute constant \(C_{1}\).
In a similar fashion, we can show that the total contribution of terms of order \(\geq 3\) in \(\frac{1}{2}\operatorname{e}(-\vartheta)\beta_{2t+1}^{*}(\vartheta)/\alpha_{2 t+1}^{*}(\vartheta)\) is bounded by \(C_{2}|\vartheta|^{3}\) for some absolute constant \(C_{2}\). Therefore,
\[\left|\beta_{4t+1}^{*}(\vartheta)-\frac{1}{2}\alpha_{2t+1}^{*}(\vartheta)- \frac{1}{2}\operatorname{e}(-\vartheta)\beta_{2t+1}^{*}(\vartheta)\right| \leq(C_{1}+C_{2})|\vartheta|^{3}.\]
Repeating this argument for other components of \(X_{2t},X_{2t+1}\) and taking \(C\) to be the maximal constant on the right-hand side, we get the result.
We now use the lemma just proved to bound the error of approximation \(\widetilde{S}_{t}(\vartheta)\) after multiple steps of the recursion.
**Lemma 3.8**.: _There exists an absolute constant \(K\) such that for all \(t\in\mathbb{N}\) and \(|\vartheta|\leq\pi\) we have_
\[\|\widetilde{S}_{t}(\vartheta)\|_{\infty}\leq KN|\vartheta|^{3},\]
_where \(t\in\mathbb{N}\) and \(N=|t|_{01}\)._
Proof.: By simple induction, for any \(k\in\mathbb{N}\) we have
\[\widetilde{S}_{2^{k}t}=D_{0}^{k}\widetilde{S}_{t}-\sum_{j=1}^{k}D_{0}^{k-j}X_ {2^{j}t}.\]
We now show that the sum is bounded uniformly in \(k\). First, for \(j=1\) and \(j=k\) we use Lemma 3.7, which gives
\[\|D_{0}^{k-j}(\vartheta)X_{2^{j}t}(\vartheta)\|_{\infty}\leq C|\vartheta|^{3}. \tag{3.3}\]
Furthermore, by virtue of Proposition 3.3 we have \(\alpha_{8t}=\alpha_{4t}\) and \(\beta_{8t}=\beta_{4t}\), which means that the first two components of \(X_{2^{j}t}\) are \(0\) for all \(j\geq 2\). Let \(\hat{X}_{2^{j}t}\) denote the vector obtained by deleting these two components. Then we can write in block matrix form
\[D_{0}^{k-j}X_{2^{j}t}=\begin{pmatrix}F&0\\ G&\hat{D}_{0}^{k-j}\end{pmatrix}\begin{pmatrix}0\\ \hat{X}_{2^{j}t}\end{pmatrix}=\begin{pmatrix}0\\ \hat{D}_{0}^{k-j}\hat{X}_{2^{j}t}\end{pmatrix},\]
where \(F\) is a \(2\times 2\) matrix, \(G\) a \(4\times 2\) matrix, and \(\hat{D}_{0}\) is the submatrix of \(D_{0}\) obtained by deleting its first two rows and columns, namely
\[\hat{D}_{0}(\vartheta)=\frac{1}{2}\begin{pmatrix}0&0&0&0\\ 1&\operatorname{e}(-\vartheta)&0&0\\ 1&1&0&0\\ \operatorname{e}(\vartheta)&\operatorname{e}(-\vartheta)&0&0\end{pmatrix}.\]
Also notice that \(\|\hat{D}_{0}^{l}(\vartheta)\|_{\infty}=1/2^{l-1}\) for any \(l\geq 1\) and \(\vartheta\), which implies for \(j<k\) the inequality
\[\|D_{0}^{k-j}(\vartheta)X_{2^{j}t}(\vartheta)\|_{\infty}=\|\hat{D}_{0}^{k-j}( \vartheta)\hat{X}_{2^{j}t}(\vartheta)\|_{\infty}\leq\frac{1}{2^{k-j-1}}C| \vartheta|^{3}.\]
Combining this and (3.3), we get
\[\|\widetilde{S}_{2^{k}t}\|_{\infty}\leq\|\widetilde{S}_{t}\|_{\infty}+2C| \vartheta|^{3}+\sum_{j=2}^{k-1}\frac{1}{2^{k-j-1}}C|\vartheta|^{3}\leq\| \widetilde{S}_{t}\|_{\infty}+4C|\vartheta|^{3}.\]
In other words, appending a block of zeros of arbitrary length to the binary expansion of \(t\) increases \(\|\widetilde{S}_{t}(\vartheta)\|_{\infty}\) by at most \(4C|\vartheta|^{3}\).
A similar argument also works for appending a block of \(\mathtt{1}\)s, so we omit some of the details. We have the identity
\[\widetilde{S}_{2^{k}t+2^{k}-1}=D_{1}^{k}\widetilde{S}_{t}-\sum_{j=1}^{k}D_{1} ^{k-j}X_{2^{j}t+2^{j}-1}.\]
This time, for \(j\geq 2\) we have that the last two components of \(X_{2^{j}t+2^{j}-1}\) are 0. Let \(\hat{X}_{2^{j}t+2^{j}-1}\) be the vector obtained by deleting these components, and let \(\hat{D}_{1}(\vartheta)\) be the matrix obtained by deleting the last two rows and columns from \(D_{1}(\vartheta)\). Then for \(2\leq j\leq k-1\) we get
\[\|D_{1}^{k-j}(\vartheta)X_{2^{j}t+2^{j}-1}(\vartheta)\|_{\infty}=\|\hat{D}_{1} ^{k-j}(\vartheta)\hat{X}_{2^{j}t+2^{j}-1}(\vartheta)\|_{\infty}\leq\frac{1}{2 ^{k-j-1}}C|\vartheta|^{3}.\]
As a consequence, we again arrive at the inequality
\[\|\widetilde{S}_{2^{k}t+2^{k}-1}(\vartheta)\|_{\infty}\leq\|\widetilde{S}_{t} (\vartheta)\|_{\infty}+4C|\vartheta|^{3}.\]
Hence, since the binary expansion of \(t\) is obtained from \(t=0\) by appending at most \(2N\) such blocks, and every component of \(\widetilde{S}_{0}(\vartheta)\) is bounded by an absolute constant times \(|\vartheta|^{3}\), our claim holds with a suitable absolute constant \(K\).
Finally, we are ready to give an upper bound on the error \(\widetilde{\gamma}_{t}\) of approximation of \(\gamma_{t}\) by \(\gamma_{t}^{*}\). We will use the equality
\[\gamma_{t}^{*}(\vartheta)=(\alpha_{t}^{*}(\vartheta)\beta_{t}^{*}(\vartheta) )^{1/2}\cdot\begin{cases}1&\text{if $t$ is even,}\\ \exp(-\vartheta^{2}/8)&\text{if $t$ is odd,}\end{cases}\]
which follows straight from the definition of \(\gamma_{t}^{*}\).
**Proposition 3.9**.: _There exists an absolute constant \(L\) such that for all \(t\in\mathbb{N}\) and \(|\vartheta|\leq\pi\) we have_
\[|\widetilde{\gamma}_{t}(\vartheta)|\leq LN|\vartheta|^{3},\]
_where \(t\in\mathbb{N}\) and \(N=|t|_{01}\)._
Proof.: By Lemma 3.8 for all \(t\in\mathbb{N}\) we have
\[|\widetilde{\alpha}_{t}(\vartheta)| \leq KN|\vartheta|^{3},\] \[|\widetilde{\beta}_{t}(\vartheta)| \leq KN|\vartheta|^{3},\]
which means that also
\[\left|\gamma_{t}(\vartheta)-\frac{\alpha_{t}^{*}(\vartheta)+\beta_{t}^{*}( \vartheta)}{2}\right|\leq KN|\vartheta|^{3}.\]
Furthermore, if \(t\) is even, then we get
\[\frac{\alpha_{t}^{*}(\vartheta)+\beta_{t}^{*}(\vartheta)}{2}-\gamma_{t}^{*}( \vartheta)=\frac{\alpha_{t}^{*}(\vartheta)}{2}\left(\left(\frac{\beta_{t}^{*}( \vartheta)}{\alpha_{t}^{*}(\vartheta)}\right)^{1/2}-1\right)^{2}.\]
If \(t\) is odd, then
\[\frac{\alpha_{t}^{*}(\vartheta)+\beta_{t}^{*}(\vartheta)}{2}-\gamma_{t}^{*}( \vartheta)=\frac{\alpha_{t}^{*}(\vartheta)}{2}\left(1+\frac{\beta_{t}^{*}( \vartheta)}{\alpha_{t}^{*}(\vartheta)}-2\left(\frac{\beta_{t}^{*}(\vartheta)} {\alpha_{t}^{*}(\vartheta)}\right)^{1/2}\exp\left(-\frac{\vartheta^{2}}{8} \right)\right).\]
In either case, in the same fashion as in Lemma 3.7 we can show that
\[\left|\frac{\alpha_{t}^{*}(\vartheta)+\beta_{t}^{*}(\vartheta)}{2}-\gamma_{t }^{*}(\vartheta)\right|\leq K_{1}|\vartheta|^{3}\]
for some constant \(K_{1}\). Choosing \(L=K+K_{1}\), we get the result.
### An upper bound on the characteristic function
We now obtain the second main ingredient of our proof, namely an upper bound on \(|\gamma_{t}(\vartheta)|\).
**Proposition 3.10**.: _Assume that \(t\in\mathbb{N}\). If \(|t|_{\texttt{01}}=N\), then for \(|\vartheta|\leq\pi\) we have_
\[|\gamma_{t}(\vartheta)|\leq\left(1-\frac{1}{128}\vartheta^{2}\right)^{\lfloor N /2\rfloor}.\]
Proof.: The statement will follow immediately from the following, more general inequality:
\[\|S_{t}(\vartheta)\|_{\infty}\leq\left(1-\frac{1}{128}\vartheta^{2}\right)^{ \lfloor N/2\rfloor}\|S_{0}(\vartheta)\|_{\infty}.\]
Let \(t\) have binary expansion \(\varepsilon_{\nu}\varepsilon_{\nu-1}\cdots\varepsilon_{1}\varepsilon_{0}\). Then by Proposition 3.3 we have
\[S_{t}=D_{\varepsilon_{0}}D_{\varepsilon_{1}}\cdots D_{\varepsilon_{\nu}}S_{0}.\]
Because \(D_{0}S_{0}=S_{0}\), we can add a leading zero to the expansion of \(t\), so that it contains \(N\) occurrences of 01. Hence, it contains at least \(\lfloor N/2\rfloor\) non-overlapping strings from the set \(\{\texttt{0001},\texttt{0101},\texttt{1001},\texttt{1101}\}\) (strings of length 4 ending with 01). These in turn correspond to "disjoint" subproducts of the form \(D_{1}D_{0}D_{0}D_{0},D_{1}D_{0}D_{1}D_{0},D_{1}D_{0}D_{0}D_{1},D_{1}D_{0}D_{1}D_ {1}\) in the matrix product. We now bound the row-sum norm of each of these subproducts.
Letting \(x=\mathrm{e}(\vartheta)\) for brevity, we have for example
\[D_{1}(\vartheta)D_{0}^{3}(\vartheta)=\\ \frac{1}{16}\begin{pmatrix}3x+3+x^{-1}&3x+4&x^{-2}&x^{-3}&0&0\\ 2x^{2}+2x+1+x^{-1}+x^{-2}&2x^{2}+2x+1+2x^{-1}&x^{-3}&x^{-4}&0&0\\ 2x^{2}+3x+1+x^{-1}&2x^{2}+3x+2&x^{-2}&x^{-3}&0&0\\ 2x+3+x^{-2}&3x+2+x^{-1}&x^{-1}+x^{-3}&x^{-2}+x^{-4}&0&0\\ x^{2}+2x+2+x^{-1}&x^{2}+3x+2&x^{-1}+x^{-2}&x^{-2}+x^{-3}&0&0\\ x^{2}+2x+2+x^{-1}&x^{2}+3x+2&x^{-1}+x^{-2}&x^{-2}+x^{-3}&0&0\end{pmatrix}.\]
Observe that in each row there is an entry which contains a subsum of the form \(\mathrm{e}(k\vartheta)+\mathrm{e}((k+1)\vartheta)\) for some \(k\in\mathbb{Z}\). The absolute value of this expression satisfies
\[|\,\mathrm{e}(k\vartheta)+\mathrm{e}((k+1)\vartheta)|=|1+\exp(i\vartheta)|= \sqrt{2(1+\cos\vartheta)}=2\left|\cos\frac{\vartheta}{2}\right|\leq 2-\frac{ \vartheta^{2}}{8},\]
where we use the inequality \(|\cos\varphi|\leq 1-\varphi^{2}/4\) for \(|\varphi|\leq\pi/2\). By trivially bounding the remaining terms in each row, we get
\[\|D_{1}(\vartheta)D_{0}^{3}(\vartheta)\|_{\infty}\leq\frac{1}{16}\left(16- \frac{\vartheta^{2}}{8}\right)=1-\frac{\vartheta^{2}}{128}.\]
The same argument works for the other length-4 matrix products. Since \(\|D_{0}(\vartheta)\|_{\infty}=\|D_{1}(\vartheta)\|_{\infty}=1\) and \(\|S_{0}(\vartheta)\|_{\infty}=1\), our result follows by submultiplicativity of \(\|\cdot\|_{\infty}\).
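The key norm estimate is easy to test numerically for all four length-4 subproducts; a sketch, with the matrices transcribed from the definitions of \(D_{0}\) and \(D_{1}\) above:

```python
import numpy as np

def e(x):
    return np.exp(1j * x)

def D(eps, th):
    if eps == 0:
        M = [[1, 1,     0,     0,      0, 0],
             [1, 1,     0,     0,      0, 0],
             [1, e(th), 0,     0,      0, 0],
             [0, 0,     1,     e(-th), 0, 0],
             [0, 0,     1,     1,      0, 0],
             [0, 0,     e(th), e(-th), 0, 0]]
    else:
        M = [[0, 0, 1,     1,      0, 0],
             [0, 0, e(th), e(-th), 0, 0],
             [0, 0, e(th), 1,      0, 0],
             [0, 0, 0,     0,      1, e(-th)],
             [0, 0, 0,     0,      1, 1],
             [0, 0, 0,     0,      1, 1]]
    return 0.5 * np.array(M, dtype=complex)

def row_sum_norm(A):                     # the norm ||.||_inf used in the text
    return np.abs(A).sum(axis=1).max()

for th in np.linspace(-np.pi, np.pi, 41):
    for x in (0, 1):
        for w in (0, 1):
            P = D(1, th) @ D(0, th) @ D(x, th) @ D(w, th)
            assert row_sum_norm(P) <= 1 - th ** 2 / 128 + 1e-12
```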
### Bounds on the variance
Finally, we show that \(v_{t}\asymp N\), where \(N=|t|_{01}\) is the number of maximal blocks of 1s in the binary expansion of \(t\).
**Proposition 3.11**.: _Let \(t\in\mathbb{N}\) and \(N=|t|_{01}\). We have_
\[\frac{3}{4}N\leq v_{t}\leq 5N.\]
Proof.: We first prove by induction that
\[|v_{t+1}-v_{t}|\leq 3/2. \tag{3.4}\]
This holds for the base case \(t=0\). Using the relations (2.2), we get
\[v_{4t+1}-v_{4t} =\frac{1}{2}(v_{2t+1}-v_{2t})+\frac{3}{4},\] \[v_{4t+2}-v_{4t+1} =\frac{1}{2}(v_{2t+1}-v_{2t})+\frac{1}{4},\] \[v_{4t+3}-v_{4t+2} =\frac{1}{2}(v_{2t+2}-v_{2t+1})-\frac{1}{4},\] \[v_{4t+4}-v_{4t+3} =\frac{1}{2}(v_{2t+2}-v_{2t+1})-\frac{3}{4}.\]
Our claim quickly follows from the inductive assumption.
Starting with the lower bound in the statement, by (2.2) we get \(v_{2t}\geq v_{t}\) and
\[v_{2t+1}-v_{t}=\frac{1}{2}(v_{t+1}-v_{t})+\frac{3}{4}\geq 0,\]
where we have used (3.4). In other words, appending a digit to the binary expansion of \(t\) does not decrease \(v_{t}\). At the same time, we have
\[v_{4t+1}=\frac{1}{2}(v_{2t}+v_{2t+1})+\frac{3}{4}\geq\frac{3}{4}v_{t}+\frac{1} {4}v_{t+1}+\frac{9}{8}.\]
Subtracting \(v_{t}\) from both sides and using (3.4), we get
\[v_{4t+1}-v_{t}\geq\frac{3}{4}.\]
Hence, for all \(p,q\geq 1\) appending the block \(\mathsf{0}^{p}\mathtt{1}^{q}\) to the binary expansion of \(t\) increases \(v_{t}\) by at least \(3/4\). The lower bound in the statement follows.
Moving on to the upper bound, for any \(k\geq 1\) by (2.2) we have
\[v_{2^{k}t}=v_{2t}\in\{v_{t},v_{t}+1\},\]
as well as
\[|v_{2^{k}t+2^{k}-1}-v_{t}|\leq|v_{2^{k}(t+1)-1}-v_{2^{k}(t+1)}|+|v_{2^{k}(t+1) }-v_{t+1}|+|v_{t+1}-v_{t}|\leq\frac{3}{2}+1+\frac{3}{2}=4.\]
This means that for all \(p,q\geq 1\) appending the block \(\mathsf{0}^{p}\mathtt{1}^{q}\) to the binary expansion of \(t\) increases \(v_{t}\) by at most \(5\).
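A randomized numerical check of both bounds (reusing `v` from the sketch after the remark on (2.2)):

```python
import random

def blocks01(t):
    """|t|_01: the number of maximal blocks of 1s in the binary expansion of t."""
    return sum(1 for b in bin(t)[2:].split("0") if b)

for _ in range(10_000):
    t = random.randrange(1, 1 << 22)
    N = blocks01(t)
    assert 0.75 * N <= v(t) <= 5 * N
```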
### Finishing the proof of the main result
In order to complete the proof of Theorem 2.1, we recall the paper [18] by the second author and Wallner. The line of argument we are going to present is analogous; however, we establish a refinement of the error bound. Our Proposition 3.10 takes the role of Lemma 2.7 in that paper, while Proposition 3.9 is analogous to [18, Proposition 3.1]. In our argument, we will see that the Gauss integral
\[\int_{-\infty}^{+\infty}\exp\left(-\frac{v_{t}}{2}\vartheta^{2}-ik\vartheta\right)\,\mathrm{d}\vartheta\]
is responsible for the emergence of a Gaussian in the main term of (2.3).
Let us start with the definition
\[\vartheta_{0}=16\sqrt{\frac{\log N}{N}}\]
of a _cutoff point_, at which we split our integral. We have
\[2\pi c_{t}(k) =\int_{-\pi}^{\pi}\gamma_{t}(\vartheta)\operatorname{e}(-k \vartheta)\,\mathrm{d}\vartheta\] \[=\int_{-\vartheta_{0}}^{\vartheta_{0}}\gamma_{t}^{*}(\vartheta) \operatorname{e}(-k\vartheta)\,\mathrm{d}\vartheta+\int_{-\vartheta_{0}}^{ \vartheta_{0}}\widetilde{\gamma}_{t}(\vartheta)\operatorname{e}(-k\vartheta) \,\mathrm{d}\vartheta+\int_{\vartheta_{0}\leq|\vartheta|\leq\pi}\gamma_{t}( \vartheta)\operatorname{e}(-k\vartheta)\,\mathrm{d}\vartheta\] \[=I_{1}+I_{2}+I_{3}.\]
Expanding the definition of \(\gamma_{t}^{*}\), we get
\[I_{1}=\int_{-\infty}^{+\infty}\exp\!\left(-\frac{v_{t}}{2}\vartheta^{2}-ik \vartheta\right)\mathrm{d}\vartheta-\int_{|\vartheta|\geq\vartheta_{0}}\exp \!\left(-\frac{v_{t}}{2}\vartheta^{2}-ik\vartheta\right)\mathrm{d}\vartheta= I_{1}^{(1)}-I_{1}^{(2)}.\]
By completing the square, we have
\[\frac{v_{t}}{2}\vartheta^{2}+ik\vartheta=\frac{v_{t}}{2}\left(\vartheta+\frac {ik}{v_{t}}\right)^{2}+\frac{k^{2}}{2v_{t}}.\]
Evaluating a complete Gauss integral, where we may discard the imaginary shift \(ik/v_{t}\), we obtain
\[I_{1}^{(1)}=\sqrt{\frac{2\pi}{v_{t}}}\exp\!\left(-\frac{k^{2}}{2v_{t}}\right)\!,\]
which gives the main term after division by \(2\pi\). Meanwhile, the first error term satisfies
\[\big{|}I_{1}^{(2)}\big{|}\leq\int_{|\vartheta|\geq\vartheta_{0}}\exp\!\left(-\frac{v_{t}}{2}\vartheta^{2}\right)\mathrm{d}\vartheta\leq\frac{2}{v_{t}\vartheta_{0}}\exp\!\left(-\frac{v_{t}}{2}\vartheta_{0}^{2}\right)\leq\frac{N^{-96-1/2}}{6\sqrt{\log N}},\]
where the second inequality follows from the estimate
\[\int_{x_{0}}^{\infty}\exp\!\left(-cx^{2}\right)\mathrm{d}x\leq\int_{x_{0}}^{ \infty}\frac{x}{x_{0}}\exp\!\left(-cx^{2}\right)\mathrm{d}x=\frac{1}{2cx_{0}} \exp\!\left(-cx_{0}^{2}\right)\!,\]
valid for any \(c,x_{0}>0\), and the third one follows from \(v_{t}\geq\frac{3}{4}N\) and the choice of \(\vartheta_{0}\).
Furthermore, by Proposition 3.9 we have
\[|I_{2}|\leq\int_{-\vartheta_{0}}^{\vartheta_{0}}LN|\vartheta|^{3}\,\mathrm{d}\vartheta=\frac{LN\vartheta_{0}^{4}}{2}=2^{15}L\,\frac{(\log N)^{2}}{N}=\mathcal{O}\!\left(N^{-1}(\log N)^{2}\right).\]
Finally, by Proposition 3.10 we get
\[\begin{split}|I_{3}|&\leq\int_{\vartheta_{0}\leq| \vartheta|\leq\pi}\!\left(1-\frac{1}{128}\vartheta^{2}\right)^{\left\lfloor N /2\right\rfloor}\mathrm{d}\vartheta\leq 2\int_{\vartheta_{0}}^{\pi}\exp\!\left(- \frac{N-1}{256}\vartheta^{2}\right)\mathrm{d}\vartheta\leq 2\pi\exp\! \left(-\frac{N-1}{256}\vartheta_{0}^{2}\right)\\ &=2\pi N^{-(N-1)/N}=\mathcal{O}\!\left(N^{-1}\right)\!.\end{split}\]
The largest error term is thus \(\mathcal{O}\!\left(N^{-1}\log^{2}N\right)\). This finishes the proof of our main theorem.
### Acknowledgements
The research topic treated in the present paper was proposed, independently, to the first author (by Maciej Ulas), and to the second author (by Jean-Paul Allouche).
Part of the research for this paper was conducted when B. Sobolewski was visiting L. Spiegelhofer at the Montanuniversität Leoben.
|
2309.13670 | Discovery of a one-sided radio filament of PSR J0538+2817 in S147:
escape of relativistic PWN leptons into surrounding supernova remnant? | We report the discovery of a faint radio filament near PSR J0538+2817 in the
NVSS, CGPS, and the Rapid ASKAP Continuum Survey data. This pulsar is plausibly
associated with the supernova that gave rise to the Spaghetti Nebula (Simeis
147). The structure is one-sided and appears to be almost aligned (within 17
degrees) with the direction of the pulsar's proper motion, but in contrast to
the known cases of pulsar radio tails, it is located ahead of the pulsar. At
the same time, this direction is also approximately (within 5 degrees)
perpendicular to the axis of the extended non-thermal X-ray emission around the
pulsar. No X-ray or optical emission is detected from the filament region,
although the end point of the radio filament appears to be adjacent to a
filament of H$_\alpha$ emission. We speculate that this structure might
represent a filament connecting pulsar wind nebula with the ambient
interstellar medium filled with relativistic electrons escaping the pulsar
nebula, i.e. a radio analogue of X-ray filaments of Guitar and Lighthouse PWNs
and filaments of non-thermal radio emission in the Galactic Center. | Ildar Khabibullin, Eugene Churazov, Andrei Bykov, Nikolai Chugai, Igor Zinchenko | 2023-09-24T15:35:45Z | http://arxiv.org/abs/2309.13670v2 | # Discovery of a one-sided radio filament of PSR J0538+2817 in S147:
###### Abstract
We report the discovery of a faint radio filament near PSR J0538+2817 in the NVSS, CGPS, and the Rapid ASKAP Continuum Survey data. This pulsar is plausibly associated with the supernova that gave rise to the Spaghetti Nebula (Simeis 147). The structure is one-sided and appears to be almost aligned (within 17 degrees) with the direction of the pulsar's proper motion, but in contrast to the known cases of pulsar radio tails, it is located ahead of the pulsar. At the same time, this direction is also approximately (within 5 degrees) perpendicular to the axis of the extended non-thermal X-ray emission around the pulsar. No X-ray or optical emission is detected from the filament region, although the end point of the radio filament appears to be adjacent to a filament of H\({}_{\alpha}\) emission. We speculate that this structure might represent a filament connecting pulsar wind nebula with the ambient interstellar medium filled with relativistic electrons escaping the pulsar nebula, i.e. a radio analogue of X-ray filaments of Guitar and Lighthouse PWNs and filaments of non-thermal radio emission in the Galactic Center.
keywords: ISM: supernova remnants - Interstellar Medium (ISM), Nebulae, radiation mechanisms: thermal - Physical Data and Processes, X-rays: general - Resolved and unresolved sources as a function of wavelength, Galaxy: halo - The Galaxy
## 1 Introduction
The collapse of the stellar core of massive (\(M\gtrsim 8M_{\odot}\)) stars results in an energetic shock wave being launched, which is capable of disrupting the star, accelerating the debris to large (>10,000 km/s) velocities, and giving rise to the spectacular supernova phenomenon (e.g. Janka 2012). In many cases, the collapse of the core also leads to the formation of a rotating and magnetized neutron star; such objects are believed to form a diverse population of isolated neutron stars, magnetars and pulsars (e.g. Popov and Turolla 2012).
Pulsars are capable of channelling a certain fraction of their rotational energy into flows of highly relativistic particles in the form of an equatorial wind, which, after being stopped by the surrounding medium, forms the so-called Pulsar Wind Nebula (PWN, e.g., Gaensler and Slane 2006). PWNe are considered an important source of Galactic leptons of very high energies, but the exact way in which these leptons escape is still unclear (Bykov et al. 2017; Bucciantini 2018).
Since the lifetime of pulsars is long, many of those having a high enough initial velocity manage to overrun the decelerating supernova forward shock wave and start propagating through the undisturbed ISM (e.g., Faucher-Giguère and Kaspi 2006).
Besides that, the linear tail-like structures are observed in some cases, probably being remnants of the PWN stripped by ram pressure of the inflowing ISM. For instance, the cases of the bow shock and 6'-7' radio tails were found in PSR J0002+6216 (the Cannonball Pulsar, Schinzel et al. 2019; Kumar et al. 2023).
Radio imaging and polarization observations of a bow-shock PWN produced by PSR J1437-5959 (at Molonglo Observatory Synthesis Telescope and ATCA, Ng et al. 2012) showed about 10' extension nearly linear filament in the Frying Pan (G315.9-0.0) supernova remnant directed nearly radially outward from the rim of the shell. The magnetic field geometry inferred from radio polarimetry shows a good alignment with the tail orientation, which could be a result of high flow speed. There are also hints that the postshock wind has a low magnetization and is dominated by electrons and positrons in energy. Also, a tail is seen in the fast-moving pulsar PSR J0908-4913 associated with SNR (Johnston and Lower 2021). Numerous radio-emitting filaments observed in the MeerKAT (Heywood et al. 2022; Yusef-Zadeh et al. 2022) and Karl Jansky Very Large Array (Pare et al. 2022) data of the Galactic Center might also be relics of the pulsar-injected particles (e.g., Barkov and Lyutikov 2019).
Even more intriguing are perpendicular linear structures observed in X-rays in the Guitar (see e.g. Hui and Becker 2007; Johnson and Wang 2010; de Vries et al. 2022) and Lighthouse nebulae (Pavan et al. 2014), as well as from PSR J1509-5850 (Klingler et al. 2016). Recently, Klingler et al. (2023) reported on _NuSTAR_ observations of PSR J1101-6101 and its misaligned outflow (which is the Lighthouse nebula) and detected the outflow up to 25 keV. They find clear evidence of spectral X-ray cooling with distance from the pulsar. These
might be a result of reconnection of the PWN magnetic field lines with the field lines of the ISM, resulting in a flow of very energetic particles escaping PWNe and producing elongated synchrotron X-ray filaments (Bandiera, 2008; Bykov et al., 2017; Barkov et al., 2019; Olmi & Bucciantini, 2019).
On the other hand, some of the extended non-thermal emission structures associated with pulsars, are observed inside supernova remnants. An elongated structure called Vela X cocoon which is bright in X-rays and gamma-rays is apparently located inside the Vela SNR and can be associated with Vela PWN (Slane et al., 2018). From the X-ray spectral data analysis the authors found in the cocoon the likely presence of both shocked ejecta material and the non-thermal PWN emission components. The X-ray data can possibly be interpreted as the result of a disruption of the Vela PWN by the asymmetric reverse shock of the Vela SNR. The asymmetry of the reverse shock was attributed to the large scale density gradient as it was predicted by Blondin et al. (2001).
Here we report the discovery of a radio-emitting filament pointing towards PSR J0538+2817, associated with the Simeis 147 SNR (S147, e.g. Lozinskaia, 1976), commonly referred to as the Spaghetti Nebula. We argue that the pulsar is likely still located within its parent supernova remnant and might be interacting either with the unshocked ejecta, the shocked ejecta, or the shocked interstellar medium. The orientation of the filament precludes its interpretation as a ram-pressure-stripped tail of the PWN, and allows us to put forward several possibilities for its formation, depending on the assumed 3D position of the PSR inside the nebula. We discuss them in relation to previous works which considered different phases of the PWN-SNR interaction (Blondin et al., 2001; van der Swaluw et al., 2004; Blondin & Chevalier, 2017; Olmi, 2023) and formulate predictions of the possible scenarios for future observations.
## 2 Radio Observations
Both PSR J0538+2817 (Anderson et al., 1996; Kramer et al., 2003; Ng et al., 2007; Yao et al., 2021) and its host Spaghetti nebula (Denoyer, 1974; Sofue et al., 1980; Fuerst & Reich, 1986; Xiao et al., 2008) have been extensively studied at radio frequencies, which allowed measurements of the pulsar period (134 ms), its derivative (corresponding to a spin-down luminosity of \(5\times 10^{34}\) erg/s), its parallax (corresponding to a distance of 1.4 kpc, e.g., Ng et al., 2007), and its proper motion (corresponding to a picture-plane velocity of 400 km/s directed away from the S147 center). Moreover, recent observations (Yao et al., 2021) put a constraint on the line-of-sight velocity, \(v=81^{+158}_{-150}\) km/s, implying a full 3D velocity smaller than 500 km/s (at the 1\(\sigma\) level). Assuming a close-to-spherical shape of the main S147 shell, this also implies that PSR J0538+2817 is located well within its parent SNR boundaries. The 3D alignment of the suggested pulsar rotation axis and its proper motion might be an indication of an explosion featuring the electromagnetic rocket effect and the launching of a powerful relativistic jet (Xu et al., 2022).
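The quoted bound on the full space velocity follows from adding the transverse and line-of-sight components in quadrature; a minimal sketch of this arithmetic (assuming the quoted central values and treating the asymmetric error as a simple 1\(\sigma\) offset) is:

```python
import numpy as np

# Measured velocity components (Ng et al. 2007; Yao et al. 2021)
v_t = 400.0         # picture-plane (transverse) velocity, km/s
v_r = 81.0          # line-of-sight velocity, km/s
v_r_err_up = 158.0  # 1-sigma upper uncertainty on v_r, km/s

# Full 3D velocity for the central value and for the 1-sigma upper bound
print(f"v_3D (central)    ~ {np.hypot(v_t, v_r):.0f} km/s")               # ~408 km/s
print(f"v_3D (1-sigma up) ~ {np.hypot(v_t, v_r + v_r_err_up):.0f} km/s")  # ~466 km/s < 500 km/s
```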
Here we report images of the S147 region based on the publicly available data of the Rapid ASKAP Continuum Survey (RACS) by the Australian Square Kilometre Array Pathfinder (ASKAP) at 887.5 MHz (RACS-low, McConnell et al., 2020) and 1367.5 MHz (RACS-mid, Duchesne et al., 2023). Figure 1 shows RGB composite images (in Galactic coordinates) combining slightly smoothed (to suppress noise) RACS maps at 887.5 MHz (in red) and 1367.5 MHz (in green) along with a wavelet-filtered (to emphasise the filamentary structure) H\({}_{\alpha}\) image from the IGAPS survey (in blue; Greimel et al., 2021). Clearly visible in the left panel of Figure 1 is a close correspondence between some of the radio (primarily at lower frequencies) and H\({}_{\alpha}\)-emitting filaments, while some of the bright H\({}_{\alpha}\) features lack radio counterparts. Although of great interest as sites of energetic interaction between the cold, hot and relativistic phases of the ISM, further consideration of these filaments is not the subject of the current paper.
Instead, of interest here is a distinct filamentary structure, visible (in the centre of the dashed rectangular 1x1 deg\({}^{2}\) region) in both the 887.5 MHz and 1367.5 MHz images, which lacks an optical counterpart, as highlighted in the right panel of Figure 1, and appears to be closely connected to the pulsar location.
The length of this almost linear structure is \(\approx 6\) arcmin (\(\sim 2.5\) pc in projection at \(d=1.4\) kpc), while its direction turns out to be aligned with the direction of the pulsar's proper motion (the motion of the pulsar over the last 40 kyrs is shown with the white arrow in Figure 1, based on the proper motion measurements by Ng et al., 2007). The filament, however, is located in front of (ahead of) the pulsar, contrary to the known radio tails of other pulsars.
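The projected length quoted above is just the small-angle relation \(l=\theta d\); a minimal sketch (assuming \(d=1.4\) kpc exactly):

```python
import numpy as np

d_pc = 1.4e3                              # distance, pc
theta_rad = (6.0 / 60.0) * np.pi / 180.0  # 6 arcmin in radians

print(f"projected length ~ {d_pc * theta_rad:.2f} pc")  # ~2.44 pc
```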
At the same time, the direction of the filament also appears to be close to the direction perpendicular to the elongation axis of the non-thermal X-ray emission from the PWN observed by _Chandra_ in the direct vicinity of the pulsar (this direction is shown with the dashed line in the right panel of Figure 1). In contrast to the H\({}_{\alpha}\) filaments of the Spaghetti nebula, which are also visible in (mostly low-frequency) radio emission, the pulsar filament lacks an optical counterpart (but see Section 4 for the discussion of a possible connection of the filament's endpoint to a bright H\({}_{\alpha}\) substructure). No similar radio structures are visible along the pulsar's path from the center of S147.
Figure 2 shows the morphology of the radio emission at 887.5 MHz (left panel) and 1367.5 MHz (right panel) in equatorial coordinates (J2000) and on a linear scale. The colour scheme is set symmetrically around zero, so that white corresponds to the zero level, while blue regions show noise-level fluctuations. Given the differences in beam size between the spectral channels, no significant difference in morphology between the bands is visible (black contours on both panels of Figure 2 show the 3 and 5 RMS levels of the low-frequency map, allowing comparison of the emission morphologies across the bands). The only exception might be the presence of the Northern non-linear extension visible in the 1367.5 MHz map, whose significance is rather low, however.
Given that at low significance levels interferometric radio maps contain plenty of artefacts, in particular those caused by bright point sources (cf. the bright red stripes in the bottom left part of Figure 1 caused by the Crab nebula), we also examine the archival data of the NVSS (Condon et al., 1998) and CGPS (Taylor et al., 2003) surveys at 1.4 GHz. Figure 3 shows a comparison of the RACS-mid (left panel), CGPS (middle panel) and NVSS (right panel) images, where black contours reflect the morphology of the filament emission in the ASKAP image and green regions mark the positions and approximate extents of the sources in the NVSS catalogue (Condon et al., 1998). Remarkably, the filament emission is visible in both the NVSS and CGPS images, including the possible Northern extension. Moreover, this emission is recognized as a pair of mildly extended sources in the NVSS catalogue, with a combined flux density of \(\sim 10\) mJy at 1.4 GHz, i.e. \(\sim 3\) times brighter than the pulsar itself, which has \(3.5\pm 0.5\) mJy at 1.4 GHz (Condon et al., 1998).
For PSR J0538+2817, the flux density is \(7.5\pm 1\) mJy at 887.5 MHz (ASKAP), the spectral index is close to \(\sim 0.5\), and its radio luminosity is at the level of \(\sim 1.5\times 10^{28}\) erg/s at \(d=1.4\) kpc. The quality of the publicly available RACS data precludes us from drawing firm conclusions regarding the spectral shape of the diffuse filament emission, but a simple "hardness ratio" comparison of the
images indicates that it is not strongly dissimilar to that of the pulsar itself. As a result, we estimate the \(\nu L_{\nu}\) luminosity of the diffuse emission at the level of at least \(\sim 5\times 10^{28}\) erg/s, which is six orders of magnitude below the pulsar's spin-down luminosity \(L_{\rm sd}\sim 5\times 10^{34}\) erg/s.
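These estimates follow from \(\nu L_{\nu}=4\pi d^{2}\,\nu S_{\nu}\); a minimal sketch reproducing the quoted order of magnitude (assuming \(d=1.4\) kpc and taking the band-centre frequencies and catalogue flux densities at face value):

```python
import numpy as np

PC_CM = 3.086e18
MJY = 1.0e-26  # 1 mJy in erg/s/cm^2/Hz

d_cm = 1.4e3 * PC_CM          # 1.4 kpc in cm
area = 4.0 * np.pi * d_cm**2  # 4*pi*d^2

L_psr = area * 887.5e6 * 7.5 * MJY  # pulsar: 7.5 mJy at 887.5 MHz
L_fil = area * 1.4e9 * 10.0 * MJY   # filament: ~10 mJy at 1.4 GHz

print(f"pulsar   nu*L_nu ~ {L_psr:.1e} erg/s")  # ~1.6e28 erg/s
print(f"filament nu*L_nu ~ {L_fil:.1e} erg/s")  # ~3e28 erg/s, of order the quoted ~5e28
```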
More robust radio data are needed to improve on these estimates, and the required sensitivity level is well within the reach of currently operating facilities, so we leave a more elaborate analysis for future work. Here, we conclude that the radio filament emission is not dissimilar to the pulsar's emission in spectral shape and luminosity and does not show strong variations between the epochs of radio observations separated by more than 20 yrs or so.
Finally, we note that the direction of the discovered filament is also close to the magnetic field vector direction derived from the polarimetric observations of the Spaghetti nebula at \(\lambda=6\) cm (5 GHz) by the Urumqi telescope (Xiao et al. 2008). Figure 4 shows the intensity map of the \(\lambda=6\) cm radio emission (linear scale, equatorial coordinates) with the direction of the magnetic field vector overlaid in black. Although, in accordance with expectations in the compression scenario, the magnetic field direction follows the brightest regions of the SNR's rim, no sharp change in the B direction across the SNR boundary is visible, indicating that the observed B direction is strongly affected, if not determined, by the B direction in the interstellar medium unaffected by the SNR shock wave (Xiao et al. 2008). For the region of the Galaxy in the direction of the S147 nebula, the magnetic field is known to be aligned parallel to the Galactic disk, as demonstrated by the synchrotron and dust polarization maps obtained by _Planck_ (e.g. Fig. 23 and 25 in Planck Collaboration et al. 2016). This direction, corresponding to a vertical orientation in equatorial coordinates, is indeed very close to the orientation of the discovered radio feature. Given the poor spatial resolution of the Urumqi map, however, it is difficult to draw any firm conclusions on the significance of this alignment and whether it would still hold when scales comparable to the size of the filament are resolved. Polarimetric observations with ASKAP or MeerKAT will certainly be invaluable for clarifying this.
## 3 H\({}_{\alpha}\) view
As mentioned earlier, no optical counterpart of the radio structure is visible in the deep IGAPS images, as illustrated more explicitly in the composite image shown in Figure 5, combining the NVSS image at 1.4 GHz in red and the IGAPS H\(\alpha\) image in cyan (equatorial coordinates), with the contours of the 1367.5 MHz emission from RACS-mid overlaid in white. Clearly, the main straight body of the radio filament corresponds, if anything, to a slight depression of the H\({}_{\alpha}\) emission, which is likely simply a result of fluctuations in the surface brightness of the emission from the Spaghetti nebula.
The Northern extension of the radio filament visible at lower significance appears to coincide with the bright portion of an H\({}_{\alpha}\) filament, which might be either an indication of its unrelated (with respect to the pulsar) nature, or reveal a connection between the two structures.
The direct vicinity of the pulsar itself also appears dark in H\({}_{\alpha}\) emission (see Figure 6), with no signatures of a PWN bow shock, at least at the level corresponding to the H\({}_{\alpha}\) emission of the Spaghetti nebula, which most likely arises from the SNR-driven shock propagating in the cold and neutral unperturbed interstellar medium. Thus, we conclude that PSR J0538+2817 is indeed most likely still well within the boundary of its parent supernova remnant and propagates through relatively tenuous unshocked ejecta or the hot interior between the forward and reverse shocks.
## 4 X-ray view
PSR J0538+2817 was a target of both _Chandra_ and _XMM-Newton_ observations, which allowed the X-ray PWN to be resolved (e.g. Romani & Ng 2003; Ng et al. 2007) and X-ray pulsations to be discovered (McGowan et al. 2003). Here we return to these observations primarily in order to search for a possible X-ray counterpart of the newly discovered radio filament.
The initial processing of the _Chandra_ data was performed using the
Figure 1: Composite RGB images showing slightly smoothed RACS maps at 887.5 MHz (in red) and 1367.5 MHz (in green) and a wavelet-filtered (to emphasise the filamentary structure) H\({}_{\alpha}\) image from the IGAPS survey (in blue). The left panel shows the full extent of the S147 nebula with the location of the PSR indicated by the arrow, whose direction and length correspond to the proper motion of the pulsar over the last 40 kyrs. The square is 1 deg on a side and depicts the zoom-in region shown in the right panel. In addition to the direction of the proper motion, the right panel also shows the orientation of the mildly extended X-ray emission detected by _Chandra_ above 4 keV, and the direction perpendicular to it. The circle is centred on the pulsar and has a radius of 6’. The white compass region shows equatorial (J2000) North (N) and East (E) directions.
latest calibration data and following the standard procedure described in Vikhlinin et al. 2005. Corrections for the exposure map variations, the vignetting effect of the telescope and subtraction of the particle background were done similarly to Churazov et al. 2012. The ObsIDs used for the analysis here were 2796, 5338, 6242 (PI: Roger Romani).
The exquisite spatial resolution of _Chandra_ makes it possible to resolve the rich morphology of the X-ray emission on arcsecond scales around the pulsar. The soft X-ray emission extends out to \(\sim 7\) arcsec and has a wind-like shape. At higher energies, likely dominated by the non-thermal emission of the PWN, the emission extends to 4.5 arcsec, and its axis is consistent with the measurement by Ng et al. 2007.
Ng et al. 2007 estimated the (unabsorbed) flux of the PWN emission at the level of \(2.4\times 10^{-14}\) erg/s/cm\({}^{2}\) (0.5-5 keV), resulting in a luminosity of \(\sim 6.5\times 10^{30}\) erg/s, i.e. \(\eta\sim 10^{-4}\) of the spin-down luminosity.
PSR J0538+2817 was observed with _XMM-Newton_ in 2002 (ObsID: 0112200401). Only one instrument (MOS1) of the European Photon Imaging Camera (EPIC) was operating in imaging mode and covered the filament area; the other two detectors were in timing mode. We use the MOS1 data for imaging and spectral analysis after subtraction of the particle and blank-field background contributions and correction for exposure map variations and the vignetting of the telescope. Figure 8 shows images (equatorial coordinates) of the X-ray surface brightness in the 0.5-3 keV (left panel) and 3-7 keV (right panel) bands. No excess emission is visible from the filament region highlighted by the green contours taken from the map of radio emission at 887.5 MHz.
To obtain an upper limit on the total X-ray emission from the filament region, we define three box regions, as indicated in cyan in Figure 8, with the middle one fully encompassing the radio filament and the adjacent two used for background estimation. As a result, we get an upper limit on the 0.5-8 keV surface brightness at the level of \(\sim 2\times 10^{-15}\) erg/s/cm\({}^{2}\)/arcmin\({}^{2}\) (which is \(\sim 10\%\) of the background level), resulting in a flux limit within the radio-bright region at the level of \(\sim 10^{-14}\) erg/s/cm\({}^{2}\).
Figure 3: Possible extension of the radio emission at 1.4 GHz based on the RACS-mid (left panel), CGPS (middle panel), and NVSS (right panel) data. The contours correspond to the RACS-mid image; the other black regions are as in Figure 2. Green regions show the locations and approximate shapes of the sources from the NVSS catalogue.
Figure 2: Morphology of the radio emission (in Jy/beam) at the RACS low (887.5 MHz, left) and mid (1367.5 MHz, right) frequencies, compared against the noise level and beam shapes, and shown in equatorial coordinates (J2000) for easier comparison with previous studies. The black contours show the 3 and 5 RMS levels of the low-frequency map (\(\sigma_{\rm low}\simeq 135\mu\)Jy/beam) on both the left and right panels. Regions are as in Fig. 1; the circle is centred on the pulsar and has a radius of \(6\arcmin\).
This translates into a luminosity limit at \(d=1.4\) kpc of \(L_{X}\lesssim 3\times 10^{30}\) erg/s. If the radio luminosity is at the level of \(L_{1\rm\ GHz}\sim 4\times 10^{28}\) erg/s at 1 GHz, this upper limit translates into a lower limit on the cross-band spectral index of \(\alpha_{XR}\gtrsim 0.75\).
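The cross-band limit is a one-line power-law argument: for \(S_{\nu}\propto\nu^{-\alpha}\) one has \(\nu L_{\nu}\propto\nu^{1-\alpha}\), so comparing the 1 GHz luminosity with the X-ray upper limit bounds \(\alpha\) from below. A minimal sketch (assuming a fiducial X-ray frequency of \(\sim 2\) keV for the 0.5-8 keV band):

```python
import numpy as np

L_x_lim = 3.0e30        # X-ray nu*L_nu upper limit, erg/s
L_r = 4.0e28            # radio nu*L_nu at 1 GHz, erg/s

nu_r = 1.0e9            # Hz
nu_x = 2.0e3 * 2.418e14 # ~2 keV in Hz (1 eV = 2.418e14 Hz)

# nu*L_nu ratio = (nu_x/nu_r)^(1-alpha)  =>  alpha >= 1 - log(ratio)/log(nu_x/nu_r)
alpha_min = 1.0 - np.log10(L_x_lim / L_r) / np.log10(nu_x / nu_r)
print(f"alpha_XR >~ {alpha_min:.2f}")  # ~0.78, consistent with the quoted 0.75
```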
On the other hand, we can put a lower limit on the magnetic field inside the filament, given that no X-ray emission from the inverse Compton (IC) radiation of the same population of electrons is observed at a level of 100 times the radio luminosity (e.g. Felten & Morrison 1966, see also eq. 5.10 in Sarazin 1988).
For a spectral index of the radio and X-ray emission \(\alpha_{X}=\alpha_{R}=0.5\), this limit corresponds to \(\sim 40\) nG, conservatively assuming that the radiation field responsible for the IC comes primarily from the Cosmic Microwave Background. For steeper spectra, this limit becomes even higher, reaching 300 nG for \(\alpha_{X}=\alpha_{R}=1.0\) (cf. Fig. 9, showing the dependence of the X-ray-to-radio ratio on the magnetic field strength and the slope of the particle distribution function). Although these values are still much smaller than one might expect for the (shocked) interstellar medium, in the unshocked ejecta case much smaller field strengths might occur, given that no mechanism of field amplification operates effectively in the free expansion phase.
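An order-of-magnitude version of this constraint can be reproduced in the delta-function approximation for both emission kernels: an electron of Lorentz factor \(\gamma\) radiates synchrotron photons at \(\nu_{s}\approx\gamma^{2}\nu_{g}\) (with \(\nu_{g}=eB/2\pi m_{e}c\)) and IC photons at \(\nu_{ic}\approx(4/3)\gamma^{2}\nu_{\rm CMB}\), with powers proportional to \(U_{B}\) and \(U_{\rm CMB}\), respectively. The sketch below is our own crude simplification (not the exact calculation behind Fig. 9) and is accurate to a factor of a few only:

```python
import numpy as np

def x_to_r_ratio(B_G, p=2.0, nu_r=1.0e9, E_x_keV=4.0):
    """nuL_nu(IC on CMB at E_x) / nuL_nu(synchrotron at nu_r) for dN/dgamma ~ gamma^-p,
    in the delta-function approximation for both kernels (order of magnitude only)."""
    nu_g = 2.8e6 * B_G                          # gyrofrequency, Hz
    nu_cmb = 1.6e11                             # mean CMB photon frequency, Hz
    nu_x = E_x_keV * 1.0e3 * 2.418e14           # X-ray frequency, Hz

    U_B = B_G**2 / (8.0 * np.pi)                # magnetic energy density, erg/cm^3
    U_cmb = 4.2e-13                             # CMB energy density, erg/cm^3

    g_s = np.sqrt(nu_r / nu_g)                  # gamma emitting synchrotron at nu_r
    g_c = np.sqrt(3.0 * nu_x / (4.0 * nu_cmb))  # gamma upscattering CMB to nu_x

    # nu*L_nu ~ gamma * P(gamma) * N(gamma) with P ~ gamma^2 * U,
    # so the monochromatic ratio picks up a factor (g_c/g_s)^(3-p)
    return (U_cmb / U_B) * (g_c / g_s) ** (3.0 - p)

for B_nG, p in ((40.0, 2.0), (300.0, 3.0)):  # (B, slope) pairs for alpha = 0.5 and 1.0
    print(f"B = {B_nG:5.0f} nG, p = {p:.0f} -> L_X/L_R ~ {x_to_r_ratio(B_nG * 1e-9, p=p):.0f}")
# Both cases land at ~10^2, i.e. near the observed upper limit on the ratio
```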
## 5 Discussion
Let us first summarise the main observational findings regarding the newly discovered radio filament:
* the radio filament is almost straight and narrow, with a length of \(\sim\)6 arcmin (\(\sim 2.5\) pc in projection at \(d=1.4\) kpc) and an at least \(\sim 10\) times smaller width
* the filament is one-sided and, unlike ram-pressure-stripped tails, it points approximately _in the direction_ of the PSR proper motion (within 17 degrees)
* the filament's spectral shape and luminosity at GHz frequencies are not very dissimilar from those of the pulsar and its immediate vicinity, with the filament being a factor of 3 more luminous. Possible signs of spectral hardening can be seen towards the end of the filament, in particular in the Northern extension region. However, the latter could be an unrelated feature associated with the H\({}_{\alpha}\) filaments of the Spaghetti nebula
* comparison of the NVSS, CGPS, and ASKAP surveys does not reveal dramatic epoch-to-epoch variations over a time span of more than 20 years
Figure 4: Intensity and magnetic field direction map (equatorial coordinates) derived from the Stokes parameter maps at \(\lambda=6\) cm by the Urumqi telescope (Xiao et al. 2008). An \(r=6\) arcmin region around the pulsar is marked in magenta, showing the close-to-vertical orientation of the magnetic field, similar to the orientation of the discovered radio filament.
Figure 5: Composite of H\({}_{\alpha}\) (IGAPS, cyan) and radio emission at 1.4 GHz (NVSS, red), with white contours showing the morphology of the extended radio emission in the RACS 1367.5 MHz map. The circle is 6 arcmin in radius.
Figure 6: H\({}_{\alpha}\) image of the PSR J0538+2817 vicinity based on the IGAPS mosaic. The small circle is 4.5” in radius, and the large circle is 20 times bigger. No signatures of extended emission associated with a bow shock are visible at a level comparable to the diffuse emission from S147.
* the direction of the filament is also close to the direction perpendicular to the axis of the X-ray emission of the PWN (within 5 deg). The filament is also aligned with the magnetic field direction derived from the low-resolution synchrotron polarization maps, although the latter could reflect the global magnetic field orientation along the Galactic plane rather than that inside S147
* no X-ray or optical counterpart is detected, implying a lower limit on the magnetic field strength of \(\sim 40\) nG (for a power-law distribution of electrons)
At least four stages of the interaction of a pulsar and its PWN with the surrounding medium have been studied (see e.g. a recent review by Olmi 2023, and their Fig. 1), broadly corresponding to the PSR position relative to the SNR shock waves. We briefly discuss below all four scenarios for PSR J0538+2817 and acknowledge the related issues.
These stages are
1. PSR moving together with free-expanding ejecta
2. PSR passing through the reverse shock and moving through the shocked ejecta or shocked ISM
3. The reverse shock bounces off the center and overtakes the outwardly moving PSR
4. PSR escapes from the SNR and moves through the ISM.
The first three stages are schematically shown in Fig. 10.
Figure 8: _XMM-Newton_ images in the 0.5-3 keV (left panel) and 3-7 keV (right panel) bands (equatorial coordinates, linear scale) with contours of the 887.5 MHz emission from the radio filament overlaid in green. No signatures of excess X-ray emission from the location of the radio-emitting filament can be spotted in either soft or hard X-rays.
Figure 7: _Chandra_ X-ray \(0.5^{\prime}\times 0.5^{\prime}\) images (equatorial coordinates), smoothed with a \(\sigma=0.5^{\prime\prime}\) Gaussian window, in the 0.5-7 keV (top) and 3-7 keV (bottom) bands on a logarithmic scale. The solid circle is centred on the pulsar and has a radius of \(4.5^{\prime\prime}\). The solid line shows the direction of the pulsar’s proper motion, while the oppositely directed dashed arrows show the X-ray axis proposed by Ng et al. 2007 and the perpendicular direction.
We first comment on the last stage, when the pulsar has already escaped from the SNR's boundaries and propagates through the unshocked ISM. Given that in projection the PSR is within the SNR, a high velocity along the line of sight is needed. This appears to be barely consistent with the recent measurement of the pulsar's line-of-sight velocity (Yao et al., 2021), although large uncertainties preclude firm conclusions. If the ISM around S147 is cold, one might expect signs of a strong bow shock in H\({}_{\alpha}\) and X-rays ahead of the pulsar. We consider the lack of such signs as an argument against this scenario.
The three other options include, respectively, propagation of the pulsar in the freely expanding ejecta, in the medium behind the reverse shock, and in the medium behind the reflected reverse shock, which has overtaken the pulsar after bouncing off the center. For the last two options, the medium might be composed either of ejecta or of shocked ISM gas; in a more realistic situation, it is likely a mixture of both. What could be the mechanisms capable of producing an almost straight and narrow 2.5-pc-long radio-emitting structure?
The early phase of PWN evolution within the freely expanding ejecta has been considered recently by Blondin & Chevalier 2017, who concluded that the PWN-ejecta interface might be susceptible to instabilities, resulting in strong distortions of the interface and even penetrating filaments left behind the PWN boundary. An important aspect of this situation, however, is the relative smallness of the resulting structures: even taking into account that the spin-down luminosity of PSR J0538+2817 should have been much (e.g. a hundred times) higher than the present-day one, the size of the PWN is unlikely to exceed a fraction of a pc at most.
An interesting opportunity in this case might be connected to the freely expanding nature of the ejecta flow, which closely resembles the cosmological Hubble flow for any fluid element. Indeed, every structure connecting two points gets stretched, given that it evolves passively following the flow streamlines. Hence a straight perturbation, e.g. in the form of a small magnetised filament, seeded at a very early stage of the PWN formation, might get stretched to a length of a few pc, and later filled with the relativistic particles produced by the pulsar over its lifetime.
In light of this scenario, particularly important is the lower limit on the magnetic field strength obtained from the upper limit on the X-ray surface brightness. Indeed, the unshocked freely expanding ejecta is expected to possess only a very weak magnetic field following \(B\propto(R/R_{0})^{-2}\), where \(R\) is the radial coordinate and \(R_{0}\) is the initial radius, comparable to the size of the progenitor star. For \(R\gtrsim 15\) pc and any realistic size of the star, the obtained lower limit on the magnetic field strength inside the filament would translate into an implausibly high magnetic field for the progenitor star.
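The back-extrapolation is simple flux-freezing arithmetic, \(B_{0}=B_{\rm fil}(R/R_{0})^{2}\); a minimal sketch (assuming \(R=15\) pc and a fiducial progenitor radius of \(10\,R_{\odot}\), a value chosen here only for illustration):

```python
PC_CM = 3.086e18
RSUN_CM = 6.96e10

B_fil = 40e-9            # lower limit on the filament field, G
R = 15.0 * PC_CM         # current ejecta radius, cm
R_star = 10.0 * RSUN_CM  # fiducial progenitor radius, cm

B_star = B_fil * (R / R_star) ** 2  # B ~ R^-2 back-extrapolation
print(f"implied progenitor field ~ {B_star:.1e} G")  # ~2e8 G, far above realistic stellar fields
```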
In the "pulsar-in-ejecta" scenario another possibility could be that a magnetized channel in the ejecta was created by an single (early) collimated energy release episode by the pulsar and directed along its rotation axis. However, if this structure has been created recently, a relatively low bulk Lorenz factor (\(\Gamma\sim 1\)) and/or very high degree of collimation (\(\theta\sim 0.1\) deg) of the outflow is required even assuming that the ejecta gas density is low (\(n_{\rm ej}\sim 10^{-3}\) cm\({}^{-3}\)) and the integrated outflow energy is as high as \(E\sim L_{\rm ssf}\sim 10^{47}\) erg (\(L_{\rm sd}/10^{35}{\rm erg/s}\)) (\(r/30{\rm kyr}\)) in order produce the length \(l\sim 2.8\) pc (\(E/10^{47}{\rm erg/s}^{1/3}(n/10^{-3}{\rm cm}^{-3})^{-1/3}(\Gamma\theta_{\rm deg })^{-2/3}\) (cf. e.g. Eq. 5 in Heinz 2002).
Unless the magnetic field strength in the radio filament is extremely high, allowing particles with \(\gamma\ll 1000\) to be responsible for the observed GHz radio emission, this would imply the presence of some hidden flow dominating the energy and, even more so, the momentum budget. Although such mass loading might be relevant for the jets of accreting black holes and neutron stars (e.g. Cyg X-1 or Cir X-1), such a situation for an isolated pulsar appears very exotic. Creation of this structure at an early stage followed by stretching by a factor of \(\delta\sim 100\) does not change this argument substantially, as the density of the ejecta was \(\delta^{2}\sim 10^{4}\) times higher, while the available energy accumulation time was \(\delta\sim 100\) times shorter.
In a spherically symmetric case (which of course is a severe oversimplification), the reverse shock propagates inwards, so its interaction with the PWN is not expected to produce a radio-emitting substructure directed away from the SNR's centre (e.g. van der Swaluw et al., 2003; Kolb et al., 2017). Hence, in this scenario, the filament might be associated with a magnetic field substructure present in the shocked ejecta or shocked ISM. Since certain mechanisms of magnetic field generation can conceivably operate in the shocked ejecta, the required magnetic field strength is no longer a big problem; this is of course even more true in the shocked ISM case. Given the likely inhomogeneity of the ejecta and ISM, in particular the possible presence of clumps and small clouds capable of casting radially elongated "shadows" after the passage of a shock behind them, the radial orientation of the radio filament might look rather natural.
A potentially serious issue with this model is the transient nature of this phenomenon and the absence of traces of previous such episodes along the track of the pulsar (given that the lifetime of the radio-emitting electrons is longer than the characteristic lifetime of the system). A possible explanation might be that the pulsar is only now entering a region with sufficiently strong and structured magnetic fields, e.g. corresponding to the compressed magnetic field of the interstellar medium or the fingers produced by the Rayleigh-Taylor (RT) instability (Blondin & Chevalier, 2017).
In such a case, the filament might be a radio analogue of the Guitar (Hui & Becker, 2007; de Vries et al., 2022), Lighthouse (Pavan et al., 2014; Klingler et al., 2023), and PSR J1509-5850 (Klingler
Figure 9: Ratio of the X-ray (at 4 keV) and radio (at 1 GHz) \(\nu L_{\nu}\) luminosities for a single population of relativistic particles emitting in the inverse Compton (on the CMB radiation field) and synchrotron regimes, respectively. The slope \(p\) of the particle distribution function \(dN/d\gamma\propto\gamma^{-p}\) is related to the spectral index in X-rays via \(\alpha_{\rm X}=(p-1)/2\), which ranges from 0.5 to 2, as indicated next to each line. The red line marks the upper limit on this ratio obtained from the upper limit on the non-thermal X-ray emission from the region of the radio-emitting filament. For \(\alpha_{\rm X}\gtrsim 0.5\), \(B\gtrsim 40\) nG is required.
et al., 2016) and PSR J2030+4415 (de Vries and Romani, 2020, 2022) X-ray filaments, possibly resulting from the escape of relativistic particles from the PWN in action (Bandiera, 2008; Bykov et al., 2017; Barkov et al., 2019; Olmi and Bucciantini, 2019). In particular, the direction of the extended X-ray filament of PSR J1509-5850 also appears to be ahead of the pulsar's direction of motion (Klingler et al., 2016). In contrast to these cases, however, the newly discovered radio-emitting filament might present a case in which low-energy relativistic particles are capable of escaping the nebula. Hence, PSR J0538+2817's radio filament might offer a link between the X-ray filaments and the non-thermal filaments observed in the Galactic Center (Yusef-Zadeh et al., 1984; Heywood et al., 2022; Yusef-Zadeh et al., 2022), if the latter are indeed powered by particles escaping from PWNe (e.g. Barkov and Lyutikov, 2019). A relatively low magnetic field inside the ejecta and/or the absence of a converging-shock configuration might be the factors allowing the escape and energetic dominance of the lower-energy particles in PSR J0538+2817's case (Bykov et al., 2017).
The scenario of an outwardly propagating _reflected_ reverse shock that moves faster than the pulsar can explain the direction of the filament by the most recent episode of the PWN's ram pressure stripping, leaving open, however, the question of the apparent absence of the tail created by the primary reverse shock. For a strongly non-spherical geometry (e.g. the presence of a dense cloud or a pre-existing asymmetric cavity), even a regular reverse shock might move at a large angle to the radius and sweep the tail. In such a case, this could be an analogue of the tail in Vela X (e.g., Slane et al., 2018). In this regard, it is also interesting to note that S147 shows a slight asphericity in the form of an elongation perpendicular to the pulsar's proper motion, as well as to the direction of the radio filament (e.g. Gvaramadze, 2006).
Finally, we summarize how future (radio) observations will help reveal the nature of the newly discovered radio filament:
* spectral index gradient \(\rightarrow\) age and \(B\)
* polarization in radio \(\rightarrow\) morphology of the field
* surrounding region at lower frequencies \(\rightarrow\) search for "earlier" episodes (relevant only for weak fields)
* imaging of the starting and ending regions of the filament \(\rightarrow\) resolving the region between the pulsar and the filament, as well as the possible transition/interaction with the H\({}_{\alpha}\)-bright filament.
## 6 Conclusions
We report the discovery of a 2.5-pc-long (in projection) and narrow (aspect ratio \(\gtrsim 10\)) one-sided radio-emitting (at \(\sim\) GHz frequencies) filament tentatively associated with the pulsar PSR J0538+2817 in the Spaghetti Nebula. Contrary to the known cases of pulsar radio tails, the filament of PSR J0538+2817 appears to be directed ahead of the pulsar, in the direction of its proper motion. While more observations are needed to establish the exact nature of this filament, e.g. providing polarization and spectral index measurements, this object might be a radio analogue of the X-ray bright filaments in the Lighthouse and Guitar nebulae, indicating that relatively low-energy radio-emitting electrons are able to escape from the PWN. A tantalising connection can also be made with the radio filaments in the Galactic Centre region. We speculate on several scenarios, including (i) motion of the PSR together with the freely expanding ejecta threaded by magnetic field lines (young-age scenario) and (ii) motion of the PSR+PWN relative to the shocked gas and an episode of a by-chance reconnection of the bow shock with the ambient magnetic field. These and other scenarios require some strong assumptions, emphasising the unique properties of this object.
## Acknowledgments
We are grateful to Barbel Koribalski for a helpful discussion. We thank the paper's referee, Dr. Maxim Lyutikov, for useful suggestions. IK acknowledges support by the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, grant agreement ERC-2019-AdG 882679. AB was supported by the RSF grant 21-72-20020; his simulations were performed at the Joint Supercomputer Center JSCC RAS and at the "Tornado" subsystem of the Peter the Great Saint-Petersburg Polytechnic University Supercomputing Center. This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. CSIRO's ASKAP radio telescope is part of the Australia Telescope National Facility ([https://ror.org/05qajv042](https://ror.org/05qajv042)). Operation of ASKAP is funded by
Figure 10: Sketch of the three possible scenarios for the location of the pulsar and its PWN (marked with the red circle and the magenta line showing the orientation of the radio filament) inside its parent supernova remnant (bounded by the outgoing spherically symmetric forward shock shown in black): left - the pulsar is still propagating (the direction of the proper motion is shown in green) along with the ejecta (whose velocity field is schematically depicted in grey) not yet shocked by the reverse shock (blue); middle - the situation when the reverse shock has already passed the pulsar; right - the pulsar has already been overtaken by the outgoing reflected shock wave. The real geometry of the SNR shocks can differ strongly from the spherically symmetric one, resulting in more complicated flow patterns.
the Australian Government with support from the National Collaborative Research Infrastructure Strategy. ASKAP uses the resources of the Pawsey Supercomputing Research Centre. Establishment of ASKAP, Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory and the Pawsey Supercomputing Research Centre are initiatives of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. This paper includes archived data obtained through the CSIRO ASKAP Science Data Archive, CASDA ([https://data.csiro.au](https://data.csiro.au)).
This research made use of Montage1. It is funded by the National Science Foundation under Grant Number ACI-1440620, and was previously funded by the National Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. Table manipulations were performed using the TOPCAT/STILTS software (Taylor, 2005). We acknowledge the use of data provided by the Centre d'Analyse de Donnees Etendues (CADE), a service of IRAP-UPS/CNRS 2. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. Some of the figures were produced using the cubehelix color scheme developed by Dave Green (Green, 2011).
Footnote 1: [http://montage.ipac.caltech.edu](http://montage.ipac.caltech.edu)
Footnote 2: [http://cade.irap.omp.eu](http://cade.irap.omp.eu)
## Data Availability
All data used in this work are publicly available from astrophysical databases for the radio observations (including the NVSS, CGPS, ASKAP and Urumqi data), the _Chandra_ and _XMM-Newton_ archives, and the IGAPS web page ([http://www.star.ucl.ac.uk/IGAPS/](http://www.star.ucl.ac.uk/IGAPS/)).
|
2309.05012 | Canonical coordinates for moduli spaces of rank two irregular
connections on curves | In this paper, we study a geometric counterpart of the cyclic vector which
allows us to put a rank 2 meromorphic connection on a curve into a ``companion''
normal form. This allows us to naturally identify an open set of the moduli
space of $\mathrm{GL}_2$-connections (with fixed generic spectral data, i.e.
unramified, non resonant) with some Hilbert scheme of points on the twisted
cotangent bundle of the curve. We prove that this map is symplectic, therefore
providing Darboux (or canonical) coordinates on the moduli space, i.e.
separation of variables. On the other hand, for $\mathrm{SL}_2$-connections, we
give an explicit formula for the symplectic structure for a birational model
given by Matsumoto. We finally detail the case of an elliptic curve with a
divisor of degree $2$. | Arata Komyo, Frank Loray, Masa-Hiko Saito, Szilard Szabo | 2023-09-10T12:19:48Z | http://arxiv.org/abs/2309.05012v2 | # Canonical coordinates for moduli spaces of rank two irregular connections on curves.
###### Abstract.
In this paper, we study a geometric counterpart of the cyclic vector which allows us to put a rank \(2\) meromorphic connection on a curve into a "companion" normal form. This allows us to naturally identify an open set of the moduli space of \(\mathrm{GL}_{2}\)-connections (with fixed generic spectral data, i.e. unramified, non-resonant) with some Hilbert scheme of points on the twisted cotangent bundle of the curve. We prove that this map is symplectic, therefore providing Darboux (or canonical) coordinates on the moduli space, i.e. separation of variables. On the other hand, for \(\mathrm{SL}_{2}\)-connections, we give an explicit formula for the symplectic structure for a birational model given by Matsumoto. We finally detail the case of an elliptic curve with a divisor of degree \(2\).
Key words and phrases: Moduli space of connections, cyclic vector, canonical coordinates. The first author is supported by JSPS KAKENHI: Grant Numbers JP17H06127 and JP19K14506. The second author is supported by CNRS and Centre Henri Lebesgue, program ANR-11-LABX-0020-0. The third author is supported by JSPS KAKENHI: Grant Numbers 17H06127, 22H00094 and 22K18669. The fourth author was supported by the _Lendulet_ Low Dimensional Topology grant of the Hungarian Academy of Sciences and by the grants K120697 and KKP126683 of NKFIH.
part of) the moduli space of rank \(2\) meromorphic projective connections by using apparent singularities. For our purpose, we take this strategy. That is, we will also introduce canonical coordinates on the moduli space of meromorphic connections by using apparent singularities. On the other hand, in this paper, we are interested in the isomonodromic deformations of \(\mathrm{GL}_{2}\)-connections and of \(\mathrm{SL}_{2}\)-connections. The coordinates using apparent singularities are an analog of the separation of variables in the Hitchin system, which is a birational map from the moduli space of stable Higgs bundles to the Hilbert scheme of points on the cotangent bundle of the underlying curve of the Higgs bundles (see [19] and [16]). This approach has been generalized to Higgs bundles with unramified irregular singularities [38, Section 8.3], [52], [37, Section 4.2]. The moduli spaces of Higgs bundles corresponding to the Painleve cases were analyzed from this perspective in [25], [26], [27]. Here this map is a symplectomorphism between open dense subsets of the moduli spaces. The definition of the apparent singularities for meromorphic connections of general rank will be given in [50].
### Our setting
Let \(\nu\) be a positive integer. We set \(I:=\{1,2,\ldots,\nu\}\). Let \(C\) be a compact Riemann surface of genus \(g\) (\(g\geq 0\)), and let \(D=\sum_{i\in I}m_{i}[t_{i}]\) be an effective divisor on \(C\). Let \(E\) be a vector bundle over \(C\) and \(\nabla\colon E\to E\otimes\Omega^{1}_{C}(D)\) a meromorphic connection acting on \(E\). We assume that the leading term of the expansion of a connection matrix of \(\nabla\) at \(t_{i}\) has distinct eigenvalues. If \(m_{i}=1\), then we assume that the difference of the eigenvalues of the residue matrix at \(t_{i}\) is not an integer. That is, \(t_{i}\) is a generic unramified irregular singular point of \(\nabla\) or a non-resonant regular singular point of \(\nabla\).
When \(C\) is the projective line and \(E\) is the trivial bundle, the moduli space of meromorphic connections has been studied by Boalch [7] and Hiroe-Yamakawa [18]. This moduli space has a natural symplectic structure coming from the symplectic structure on the (extended) coadjoint orbits. For general \(C\) and \(E\), the moduli space of meromorphic connections (with quasi-parabolic structures) has been studied by Inaba-Iwasaki-Saito [22, 23], Inaba [21], and Inaba-Saito [24]. For general \(C\) and \(E\), the moduli space also has a natural symplectic structure. In these papers, the symplectic form is described by a pairing of the hypercohomologies of a certain complex. This description of the symplectic structure is an analog of the symplectic structure of the moduli spaces of stable Higgs bundles due to Bottacin [8]. For the case where \(\nabla\) has only regular singular points, Inaba showed that this symplectic structure coincides with the pull-back of the Goldman symplectic structure on the character variety via the Riemann-Hilbert map in [21, the proof of Proposition 7.3].
Our purpose in this paper is to introduce canonical coordinates on the moduli spaces of meromorphic connections. For this purpose, there are several strategies. The first is to consider canonical coordinates on products of coadjoint orbits. This direction was studied by Jimbo-Miwa-Mori-Sato [30], Harnad [17], and Woodhouse [53]. Sakai-Kawakami-Nakamura [32] and Gaiur-Mazzocco-Rubtsov [15] gave explicit formulae for the isomonodromic Hamiltonians in the coordinates of this direction. The second is to consider apparent singularities. As mentioned above, we take this strategy.
In this paper, we consider only the case where the rank of \(E\) is two. Let \(X\) be an irregular curve, which is described in Section 2.3. That is, \(X\) is a tuple of (i) a compact Riemann surface \(C\), (ii) an effective divisor \(D\) on \(C\), (iii) local coordinates around the support with \(D\), and (iv) spectral data of meromorphic connections at the support with \(D\) (with data of residue parts). Here, the spectral data is described in Section 2.3. We fix an irregular curve \(X\). That is, we fix spectral data of rank \(2\) meromorphic connections at each point of the support with \(D\). By applying elementary transformations (which is also called Hecke modifications), we may change the degree
of the underlying vector bundle of a meromorphic connection freely. So we assume that \(\deg(E)=2g-1\). Under this condition, the Euler characteristic of the vector bundle \(E\) is \(1\) by the Riemann-Roch theorem. In this situation, for generic meromorphic connections \((E,\nabla)\), we have \(\dim_{\mathbb{C}}H^{0}(C,E)=1\), so the global section of \(E\) is uniquely determined up to a constant. This is convenient for the definition of the apparent singularities. In this paper, we consider only meromorphic connections with \(\dim_{\mathbb{C}}H^{0}(C,E)=1\). Moreover, we assume that the meromorphic connections \((E,\nabla)\) are irreducible. Under this condition, the definition of apparent singularities becomes simple.
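For completeness, the Euler characteristic count is the Riemann-Roch theorem for a rank \(2\) bundle on a genus \(g\) curve:

\[\chi(E)=\dim_{\mathbb{C}}H^{0}(C,E)-\dim_{\mathbb{C}}H^{1}(C,E)=\deg(E)+\operatorname{rank}(E)(1-g)=(2g-1)+2(1-g)=1.\]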
### \(\operatorname{GL}_{2}\)-connections
In the first part of this paper, we discuss \(\operatorname{GL}_{2}\)-connections. That is, we consider rank \(2\) meromorphic connections; we fix neither the determinant bundles of the underlying vector bundles nor the traces of the connections. Our purpose is to introduce canonical coordinates on the moduli space of rank \(2\) meromorphic connections by using apparent singularities. When \(C\) is the projective line, many authors have introduced canonical coordinates on the moduli space by using the apparent singularities ([48], [46], [10], [51], [36], [9], and [34]). In this paper, we consider apparent singularities for general Riemann surfaces.
Let \(X\) be the fixed irregular curve. If \((E,\nabla)\) is a rank \(2\) meromorphic connection such that \(\deg(E)=2g-1\), \(\dim_{\mathbb{C}}H^{0}(C,E)=1\), and \((E,\nabla)\) is irreducible, then we can define apparent singularities for \((E,\nabla)\). (In detail, see Definition 1 below). The apparent singularities are the set of points \(\{q_{1},\dots,q_{N}\}\) on the underlying curve \(C\). Here we set \(N:=4g-3+\deg(D)\). Let \(M_{X}\) be the following moduli space
\[M_{X}:=\left\{(E,\nabla)\ \middle|\ \begin{array}{l}\text{(i) $E$ is a rank $2$ vector bundle on $C$ with $\deg(E)=2g-1$,}\\ \text{(ii) $\nabla\colon E\to E\otimes\Omega_{C}^{1}(D)$ is a connection,}\\ \text{(iii) $(E,\nabla)$ is irreducible, and}\\ \text{(iv) $\nabla$ has the fixed spectral data in $X$}\end{array}\right\}\Big/\cong.\]
This moduli space \(M_{X}\) has a natural symplectic structure due to Inaba-Iwasaki-Saito [22], Inaba [21], and Inaba-Saito [24]. We consider a Zariski open subset \(M_{X}^{0}\) of \(M_{X}\) as follows:
\[M_{X}^{0}:=\left\{(E,\nabla)\in M_{X}\ \middle|\ \begin{array}{l}\text{(i) $\dim_{\mathbb{C}}H^{0}(C,E)=1$,}\\ \text{(ii) $q_{1}+\dots+q_{N}$ is reduced, and}\\ \text{(iii) $q_{1}+\dots+q_{N}$ has support disjoint from $D$}\end{array}\right\}\Big/\cong\]
(in detail, see Section 3.1). The dimension of the moduli space \(M_{X}^{0}\) is \(2N\) (Proposition 10). By taking apparent singularities, we have a map
\[\begin{aligned}\operatorname{App}\colon M_{X}^{0}&\longrightarrow\operatorname{Sym}^{N}(C)\\ (E,\nabla)&\longmapsto\{q_{1},q_{2},\dots,q_{N}\}.\end{aligned}\]
Remark that the dimension of \(\operatorname{Sym}^{N}(C)\) is half of the dimension of \(M_{X}^{0}\). To introduce coordinates on \(M_{X}^{0}\), it is necessary to find further invariants of connections, which are customarily called _accessory parameters_. To find these parameters, we introduce a twist of \(\Omega_{C}^{1}(D)\) by \(c_{d}\), which is the first Chern class \(c_{1}(\det(E))\in H^{1}(C,\Omega_{C}^{1})\) of \(E\) (in detail, see Section 3.5 below). We denote by \(\Omega_{C}^{1}(D,c_{d})\) the twist of \(\Omega_{C}^{1}(D)\). Let
\[\pi_{c_{d}}\colon\boldsymbol{\Omega}(D,c_{d})\longrightarrow C\]
be the total space of \(\Omega_{C}^{1}(D,c_{d})\), and let \(\omega_{D,c_{d}}\) be the rational \(2\)-form on \(\boldsymbol{\Omega}(D,c_{d})\) induced by the Liouville symplectic form. This rational \(2\)-form \(\omega_{D,c_{d}}\) induces a symplectic structure on \(\boldsymbol{\Omega}(D,c_{d})\setminus\pi_{c_{d}}^{-1}(D)\). We consider the symmetric product \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\). Let \(\sum_{j=1}^{N}\operatorname{pr}_{j}^{*}(\omega_{D,c_{d}})\) be the rational \(2\)-form on the product \(\boldsymbol{\Omega}(D,c_{d})^{N}\), where \(\operatorname{pr}_{j}\colon\boldsymbol{\Omega}(D,c_{d})^{N}\to\boldsymbol{\Omega}(D,c_{d})\) is the \(j\)-th projection. This rational
2-form \(\sum_{j=1}^{N}\operatorname{pr}_{j}^{*}(\omega_{D,c_{d}})\) induces a symplectic structure on a generic part of \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\). We will define a map from \(M_{X}^{0}\) to \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\) by the following idea.
By the theory of apparent singularities discussed in Section 2.1, we have a canonical inclusion morphism
\[\mathcal{O}_{C}\oplus(\Omega_{C}^{1}(D))^{-1}\longrightarrow E.\]
By this morphism, we have the connection \(\nabla_{0}\) on \(\mathcal{O}_{C}\oplus(\Omega_{C}^{1}(D))^{-1}\) induced by a connection \(\nabla\) on \(E\). Notice that \(\nabla_{0}\) has simple poles at the apparent singularities. By applying automorphisms on \(\mathcal{O}_{C}\oplus(\Omega_{C}^{1}(D))^{-1}\), we may normalize \(\nabla_{0}\) as
\[\nabla_{0}=\begin{pmatrix}\operatorname{d}&\beta\\ 1&\delta\end{pmatrix},\]
which is called a companion normal form (in detail, see Section 2.2 below). Here \(\operatorname{d}\) is the exterior derivative on \(C\), \(\beta\in H^{0}(C,(\Omega_{C}^{1})^{\otimes 2}(2D+q_{1}+\cdots+q_{N}))\), and \(\delta\) is a connection on \((\Omega_{C}^{1}(D))^{-1}\), which has poles at the support of \(D\) and the apparent singularities \(q_{1},\ldots,q_{N}\). Then we may define a map
\[\begin{split} f_{\operatorname{App}}\colon M_{X}^{0}& \longrightarrow\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\\ (E,\nabla)&\longmapsto\{(q_{j},\operatorname{res}_{q_{j}}( \beta)+\operatorname{tr}(\nabla)|_{q_{j}})\}_{1\leq j\leq N}.\end{split} \tag{1.1}\]
Here, notice that \(\operatorname{res}_{q_{j}}(\beta)\in\Omega_{C}^{1}(D)|_{q_{j}}\) and \(\operatorname{tr}(\nabla)|_{q_{j}}\) is justified by considering the twisted cotangent bundle (in detail, see Definition 16 below). Remark that the dimension of \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\) is equal to the dimension of \(M_{X}^{0}\). A generic part of \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\) has the natural symplectic structure induced by the symplectic structure on the product \((\boldsymbol{\Omega}(D,c_{d})\setminus\pi_{c_{d}}^{-1}(D))\times\cdots\times( \boldsymbol{\Omega}(D,c_{d})\setminus\pi_{c_{d}}^{-1}(D))\). The first main theorem is the following:
**Theorem A** (Theorem 20 below).: _The pull-back of the symplectic form on a generic part of \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\) under the map (1.1) coincides with the symplectic form on \(M_{X}^{0}\)._
If we take canonical coordinates on \(\boldsymbol{\Omega}(D,c_{d})\), then we have canonical coordinates on \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\), since the symplectic structure on \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\) is induced by the \(2\)-form \(\sum_{j=1}^{N}\operatorname{pr}_{j}^{*}(\omega_{D,c_{d}})\). Then we have canonical coordinates on \(M_{X}^{0}\) by Theorem A. Details of the construction of concrete canonical coordinates on \(M_{X}^{0}\) are discussed in the paragraph after the proof of Theorem 20 below.
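Concretely, if \(z\) is a local coordinate on \(C\) away from the support of \(D\) and a point of \(\boldsymbol{\Omega}(D,c_{d})\) over \(z\) is written as \(p\,dz\), then (up to the sign convention for the Liouville form) \(\omega_{D,c_{d}}=dp\wedge dz\), and Theorem A states that in the local coordinates \((q_{j},p_{j})\) of the \(N\) points of \(f_{\operatorname{App}}(E,\nabla)\) the symplectic form on \(M_{X}^{0}\) reads

\[\omega_{M_{X}^{0}}=\sum_{j=1}^{N}dp_{j}\wedge dq_{j},\]

so that \((q_{1},\dots,q_{N},p_{1},\dots,p_{N})\) are Darboux coordinates.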
In Section 5, we consider an example of this argument: we will calculate the canonical coordinates for an elliptic curve and a divisor \(D\) of length \(2\). The moduli space of rank \(2\) meromorphic connections with a fixed trace connection on an elliptic curve with two simple poles was studied in [41] and [12]. In this paper, we will discuss the \(\operatorname{GL}_{2}\)-connection case.
### \(\operatorname{SL}_{2}\)-connections
In the second part of this paper, we discuss \(\operatorname{SL}_{2}\)-connections. That is, we consider rank \(2\) meromorphic connections with a fixed trace connection \((L_{0},\nabla_{0})\). Here \(L_{0}\) is a fixed line bundle on \(C\) of degree \(2g-1\) and \(\nabla_{0}\colon L_{0}\to L_{0}\otimes\Omega_{C}^{1}(D)\) is a fixed connection. More precisely, we consider rank \(2\) quasi-parabolic connections \((E,\nabla,\{l^{(i)}\})\), defined in [24, Definition 2.1], with fixed trace connection \((L_{0},\nabla_{0})\). Here the spectral data of \(\nabla_{0}\) is determined by the fixed irregular curve \(X\). The quasi-parabolic structure \(l^{(i)}\) at \(t_{i}\) induces a one-dimensional subspace
of \(E|_{t_{i}}\), namely the restriction of \(l^{(i)}\) to \(t_{i}\) (without multiplicity). Our moduli space is as follows:
\[M_{X}(L_{0},\nabla_{0})_{0}:=\left\{(E,\nabla,\{l^{(i)}\})\ \middle|\ \begin{array}{l}\text{(i) $\nabla$ has the fixed spectral data in $X$,}\\ \text{(ii) $E$ is an extension of $L_{0}$ by $\mathcal{O}_{C}$,}\\ \text{(iii) $\dim_{\mathbb{C}}H^{0}(C,E)=1$, and}\\ \text{(iv) $l^{(i)}_{\mathrm{red}}\not\in\mathcal{O}_{C}|_{t_{i}}\subset\mathbb{P}(E)$ for any $i$}\end{array}\right\}\Big/\cong,\]
which is described in Section 4.2. Here \((E,\nabla,\{l^{(i)}\})\) are rank 2 quasi-parabolic connections on \((C,D)\) with fixed trace connection \((L_{0},\nabla_{0})\). When \(g=0\), we impose one more condition (in detail, see the paragraph after the proof of Lemma 26 below). This moduli space also has a natural symplectic structure. The dimension of the moduli space \(M_{X}(L_{0},\nabla_{0})_{0}\) is \(2N_{0}\), where \(N_{0}:=3g-3+\deg(D)\). For \((E,\nabla,\{l^{(i)}\})\in M_{X}(L_{0},\nabla_{0})_{0}\), we can also define apparent singularities (Section 4.2 below). The apparent singularities give an element of \(\mathbb{P}H^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D))\). So we have a map
\[\pi_{\mathrm{App}}\colon M_{X}(L_{0},\nabla_{0})_{0}\longrightarrow\mathbb{P}H ^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D)).\]
For \((E,\nabla,\{l^{(i)}\})\in M_{X}(L_{0},\nabla_{0})_{0}\), we may forget the connection \(\nabla\) to obtain a quasi-parabolic bundle \((E,\{l^{(i)}\})\). By taking the extension class of the quasi-parabolic bundle \((E,\{l^{(i)}\})\), we have a map
\[\pi_{\mathrm{Bun}}\colon M_{X}(L_{0},\nabla_{0})_{0}\longrightarrow\mathbb{P} H^{1}(C,L_{0}^{-1}(-D)).\]
Here the extension class is described in Section 4.1 below. We consider the product
\[\pi_{\mathrm{App}}\times\pi_{\mathrm{Bun}}\colon M_{X}(L_{0},\nabla_{0})_{0} \longrightarrow\mathbb{P}H^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D))\times\mathbb{ P}H^{1}(C,L_{0}^{-1}(-D)).\]
This map has been studied by Loray-Saito-Simpson [43], Loray-Saito [42], Fassarella-Loray [12], Fassarella-Loray-Muniz [13], and Matsumoto [44].
Notice that \(H^{1}(C,L_{0}^{-1}(-D))\) is isomorphic to the dual of \(H^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D))\). Remark that
\[\dim_{\mathbb{C}}\mathbb{P}H^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D))=\dim_{ \mathbb{C}}\mathbb{P}H^{1}(C,L_{0}^{-1}(-D))=N_{0}.\]
Let us introduce the homogeneous coordinates \(\boldsymbol{a}=(a_{0}:\cdots:a_{N_{0}})\) on \(\mathbb{P}H^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D))\cong\mathbb{P}^{N_{0}}_{ \boldsymbol{a}}\) and the dual coordinates \(\boldsymbol{b}=(b_{0}:\cdots:b_{N_{0}})\) on
\[\mathbb{P}H^{1}(C,L_{0}^{-1}(-D))\cong\mathbb{P}H^{0}(C,L_{0}\otimes\Omega^{ 1}_{C}(D))^{\vee}\cong\mathbb{P}^{N_{0}}_{\boldsymbol{b}}.\]
We may define a 1-form \(\eta\) on \(\mathbb{P}^{N_{0}}_{\boldsymbol{a}}\times\mathbb{P}^{N_{0}}_{\boldsymbol{b}}\) by
\[\eta=(\text{constant})\cdot\frac{a_{0}\,db_{0}+a_{1}\,db_{1}+\cdots+a_{N_{0}} \,db_{N_{0}}}{a_{0}b_{0}+a_{1}b_{1}+\cdots+a_{N_{0}}b_{N_{0}}}.\]
(In detail, see Section 4.4.) The \(2\)-form \(d\eta\) gives a symplectic structure on \(\mathbb{P}^{N_{0}}_{\boldsymbol{a}}\times\mathbb{P}^{N_{0}}_{\boldsymbol{b}}\setminus\Sigma\). Here we set
\[\Sigma\colon(a_{0}b_{0}+a_{1}b_{1}+\cdots+a_{N_{0}}b_{N_{0}}=0)\subset\mathbb{ P}^{N_{0}}_{\boldsymbol{a}}\times\mathbb{P}^{N_{0}}_{\boldsymbol{b}}.\]
The image of \(M_{X}(L_{0},\nabla_{0})_{0}\) is contained in \(\mathbb{P}^{N_{0}}_{\boldsymbol{a}}\times\mathbb{P}^{N_{0}}_{\boldsymbol{b}}\setminus\Sigma\) (in detail, see Section 4.3). The second main theorem is the following:
**Theorem B** (Theorem 31 below).: _We assume that the fixed spectral data satisfies the generic condition (4.8) below. The pull-back of the symplectic form \(d\eta\) on \(\mathbb{P}^{N_{0}}_{\boldsymbol{a}}\times\mathbb{P}^{N_{0}}_{\boldsymbol{b}} \setminus\Sigma\) under the map \(\pi_{\mathrm{App}}\times\pi_{\mathrm{Bun}}\) coincides with the symplectic form on the moduli space \(M_{X}(L_{0},\nabla_{0})_{0}\)._
### The organization of this paper
In Section 2, the apparent singularities for a generic rank \(2\) meromorphic connection are defined. After the definition of the apparent singularities, we will discuss the companion normal form of a generic rank \(2\) meromorphic connection. We will use this companion normal form when we introduce canonical coordinates. In Section 3, first, we will describe our moduli space of rank \(2\) meromorphic connections. Second, we will discuss the tangent spaces of the moduli space of rank \(2\) meromorphic connections. We will recall that the tangent space at a meromorphic connection is isomorphic to a hypercohomology of the complex defined by the meromorphic connection. After that, we will describe a natural symplectic structure on the moduli space of rank \(2\) meromorphic connections. Section 3.3 and Section 3.4 are preliminaries for the proof of the first main theorem. In Section 3.5, we will give the map from a generic part of the moduli space to \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\) and will show the first main theorem.
In Section 4, we will consider rank \(2\) meromorphic connections with a fixed trace connection. First, to describe the bundle map \(\pi_{\operatorname{Bun}}\), we recall the moduli space of stable quasi-parabolic bundles with fixed determinant. Second, we will describe our moduli space of rank \(2\) meromorphic connections with a fixed trace connection. Third, we will describe the map \(\pi_{\operatorname{App}}\) defined by considering the apparent singularities. In Section 4.4, we will recall a natural symplectic structure on the moduli space of rank \(2\) meromorphic connections with a fixed trace connection, and will show the second main theorem.
In Section 5, we will apply the argument in Section 2 and Section 3 to the case of an elliptic curve with a divisor \(D\) of length \(2\). When \(D\) is reduced, this amounts to two logarithmic singularities, otherwise to an irregular singularity. It is remarkable that using our approach these two cases can be studied completely similarly.
In Section 6, we will provide a method for obtaining the canonical coordinates \(\tilde{p}_{j}\in\boldsymbol{\Omega}(D,c_{d})|_{q_{j}}\) for generic \((E,\nabla)\in M^{0}_{X}\) by introducing a section \(s\in H^{0}(C,\det(E))\) and \(\gamma\in H^{0}(C,\Omega^{1}_{C}(D))\). We will utilize an open set \(U_{0}=C\setminus\{s=0,\gamma=0\}\) and the trivialization of \(E|_{U_{0}}\) to define \(\tilde{p}_{j}\in\Omega^{1}_{C}(D)|_{q_{j}}\). This method can also be used for constructing a meromorphic connection \(\nabla_{1}\colon E\longrightarrow E\otimes\Omega^{1}_{C}(D(s))\) for a given \(s\in H^{0}(C,\det(E))\), where \(D(s)\) denotes the zero divisor of \(s\). In Theorem 41, we will provide an alternative proof of the birationality of \(f_{\operatorname{App}}\) (cf. Proposition 17) by utilizing the Higgs field \(\nabla-\nabla_{1}\) and the BNR correspondence [3]. This approach may shed new light on the relationship between the canonical coordinates of the moduli spaces of connections and those of the moduli spaces of Higgs bundles (cf. [50]).
### Acknowledgments
The authors would like to warmly thank Michi-aki Inaba and Takafumi Matsumoto for useful discussions. The first, third, and fourth authors would like to thank Frank Loray for his hospitality at IRMAR, Univ. Rennes.
## 2. Companion normal form
Let \(C\) be a compact Riemann surface of genus \(g\) (\(g\geq 0\)), and \(D\) be an effective divisor on \(C\). We assume \(4g-3+n>0\) where \(n=\deg(D)\). We consider a rank \(2\) meromorphic connection
\[\nabla:E\longrightarrow E\otimes\Omega^{1}_{C}(D) \tag{2.1}\]
on \(C\) where \(\deg(E)=2g-1\).
When \(g=0\), Diarra-Loray have given companion normal forms of rank \(2\) meromorphic connections in [9]. By means of these companion normal forms, we may construct a universal family of rank \(2\) meromorphic connections over some generic part of the moduli space of rank \(2\) meromorphic connections. This universal family is useful for describing the isomonodromic deformations [34]. The purpose of this section is to give companion normal forms of rank \(2\) meromorphic connections when \(g\geq 0\). For this purpose, we first introduce the apparent singularities of (generic) rank \(2\) meromorphic connections.
### Apparent singularities
First we assume that \(\dim_{\mathbb{C}}H^{0}(C,E)=1\) for the rank \(2\) meromorphic connection (2.1). This assumption holds for the generic vector bundle underlying a rank \(2\) meromorphic connection with \(\deg(E)=2g-1\). A nonzero element of \(H^{0}(C,E)\) determines a morphism \(\mathcal{O}_{C}\to E\), and we define the sequence of \(\mathbb{C}\)-linear maps
\[\varphi_{\nabla}\colon\mathcal{O}_{C}\longrightarrow E\stackrel{{ \nabla}}{{\longrightarrow}}E\otimes\Omega^{1}_{C}(D)\longrightarrow E/ \mathcal{O}_{C}\otimes\Omega^{1}_{C}(D). \tag{2.2}\]
This composition \(\varphi_{\nabla}\) is an \(\mathcal{O}_{C}\)-linear map. From now on we assume that \(\varphi_{\nabla}\neq 0\). This assumption holds for every \((E,\nabla)\), provided that the eigenvalues of the residues are chosen generically (see Remark 4 below). We call the global section of \(H^{0}(C,E)\) appearing in (2.2) the _cyclic vector_.
Let us now define \(E_{0}\subset E\) as the rank \(2\) locally free subsheaf spanned by \(\mathcal{O}_{C}\) and
\[\operatorname{Im}\left\{\nabla|_{\mathcal{O}_{C}}\otimes\operatorname{Id}_{( \Omega^{1}_{C}(D))^{-1}}\colon(\Omega^{1}_{C}(D))^{-1}\to E\right\}.\]
This construction gives rise to a short exact sequence of coherent sheaves
\[0\longrightarrow\mathcal{O}_{C}\longrightarrow E_{0}\longrightarrow(\Omega^ {1}_{C}(D))^{-1}\longrightarrow 0.\]
We claim that this sequence splits, i.e.
\[E_{0}\cong\mathcal{O}_{C}\oplus(\Omega^{1}_{C}(D))^{-1}. \tag{2.3}\]
Indeed, equivalence classes of extensions of \((\Omega^{1}_{C}(D))^{-1}\) by \(\mathcal{O}_{C}\) are classified by the group
\[\operatorname{Ext}^{1}((\Omega^{1}_{C}(D))^{-1},\mathcal{O}_{C})=\operatorname {Ext}^{1}(\mathcal{O}_{C}(-D),\Omega^{1}_{C})\cong H^{0}(C,\mathcal{O}_{C}(-D ))^{\vee}=0,\]
where we have used Grothendieck-Serre duality. We denote by
\[\phi_{\nabla}\colon E_{0}\longrightarrow E. \tag{2.4}\]
the canonical inclusion morphism, and define the meromorphic connection
\[\nabla_{0}=\phi_{\nabla}^{*}(\nabla) \tag{2.5}\]
on \(E_{0}\). We note that the polar divisor of \(\nabla_{0}\) is \(D+B\) where
\[B=\operatorname{div}(\varphi_{\nabla}). \tag{2.6}\]
We note that
\[\deg(B)=4g-3+n. \tag{2.7}\]
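Indeed, when the cyclic vector generates \(\mathcal{O}_{C}\subset E\) as a subbundle (which holds in the generic situation considered here), the quotient \(E/\mathcal{O}_{C}\) is a line bundle of degree \(\deg(E)=2g-1\), so \(\varphi_{\nabla}\) is a nonzero section of a line bundle of degree

\[\deg\bigl((E/\mathcal{O}_{C})\otimes\Omega^{1}_{C}(D)\bigr)=(2g-1)+(2g-2+n)=4g-3+n,\]

and (2.7) follows.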
From now on, moreover, we assume that \(B\) is reduced, with support disjoint from \(D\). In other words, in view of (2.7), we have
\[B=q_{1}+\cdots+q_{4g-3+n}\]
where \(q_{i}\neq q_{j}\) whenever \(i\neq j\) and \(q_{i}\notin D\) for all \(i\).
**Definition 1**.: _Assume that \(\varphi_{\nabla}\neq 0\) and \(\operatorname{div}(\varphi_{\nabla})\) is reduced, with support disjoint from \(D\). We call the points of the support \(\{q_{1},\dots,q_{4g-3+n}\}\) of \(\operatorname{div}(\varphi_{\nabla})\) the apparent singularities of \((E,\nabla)\)._
### Companion normal form
The desired companion normal form is a normal form of \(\nabla_{0}\) in (2.5), obtained by normalizing \(\nabla_{0}\) through automorphisms of \(\mathcal{O}_{C}\oplus(\Omega^{1}_{C}(D))^{-1}\). To obtain it, we first describe the decomposition of \(\nabla_{0}\) relative to (2.3):
\[\nabla_{0}=\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix}\]
where
\[\begin{cases}\alpha&:&\mathcal{O}_{C}\longrightarrow\Omega^{1}_{C}(D+B)& \text{(connection)}\\ \beta&:&(\Omega^{1}_{C}(D))^{-1}\longrightarrow\Omega^{1}_{C}(D+B)&\text{($ \mathcal{O}_{C}$-linear)}\\ \gamma&:&\mathcal{O}_{C}\longrightarrow\mathcal{O}_{C}(B)&\text{($\mathcal{O }_{C}$-linear)}\\ \delta&:&(\Omega^{1}_{C}(D))^{-1}\longrightarrow(\Omega^{1}_{C}(D))^{-1} \otimes\Omega^{1}_{C}(D+B)&\text{(connection)}\end{cases}\]
This form is unique only up to pre-composition by an element of the automorphism group \(\operatorname{Aut}(E_{0})\) of \(E_{0}\). Elements of \(\operatorname{Aut}(E_{0})\) are described as follows:
\[\begin{pmatrix}\lambda_{1}&F\\ 0&\lambda_{2}\end{pmatrix},\]
where \(\lambda_{1},\lambda_{2}\in\mathbb{C}^{*}\) and \(F\in H^{0}(C,\Omega^{1}_{C}(D))\). It follows by construction that \(\nabla_{0}\) admits no pole in restriction to \(\mathcal{O}_{C}\) over the divisor \(B\), so that actually we have
\[\begin{cases}\alpha&:&\mathcal{O}_{C}\longrightarrow\Omega^{1}_{C}(D)&\text{ (connection)}\\ \gamma&:&\mathcal{O}_{C}\longrightarrow\mathcal{O}_{C}&\text{($=$ identity)}\end{cases}\]
The action of an automorphism of the form
\[\begin{pmatrix}1&F\\ 0&1\end{pmatrix},\quad F\in H^{0}(C,\Omega^{1}_{C}(D)) \tag{2.8}\]
transforms \(\alpha\) into \(\alpha-F\gamma=\alpha-F\) (without affecting \(\gamma\)). Since \(\alpha-\mathrm{d}\) is \(\mathcal{O}_{C}\)-linear, hence an element of \(H^{0}(C,\Omega^{1}_{C}(D))\), the choice \(F=\alpha-\mathrm{d}\) is the unique one for which \(\alpha=\mathrm{d}\) is the trivial connection on \(\mathcal{O}_{C}\). We thus get the unique companion normal form
\[\nabla_{0}=\begin{pmatrix}\mathrm{d}&\beta\\ 1&\delta\end{pmatrix}. \tag{2.9}\]
Notice that the same companion normal form is obtained simply by taking the generator \(\varphi_{\nabla}(1)\) for the second factor of (2.3), and the action of the automorphism (2.8) in the above argument simply amounts to switching to this particular generator.
### Spectral data
Now we consider the polar part of the meromorphic connection (2.1) at each point of the support of \(D\). We impose some conditions on the polar parts. To describe the conditions, we introduce the notion of irregular curves with residues. Let \(\nu\) be a positive integer. We set \(I:=\{1,2,\ldots,\nu\}\). Let \(\mathfrak{h}\) be the Cartan subalgebra
\[\mathfrak{h}=\left\{\begin{pmatrix}h_{1}&0\\ 0&h_{2}\end{pmatrix}\biggm{|}h_{1},h_{2}\in\mathbb{C}\right\}\]
of the Lie algebra \(\mathfrak{gl}_{2}(\mathbb{C})\). Let \(\mathfrak{h}_{0}\) be the regular locus of \(\mathfrak{h}\).
**Definition 2**.: _We say \(X=(C,D,\{z_{i}\}_{i\in I},\{\boldsymbol{\theta}_{i}\}_{i\in I},\boldsymbol{ \theta}_{\mathrm{res}})\) is an irregular curve with residues if_
* \(C\) _is a compact Riemann surface of genus_ \(g\)_,_
* \(D=\sum_{i\in I}m_{i}[t_{i}]\) _is an effective divisor on_ \(C\)_._
* \(z_{i}\) _is a generator of the maximal ideal of_ \(\mathcal{O}_{C,t_{i}}\)_,_
* \(\boldsymbol{\theta}_{i}=(\theta_{i,-m_{i}},(\theta_{i,-m_{i}+1},\ldots,\theta_ {i,-2}))\in\mathfrak{h}_{0}\times\mathfrak{h}^{m_{i}-2}\)_, and_
* \(\boldsymbol{\theta}_{\mathrm{res}}=(\theta_{1,-1},\theta_{2,-1},\ldots,\theta_{ \nu,-1})\)_, where_ \(\theta_{i,-1}\in\mathfrak{h}\)_, such that_ \(\sum_{i=1}^{\nu}\mathrm{tr}(\theta_{i,-1})=-(2g-1)\)_._
_We set_
\[\theta_{i,-1}=\begin{pmatrix}\theta_{i,-1}^{-}&0\\ 0&\theta_{i,-1}^{+}\end{pmatrix}\quad\text{for each $i\in I$.}\]
_We assume that \(\sum_{i=1}^{\nu}\theta_{i,-1}^{\pm}\not\in\mathbb{Z}\) for either choice of the sign \(\pm\), and that, if \(m_{i}=1\), then \(\theta_{i,-1}^{+}-\theta_{i,-1}^{-}\not\in\mathbb{Z}\)._
For an irregular curve with residues \(X\), we set
\[\omega_{i}(X):=\theta_{i,-m_{i}}\frac{\mathrm{d}z_{i}}{z_{i}^{m_{i}}}+\theta _{i,-m_{i}+1}\frac{\mathrm{d}z_{i}}{z_{i}^{m_{i}-1}}+\cdots+\theta_{i,-2}\frac {\mathrm{d}z_{i}}{z_{i}^{2}}+\theta_{i,-1}\frac{\mathrm{d}z_{i}}{z_{i}} \tag{2.10}\]
and \(\mathcal{O}_{m_{i}[t_{i}]}:=\mathcal{O}_{C,t_{i}}/(z_{i}^{m_{i}})\). For an irregular curve with residues \(X\) and a meromorphic connection \((E,\nabla)\) in (2.1), we set \(E|_{m_{i}[t_{i}]}:=E\otimes\mathcal{O}_{m_{i}[t_{i}]}\). Let
\[\nabla|_{m_{i}[t_{i}]}\colon E|_{m_{i}[t_{i}]}\longrightarrow E|_{m_{i}[t_{i} ]}\otimes\Omega_{C}^{1}(D)\]
be the morphism induced by \(\nabla\).
**Definition 3**.: _We call \((E,\nabla)\) a rank \(2\) meromorphic connection over an irregular curve with residues \(X\) if_
* \(E\) _is a rank_ \(2\) _vector bundle of degree_ \(2g-1\) _on_ \(C\)_,_
* \(\nabla\colon E\to E\otimes\Omega_{C}^{1}(D)\) _is a connection, and_
* _there exists an isomorphism_ \(\varphi_{m_{i}[t_{i}]}\colon E|_{m_{i}[t_{i}]}\to\mathcal{O}_{m_{i}[t_{i}]}^{ \oplus 2}\) _for each_ \(i\in I\) _such that_ \[(\varphi_{m_{i}[t_{i}]}\otimes 1)\circ\nabla|_{m_{i}[t_{i}]}\circ\varphi_{m_{i}[t_{i} ]}^{-1}=\mathrm{d}+\omega_{i}(X).\] _Here_ \(\omega_{i}(X)\) _is defined in (_2.10_)._
_We call \(\omega_{i}(X)\) the spectral data of \((E,\nabla)\) and call the submodule \(\varphi_{m_{i}[t_{i}]}^{-1}(\mathcal{O}_{m_{i}[t_{i}]}\oplus 0)\) of \(E|_{m_{i}[t_{i}]}\) the quasi-parabolic structure of \((E,\nabla)\) at \(t_{i}\)._
From now on, by a connection we will mean a rank \(2\) meromorphic connection over a fixed irregular curve with residues \(X\). So we impose condition (iii) of Definition 3 on the polar parts of the meromorphic connection \(\nabla\) in (2.1) at the points of the support of \(D\). This condition means that the polar part of \(\nabla\) at \(t_{i}\) is diagonalizable, with eigenvalues equal to the diagonal entries of \(\omega_{i}(X)\), for \(i=1,2,\ldots,\nu\).
**Remark 4**.: _In Definition 3, we impose the condition that \(\sum_{i=1}^{\nu}\theta_{i,-1}^{\pm}\not\in\mathbb{Z}\) for either choice of the sign \(\pm\). By this assumption and the argument as in [35, Proposition 6], \((E,\nabla)\) is irreducible. Several arguments then become simpler. For example, \(\varphi_{\nabla}=0\) if and only if the free subsheaf \(\mathcal{O}_{C}\) of \(E\) is a proper \(\nabla\)-invariant subbundle, so we have \(\varphi_{\nabla}\neq 0\). Moreover, \((E,\nabla)\) is automatically stable (see Section 3.1 below)._
### The polar parts of \(\delta\)
We fix an irregular curve with residues \(X\). Let \((E,\nabla)\) be a rank \(2\) meromorphic connection over \(X\) and \(\nabla_{0}\) be the companion normal form for \((E,\nabla)\). We consider the \((2,2)\)-entry \(\delta\) of this companion normal form \(\nabla_{0}\).
It immediately follows from (2.9) that the connection \(\delta\) coincides with the trace connection \(\mathrm{tr}(\nabla_{0})\) on \(\det(E_{0})=(\Omega_{C}^{1}(D))^{-1}\). It is further related to the trace connection \(\mathrm{tr}(\nabla)\) by
\[\delta=\mathrm{tr}(\nabla_{0})=\mathrm{tr}(\nabla)+\frac{\mathrm{d}\varphi_{ \nabla}}{\varphi_{\nabla}}.\]
**Lemma 5**.:
1. _The polar part of_ \(\delta\) _over_ \(D\) _is determined by the spectral data;_
2. _The polar part of_ \(\delta\) _over_ \(B\) _is logarithmic with residue_ \(+1\)_;_
3. \(\delta\) _is determined by the irregular curve with residues_ \(X\) _up to adding a holomorphic_ \(1\)_-form of_ \(C\)_._
Proof.: The polar part of \(\delta\) at \(t_{i}\) is equal to \(\operatorname{tr}(\omega_{i})\), showing the first assertion. In view of our assumption \(q_{j_{1}}\neq q_{j_{2}}\) for \(j_{1}\neq j_{2}\), the second assertion is classical. Let now \(\delta,\delta^{\prime}\) be the \((2,2)\)-entries of companion normal forms \(\nabla_{0},\nabla_{0}^{\prime}\) of connections \(\nabla,\nabla^{\prime}\) satisfying the conditions of Definition 3. By the first two parts, \(\delta-\delta^{\prime}\) is then a global holomorphic \(1\)-form on \(C\).
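For instance, the residue in assertion (2) can be read off from the relation displayed above: at \(q_{j}\notin D\) the trace connection \(\operatorname{tr}(\nabla)\) is holomorphic and \(\varphi_{\nabla}\) vanishes to order exactly \(1\) (as \(B\) is reduced), so

\[\operatorname{res}_{q_{j}}(\delta)=\operatorname{res}_{q_{j}}\left(\operatorname{tr}(\nabla)+\frac{\mathrm{d}\varphi_{\nabla}}{\varphi_{\nabla}}\right)=0+1=1.\]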
As a consequence of the lemma and by \(\dim_{\mathbb{C}}H^{0}(C,\Omega^{1}_{C})=g\), the possible values for \(\delta\) represent \(g\) free parameters for a meromorphic connection over \(X\).
### The polar parts of \(\beta\)
Next we consider the \((1,2)\)-entry \(\beta\) of the companion normal form \(\nabla_{0}\) of a meromorphic connection \(\nabla\) over the irregular curve with residues \(X\). By the condition \(\gamma=1\) in (2.9), \(\beta\) accounts for the determinant (the constant term of the characteristic polynomial) of the residues. By Definition 3, the eigenvalues of the connection matrix of \(\nabla\) are meromorphic differentials with a pole of order at most \(m_{i}\) at \(t_{i}\). The same condition then holds for \(\nabla_{0}\) too, because it only differs from \(\nabla\) by elementary modifications at points \(q_{j}\neq t_{i}\). As the determinant of a \(2\times 2\) matrix is a quadratic expression in the eigenvalues, we see that \(\beta\) must be a quadratic differential with poles of order at most \(2m_{i}\) at \(t_{i}\). Over \(B\), a similar argument shows that \(\beta\) has poles of order at most \(2\).
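Concretely, from (2.9) the characteristic polynomial of the local connection matrix is

\[\det\begin{pmatrix}\lambda&-\beta\\ -1&\lambda-\delta\end{pmatrix}=\lambda^{2}-\delta\lambda-\beta,\]

so the two eigenvalue \(1\)-forms \(\lambda_{\pm}\) satisfy \(\delta=\lambda_{+}+\lambda_{-}\) and \(\beta=-\lambda_{+}\lambda_{-}\); the bounds on the pole orders of \(\beta\) stated above follow at once from those on \(\lambda_{\pm}\).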
Let us fix local coordinate charts \(z_{i}\) centered at the pole \(t_{i}\). One may then expand \(\beta\) into Laurent series:
\[\beta=\left(\beta_{i,-2m_{i}}z_{i}^{-2m_{i}}+\cdots+\beta_{i,-2}z_{i}^{-2}+O(z_ {i}^{-1})\right)(\operatorname{d}\!z_{i})^{\otimes 2}.\]
Notice that for given \(\beta\) the coefficient \(\beta_{i,-2}\) is independent of the chosen coordinate chart \(z_{i}\), however the other coefficients depend on \(z_{i}\). We also fix local coordinate charts \(z_{j}\) centered at the apparent singularity \(q_{j}\), and have a similar expansion
\[\beta=\left(\beta_{j,-2}z_{j}^{-2}+\beta_{j,-1}z_{j}^{-1}+O(z_{j}^{0})\right) (\operatorname{d}\!z_{j})^{\otimes 2}.\]
Analogously to Lemma 5, we therefore find
**Lemma 6**.:
1. _The coefficients_ \(\beta_{i,-2m_{i}},\ldots,\beta_{i,-2}\) _are uniquely determined by the irregular curve with residues_ \(X\) _(and the holomorphic coordinate_ \(z_{i}\)_);_
2. _We have_ \(\beta_{j,-2}=0\)_._
3. \(\beta\) _is determined by the irregular curve with residues_ \(X\) _up to adding a section of_ \((\Omega^{1}_{C})^{\otimes 2}(D)\)_._
Proof.: The coefficients \(\beta_{i,-2m_{i}},\ldots,\beta_{i,-2}\) all admit homogeneous quadratic expressions in terms of the eigenvalues of \(\boldsymbol{\theta}_{i},\boldsymbol{\theta}_{\text{res}}\), therefore they are determined by them. Conversely, the coefficients \(\beta_{i,-2m_{i}},\ldots,\beta_{i,-2}\) determine the polar part of the eigenvalues. It is classical that for an apparent singularity of \(\nabla_{0}\), one of the two eigenvalues of the residue must vanish. This implies that for every \(q\in B\) the product of the eigenvalues of \(\text{res}_{q}(\nabla_{0})\) vanishes. As this latter product gives the leading (second) order term \(\beta_{j,-2}\), we get the second assertion. The last part follows from the first two as in Lemma 5.
As a consequence of the lemma and by \(\dim_{\mathbb{C}}H^{0}(C,(\Omega^{1}_{C})^{\otimes 2}(D))=3g-3+n\), the possible values for \(\beta\) represent \(3g-3+n\) free parameters for a connection \(\nabla\) on \(X\) having apparent singularities at a fixed reduced divisor \(B\) of length \(N\).
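The dimension count used here is a standard consequence of the Riemann-Roch theorem: since \(\deg\bigl((\Omega^{1}_{C})^{\otimes 2}(D)\bigr)=4g-4+n>2g-2\) under our standing assumptions (\(4g-3+n>0\) and \(n\geq 1\)), the first cohomology vanishes and

\[\dim_{\mathbb{C}}H^{0}(C,(\Omega^{1}_{C})^{\otimes 2}(D))=(4g-4+n)+1-g=3g-3+n.\]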
From now on, we set \(\beta_{j,-1}=\zeta_{j}\), so that we have the expansion
\[\beta=\zeta_{j}\frac{(\operatorname{d}\!z_{j})^{\otimes 2}}{z_{j}}+\beta^{(j)} \tag{2.11}\]
for some local holomorphic quadratic differential \(\beta^{(j)}\). Notice that \(\zeta_{j}\) depends on the coordinate \(z_{j}\), but the element \(\zeta_{j}\,\mathrm{d}z_{j}\in\Omega^{1}_{C}|_{q_{j}}\) of the fiber of the holomorphic cotangent (or canonical) bundle over \(q_{j}\) does not depend on it. As a matter of fact, since \(\beta\) belongs to an affine space modelled over \(H^{0}(C,(\Omega^{1}_{C})^{\otimes 2}(D))\) (and in order to be consistent with the decomposition (2.3)), it is even more rigorous to consider \(\zeta_{j}\,\mathrm{d}z_{j}\) as elements of the fiber \(\Omega^{1}_{C}(D)|_{q_{j}}\), using the inclusion \(\Omega^{1}_{C}\subset\Omega^{1}_{C}(D)\). In the sequel we will consider them to be such elements. It will turn out that these quantities \(\zeta_{j}\,\mathrm{d}z_{j}\) are closely related to accessory parameters.
### Determination of \(\beta\) and \(\delta\) in terms of \(\zeta\)
Fix a reduced divisor \(B\) of length \(N\) on \(C\) with support disjoint from \(D\). In Subsections 2.4 and 2.5 we have found that (normal forms of) meromorphic connections over the irregular curve with residues \(X\) that have apparent singularities at \(B\) can be described by an affine space of complex dimension \(g+3g-3+n=N\) (\(g\) coming from the choice of \(\delta\) and \(3g-3+n\) from the choice of \(\beta\)). In this section, we provide a description of such connections in terms of analogs of separated variables. Namely, it will turn out that generically the data of \(\delta,\beta\) is equivalent to the \(N\)-tuple \((\zeta_{1}\,\mathrm{d}z_{1},\ldots,\zeta_{N}\,\mathrm{d}z_{N})\).
The fact that the singular points over \(B\) are apparent imposes further constraints on \(\beta\) and \(\delta\): one linear condition for each point \(q_{j}\), so we can expect these constraints to fix \(\beta\) and \(\delta\) uniquely in terms of the data \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})_{j=1}^{N}\). In fact, this is true in the genus \(g=0\) case (see [9]), and we will show in Lemma 7 that this is also true for _generic_ choices of \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})_{j=1}^{N}\) if \(g>0\).
In fact, the data of \(\zeta_{j}\,\mathrm{d}z_{j}\) can be interpreted as a certain quasi-parabolic structure over \(B\). Indeed, at a point \(q_{j}\) and with respect to the decomposition (2.3), the residue of \(\nabla_{0}\) reads as
\[\operatorname{res}_{q_{j}}\nabla_{0}=\begin{pmatrix}0&\zeta_{j}\,\mathrm{d}z_ {j}\\ 0&1\end{pmatrix}.\]
So, the vector \(\begin{pmatrix}\zeta_{j}\,\mathrm{d}z_{j}\\ 1\end{pmatrix}\) is an eigenvector with respect to eigenvalue \(1\) and the map \(\phi_{\nabla}\) (see (2.4)) is just the positive elementary transformation with respect to these parabolic directions at all points \(q_{j}\). In summary, the data of all values \(\zeta_{j}\,\mathrm{d}z_{j}\) is equivalent to the data of a quasi-parabolic structure of \(E_{0}\) over \(B\) (i.e., a line in the fiber of \(E_{0}\) over each \(q_{j}\)) distinct from the destabilizing subbundle \(\mathcal{O}_{C}\subset E_{0}\) for every \(j\).
Let us denote by \(\mathbf{\Omega}(D)\) the total space of the line bundle \(\Omega^{1}_{C}(D)\).
**Lemma 7**.: _For generic data \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})_{j}\in\operatorname{Sym}^{4g-3+n}(\mathbf{ \Omega}(D))\) there exist unique \(\beta\) and \(\delta\) as above such that the corresponding \(\nabla_{0}\) has apparent singular points at all the points \(q_{j}\) (\(1\leq j\leq N:=4g-3+n\)), and such that the Laurent expansion (2.11) is fulfilled._
Proof.: Let us consider \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})_{j}\) such that the \(q_{j}\)'s are pairwise distinct and do not intersect the support of \(D\). Given one point \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})\), we can diagonalize the residue \(\operatorname{res}_{q_{j}}\nabla_{0}\) by conjugating by a triangular matrix
\[\begin{pmatrix}1&\zeta_{j}\,\mathrm{d}z_{j}\\ 0&1\end{pmatrix}^{-1}\begin{pmatrix}0&\beta\\ 1&\delta\end{pmatrix}\begin{pmatrix}1&\zeta_{j}\,\mathrm{d}z_{j}\\ 0&1\end{pmatrix}+\begin{pmatrix}1&\zeta_{j}\,\mathrm{d}z_{j}\\ 0&1\end{pmatrix}^{-1}d\begin{pmatrix}1&\zeta_{j}\,\mathrm{d}z_{j}\\ 0&1\end{pmatrix}\] \[=\begin{pmatrix}-\zeta_{j}\,\mathrm{d}z_{j}&\beta-\zeta_{j}\delta \otimes\mathrm{d}z_{j}-\zeta_{j}^{2}\,\mathrm{d}z_{j}^{\otimes 2}\\ 1&\delta+\zeta_{j}\,\mathrm{d}z_{j}\end{pmatrix}=\begin{pmatrix}0&0\\ 0&\frac{dz_{j}}{z_{j}}\end{pmatrix}+\operatorname{holomorphic} \tag{2.12}\]
where \(z_{j}\) stands for a local coordinate at \(q_{j}\). Then the elementary transformation \(\phi_{\nabla}\) is locally equivalent to conjugation by \(\begin{pmatrix}1&0\\ 0&z_{j}^{-1}\end{pmatrix}\), yielding
\[\begin{pmatrix}-\zeta_{j}\,\mathrm{d}z_{j}&\frac{\beta-\zeta_{j}\delta\otimes \mathrm{d}z_{j}-\zeta_{j}^{2}\,\mathrm{d}z_{j}^{\otimes 2}}{z_{j}}\\ z_{j}&\delta+\zeta_{j}\,\mathrm{d}z_{j}-\frac{dz_{j}}{z_{j}}\end{pmatrix}. \tag{2.13}\]
The apparent point condition is therefore equivalent to saying that \(\beta-\zeta_{j}\delta\otimes\mathrm{d}z_{j}-\zeta_{j}^{2}\,\mathrm{d}z_{j}^{ \otimes 2}\) is (holomorphic and) **vanishing** at \(q_{j}\). This condition is linear on \(\beta\) and \(\delta\) and rewrites
\[\underbrace{\beta-\zeta_{j}\delta\otimes\mathrm{d}z_{j}}_{\mathrm{holomorphic}}|_{q_{j}}= \zeta_{j}^{2}\,\mathrm{d}z_{j}^{\otimes 2}|_{q_{j}}, \tag{2.14}\]
where the right hand side does not involve \(\beta\) and \(\delta\). If we assume that \((q_{1},\ldots,q_{N})\) lies in the image of the map App (see (1.2)), then the normal form of any \((E,\nabla)\) in the preimage produces a solution \((\delta_{0},\beta_{0})\). Fixing such a solution, by Lemmas 5 and 6 we may rewrite
\[\begin{cases}\beta&=&\beta_{0}+b_{1}\nu_{1}+\cdots+b_{N-g}\nu_{N-g}\\ \delta&=&\delta_{0}+d_{1}\omega_{1}+\cdots+d_{g}\omega_{g}\end{cases}\]
where \((\omega_{l})_{l=1}^{g}\), \((\nu_{k})_{k=1}^{N-g}\) are respective bases of \(H^{0}(C,\Omega^{1}_{C})\) and \(H^{0}(C,(\Omega^{1}_{C})^{\otimes 2}(D))\). Using these expressions, the constraint that \(q_{j}\) is an apparent singularity can be rewritten as a linear system of \(N\) equations in the \(N\) variables \(b_{k}\), \(d_{l}\). The condition for \(\beta\) and \(\delta\) to be uniquely determined in terms of the data \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})\) is that the following determinant does not vanish:
\[\det\begin{pmatrix}\nu_{1}(q_{1})&\cdots&\nu_{N-g}(q_{1})&\zeta_{1}\,\mathrm{ d}z_{1}\omega_{1}(q_{1})&\cdots&\zeta_{1}\,\mathrm{d}z_{1}\omega_{g}(q_{1})\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \nu_{1}(q_{N})&\cdots&\nu_{N-g}(q_{N})&\zeta_{N}\,\mathrm{d}z_{N}\omega_{1}(q_ {N})&\cdots&\zeta_{N}\,\mathrm{d}z_{N}\omega_{g}(q_{N})\end{pmatrix} \tag{2.15}\]
Of course, for our purpose it is sufficient to find some \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})\)'s such that this determinant does not vanish, so that it is generically nonvanishing. If we set \(\zeta_{1}=\cdots=\zeta_{N-g}=0\), then the matrix has a zero block of size \((N-g)\times g\) in the top right corner, and the determinant factors as
\[\zeta_{N-g+1}\,\mathrm{d}z_{N-g+1}\cdots\zeta_{N}\,\mathrm{d}z_{N}\cdot\det \begin{pmatrix}\nu_{1}(q_{1})&\cdots&\nu_{N-g}(q_{1})\\ \vdots&\ddots&\vdots\\ \nu_{1}(q_{N-g})&\cdots&\nu_{N-g}(q_{N-g})\end{pmatrix}\cdot\det\begin{pmatrix} \omega_{1}(\tilde{q}_{1})&\cdots&\omega_{g}(\tilde{q}_{1})\\ \vdots&\ddots&\vdots\\ \omega_{1}(\tilde{q}_{g})&\cdots&\omega_{g}(\tilde{q}_{g})\end{pmatrix}\]
where \(\tilde{q}_{j}=q_{j+N-g}\). After setting \(\zeta_{N-g+1}=\cdots=\zeta_{N}=1\), it is enough to find \(q_{j}\)'s such that the two smaller determinants are nonzero. To conclude the proof, let us denote by \(L\) either of the two line bundles \(\Omega^{1}_{C}\) or \((\Omega^{1}_{C})^{\otimes 2}(D)\), and by \(\mu_{1},\ldots,\mu_{N^{\prime}}\) a corresponding basis of \(H^{0}(C,L)\). Then we want to prove that the image of the curve under the evaluation map
\[C\stackrel{{\mathrm{ev}}}{{\longrightarrow}}\mathbb{P}^{N^{ \prime}-1}\ ;\ q\mapsto(\mu_{1}(q):\ldots:\mu_{N^{\prime}}(q))\]
is not contained in any hyperplane, i.e. that we can find \(q_{1},\ldots,q_{N^{\prime}}\in C\) whose images are not contained in a common hyperplane. This is indeed the case: otherwise, we would have a linear relation between \(\mu_{1},\ldots,\mu_{N^{\prime}}\), contradicting that they form a basis.
**Remark 8**.: _In the previous proof, the locus of \(q_{j}\)'s for which \(\det(\omega_{i}(\tilde{q}_{j}))_{i,j}\) vanishes corresponds to the Brill-Noether locus of the divisor \(\tilde{q}_{1}+\cdots+\tilde{q}_{g}\)._
**Lemma 9**.: _When \(g=0\), any data \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})_{j}\in\mathrm{Sym}^{n-3}(\mathbf{\Omega}(D))\) gives rise to unique \(\beta\) and \(\delta\) such that the corresponding \(\nabla_{0}\) has apparent singular points at all \(q_{j}\)'s. However, for \(g>0\), there always exist data \((q_{j},\zeta_{j}\,\mathrm{d}z_{j})_{j}\) such that the determinant (2.15) vanishes._
Proof.: When \(g=0\), this directly follows from [9] (a consequence of Lagrange interpolation). When \(g>0\), fix generic \(q_{j}\)'s and let \(\omega\in H^{0}(C,\Omega_{C}^{1}(D))\). If we set \(\zeta_{j}:=\omega(q_{j})\), then the last column of (2.15) is just the evaluation of the section \(\omega\otimes\omega_{g}\in H^{0}(C,(\Omega_{C}^{1})^{\otimes 2}(D))\) at \(q_{1},\cdots,q_{4g-3+n}\), and is therefore a linear combination of the first \(3g-3+n\) columns.
## 3. Symplectic structure and canonical coordinates
We fix an irregular curve with residues \(X=(C,D,\{z_{i}\}_{i\in I},\{\boldsymbol{\theta}_{i}\}_{i\in I},\boldsymbol{ \theta}_{\mathrm{res}})\). As usual, we use the notation \(N:=4g+n-3\), where \(g\) is the genus of \(C\) and \(n=\deg(D)\). We will consider the moduli space \(M_{X}\) of rank \(2\) meromorphic connections over \(X\). This moduli space is constructed in [24, Theorem 2.1] and carries a natural symplectic structure described in [24, Proposition 4.1]. The purpose of this section is to give canonical coordinates on an open subset of \(M_{X}\) with respect to this symplectic structure. First we describe the moduli space \(M_{X}\).
### Moduli spaces
Let \((E,\nabla)\) be a rank \(2\) meromorphic connection over \(X\). Then, the subsheaf
\[l^{(i)}:=\varphi_{m_{i}[t_{i}]}^{-1}(\mathcal{O}_{m_{i}[t_{i}]}\oplus 0) \subset E|_{m_{i}[t_{i}]}\]
equips \((E,\nabla)\) with a canonical quasi-parabolic structure at each \(t_{i}\). So we may consider \((E,\nabla)\) as a _quasi-parabolic connection_\((E,\nabla,\{l^{(i)}\})\) as defined in [21, Definition 1.1] and [24, Definition 2.1]. A stability condition for quasi-parabolic connections is introduced in [21, Definition 2.1] and [24, Definition 2.2], and the moduli space of stable quasi-parabolic connections is constructed in [21, Theorem 2.1] and [24, Theorem 2.1]. In our situation, any rank \(2\) meromorphic connection over \(X\) is irreducible (see Remark 4), so our objects are automatically stable; we therefore omit the stability condition for quasi-parabolic connections.
Let \(M_{X}\) be the moduli space of rank \(2\) meromorphic connections over the irregular curve with residues \(X\). If \((E,\nabla)\in M_{X}\) satisfies \(\dim_{\mathbb{C}}H^{0}(C,E)=1\), then we have a unique \(\mathcal{O}_{C}\)-morphism \(\varphi_{\nabla}\) in (2.2). The \(\mathcal{O}_{C}\)-morphism \(\varphi_{\nabla}\) is nonzero, since \((E,\nabla)\) is irreducible. So we may define the divisor \(\mathrm{div}(\varphi_{\nabla})\) in (2.6) for \((E,\nabla)\). We set
\[M_{X}^{0}:=\left\{(E,\nabla)\in M_{X}\ \left|\begin{array}{l}\dim_{ \mathbb{C}}H^{0}(C,E)=1,\\ \mathrm{div}(\varphi_{\nabla})\text{ is reduced, and}\\ \mathrm{div}(\varphi_{\nabla})\text{ has disjoint support with }D\end{array}\right. \right\}.\]
Next we recall the natural symplectic structure on \(M_{X}\).
### Symplectic structure
We will describe the natural symplectic structure on \(M_{X}\) via Čech cohomology. This structure is defined in [21, Proposition 7.2] and [24, Proposition 4.1], and it is an analog of the symplectic form on the moduli space of stable Higgs bundles in [8]. This description of the symplectic structure is useful for comparing it with the Goldman symplectic structure on the character variety via the Riemann-Hilbert map (see, for example, [21, the proof of Proposition 7.3] and [4, Theorem 3.2]). Moreover, this description is useful for describing the isomonodromic deformations (see, for example, [5, Proposition 4.3], [6, Proposition 4.4], and [33, Proposition 3.8]).
First we recall the description of the tangent space of \(M_{X}\) at \((E,\nabla)\in M_{X}\) in terms of the hypercohomology of a certain complex ([21, the proof of Theorem 2.1] and [24, the proof of Proposition 4.1]). We consider \((E,\nabla)\) as a quasi-parabolic connection \((E,\nabla,\{l^{(i)}\})\). We define a complex \(\mathcal{F}^{\bullet}\) for \((E,\nabla,\{l^{(i)}\})\) by
\[\mathcal{F}^{0}:=\Big{\{}s\in\mathcal{E}nd(E)\ \Big{|}\ s|_{m_{i}t_{i}}(l^{(i)} )\subset l^{(i)}\ \text{for any}\ i\Big{\}}\] \[\mathcal{F}^{1}:=\Big{\{}s\in\mathcal{E}nd(E)\otimes\Omega^{1}_{ C}(D)\ \Big{|}\ s|_{m_{i}t_{i}}(l^{(i)})\subset l^{(i)}\otimes\Omega^{1}_{C}\ \text{for any}\ i\Big{\}}\] \[\nabla_{\mathcal{F}^{\bullet}}\colon\mathcal{F}^{0}\longrightarrow \mathcal{F}^{1};\quad\nabla_{\mathcal{F}^{\bullet}}(s)=\nabla\circ s-s\circ\nabla. \tag{3.1}\]
Then we have an isomorphism between the tangent space \(T_{(E,\nabla,\{l^{(i)}\})}M_{X}\) and \(\mathbf{H}^{1}(\mathcal{F}^{\bullet})\).
Now we recall this isomorphism. We take an analytic (or affine) open covering \(C=\bigcup_{\alpha}U_{\alpha}\) such that \(E|_{U_{\alpha}}\cong\mathcal{O}^{\oplus 2}_{U_{\alpha}}\) for any \(\alpha\), \(\sharp\{i\ |\ t_{i}\cap U_{\alpha}\neq\emptyset\}\leq 1\) for any \(\alpha\) and \(\sharp\{\alpha\ |\ t_{i}\cap U_{\alpha}\neq\emptyset\}\leq 1\) for any \(i\). Take a tangent vector \(v\in T_{(E,\nabla,\{l^{(i)}\})}M_{X}\). The tangent vector \(v\) corresponds to an infinitesimal deformation \((E_{\epsilon},\nabla_{\epsilon},\{l^{(i)}_{\epsilon}\})\) of \((E,\nabla,\{l^{(i)}\})\) over \(C\times\operatorname{Spec}\mathbb{C}[\epsilon]\) such that \((E_{\epsilon},\nabla_{\epsilon},\{l^{(i)}_{\epsilon}\})\otimes\mathbb{C}[ \epsilon]/(\epsilon)\cong(E,\nabla,\{l^{(i)}\})\), where \(\mathbb{C}[\epsilon]=\mathbb{C}[t]/(t^{2})\). There is an isomorphism
\[\varphi_{\alpha}\colon E_{\epsilon}|_{U_{\alpha}\times\operatorname{Spec} \mathbb{C}[\epsilon]}\xrightarrow{\sim}\mathcal{O}^{\oplus 2}_{U_{\alpha} \times\operatorname{Spec}\mathbb{C}[\epsilon]}\xrightarrow{\sim}E|_{U_{ \alpha}}\otimes\mathbb{C}[\epsilon]\]
such that \(\varphi_{\alpha}\otimes\mathbb{C}[\epsilon]/(\epsilon)\colon E_{\epsilon} \otimes\mathbb{C}[\epsilon]/(\epsilon)|_{U_{\alpha}}\xrightarrow{\sim}E|_{U_{ \alpha}}\otimes\mathbb{C}[\epsilon]/(\epsilon)=E|_{U_{\alpha}}\) is the given isomorphism and that \(\varphi_{\alpha}|_{t_{i}\times\operatorname{Spec}\mathbb{C}[\epsilon]}(l^{(i) }_{\epsilon})=l^{(i)}|_{U_{\alpha}\times\operatorname{Spec}\mathbb{C}[ \epsilon]}\) if \(t_{i}\cap U_{\alpha}\neq\emptyset\). We put
\[u_{\alpha\beta}:=\varphi_{\alpha}\circ\varphi_{\beta}^{-1}- \operatorname{id}_{E|_{U_{\alpha\beta}\times\operatorname{Spec}\mathbb{C}[ \epsilon]}},\] \[v_{\alpha}:=(\varphi_{\alpha}\otimes\operatorname{id})\circ\nabla _{\epsilon}|_{U_{\alpha}\times\operatorname{Spec}\mathbb{C}[\epsilon]}\circ \varphi_{\alpha}^{-1}-\nabla|_{U_{\alpha}\times\operatorname{Spec}\mathbb{C}[ \epsilon]}.\]
Then \(\{u_{\alpha\beta}\}\in C^{1}((\epsilon)\otimes\mathcal{F}^{0})\), \(\{v_{\alpha}\}\in C^{0}((\epsilon)\otimes\mathcal{F}^{1})\) and we have the cocycle conditions
\[u_{\beta\gamma}-u_{\alpha\gamma}+u_{\alpha\beta}=0\quad\text{and}\quad\nabla \circ u_{\alpha\beta}-u_{\alpha\beta}\circ\nabla=v_{\beta}-v_{\alpha}.\]
So \([(\{u_{\alpha\beta}\},\{v_{\alpha}\})]\) determines an element of \(\mathbf{H}^{1}(\mathcal{F}^{\bullet})\). This correspondence gives an isomorphism between the tangent space \(T_{(E,\nabla,\{l^{(i)}\})}M_{X}\) and \(\mathbf{H}^{1}(\mathcal{F}^{\bullet})\).
We define a pairing
\[\mathbf{H}^{1}(\mathcal{F}^{\bullet})\otimes\mathbf{H}^{1}( \mathcal{F}^{\bullet}) \longrightarrow\mathbf{H}^{2}(\mathcal{O}_{C}\xrightarrow{d}\Omega^{1}_{C}) \cong\mathbb{C}\] \[[(\{u_{\alpha\beta}\},\{v_{\alpha}\})]\otimes[(\{u^{\prime}_{ \alpha\beta}\},\{v^{\prime}_{\alpha}\})]\longmapsto[(\{\operatorname{tr}(u_{ \alpha\beta}\circ u^{\prime}_{\beta\gamma})\},-\{\operatorname{tr}(u_{\alpha \beta}\circ v^{\prime}_{\beta})-\operatorname{tr}(v_{\alpha}\circ u^{\prime}_{ \alpha\beta})\})], \tag{3.2}\]
considered in Čech cohomology with respect to an open covering \(\{U_{\alpha}\}\) of \(C\), with \(\{u_{\alpha\beta}\}\in C^{1}(\mathcal{F}^{0})\), \(\{v_{\alpha}\}\in C^{0}(\mathcal{F}^{1})\), and so on. This pairing gives a nondegenerate 2-form on the moduli space \(M_{X}\); this fact follows from Serre duality and the five lemma applied to the following diagram:
(3.3) _[commutative diagram: the hypercohomology long exact sequence of \(\mathcal{F}^{\bullet}\) compared, via Serre duality, with its dual sequence]_
We denote by \(\omega\) the nondegenerate 2-form on \(M_{X}\). This 2-form \(\omega\) is a symplectic structure. That is, we have \(d\omega=0\) (see [21, Proposition 7.3] and [24, Proposition 4.2]).
We get as a consequence:
**Proposition 10**.: _The dimension of \(M^{0}_{X}\) is equal to \(2N\), where \(N=4g-3+n\)._
Proof.: By irreducibility of \((E,\nabla)\) and Schur's lemma we have
\[\mathbf{H}^{0}(\mathcal{F}^{\bullet})\cong\mathbb{C}.\]
On a Zariski open subset of \(M_{X}\), the underlying quasi-parabolic vector bundle \((E,\{l^{(i)}\}_{i})\) is irreducible, so we also have
\[H^{0}(\mathcal{F}^{0})\cong\mathbb{C}.\]
Clearly, we have \(\deg(\mathcal{F}^{0})=-\operatorname{length}(D)\). From Riemann-Roch we find
\[\dim_{\mathbb{C}}H^{1}(\mathcal{F}^{0}) =\dim_{\mathbb{C}}H^{0}(\mathcal{F}^{0})+4(g-1)-\deg(\mathcal{F}^ {0})\] \[=4g-3+n=N.\]
By Serre duality and Euler characteristic count applied to the hypercohomology long exact sequence (3.3), we get the statement.
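In more detail, the two-term complex \(\mathcal{F}^{\bullet}\) yields the exact sequence

\[0\to\mathbf{H}^{0}(\mathcal{F}^{\bullet})\to H^{0}(\mathcal{F}^{0})\to H^{0}(\mathcal{F}^{1})\to\mathbf{H}^{1}(\mathcal{F}^{\bullet})\to H^{1}(\mathcal{F}^{0})\to H^{1}(\mathcal{F}^{1})\to\mathbf{H}^{2}(\mathcal{F}^{\bullet})\to 0,\]

and, granting the Serre duality implicit in (3.3), we have \(\mathbf{H}^{2}(\mathcal{F}^{\bullet})\cong\mathbf{H}^{0}(\mathcal{F}^{\bullet})^{\vee}\cong\mathbb{C}\) and \(\chi(\mathcal{F}^{1})=-\chi(\mathcal{F}^{0})\). Since \(\chi(\mathcal{F}^{0})=1-N\) by the computation above, we obtain

\[\dim\mathbf{H}^{1}(\mathcal{F}^{\bullet})=\dim\mathbf{H}^{0}(\mathcal{F}^{\bullet})+\dim\mathbf{H}^{2}(\mathcal{F}^{\bullet})-\chi(\mathcal{F}^{0})+\chi(\mathcal{F}^{1})=2-2(1-N)=2N,\]

which gives \(\dim M^{0}_{X}=2N\).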
### Trivializations of \(E\)
Our purpose is to give canonical coordinates on \(M_{X}^{0}\) with respect to the symplectic form (3.2). To do so, we will calculate the Čech cohomology by taking trivializations of \(E\). To simplify the calculation, we take trivializations of \(E\) by using
\[\phi_{\nabla}\colon E_{0}\stackrel{{\subset}}{{\longrightarrow}}E,\]
whose cokernel defines the apparent singularities. In this section, we will discuss the construction of trivializations of \(E\) by using \(\phi_{\nabla}\).
We take \((E,\nabla,\{l^{(i)}\})\in M_{X}^{0}\). Let \(\{(q_{j},\zeta_{j}\operatorname{d}\!z_{j})\}_{j=1,2,\ldots,N}\) be the point on \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D))\) corresponding to \((E,\nabla,\{l^{(i)}\})\). We assume that the point \(\{(q_{j},\zeta_{j}\operatorname{d}\!z_{j})\}_{j=1,2,\ldots,N}\) is generic in the sense of Lemma 7. Let \(U_{q_{j}}^{\operatorname{an}}\) be an analytic open subset of \(C\) such that \(q_{j}\in U_{q_{j}}^{\operatorname{an}}\) and \(U_{t_{i}}^{\operatorname{an}}\) be an analytic open subset of \(C\) such that \(t_{i}\in U_{t_{i}}^{\operatorname{an}}\). We assume that \(U_{q_{j}}^{\operatorname{an}}\) and \(U_{t_{i}}^{\operatorname{an}}\) are small enough. We take an analytic coordinate \(z_{j}\) on \(U_{q_{j}}^{\operatorname{an}}\) that is independent of the moduli space \(M_{X}^{0}\) (that is, independent of the point \((E,\nabla,\{l^{(i)}\})\)). We also denote by \(q_{j}\) the complex number such that the point \(q_{j}\) on \(C\) is defined by \(z_{j}-q_{j}=0\).
**Definition 11**.: _Let \(\{U_{\alpha}\}_{\alpha}\) be an analytic open covering of \(C\): \(C=\bigcup_{\alpha}U_{\alpha}\) such that_
* \(\sharp\{i\mid t_{i}\cap U_{\alpha}\neq\emptyset\}\leq 1\) _for any_ \(\alpha\)_, and_ \(\sharp\{\alpha\mid t_{i}\cap U_{\alpha}\neq\emptyset\}\leq 1\) _for any_ \(i\)_,_
* \(\sharp\{j\mid q_{j}\cap U_{\alpha}\neq\emptyset\}\leq 1\) _for any_ \(\alpha\)_, and_ \(\sharp\{\alpha\mid q_{j}\cap U_{\alpha}\neq\emptyset\}\leq 1\) _for any_ \(j\)_,_
* \(\Omega_{C}^{1}(D)\) _is free on_ \(U_{\alpha}\) _for any_ \(\alpha\)_, that is,_ \(\Omega_{C}^{1}(D)|_{U_{\alpha}}\cong\mathcal{O}_{U_{\alpha}}\)_,_
* \(U_{\alpha_{t_{i}}}=U_{t_{i}}^{\operatorname{an}}\) _and_ \(U_{\alpha_{q_{j}}}=U_{q_{j}}^{\operatorname{an}}\)_._
_Here we denote by \(\alpha_{t_{i}}\) the index \(\alpha\) such that \(t_{i}\in U_{\alpha}\), and by \(\alpha_{q_{j}}\) the index \(\alpha\) such that \(q_{j}\in U_{\alpha}\)._
We fix trivializations \(\omega_{\alpha}\colon\mathcal{O}_{U_{\alpha}}\stackrel{{\sim}}{{ \longrightarrow}}\Omega_{C}^{1}(D)|_{U_{\alpha}}\) of \(\Omega_{C}^{1}(D)\). We assume that \(\omega_{\alpha}\) is independent of the moduli space \(M_{X}^{0}\). By using \(\omega_{\alpha}\), we have \(\omega_{\alpha}^{-1}\colon\mathcal{O}_{U_{\alpha}}\stackrel{{ \sim}}{{\longrightarrow}}(\Omega_{C}^{1}(D))^{-1}|_{U_{\alpha}}\). By the trivializations, we have trivializations \(\varphi_{\alpha}^{\operatorname{norm}}\colon\mathcal{O}_{U_{\alpha}}^{ \oplus 2}\stackrel{{\sim}}{{\longrightarrow}}E_{0}|_{U_{\alpha}}\) of \(E_{0}\). Assume that the connection matrices \(A_{\alpha}^{\operatorname{norm}}\) of \(\nabla_{0}\) associated to \(\varphi_{\alpha}^{\operatorname{norm}}\) are
\[A_{\alpha}^{\operatorname{norm}}=\begin{pmatrix}0&\beta_{\alpha}\\ \gamma_{\alpha}&\delta_{\alpha}\end{pmatrix}, \tag{3.4}\]
where \(\beta_{\alpha},\delta_{\alpha}\in\Omega_{C}^{1}(D+B)|_{U_{\alpha}}\) are determined by \(\{(q_{j},\operatorname{res}_{q_{j}}(\beta))\}_{j=1,2,\ldots,N}\) (see Lemma 7). The 1-form \(\gamma_{\alpha}\in\Omega_{C}^{1}(D)|_{U_{\alpha}}\) is the image of 1 under the composition
\[\mathcal{O}_{U_{\alpha}}\stackrel{{\sim}}{{\longrightarrow}}( \Omega_{C}^{1}(D))^{-1}\otimes\Omega_{C}^{1}(D)|_{U_{\alpha}}\stackrel{{ \omega_{\alpha}\otimes 1}}{{\longrightarrow}}\mathcal{O}_{U_{\alpha}}\otimes \Omega_{C}^{1}(D).\]
In particular, \(\gamma_{\alpha}\) is independent of the moduli space \(M^{0}_{X}\) for any \(\alpha\). The polar part of \(A^{\mathrm{norm}}_{\alpha_{t_{i}}}\) at \(t_{i}\) is independent of the moduli space \(M^{0}_{X}\) for any \(i\). We set
\[\zeta_{j}:=\frac{\mathrm{res}_{q_{j}}(\beta)}{\gamma_{\alpha_{q_{j}}}|_{q_{j}}} \in\mathbb{C}\quad\text{ for }j=1,2,\ldots,N. \tag{3.5}\]
Here \(\beta\in H^{0}(C,(\Omega^{1}_{C})^{\otimes 2}(2D+B))\) is the \((1,2)\)-entry of (2.9). Notice that \(\beta|_{U_{\alpha}}=\beta_{\alpha}\gamma_{\alpha}\), where \(\beta_{\alpha}\) and \(\gamma_{\alpha}\) are in (3.4). So we have
\[\mathrm{res}_{q_{j}}(A^{\mathrm{norm}}_{\alpha_{q_{j}}})=\begin{pmatrix}0& \zeta_{j}\\ 0&1\end{pmatrix}\quad\text{ for }j=1,2,\ldots,N.\]
**Definition 12**.: _We define other trivializations \(\varphi^{\mathrm{App},0}_{\alpha}\colon\mathcal{O}^{\oplus 2}_{U_{\alpha}} \stackrel{{\sim}}{{\longrightarrow}}E_{0}|_{U_{\alpha}}\) of \(E_{0}\) for each \(\alpha\) as follows:_
1. _When_ \(\alpha=\alpha_{q_{j}}\)_, we take a trivialization_ \(\varphi^{\mathrm{App},0}_{\alpha}\) _as_ \[\varphi^{\mathrm{App},0}_{\alpha}=\varphi^{\mathrm{norm}}_{\alpha}\circ \begin{pmatrix}1&\zeta_{j}\\ 0&1\end{pmatrix}.\] _Note that this triangular matrix appeared in (_2.12_)._
2. _Otherwise, we take a trivialization_ \(\varphi^{\mathrm{App},0}_{\alpha}\) _as_ \(\varphi^{\mathrm{App},0}_{\alpha}=\varphi^{\mathrm{norm}}_{\alpha}\)_._
Let \(A^{\mathrm{App},0}_{\alpha}\) be the connection matrix of \(\nabla_{0}\) associated to \(\varphi^{\mathrm{App},0}_{\alpha}\), that is,
\[(\varphi^{\mathrm{App},0}_{\alpha})^{-1}\circ(\phi^{*}_{\nabla}\nabla)\circ \varphi^{\mathrm{App},0}_{\alpha}=\mathrm{d}+A^{\mathrm{App},0}_{\alpha}.\]
We have that
\[A^{\mathrm{App},0}_{\alpha}=\begin{pmatrix}-\zeta_{j}\gamma_{\alpha}&\beta_{ \alpha}-\zeta_{j}\delta_{\alpha}-\zeta_{j}^{2}\gamma_{\alpha}\\ \gamma_{\alpha}&\delta_{\alpha}+\zeta_{j}\gamma_{\alpha}\end{pmatrix}\quad \text{when }\alpha=\alpha_{q_{j}}\]
In particular, we have
\[\mathrm{res}_{q_{j}}(A^{\mathrm{App},0}_{\alpha_{q_{j}}})=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\quad\text{ for }j=1,2,\ldots,N.\]
Now we define trivializations of \(E\) by using \(\phi_{\nabla}\colon E_{0}\to E\) in (2.4) and the trivializations of \(E_{0}\) in Definition 12.
**Definition 13**.: _We define trivializations \(\varphi^{\mathrm{App}}_{\alpha}\colon\mathcal{O}^{\oplus 2}_{U_{\alpha}} \stackrel{{\sim}}{{\longrightarrow}}E|_{U_{\alpha}}\) of \(E\) for the open covering \(\{U_{\alpha}\}_{\alpha}\) in Definition 11 as follows._
1. _When_ \(\alpha=\alpha_{q_{j}}\)_, we take a trivialization_ \(\varphi^{\mathrm{App}}_{\alpha}\) _so that_ \[(\varphi^{\mathrm{App}}_{\alpha})^{-1}\circ\phi_{\nabla}|_{U_{\alpha}}\circ \varphi^{\mathrm{App},0}_{\alpha}=\begin{pmatrix}1&0\\ 0&z_{j}-q_{j}\end{pmatrix}.\]
2. _When_ \(\alpha=\alpha_{t_{i}}\)_, we take_ \(g^{t_{i}}_{\alpha}\in\mathrm{Aut}(\mathcal{O}^{\oplus 2}_{U_{\alpha}})\) _so that the polar part of_ \((g^{t_{i}}_{\alpha})^{-1}A^{\mathrm{norm}}_{\alpha}g^{t_{i}}_{\alpha}\) _is diagonal at_ \(m_{i}[t_{i}]\)_. We take a trivialization_ \(\varphi^{\mathrm{App}}_{\alpha}\) _as_ \[\varphi^{\mathrm{App}}_{\alpha}=\phi_{\nabla}|_{U_{\alpha}}\circ\varphi^{ \mathrm{norm}}_{\alpha}\circ g^{t_{i}}_{\alpha}.\] _Note that_ \(\phi_{\nabla}|_{U_{\alpha}}\) _is invertible here. Since the polar part of_ \(A^{\mathrm{norm}}_{\alpha_{t_{i}}}\) _at_ \(t_{i}\) _is independent of the moduli space_ \(M^{0}_{X}\)_, we may assume that_ \((g^{t_{i}}_{\alpha})_{<m_{i}}\) _is independent of the moduli space_ \(M^{0}_{X}\)_. Here we define_ \((g^{t_{i}}_{\alpha})_{<m_{i}}\) _so that_ \(g^{t_{i}}_{\alpha}=(g^{t_{i}}_{\alpha})_{<m_{i}}+O(z^{m_{i}}_{i})\)_._
3. _Otherwise, we take a trivialization_ \(\varphi_{\alpha}^{\mathrm{App}}\) _so that_ \[(\varphi_{\alpha}^{\mathrm{App}})^{-1}\circ\phi_{\nabla}|_{U_{\alpha}}\circ \varphi_{\alpha}^{\mathrm{norm}}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}.\] _Since_ \(\phi_{\nabla}|_{U_{\alpha}}\) _is invertible in this case,_ \(\varphi_{\alpha}^{\mathrm{App}}=\phi_{\nabla}|_{U_{\alpha}}\circ\varphi_{ \alpha}^{\mathrm{norm}}\)_._
Let \(A_{\alpha}\) be the connection matrix of \(\nabla\) associated to \(\varphi_{\alpha}^{\mathrm{App}}\), that is
\[(\varphi_{\alpha}^{\mathrm{App}})^{-1}\circ\nabla\circ\varphi_{\alpha}^{ \mathrm{App}}=\mathrm{d}+A_{\alpha}.\]
We have that
\[A_{\alpha}=\begin{cases}\begin{pmatrix}-\zeta_{j}\gamma_{\alpha}&\frac{\beta_{ \alpha}-\zeta_{j}\delta_{\alpha}-\zeta_{j}^{2}\gamma_{\alpha}}{z_{j}-q_{j}}\\ (z_{j}-q_{j})\gamma_{\alpha}&\delta_{\alpha}+\zeta_{j}\gamma_{\alpha}-\frac{ \mathrm{d}z_{j}}{z_{j}-q_{j}}\end{pmatrix}&\text{when }\alpha=\alpha_{q_{j}}\\ \omega_{i}(X)+[\text{holo. part}]&\text{when }\alpha=\alpha_{t_{i}}\\ \begin{pmatrix}0&\beta_{\alpha}\\ \gamma_{\alpha}&\delta_{\alpha}\end{pmatrix}&\text{otherwise.}\end{cases} \tag{3.6}\]
Here \(\omega_{i}(X)\) is the \(1\)-form defined in (2.10). The connection matrix \(A_{\alpha_{q_{j}}}\) on \(U_{\alpha_{q_{j}}}\) appeared in (2.13); it has no pole at \(q_{j}\) for any \(j=1,2,\ldots,N\), since \(\beta_{\alpha},\delta_{\alpha}\) are determined by Lemma 7. We have considered diagonalization of the polar part of the connection \((E,\nabla)\) at each \(t_{i}\) because we use the connection matrices (3.6) to calculate infinitesimal deformations of \((E,\nabla)\). So we will calculate variations of the transition functions with respect to the trivializations in Definition 13 and variations of the connection matrices (3.6); these are elements of \(\mathcal{F}^{0}\) and \(\mathcal{F}^{1}\) of (3.1), respectively. To be elements of \(\mathcal{F}^{0}\) and \(\mathcal{F}^{1}\), the variations need to be compatible with the quasi-parabolic structure, and this compatibility follows directly from the diagonalization of the polar parts.
### Descriptions of the cocycles of an infinitesimal deformation
Let \(\mathbf{\Omega}(D)\to C\) be the total space of \(\Omega^{1}_{C}(D)\). By the argument as in Lemma 6, we may define a map
\[\begin{split} f_{\mathrm{App},0}&:M^{0}_{X}\longrightarrow \mathrm{Sym}^{N}(\mathbf{\Omega}(D))\\ (E,\nabla)&\longmapsto\left\{(q_{j},\mathrm{res}_{q_{j}}( \beta))\right\}_{j=1,2,\ldots,N}.\end{split} \tag{3.7}\]
Here \(\beta\in H^{0}(C,(\Omega^{1}_{C})^{\otimes 2}(2D+B))\) is the \((1,2)\)-entry of (2.9) and \(\mathrm{res}_{q_{j}}(\beta)\in\Omega^{1}_{C}(D)|_{q_{j}}\). We take an analytic open subset \(V\) of \(M^{0}_{X}\), and we assume that on \(V\) we may define a composition
\[V\longrightarrow f_{\mathrm{App},0}(V) \longrightarrow\mathrm{Sym}^{N}(\mathbb{C}^{2}_{(q,\zeta)})\] \[(E,\nabla)\longmapsto\left\{(q_{j},\mathrm{res}_{q_{j}}(\beta)) \right\}_{j=1,2,\ldots,N} \longmapsto\left\{(q_{j},\zeta_{j})\right\}_{j=1,2,\ldots,N},\]
where \(\zeta_{j}\) is defined in (3.5), and the image of \(V\) under the composition is isomorphic to some analytic open subset of \(\mathbb{C}^{2N}_{(q,\zeta)}\). Let \(U_{(q,\zeta)}\) be such an analytic open subset of \(\mathbb{C}^{2N}_{(q,\zeta)}\). So we have a map
\[\begin{split} M^{0}_{X}\supset V& \longrightarrow U_{(q,\zeta)}\subset\mathbb{C}^{2N}_{(q,\zeta)}\\ (E,\nabla)&\longmapsto(q_{1},\ldots,q_{N},\zeta_{1}, \ldots,\zeta_{N}),\end{split} \tag{3.8}\]
which are the coordinates that we will use in this subsection. We consider the family of \((E,\nabla,\{l^{(i)}\})\) parametrized by \(U_{(q,\zeta)}\) that induces the inverse of the map \(V\to U_{(q,\zeta)}\); this family is constructed by Lemma 7. By using the trivializations \(\{\varphi_{\alpha}^{\mathrm{App}}\}_{\alpha}\) of \(E\) in Definition 13, we obtain transition functions and connection matrices for the family of \((E,\nabla,\{l^{(i)}\})\) parametrized by \(U_{(q,\zeta)}\). Indeed, the transition function is
\[B_{\alpha\beta}:=(\varphi_{\alpha}^{\mathrm{App}}|_{U_{\alpha\beta}})^{-1} \circ\varphi_{\beta}^{\mathrm{App}}|_{U_{\alpha\beta}}\colon\mathcal{O}_{U_{ \alpha\beta}}^{\oplus 2}\longrightarrow\mathcal{O}_{U_{\alpha\beta}}^{\oplus 2}, \tag{3.9}\]
and the connection matrix is as in (3.6).
Let \((q_{j},\zeta_{j})_{j}\) be a point on \(U_{(q,\zeta)}\). The purpose of this subsection is to describe the tangent map
\[\begin{split} T_{(q_{j},\zeta_{j})_{j}}\mathbb{C}_{(q,\zeta)}^{2N }&\longrightarrow T_{(E,\nabla,\{l^{(i)}\})}M_{X}^{0}\cong \mathbf{H}^{1}(\mathcal{F}^{\bullet})\\ v&\longmapsto[(\{u_{\alpha\beta}(v)\},\{v_{\alpha}( v)\})]\end{split} \tag{3.10}\]
induced by the inverse map of (3.8). For this purpose, we will calculate the variations of the transition functions and the connection matrices parametrized by \(U_{(q,\zeta)}\) with respect to the tangent vector \(v\) in \(U_{(q,\zeta)}\subset\mathbb{C}_{(q,\zeta)}^{2N}\). By using these variations, we will calculate the cocycles \((\{u_{\alpha\beta}(v)\},\{v_{\alpha}(v)\})\) of the infinitesimal deformation of \((E,\nabla,\{l^{(i)}\})\) with respect to \(v\).
First, we calculate \(u_{\alpha\beta}(v)\in\mathcal{F}^{0}(U_{\alpha\beta})\). We consider the variation of \(B_{\alpha\beta}\) in (3.9) by \(v\):
\[B_{\alpha\beta}(\mathrm{id}+\epsilon B_{\alpha\beta}^{-1}v(B_{\alpha\beta})) \colon\mathcal{O}_{U_{\alpha\beta}}^{\oplus 2}\longrightarrow\mathcal{O}_{U_{ \alpha\beta}}^{\oplus 2}\otimes\mathbb{C}[\epsilon].\]
Then \(u_{\alpha\beta}(v)\) has the following description:
\[u_{\alpha\beta}(v)=\varphi_{\beta}^{\mathrm{App}}|_{U_{\alpha\beta}}\circ \left(B_{\alpha\beta}^{-1}v(B_{\alpha\beta})\right)\circ(\varphi_{\beta}^{ \mathrm{App}}|_{U_{\alpha\beta}})^{-1}. \tag{3.11}\]
**Lemma 14**.: _Let \(I_{\mathrm{cov}}\) be the set of the indices of the open covering \(\{U_{\alpha}\}\) in Definition 11. We set \(I_{\mathrm{cov}}^{t}=\{\alpha_{t_{1}},\ldots,\alpha_{t_{\nu}}\}\) and \(I_{\mathrm{cov}}^{q}=\{\alpha_{q_{1}},\ldots,\alpha_{q_{N}}\}\), which are subsets of \(I_{\mathrm{cov}}\). For \(v\in T_{(E,\nabla,\{l^{(i)}\})}M_{X}^{0}\), we have the equality_
\[u_{\alpha\beta}(v)=\begin{cases}0&\alpha,\beta\in I_{\mathrm{cov}}\setminus(I _{\mathrm{cov}}^{t}\cup I_{\mathrm{cov}}^{q})\\ \varphi_{\alpha_{q_{j}}}^{\mathrm{App}}|_{U_{\alpha\alpha_{q_{j}}}}\circ \begin{pmatrix}0&\frac{v(\zeta_{j})}{z_{j}-q_{j}}\\ 0&\frac{v(q_{j})}{z_{j}-q_{j}}\end{pmatrix}\circ(\varphi_{\alpha_{q_{j}}}^{ \mathrm{App}}|_{U_{\alpha\alpha_{q_{j}}}})^{-1}&\alpha\in I_{\mathrm{cov}} \setminus(I_{\mathrm{cov}}^{t}\cup I_{\mathrm{cov}}^{q}),\beta=\alpha_{q_{j}} \in I_{\mathrm{cov}}^{q}\\ \varphi_{\alpha_{t_{i}}}^{\mathrm{App}}|_{U_{\alpha\alpha_{t_{i}}}}\circ \left((g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(g_{\alpha_{t_{i}}}^{t_{i}})\right) \circ(\varphi_{\alpha_{t_{i}}}^{\mathrm{App}}|_{U_{\alpha\alpha_{t_{i}}}})^{-1 }&\alpha\in I_{\mathrm{cov}}\setminus(I_{\mathrm{cov}}^{t}\cup I_{\mathrm{cov}} ^{q}),\beta=\alpha_{t_{i}}\in I_{\mathrm{cov}}^{t},\end{cases} \tag{3.12}\]
_and we have that_
\[(g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(g_{\alpha_{t_{i}}}^{t_{i}})=O(z_{i}^{m_{i}}). \tag{3.13}\]
Proof.: Let \(\alpha\in I_{\mathrm{cov}}\setminus(I_{\mathrm{cov}}^{t}\cup I_{\mathrm{cov}} ^{q})\). If \(\beta\in I_{\mathrm{cov}}\setminus(I_{\mathrm{cov}}^{t}\cup I_{\mathrm{cov}} ^{q})\), then we have the following equalities:
\[B_{\alpha\beta} =(\varphi_{\alpha}^{\mathrm{App}}|_{U_{\alpha\beta}})^{-1}\circ \varphi_{\beta}^{\mathrm{App}}|_{U_{\alpha\beta}}\] \[=(\varphi_{\alpha}^{\mathrm{norm}}|_{U_{\alpha\beta}})^{-1}\circ( \phi_{\nabla}|_{U_{\alpha\beta}})^{-1}\circ\phi_{\nabla}|_{U_{\alpha\beta}}\circ \varphi_{\beta}^{\mathrm{norm}}|_{U_{\alpha\beta}}\] \[=(\varphi_{\alpha}^{\mathrm{norm}}|_{U_{\alpha\beta}})^{-1}\circ \varphi_{\beta}^{\mathrm{norm}}|_{U_{\alpha\beta}}=\begin{pmatrix}1&0\\ 0&((\omega_{\alpha}^{-1})^{-1}\circ\omega_{\beta}^{-1})\end{pmatrix}.\]
Here \(\omega_{\alpha}^{-1}\) is a trivialization \(\mathcal{O}_{U_{\alpha}}\xrightarrow{\cong}(\Omega_{C}^{1}(D))^{-1}|_{U_{ \alpha}}\) for any \(\alpha\). Since \(((\omega_{\alpha}^{-1})^{-1}\circ\omega_{\beta}^{-1})\) is independent of the moduli space \(M_{X}^{0}\), we have \(v(B_{\alpha\beta})=0\). So \(u_{\alpha\beta}(v)=0\).
If \(\beta=\alpha_{q_{j}}\), then we have the following equalities:
\[\begin{split} B_{\alpha\alpha_{q_{j}}}&=(\varphi_{ \alpha}^{\text{App}}|U_{\alpha\alpha_{q_{j}}})^{-1}\circ\varphi_{\alpha_{q_{j}}} ^{\text{App}}|U_{\alpha\alpha_{q_{j}}}\\ &=(\varphi_{\alpha}^{\text{App},0}|_{U_{\alpha\alpha_{q_{j}}}})^{ -1}\circ(\phi_{\nabla}|_{U_{\alpha\alpha_{q_{j}}}})^{-1}\circ\phi_{\nabla}|_{U_ {\alpha\alpha_{q_{j}}}}\circ\varphi_{\alpha_{q_{j}}}^{\text{App},0}|_{U_{ \alpha\alpha_{q_{j}}}}\circ\begin{pmatrix}1&0\\ 0&\frac{1}{z_{j}-q_{j}}\end{pmatrix}\\ &=(\varphi_{\alpha}^{\text{App},0}|_{U_{\alpha\alpha_{q_{j}}}})^{-1}\circ \varphi_{\alpha_{q_{j}}}^{\text{App},0}|_{U_{\alpha\alpha_{q_{j}}}}\circ \begin{pmatrix}1&0\\ 0&\frac{1}{z_{j}-q_{j}}\end{pmatrix}\\ &=(\varphi_{\alpha}^{\text{norm}}|_{U_{\alpha\alpha_{q_{j}}}})^{-1}\circ \varphi_{\alpha_{q_{j}}}^{\text{norm}}|_{U_{\alpha\alpha_{q_{j}}}}\circ \begin{pmatrix}1&\zeta_{j}\\ 0&1\end{pmatrix}\begin{pmatrix}1&0\\ 0&\frac{1}{z_{j}-q_{j}}\end{pmatrix}\\ &=\begin{pmatrix}1&0\\ 0&((\omega_{\alpha}^{-1})^{-1}\circ\omega_{\alpha_{q_{j}}}^{-1})\end{pmatrix} \begin{pmatrix}1&\frac{\zeta_{j}}{z_{j}-q_{j}}\\ 0&\frac{1}{z_{j}-q_{j}}\end{pmatrix}.\end{split} \tag{3.14}\]
So we have
\[B_{\alpha\alpha_{q_{j}}}^{-1}v(B_{\alpha\alpha_{q_{j}}})=\begin{pmatrix}1&- \zeta_{j}\\ 0&z_{j}-q_{j}\end{pmatrix}\begin{pmatrix}0&\frac{v(\zeta_{j})(z_{j}-q_{j})+ \zeta_{j}v(q_{j})}{(z_{j}-q_{j})^{2}}\\ 0&\frac{v(q_{j})}{(z_{j}-q_{j})^{2}}\end{pmatrix}=\begin{pmatrix}0&\frac{v( \zeta_{j})}{z_{j}-q_{j}}\\ 0&\frac{v(q_{j})}{z_{j}-q_{j}}\end{pmatrix}.\]
If \(\beta=\alpha_{t_{i}}\), then we have the following equalities:
\[B_{\alpha\alpha_{t_{i}}} =(\varphi_{\alpha}^{\text{App}}|_{U_{\alpha\alpha_{t_{i}}}})^{-1} \circ\varphi_{\alpha_{t_{i}}}^{\text{App}}|_{U_{\alpha\alpha_{t_{i}}}}\] \[=(\varphi_{\alpha}^{\text{norm}}|_{U_{\alpha\alpha_{t_{i}}}})^{-1} \circ(\phi_{\nabla}|_{U_{\alpha\alpha_{t_{i}}}})^{-1}\circ\phi_{\nabla}|_{U_{ \alpha\alpha_{t_{i}}}}\circ\varphi_{\alpha_{t_{i}}}^{\text{norm}}|_{U_{\alpha \alpha_{t_{i}}}}\circ g_{\alpha_{t_{i}}}^{t_{i}}\] \[=\begin{pmatrix}1&0\\ 0&((\omega_{\alpha}^{-1})^{-1}\circ\omega_{\alpha_{t_{i}}}^{-1})\end{pmatrix} \circ g_{\alpha_{t_{i}}}^{t_{i}}.\]
So we have \(B_{\alpha\alpha_{t_{i}}}^{-1}v(B_{\alpha\alpha_{t_{i}}})=(g_{\alpha_{t_{i}}}^{ t_{i}})^{-1}v(g_{\alpha_{t_{i}}}^{t_{i}})\). Since \((g_{\alpha_{t_{i}}}^{t_{i}})_{<m_{i}}\) is independent of the moduli space \(M_{X}^{0}\), we have that \(v(g_{\alpha_{t_{i}}}^{t_{i}})=O(z_{i}^{m_{i}})\). This proves the statement of the lemma.
Next we calculate \(v_{\alpha}(v)\in\mathcal{F}^{1}(U_{\alpha})\) for \(v\in T_{(E,\nabla,\{l^{(i)}\})}M_{X}^{0}\). This is given by calculating the variation of the connection matrix \(A_{\alpha}\) in (3.6) with respect to \(v\). So we have
\[v_{\alpha}(v)=\begin{cases}\varphi_{\alpha}^{\text{App}}\circ\begin{pmatrix}-v (\zeta_{j})\gamma_{\alpha}&v\left(\frac{\beta_{\alpha}-\zeta_{j}\delta_{\alpha }-\zeta_{j}^{2}\gamma_{\alpha}}{z_{j}-q_{j}}\right)\\ -v(q_{j})\gamma_{\alpha}&v(\operatorname{tr}(A_{\alpha_{q_{j}}}))+v(\zeta_{j}) \gamma_{\alpha}\end{pmatrix}\circ(\varphi_{\alpha}^{\text{App}})^{-1}&\text{ when }\alpha=\alpha_{q_{j}}\\ \varphi_{\alpha}^{\text{App}}\circ\begin{pmatrix}0&v(\beta_{\alpha})\\ 0&v(\operatorname{tr}(A_{\alpha}))\end{pmatrix}\circ(\varphi_{\alpha}^{\text{App }})^{-1}&\text{ when }\alpha\in I_{\text{cov}}\setminus(I_{\text{cov}}^{t}\cup I_{\text{cov}}^{q}) \end{cases}. \tag{3.15}\]
Here remark that \(\gamma_{\alpha}\) is independent of the moduli space \(M_{X}^{0}\) for any \(\alpha\). When \(\alpha=\alpha_{t_{i}}\), we have that \(v_{\alpha}(v)\) is holomorphic at \(t_{i}\).
### Canonical coordinates
Now we introduce canonical coordinates on \(M_{X}^{0}\) with respect to the symplectic form (3.2). We recall that we have set \(N:=4g+n-3\).
Let \(\pi\colon\mathbf{\Omega}(D)\to C\) and \(\pi_{0}\colon\mathbf{\Omega}\to C\) be the total spaces of \(\Omega^{1}_{C}(D)\) and \(\Omega^{1}_{C}\), respectively. The total space \(\mathbf{\Omega}\) has the Liouville symplectic form \(\omega_{\text{Liouv}}\). Since we have an isomorphism
\[\pi_{0}^{-1}(C\setminus\text{Supp}(D))\overset{\sim}{\longrightarrow}\pi^{-1}( C\setminus\text{Supp}(D)),\]
the Liouville symplectic form induces a symplectic form on \(\pi^{-1}(C\setminus\operatorname{Supp}(D))\). Let \(\pi_{N}\colon\operatorname{Sym}^{N}(\mathbf{\Omega}(D))\to\operatorname{Sym}^{N }(C)\) be the map induced by the map \(\pi\colon\mathbf{\Omega}(D)\to C\). We set
\[\operatorname{Sym}^{N}(\mathbf{\Omega}(D))_{0}:=\left\{\{q_{1},\ldots,q_{N}\} \in\pi_{N}^{-1}(\operatorname{Sym}^{N}(C\setminus\operatorname{Supp}(D)))\ \big{|}\ q_{j_{1}}\neq q_{j_{2}}\ (j_{1}\neq j_{2})\right\}.\]
Then \(\operatorname{Sym}^{N}(\mathbf{\Omega}(D))_{0}\) has the induced symplectic form from the Liouville symplectic form.
**Remark 15**.: _We have a map \(f_{\operatorname{App},0}\colon M_{X}^{0}\to\operatorname{Sym}^{N}(\mathbf{ \Omega}(D))_{0}\), which is described in (3.7). Notice that \(M_{X}^{0}\) and \(\operatorname{Sym}^{N}(\mathbf{\Omega}(D))_{0}\) both carry symplectic forms. But the explicit calculation below shows that the map \(f_{\operatorname{App},0}\) does not preserve these symplectic structures, so \(f_{\operatorname{App},0}\) does not directly give canonical coordinates. To obtain canonical coordinates, we have to modify the map \(f_{\operatorname{App},0}\) as follows._
We twist \(\mathbf{\Omega}(D)\) by a class in \(H^{1}(C,\Omega_{C}^{1})\) as follows. Let \(c_{d}\) be the image of the line bundle \(\det(E)\) under the morphism
\[H^{1}(C,\mathcal{O}_{C}^{*})\xrightarrow{\operatorname{d}\log}H^{1}(C,\Omega_ {C}^{1})\cong\operatorname{Ext}_{C}^{1}(T_{C},\mathcal{O}_{C}).\]
Let \(\mathcal{A}_{C}(c_{d})\) be the sheaf produced by the Atiyah sequence on \(C\) with respect to \(c_{d}\), that is, \(\mathcal{A}_{C}(c_{d})\) is given by the extension
\[0\longrightarrow\mathcal{O}_{C}\longrightarrow\mathcal{A}_{C}(c_{d}) \longrightarrow T_{C}\longrightarrow 0 \tag{3.16}\]
with respect to \(c_{d}\in H^{1}(C,\Omega_{C}^{1})\). Then, \(\mathcal{A}_{C}(c_{d})\) is naturally a Lie-algebroid, called the Atiyah algebroid of the \(\mathbb{G}_{m}\)-principal bundle \(\operatorname{Tot}(T_{C})\setminus 0\), where \(0\) stands for the \(0\)-section; for details, see [40, Section 3.1.2]. We denote by \(\operatorname{symb}_{1}\colon\mathcal{A}_{C}(c_{d})\to T_{C}\) the morphism in (3.16). We consider the subsheaf \(T_{C}(-D)\subset T_{C}\). We set \(\mathcal{A}_{C}(c_{d},D):=\operatorname{symb}_{1}^{-1}T_{C}(-D)\), which is an extension
\[0\longrightarrow\mathcal{O}_{C}\longrightarrow\mathcal{A}_{C}(c_{d},D) \longrightarrow T_{C}(-D)\longrightarrow 0.\]
Let \(\Omega_{C}^{1}(D,c_{d})\) be the twisted cotangent bundle over \(C\) with respect to \(\mathcal{A}_{C}(c_{d},D)\), that is,
\[\Omega_{C}^{1}(D,c_{d})=\left\{\phi\in\mathcal{A}_{C}(c_{d},D)^{\vee}\ \big{|}\ \langle\phi,1_{\mathcal{A}_{C}(c_{d},D)}\rangle=1\right\}.\]
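Note that, by the defining extension above, \(\Omega_{C}^{1}(D,c_{d})\) is an affine bundle modelled on \(\Omega_{C}^{1}(D)\): two sections \(\phi,\phi^{\prime}\) as in the display differ by a functional vanishing on the image of \(\mathcal{O}_{C}\), that is,

\[\phi-\phi^{\prime}\in\bigl(\mathcal{A}_{C}(c_{d},D)/\mathcal{O}_{C}\bigr)^{\vee}=(T_{C}(-D))^{\vee}\cong\Omega_{C}^{1}(D).\]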
We denote by
\[\pi_{c_{d}}\colon\mathbf{\Omega}(D,c_{d})\longrightarrow C\]
the total space of the twisted cotangent bundle \(\Omega_{C}^{1}(D,c_{d})\), and a generic element of this affine bundle by \((q,\tilde{p})\) in analogy with classical notation \((q,p)\) for points of \(\mathbf{\Omega}(D)\). For each \((E,\nabla,\{l^{(i)}\})\in M_{X}^{0}\), we have \((\det(E),\operatorname{tr}(\nabla))\). The connection \(\operatorname{tr}(\nabla)\) on the line bundle \(\det(E)\) is considered as a _global_ section of \(\mathbf{\Omega}(D,c_{d})\to C\), which is the total space of the twisted cotangent bundle with respect to \(\det(E)\). The global section \(\operatorname{tr}(\nabla)\) gives a diffeomorphism
\[\mathbf{\Omega}(D)\longrightarrow\mathbf{\Omega}(D,c_{d});\quad(q,p) \longmapsto(q,p+\operatorname{tr}(\nabla)).\]
Notice that \(\operatorname{tr}(\nabla)\)_does_ depend on the point \((E,\nabla,\{l^{(i)}\})\) of \(M_{X}^{0}\). So this morphism depends on the point of \(M_{X}^{0}\). Moreover, it is not a morphism of vector bundles.
**Definition 16**.: _We define the accessory parameter associated to \((E,\nabla)\) at \(q_{j}\) by_
\[\tilde{p}_{j}=\operatorname{res}_{q_{j}}(\beta)+\operatorname{tr}(\nabla)|_{q _{j}},\]
_where \(\beta\in H^{0}(C,(\Omega_{C}^{1})^{\otimes 2}(2D+B))\) is the \((1,2)\)-entry of (2.9) and \(\operatorname{res}_{q_{j}}(\beta)\in\Omega_{C}^{1}(D)|_{q_{j}}\). The \(N\)-tuple \(\{(q_{j},\tilde{p}_{j})\}_{j=1,2,\ldots,N}\) will be called canonical coordinates of \((E,\nabla)\). We let \(f_{\operatorname{App}}\) be the map_
\[f_{\operatorname{App}}\colon M_{X}^{0} \longrightarrow\operatorname{Sym}^{N}(\mathbf{\Omega}(D,c_{d}))\] \[(E,\nabla,\{l^{(i)}\}) \longmapsto\left\{(q_{j},\tilde{p}_{j})\right\}_{j=1,2,\ldots,N}.\]
Notice that the map \(f_{\mathrm{App},0}\) in (3.7) is defined by using only \(\mathrm{res}_{q_{j}}(\beta)\). The reason why we consider the twisted cotangent bundle \(\boldsymbol{\Omega}(D,c_{d})\) is to make sense of the term \(\mathrm{tr}(\nabla)|_{q_{j}}\). The next proposition shows that the quantities introduced in the definition may indeed be called coordinates.
**Proposition 17**.: _The map \(f_{\mathrm{App}}\) introduced in Definition 16 is birational._
Proof.: It follows from Proposition 10 that the dimensions of the source and target of \(f_{\mathrm{App}}\) agree. We therefore need to show two things: first, that \(f_{\mathrm{App}}\) is rational, and second, that it admits an inverse over a Zariski open subset of \(\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\).
The first assertion is trivial, because the construction of the apparent singularities \(q_{j}\) and their accessory parameters \(\tilde{p}_{j}\) follows from algebraic arguments on certain Zariski open subsets.
The key statement is the existence of a generic inverse. This is now a variant of Lemma 7. Namely, fixing generic \(\left\{(q_{j},\tilde{p}_{j})\right\}_{j=1,2,\ldots,N}\), we must find a unique \((\delta,\beta)\). Since we have \(\delta=\mathrm{tr}(\nabla_{0})\), we get the expression
\[\tilde{p}_{j}=\zeta_{j}\,\mathrm{d}z_{j}+\delta-\frac{\mathrm{d}z_{j}}{z_{j}}.\]
An algebraic manipulation shows that the constraint (2.14), expressing that the singularity at \(q_{j}\) be apparent, is equivalent to the requirement that the following expression be holomorphic at \(q_{j}\) and vanish there:
\[\beta+\delta\left(\tilde{p}_{j}+\frac{\mathrm{d}z_{j}}{z_{j}}\right)-\left( \tilde{p}_{j}+\frac{\mathrm{d}z_{j}}{z_{j}}\right)^{2}. \tag{3.17}\]
We now study these conditions by taking the Laurent expansion of this expression with respect to \(z_{j}\). We first observe that it clearly admits a pole of order at most \(2\) at \(q_{j}\), because \(q_{j}\neq t_{i}\). Since \(\delta\) has a simple pole with residue \(1\), the term of degree \(-2\) is
\[(\mathrm{d}z_{j})^{\otimes 2}-(\mathrm{d}z_{j})^{\otimes 2}=0.\]
So the pole is automatically at most simple.
For the study of the residue, we need to introduce some notation: let us write
\[\delta_{0} =\frac{\mathrm{d}z_{j}}{z_{j}}+\delta_{0}^{(j)}\] \[\beta_{0} =\zeta_{j}\frac{(\mathrm{d}z_{j})^{\otimes 2}}{z_{j}}+\beta_{0}^{(j)}\]
for a holomorphic rank \(1\) connection \(\delta_{0}^{(j)}\) and a holomorphic quadratic differential \(\beta_{0}^{(j)}\) on \(U_{q_{j}}\). Then, the degree \(-1\) part of (3.17) is (up to a global factor \(\mathrm{d}z_{j}\))
\[\zeta_{j}\,\mathrm{d}z_{j}+\tilde{p}_{j}+\left(\delta-\frac{\mathrm{d}z_{j}}{ z_{j}}\right)-2\tilde{p}_{j}=0\]
by the definition of \(\tilde{p}_{j}\).
Finally, to deal with the vanishing constraint, we make use of the same basis expansions for \(\delta\) and \(\beta\) as in Lemma 7. Then, the conditions read as
\[\sum_{k=1}^{N-g}b_{k}\nu_{k}(q_{j})+\tilde{p}_{j}\sum_{l=1}^{g}d_{l}\omega_{l}(q _{j})=(\tilde{p}_{j})^{\otimes 2}-\delta_{0}^{(j)}(q_{j})\tilde{p}_{j}-\beta_{0}^ {(j)}(q_{j}).\]
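Written out, with one row for each index \(j\), the system takes the matrix form (this is merely the display above, arranged so that the coefficient matrix is visible):

\[\begin{pmatrix}\nu_{1}(q_{1})&\cdots&\nu_{N-g}(q_{1})&\tilde{p}_{1}\omega_{1}(q_{1})&\cdots&\tilde{p}_{1}\omega_{g}(q_{1})\\ \vdots&&\vdots&\vdots&&\vdots\\ \nu_{1}(q_{N})&\cdots&\nu_{N-g}(q_{N})&\tilde{p}_{N}\omega_{1}(q_{N})&\cdots&\tilde{p}_{N}\omega_{g}(q_{N})\end{pmatrix}\begin{pmatrix}b_{1}\\ \vdots\\ b_{N-g}\\ d_{1}\\ \vdots\\ d_{g}\end{pmatrix}=\begin{pmatrix}(\tilde{p}_{1})^{\otimes 2}-\delta_{0}^{(1)}(q_{1})\tilde{p}_{1}-\beta_{0}^{(1)}(q_{1})\\ \vdots\\ (\tilde{p}_{N})^{\otimes 2}-\delta_{0}^{(N)}(q_{N})\tilde{p}_{N}-\beta_{0}^{(N)}(q_{N})\end{pmatrix}.\]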
Now, the determinant of this linear system of \(N\) equations (for \(1\leq j\leq N\)) in \(N\) variables \(b_{1},\ldots,b_{N-g},d_{1},\ldots,d_{g}\) agrees with the determinant studied in Lemma 7, up to replacing each occurrence of \(\zeta_{j}\,\mathrm{d}z_{j}\) by \(\tilde{p}_{j}\). The end of the proof then follows the method of Lemma 7 word for word.
**Remark 18**.: _The expression (3.17) has variables \(\tilde{p}_{j}\) in the twisted cotangent sheaf rather than the ordinary cotangent sheaf. The quadratic polynomial in \(\tilde{p}_{j}\) can be viewed as the characteristic polynomial of the connection matrix of \(\nabla_{0}\). Thus, in a sense, the vanishing condition on (3.17) may be interpreted as the requirement that \(\tilde{p}_{j}\) lie on the quantum spectral curve of \(\nabla_{0}\); see e.g. [11]._
By taking a local trivialization of \(\det(E)\), we obtain a concrete description of the map \(f_{\mathrm{App}}\), which we now discuss; it will be used in the proof of Theorem 20 below. Let \((E,\nabla,\{l^{(i)}\})\in M^{0}_{X}\). As a local trivialization of \(\det(E)\), we take the isomorphism
\[\det(\varphi^{\mathrm{App}}_{\alpha_{q_{j}}})\colon\mathcal{O}_{U_{\alpha_{q_ {j}}}}\longrightarrow\det(E)|_{U_{\alpha_{q_{j}}}}, \tag{3.18}\]
which is the determinant of the trivialization in Definition 13. Notice that the composition
\[\mathcal{O}_{U_{\alpha_{q_{j}}}}\xrightarrow{\omega^{-1}_{\alpha_{q_{j}}}} (\Omega^{1}_{C}(D))^{-1}|_{U_{\alpha_{q_{j}}}}\xrightarrow{\det(\phi\nabla)|_ {U_{\alpha_{q_{j}}}}}\det(E)|_{U_{\alpha_{q_{j}}}}\xrightarrow{\det(\varphi^{ \mathrm{App}}_{\alpha_{q_{j}}})^{-1}}\mathcal{O}_{U_{\alpha_{q_{j}}}}\]
coincides with \((z_{j}-q_{j})\colon\mathcal{O}_{U_{\alpha_{q_{j}}}}\to\mathcal{O}_{U_{\alpha_{ q_{j}}}}\). Let \(\operatorname{tr}(A_{\alpha_{q_{j}}})\in\Omega^{1}_{C}(D)|_{U_{\alpha_{q_{j}}}}\) be the connection matrix of \((\det(E),\operatorname{tr}(\nabla))\) on \(U_{\alpha_{q_{j}}}\) with respect to the local trivialization \(\det(\varphi^{\mathrm{App}}_{\alpha_{q_{j}}})\). Then, by using (3.5), the map \(f_{\mathrm{App}}\) has the following description:
\[f_{\mathrm{App}}\colon(E,\nabla,\{l^{(i)}\})\longmapsto\left\{\left(q_{j}, \zeta_{j}\gamma_{\alpha_{q_{j}}}|_{q_{j}}+\operatorname{tr}(A_{\alpha_{q_{j}} })|_{q_{j}}\right)\right\}_{j=1,2,\ldots,N}.\]
Here \(\zeta_{j}\gamma_{\alpha_{q_{j}}}|_{q_{j}}+\operatorname{tr}(A_{\alpha_{q_{j}} })|_{q_{j}}\) is an element of \(\Omega^{1}_{C}(D)|_{q_{j}}\). We set
\[p_{j}:=\operatorname{res}_{q_{j}}\left(\frac{\zeta_{j}\gamma_{\alpha_{q_{j}}}} {z_{j}-q_{j}}\right)+\operatorname{res}_{q_{j}}\left(\frac{\operatorname{tr} (A_{\alpha_{q_{j}}})}{z_{j}-q_{j}}\right), \tag{3.19}\]
which is the image of \(\zeta_{j}\gamma_{\alpha_{q_{j}}}|_{q_{j}}+\operatorname{tr}(A_{\alpha_{q_{j}} })|_{q_{j}}\) under the isomorphism \(\Omega^{1}_{C}(D)|_{q_{j}}\cong\mathbb{C}\).
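Concretely, since \(q_{j}\not\in\operatorname{Supp}(D)\), the forms \(\gamma_{\alpha_{q_{j}}}\) and \(\operatorname{tr}(A_{\alpha_{q_{j}}})\) are holomorphic at \(q_{j}\); writing \(\gamma_{\alpha_{q_{j}}}=g_{j}(z_{j})\,\mathrm{d}z_{j}\) and \(\operatorname{tr}(A_{\alpha_{q_{j}}})=h_{j}(z_{j})\,\mathrm{d}z_{j}\) near \(q_{j}\) (the functions \(g_{j},h_{j}\) are local notation introduced only here), the residues in (3.19) are simply evaluations:

\[p_{j}=\zeta_{j}g_{j}(q_{j})+h_{j}(q_{j}).\]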
**Remark 19**.: _This \(p_{j}\) is just the evaluation at \(q_{j}\) of the \((2,2)\)-entry of the connection matrix \(A_{\alpha_{q_{j}}}\) in (3.6). Note that the \((2,1)\)-entry of this connection matrix \(A_{\alpha_{q_{j}}}\) vanishes at \(q_{j}\). So \(p_{j}\) is an "eigenvalue" of \(\nabla\) at \(q_{j}\) (while \(\zeta_{j}\) is an "eigenvector" of \(\nabla_{0}\) at \(q_{j}\)). This fact means that the coordinates \((q_{j},p_{j})_{j}\) are an analog of the coordinates on the moduli space of (parabolic) Higgs bundles given in [16] and [19]; the latter coordinates are obtained by using the BNR correspondence [3] (see Section 6)._
Let \(\pi_{c_{d},N}\colon\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\to \operatorname{Sym}^{N}(C)\) be the map induced by the map \(\pi_{c_{d}}\colon\boldsymbol{\Omega}(D,c_{d})\to C\). We set
\[\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))_{0}:=\left\{\{(q_{j}, \tilde{p}_{j})\}_{j=1}^{N}\in\pi_{c_{d},N}^{-1}(\operatorname{Sym}^{N}(C \setminus\operatorname{Supp}(D)))\ \Big{|}\ q_{j_{1}}\neq q_{j_{2}}\ (j_{1}\neq j_{2}) \right\}.\]
Then \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))_{0}\) has the induced symplectic form from the Liouville symplectic form. Notice that by construction the image of \(M^{0}_{X}\) under the map \(f_{\mathrm{App}}\) is contained in \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))_{0}\).
**Theorem 20**.: _Let \(\omega\) be the symplectic form on \(M^{0}_{X}\) defined by (3.2). The pull-back of the symplectic form on \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))_{0}\) under the map_
\[f_{\mathrm{App}}\colon M^{0}_{X}\longrightarrow\operatorname{Sym}^{N}( \boldsymbol{\Omega}(D,c_{d}))_{0}\]
_in Definition 16 coincides with \(\omega\)._
Proof.: Let \(V\) be an analytic open subset of \(M_{X}^{0}\) as in Section 3.4. Moreover, we assume that we may define a composition
\[V \longrightarrow f_{\operatorname{App}}(V) \longrightarrow\operatorname{Sym}^{N}(\mathbb{C}^{2}_{(q,p)})\] \[(E,\nabla) \longmapsto f_{\operatorname{App}}(E,\nabla) \longmapsto\left\{(q_{j},p_{j})\right\}_{j=1,2,\dots,N},\]
where \(p_{j}\) is defined in (3.19), and the image of \(V\) under the composition is isomorphic to some analytic open subset of \(\mathbb{C}^{2N}_{(q,p)}\). Let \(U_{(q,p)}\) be such an analytic open subset of \(\mathbb{C}^{2N}_{(q,p)}\). We denote by \(f_{2}\) the map
\[M_{X}^{0}\supset V \longrightarrow U_{(q,p)}\subset\mathbb{C}^{2N}_{(q,p)}\] \[(E,\nabla) \longmapsto(q_{1},\dots,q_{N},p_{1},\dots,p_{N}).\]
We consider the following maps
\[U_{(q,\zeta)}\xleftarrow{f_{1}}V\xrightarrow{f_{2}}U_{(q,p)}.\]
Here \(f_{1}\colon V\xrightarrow{\sim}U_{(q,\zeta)}\) is the isomorphism (3.8). The symplectic structure on \(U_{(q,p)}\) induced by the symplectic structure on \(\operatorname{Sym}^{N}(\boldsymbol{\Omega}(D,c_{d}))\) is \(\sum_{j=1}^{N}dp_{j}\wedge dq_{j}\). We will show that
\[(f_{1}^{-1})^{*}(\omega|_{V})=(f_{2}\circ f_{1}^{-1})^{*}\left(\sum_{j=1}^{N} dp_{j}\wedge dq_{j}\right).\]
Let \(v,v^{\prime}\) be elements of \(T_{(q_{j},\zeta_{j})_{j}}U_{(q,\zeta)}\) for \((q_{j},\zeta_{j})_{j}\in U_{(q,\zeta)}\). We will use the description of the tangent map (3.10) of \(f_{1}^{-1}\colon U_{(q,\zeta)}\to V\). That is, we calculate \((f_{1}^{-1})^{*}(\omega|_{V})\) by applying the descriptions (3.12) and (3.15) of \(u_{\alpha\beta}(v)\) and \(v_{\alpha}(v)\), respectively.
First we consider \(\{u_{\alpha\beta}(v)u_{\beta\gamma}(v^{\prime})\}_{\alpha\beta\gamma}\). Remark that \(U_{\alpha_{q_{j_{1}}}}\cap U_{\alpha_{q_{j_{2}}}}=\emptyset\) for any \(j_{1}\neq j_{2}\), \(U_{\alpha_{t_{i_{1}}}}\cap U_{\alpha_{t_{i_{2}}}}=\emptyset\) for any \(i_{1}\neq i_{2}\), and \(U_{\alpha_{q_{j}}}\cap U_{\alpha_{t_{i}}}=\emptyset\) for any \(j\) and \(i\). Then we have \(u_{\alpha\beta}u_{\beta\gamma}=0\) by Lemma 14. So we may take a representative of the class in the pairing (3.2) so that
\[[-\{\operatorname{tr}(u_{\alpha\beta}(v)\circ v_{\beta}(v^{\prime}))- \operatorname{tr}(v_{\alpha}(v)\circ u_{\alpha\beta}(v^{\prime}))\}_{\alpha \beta}]\in H^{1}(C,\Omega^{1}_{C})\cong\mathbb{C}.\]
Now we calculate \(\operatorname{tr}(u_{\alpha\beta}(v)\circ v_{\beta}(v^{\prime}))- \operatorname{tr}(v_{\alpha}(v)\circ u_{\alpha\beta}(v^{\prime}))\). If \(\alpha\in I_{\operatorname{cov}}\setminus(I_{\operatorname{cov}}^{t}\cup I_ {\operatorname{cov}}^{q})\) and \(\beta=\alpha_{q_{j}}\), then, by applying (3.12) and (3.15), we have the following equalities
\[\begin{split}&\operatorname{tr}(u_{\alpha\alpha_{q_{j}}}(v)v_{ \alpha_{q_{j}}}(v^{\prime}))-\operatorname{tr}(v_{\alpha}(v)u_{\alpha\alpha_{ q_{j}}}(v^{\prime}))\\ &=\operatorname{tr}\left(\begin{pmatrix}0&\frac{v(\zeta_{j})}{z_{j }-q_{j}}\\ 0&\frac{v(q_{j})}{z_{j}-q_{j}}\end{pmatrix}\begin{pmatrix}*&*\\ -v^{\prime}(q_{j})\gamma_{\alpha_{q_{j}}}&v^{\prime}(\operatorname{tr}(A_{\alpha _{q_{j}}}))+v^{\prime}(\zeta_{j})\gamma_{\alpha_{q_{j}}}\end{pmatrix}\right) \\ &\qquad-\operatorname{tr}\left(\begin{pmatrix}*&*\\ 0&v(\operatorname{tr}(A_{\alpha}))\end{pmatrix}\begin{pmatrix}0&\frac{v^{\prime} (\zeta_{j})}{z_{j}-q_{j}}\\ 0&\frac{v^{\prime}(q_{j})}{z_{j}-q_{j}}\end{pmatrix}\right)\\ &=-\frac{v(\zeta_{j})v^{\prime}(q_{j})\gamma_{\alpha_{q_{j}}}}{z_{j}-q_{j}}+ \frac{v(q_{j})\left(v^{\prime}(\operatorname{tr}(A_{\alpha_{q_{j}}}))+v^{ \prime}(\zeta_{j})\gamma_{\alpha_{q_{j}}}\right)}{z_{j}-q_{j}}-\frac{v^{ \prime}(q_{j})\,v(\operatorname{tr}(A_{\alpha}))}{z_{j}-q_{j}}\\ &=-\frac{\left(v(\zeta_{j})\gamma_{\alpha_{q_{j}}}+v(\operatorname{tr}(A_{ \alpha}))\right)v^{\prime}(q_{j})}{z_{j}-q_{j}}+\frac{v(q_{j})\left(v^{ \prime}(\operatorname{tr}(A_{\alpha_{q_{j}}}))+v^{\prime}(\zeta_{j})\gamma_{ \alpha_{q_{j}}}\right)}{z_{j}-q_{j}}.\end{split} \tag{3.20}\]
Now we consider the difference between \(v(\operatorname{tr}(A_{\alpha_{q_{j}}}))\) and \(v(\operatorname{tr}(A_{\alpha}))\). So we consider the infinitesimal deformation of \((\det(E),\operatorname{tr}(\nabla))\). We have that
\[\det(B_{\alpha\alpha_{q_{j}}})=\det\left(\begin{pmatrix}1&0\\ 0&((\omega_{\alpha}^{-1})^{-1}\circ\omega_{\alpha_{q_{j}}}^{-1})\end{pmatrix} \begin{pmatrix}1&\frac{\zeta_{j}}{z_{j}-q_{j}}\\ 0&\frac{1}{z_{j}-q_{j}}\end{pmatrix}\right)=\frac{((\omega_{\alpha}^{-1 })^{-1}\circ\omega_{\alpha_{q_{j}}}^{-1})}{z_{j}-q_{j}}.\]
Here \(B_{\alpha\alpha_{q_{j}}}\) is calculated in (3.14). Set
\[u_{\alpha\alpha_{q_{j}}}^{\det}(v):=\det(B_{\alpha\alpha_{q_{j}}})^{-1}v(\det (B_{\alpha\alpha_{q_{j}}}))=\frac{v(q_{j})}{z_{j}-q_{j}}. \tag{3.21}\]
Here remark that \(((\omega_{\alpha}^{-1})^{-1}\circ\omega_{\alpha_{q_{j}}}^{-1})\) is independent of the moduli space \(M^{0}_{X}\). We have a cocycle condition
\[v(\operatorname{tr}(A_{\alpha_{q_{j}}}))-v(\operatorname{tr}(A_{\alpha}))= \operatorname{tr}(\nabla)\circ u_{\alpha\alpha_{q_{j}}}^{\det}-u_{\alpha\alpha _{q_{j}}}^{\det}\circ\operatorname{tr}(\nabla).\]
So we have
\[v(\operatorname{tr}(A_{\alpha_{q_{j}}}))-v(\operatorname{tr}(A_{\alpha}))= \operatorname{d}\left(\frac{v(q_{j})}{z_{j}-q_{j}}\right)=-\frac{v(q_{j}) \operatorname{d}z_{j}}{(z_{j}-q_{j})^{2}}.\]
By applying this difference to (3.20), we have that
\[\operatorname{tr}(u_{\alpha\alpha_{q_{j}}}(v)v_{\alpha_{q_{j}}}(v ^{\prime}))-\operatorname{tr}(v_{\alpha}(v)u_{\alpha\alpha_{q_{j}}}(v^{\prime }))\] \[=-\frac{\left(v(\zeta_{j})\gamma_{\alpha_{q_{j}}}+v(\operatorname{ tr}(A_{\alpha_{q_{j}}}))\right)v^{\prime}(q_{j})}{z_{j}-q_{j}}+\frac{v(q_{j}) \left(v^{\prime}(\operatorname{tr}(A_{\alpha_{q_{j}}}))+v^{\prime}(\zeta_{j}) \gamma_{\alpha_{q_{j}}}\right)}{z_{j}-q_{j}}-\frac{v(q_{j})v^{\prime}(q_{j}) \operatorname{d}z_{j}}{(z_{j}-q_{j})^{3}}. \tag{3.22}\]
So we may extend the \(1\)-form
\[\operatorname{tr}(u_{\alpha\alpha_{q_{j}}}(v)v_{\alpha_{q_{j}}}(v^{\prime}))- \operatorname{tr}(v_{\alpha}(v)u_{\alpha\alpha_{q_{j}}}(v^{\prime}))\]
from \(U_{\alpha\alpha_{q_{j}}}\) to \(U_{\alpha_{q_{j}}}\) by (3.22). Then we have a meromorphic \(1\)-form defined on \(U_{\alpha_{q_{j}}}\), which has a pole at \(q_{j}\). We denote by \(\omega_{\alpha_{q_{j}}}(v,v^{\prime})\) the meromorphic \(1\)-form defined on \(U_{\alpha_{q_{j}}}\).
Next we consider the case where \(\alpha\in I_{\operatorname{cov}}\setminus(I_{\operatorname{cov}}^{t}\cup I_{ \operatorname{cov}}^{q})\) and \(\beta=\alpha_{t_{i}}\). We have the following equalities
\[\operatorname{tr}(u_{\alpha\alpha_{t_{i}}}(v)v_{\alpha_{t_{i}}}(v ^{\prime}))-\operatorname{tr}(v_{\alpha}(v)u_{\alpha\alpha_{t_{i}}}(v^{\prime }))\] \[=\operatorname{tr}\left((g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(g_{ \alpha_{t_{i}}}^{t_{i}})v^{\prime}(A_{\alpha_{t_{i}}})\right)-\operatorname{ tr}\left(\left((g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(A_{\alpha})g_{\alpha_{t_{i}}}^{t_{i}} \right)(g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v^{\prime}(g_{\alpha_{t_{i}}}^{t_{i}})\right)\]
We have the cocycle condition
\[v(A_{\alpha_{t_{i}}})-(g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(A_{\alpha})(g_{\alpha _{t_{i}}}^{t_{i}})\] \[=(\operatorname{d}+A_{\alpha_{t_{i}}})\circ\left((g_{\alpha_{t_{ i}}}^{t_{i}})^{-1}v(g_{\alpha_{t_{i}}}^{t_{i}})\right)-\left((g_{\alpha_{t_{i}}}^{t_{i}})^{-1} v(g_{\alpha_{t_{i}}}^{t_{i}})\right)\circ(\operatorname{d}+A_{\alpha_{t_{i}}})\] \[=\operatorname{d}\left((g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(g_{ \alpha_{t_{i}}}^{t_{i}})\right)+\left[\,A_{\alpha_{t_{i}}},\,\left((g_{\alpha_ {t_{i}}}^{t_{i}})^{-1}v(g_{\alpha_{t_{i}}}^{t_{i}})\right)\right].\]
By this condition, we have
\[\operatorname{tr}(u_{\alpha\alpha_{t_{i}}}(v)v_{\alpha_{t_{i}}}(v ^{\prime}))-\operatorname{tr}(v_{\alpha}(v)u_{\alpha\alpha_{t_{i}}}(v^{\prime }))\] \[=\operatorname{tr}\left((g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(g_{ \alpha_{t_{i}}}^{t_{i}})v^{\prime}(A_{\alpha_{t_{i}}})\right)-\operatorname{ tr}\left(v(A_{\alpha_{t_{i}}})(g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v^{\prime}(g_{ \alpha_{t_{i}}}^{t_{i}})\right)\] \[\qquad+\operatorname{tr}\left(\left(\operatorname{d}\left((g_{ \alpha_{t_{i}}}^{t_{i}})^{-1}v(g_{\alpha_{t_{i}}}^{t_{i}})\right)+\left[\,A_{ \alpha_{t_{i}}},\,\left((g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v(g_{\alpha_{t_{i}}}^{t _{i}})\right)\right]\right)(g_{\alpha_{t_{i}}}^{t_{i}})^{-1}v^{\prime}(g_{ \alpha_{t_{i}}}^{t_{i}})\right) \tag{3.23}\]
So we may extend the 1-form
\[\operatorname{tr}(u_{\alpha\alpha_{t_{i}}}(v)v_{\alpha_{t_{i}}}(v^{\prime}))- \operatorname{tr}(v_{\alpha}(v)u_{\alpha\alpha_{t_{i}}}(v^{\prime}))\]
from \(U_{\alpha\alpha_{t_{i}}}\) to \(U_{\alpha_{t_{i}}}\) by (3.23). Since we have the vanishing of the lower terms (3.13), the extended 1-form defined on \(U_{\alpha_{t_{i}}}\) is holomorphic. We denote by \(\omega_{\alpha_{t_{i}}}(v,v^{\prime})\) the holomorphic 1-form defined on \(U_{\alpha_{t_{i}}}\).
For \(\alpha\in I_{\operatorname{cov}}\setminus(I_{\operatorname{cov}}^{t}\cup I_ {\operatorname{cov}}^{q})\), we set \(\omega_{\alpha}(v,v^{\prime})=0\). By (3.22) and (3.23), we have a meromorphic coboundary \(\{\omega_{\alpha}(v,v^{\prime})\}_{\alpha}\) of
\[\{\operatorname{tr}(u_{\alpha\beta}(v)\circ v_{\beta}(v^{\prime}))- \operatorname{tr}(v_{\alpha}(v)\circ u_{\alpha\beta}(v^{\prime}))\}_{\alpha \beta}.\]
So we have
\[H^{1}(C,\Omega_{C}^{1})\xrightarrow{\cong}\mathbb{C}\] \[[-\{\operatorname{tr}(u_{\alpha\beta}(v)\circ v_{\beta}(v^{\prime }))-\operatorname{tr}(v_{\alpha}(v)\circ u_{\alpha\beta}(v^{\prime}))\}_{ \alpha\beta}]\longmapsto\sum_{x\in C}-\operatorname{res}_{x}\left(\omega_{ \alpha}(v,v^{\prime})\right).\]
By taking the residues of the right hand sides of (3.22) and (3.23), we have that
\[-\sum_{x\in C}\operatorname{res}_{x}\left(\omega_{\alpha}(v,v^{ \prime})\right) =\sum_{j=1}^{N}\operatorname{res}_{q_{j}}\left(\frac{\left(v(\zeta_{j} )\gamma_{\alpha_{q_{j}}}+v(\operatorname{tr}(A_{\alpha_{q_{j}}}))\right)v^{ \prime}(q_{j})}{z_{j}-q_{j}}\right)\] \[\qquad-\sum_{j=1}^{N}\operatorname{res}_{q_{j}}\left(\frac{v(q_{ j})\left(v^{\prime}(\operatorname{tr}(A_{\alpha_{q_{j}}}))+v^{\prime}(\zeta_{j}) \gamma_{\alpha_{q_{j}}}\right)}{z_{j}-q_{j}}\right)\] \[=\sum_{j=1}^{N}\left(v(p_{j})v^{\prime}(q_{j})-v(q_{j})v^{\prime }(p_{j})\right)=\left(\sum_{j=1}^{N}dp_{j}\wedge dq_{j}\right)(v,v^{\prime}).\]
Here remark that \(\gamma_{\alpha}\) is independent of the moduli space \(M_{X}^{0}\) for any \(\alpha\). This completes the proof.
By the map \(f_{\operatorname{App}}\), we obtain concrete canonical coordinates as follows. We take an analytic open subset \(V\) of \(M_{X}^{0}\) at a point \((E,\nabla)\), which is small enough. We define functions \(q_{j}\) and \(p_{j}\) (\(j=1,2,\ldots,N\)) on \(V\) as follows. (So, here, the notation \(q_{j}\) has a double meaning.) Let \(U_{\alpha_{q_{j}}}\) be an analytic open subset of \(C\) such that \(U_{\alpha_{q_{j}}}\) contains the apparent singularity \(q_{j}\) of the point \((E,\nabla)\) and is small enough. Let \(q_{j}^{\prime}\) be the apparent singularity of each \((E^{\prime},\nabla^{\prime})\in V\), where \(q_{j}^{\prime}\in U_{\alpha_{q_{j}}}\). First we take a local coordinate \(z_{j}\) on \(U_{\alpha_{q_{j}}}\). By evaluating the apparent singularity \(q_{j}^{\prime}\) in the local coordinate \(z_{j}\) for each \((E^{\prime},\nabla^{\prime})\in V\), we obtain a function \(q_{j}\colon V\to\mathbb{C}\). Second, let \((E_{V},\nabla_{V})\) be a vector bundle with connection on \(C\times V\), which is a family of vector bundles with connections on \(C\) parametrized by \(V\). We take a trivialization of \(\det(E_{V})\) on \(U_{\alpha_{q_{j}}}\times V\) which depends only on \(q_{j}\colon V\to\mathbb{C}\) (it is described in (3.18)). We take the connection matrix of \(\operatorname{tr}(\nabla_{V})\) with respect to this local trivialization. Let \(\mathbf{\Omega}(D,c_{d})_{V}\to C\times V\) be the relative twisted cotangent bundle over \(V\) with respect to the family of line bundles \(\det(E_{V})\) on \(C\times V\). We have an identification between \(\mathbf{\Omega}(D,c_{d})_{V}\) and \(\mathbf{\Omega}(D)\times V\) on \(U_{\alpha_{q_{j}}}\times V\) that depends only on \(q_{j}\colon V\to\mathbb{C}\). By evaluating \(\operatorname{res}_{q_{j}^{\prime}}(\beta^{\prime})+\operatorname{tr}(\nabla^{ \prime})|_{q_{j}^{\prime}}\) under the identification \(\mathbf{\Omega}(D,c_{d})|_{q_{j}^{\prime}}\cong\mathbf{\Omega}(D)|_{q_{j}^{ \prime}}\cong\mathbb{C}\) for each \((E^{\prime},\nabla^{\prime})\in V\), we obtain a function \(p_{j}\colon V\to\mathbb{C}\). This is just
(3.19). That is, this is the following composition:
\[V \longrightarrow U_{\alpha_{q_{j}}}\times V\longrightarrow\mathbf{ \Omega}(D,c_{d})_{V}|_{U_{\alpha_{q_{j}}}\times V}\longrightarrow\mathbf{\Omega}(D )|_{U_{\alpha_{q_{j}}}}\longrightarrow\mathbb{C}\] \[(E^{\prime},\nabla^{\prime}) \longmapsto(q_{j}^{\prime},(E^{\prime},\nabla^{\prime})) \longmapsto\left((\zeta_{j}\gamma_{\alpha_{q_{j}}})_{V}+\operatorname{tr}( \nabla_{V})\right)|_{(q_{j}^{\prime},(E^{\prime},\nabla^{\prime}))}\] \[\longmapsto\left(\zeta_{j}^{\prime}\gamma_{\alpha_{q_{j}}}+ \operatorname{tr}(A_{\alpha_{q_{j}}}^{\prime})\right)|_{q_{j}^{\prime}} \longmapsto\operatorname{res}_{q_{j}^{\prime}}\left(\frac{\zeta_{j}^{\prime} \gamma_{\alpha_{q_{j}}}+\operatorname{tr}(A_{\alpha_{q_{j}}}^{\prime})}{z_{j} -q_{j}^{\prime}}\right).\]
By Theorem 20, the symplectic structure on \(V\) has the following description: \(\sum_{j=1}^{N}\mathrm{d}p_{j}\wedge\mathrm{d}q_{j}\).
**Remark 21**.: _We set_
\[p_{j}^{0}:=\operatorname{res}_{q_{j}}\left(\frac{\zeta_{j}\gamma_{\alpha_{q_{j} }}}{z_{j}-q_{j}}\right)\in\mathbb{C}.\]
_If \(g=0\), then \(\operatorname{res}_{q_{j}}\left(\frac{\operatorname{tr}(A_{\alpha_{q_{j}}})}{ z_{j}-q_{j}}\right)\) depends only on \(q_{j}\). So we have \(\sum_{j=1}^{N}\mathrm{d}p_{j}\wedge\mathrm{d}q_{j}=\sum_{j=1}^{N}\mathrm{d}p_{j }^{0}\wedge\mathrm{d}q_{j}\). Here the symplectic form \(\sum_{j=1}^{N}\mathrm{d}p_{j}^{0}\wedge\mathrm{d}q_{j}\) is induced by the symplectic form on \(\operatorname{Sym}^{N}(\mathbf{\Omega}(D))_{0}\)._
**Remark 22**.: _In general, \(\sum_{j=1}^{N}\mathrm{d}p_{j}\wedge\mathrm{d}q_{j}\neq\sum_{j=1}^{N}\mathrm{d }p_{j}^{0}\wedge\mathrm{d}q_{j}\), that is,_
\[\sum_{j}\mathrm{d}\Bigg{(}\operatorname{res}_{q_{j}}\left(\frac{\operatorname{ tr}(A_{\alpha_{q_{j}}})}{z_{j}-q_{j}}\right)\Bigg{)}\wedge\mathrm{d}q_{j} \tag{3.24}\]
_does not vanish. This is related to the determinant map_
\[M_{X}^{0} \longrightarrow M_{X}^{\operatorname{rk}=1}(\boldsymbol{\theta}_{ \text{res}})\] \[(E,\nabla,\{l^{(i)}\}) \longmapsto(\det(E),\operatorname{tr}(\nabla)).\]
_The 2-form (3.24) comes from_
\[\left[\{u_{\alpha\beta}^{\det}(v)u_{\beta\gamma}^{\det}(v^{\prime})\},-\{u_{ \alpha\beta}^{\det}(v)v^{\prime}(\operatorname{tr}(A_{\beta}))-v(\operatorname {tr}(A_{\alpha}))u_{\alpha\beta}^{\det}(v^{\prime})\}\right]\in\mathbf{H}^{2}( \mathcal{O}_{C}\to\Omega_{C}^{1}).\]
_Here \(u_{\alpha\beta}^{\det}(v)\) is defined as in (3.21). This class gives rise to the \(2\)-form on \(M_{X}^{0}\) which is just the pull-back of the natural symplectic form on \(M_{X}^{\operatorname{rk}=1}(\boldsymbol{\theta}_{\text{res}})\) under the determinant map. The determinant map is not degenerate in general. So the class (3.24) does not vanish in general._
## 4. Symplectic structure on the moduli space with fixed trace connection
In this section, we consider the moduli spaces of rank \(2\) quasi-parabolic connections _with fixed trace connection_. When the effective divisor \(D\) is reduced, this moduli space is detailed in [1], [42] (when \(g=0\)), [12], [13] (when \(g=1\)), and [44] (when \(g\geq 1\)). The moduli space of rank \(2\) quasi-parabolic connections with fixed trace connection has a natural symplectic structure, described as in Section 3.2. The purpose of this section is to give coordinates on some generic part of the moduli space and to describe the natural symplectic structure by using these coordinates. As in the case where the effective divisor \(D\) is reduced ([42], [12], [13], [44]), we may define the map forgetting connections and the apparent map. These maps go from a generic part of the moduli space to projective spaces, and they will give our coordinates on this generic part. First we describe these maps.
### Moduli space of quasi-parabolic bundles with fixed determinant
To describe the map forgetting connections, we recall the moduli space of quasi-parabolic bundles. The moduli space of (quasi-)parabolic bundles was introduced in Mehta-Seshadri [45]. Yokogawa generalized this notion to (quasi-)parabolic sheaves and studied their moduli [54].
Let \(\nu\) be a positive integer. Set \(I:=\{1,2,\ldots,\nu\}\). Let \(C\) be a compact Riemann surface of genus \(g\), and \(D=\sum_{i\in I}m_{i}[t_{i}]\) be an effective divisor on \(C\). We assume \(3g-3+n>0\) where \(n=\operatorname{length}(D)\). Let \(z_{i}\) be a generator of the maximal ideal of \(\mathcal{O}_{C,t_{i}}\). We fix a line bundle \(L_{0}\) with \(\deg(L_{0})=2g-1\).
**Definition 23**.: _We say that \((E,\{l^{(i)}\})\) is a rank \(2\) quasi-parabolic bundle with determinant \(L_{0}\) over \((C,D)\) if_
* \(E\) _is a rank_ \(2\) _vector bundle of degree_ \(2g-1\) _on_ \(C\) _with_ \(\det(E)\cong L_{0}\)_, and_
* \(E|_{m_{i}[t_{i}]}\supset l^{(i)}\supset 0\) _is a filtration by free_ \(\mathcal{O}_{m_{i}[t_{i}]}\)_-modules such that_ \(E|_{m_{i}[t_{i}]}/l^{(i)}\cong\mathcal{O}_{m_{i}[t_{i}]}\) _and_ \(l^{(i)}\cong\mathcal{O}_{m_{i}[t_{i}]}\) _for any_ \(i\in I\)_._
We fix weights \(\boldsymbol{w}=(w_{1},\ldots,w_{\nu})\) such that \(w_{i}\in[0,1]\) for any \(i\in I\). When \(g=0\), we assume that \((w_{i})_{i\in I}\) satisfies
\[w_{1}=\cdots=w_{\nu}\quad\text{and}\quad\frac{1}{\deg(D)}<w_{i}<\frac{1}{\deg( D)-2}. \tag{4.1}\]
When \(g\geq 1\), we assume that \((w_{i})_{i\in I}\) satisfies
\[0<w_{i}\ll 1. \tag{4.2}\]
**Definition 24**.: _Let \((E,\{l^{(i)}\})\) be a rank \(2\) quasi-parabolic bundle with determinant \(L_{0}\). Let \(L\) be a line subbundle of \(E\). We define the \(\boldsymbol{w}\)-stability index of \(L\) to be the real number_
\[\operatorname{Stab}_{\boldsymbol{w}}(L):=\deg(E)-2\deg(L)+\sum_{i\in I}w_{i} \left(m_{i}-2\operatorname{length}(l^{(i)}\cap L|_{m_{i}[t_{i}]})\right).\]
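For orientation: if the divisor \(D\) is reduced (\(m_{i}=1\) for all \(i\)), then \(\operatorname{length}(l^{(i)}\cap L|_{t_{i}})\in\{0,1\}\) and the index specializes to the classical parabolic stability quantity

\[\operatorname{Stab}_{\boldsymbol{w}}(L)=\deg(E)-2\deg(L)+\sum_{i\in I}w_{i}(1-2\epsilon_{i}),\qquad\epsilon_{i}=\begin{cases}1&\text{if }l^{(i)}=L|_{t_{i}},\\ 0&\text{otherwise,}\end{cases}\]

where \(\epsilon_{i}\) is shorthand introduced only for this formula.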
**Definition 25**.: _A rank \(2\) quasi-parabolic bundle \((E,\{l^{(i)}\})\) is \(\boldsymbol{w}\)-stable if for any subbundle \(L\subset E\), the inequality \(\operatorname{Stab}_{\boldsymbol{w}}(L)>0\) holds._
We say that a quasi-parabolic bundle \((E,\{l^{(i)}\})\) is decomposable if there exists a decomposition \(E=L_{1}\oplus L_{2}\) such that \(l^{(i)}=l_{1}^{(i)}\) or \(l^{(i)}=l_{2}^{(i)}\) for any \(i\in I\), where we set \(l_{1}^{(i)}:=l^{(i)}\cap(L_{1}|_{m_{i}[t_{i}]})\) and \(l_{2}^{(i)}:=l^{(i)}\cap(L_{2}|_{m_{i}[t_{i}]})\). We say that \((E,\{l^{(i)}\})\) is indecomposable if \((E,\{l^{(i)}\})\) is not decomposable. A free \(\mathcal{O}_{m_{i}[t_{i}]}\)-submodule \(l^{(i)}\) of \(E|_{m_{i}[t_{i}]}\) induces a one-dimensional subspace \(l_{\operatorname{red}}^{(i)}\) of \(E|_{t_{i}}\), that is, the restriction of \(l^{(i)}\) to \(t_{i}\) (without multiplicity).
**Lemma 26**.: _Let \((E,\{l^{(i)}\})\) be a rank \(2\) quasi-parabolic bundle with determinant \(L_{0}\). If_
* \(E\) _is an extension of_ \(L_{0}\) _by_ \(\mathcal{O}_{C}\) _(when_ \(g=0\)_, moreover we assume that_ \((E,\{l^{(i)}\})\) _is indecomposable)_
* \(\dim_{\mathbb{C}}H^{1}(C,E)=0\)__
* \(l_{\operatorname{red}}^{(i)}\not\in\mathcal{O}_{C|_{t_{i}}}\subset\mathbb{P}(E)\) _for any_ \(i\)_,_
_then \((E,\{l^{(i)}\})\) is \(\boldsymbol{w}\)-stable._
Proof.: When \(g=0\), we have this statement from [35, Proposition 46] by the condition (4.1). When \(g\geq 1\), we have that \(E\) is stable, that is, \(\deg(E)-2\deg(L)\) is a positive integer for any line subbundle \(L\subset E\). This claim follows from the same argument as in [44, Lemma 4.2]. Since \(0<w_{i}\ll 1\) in (4.2), we have that \(\operatorname{Stab}_{\boldsymbol{w}}(L)>0\).
Let \(P^{\boldsymbol{w}}_{(C,D)}\) be the moduli space of \(\boldsymbol{w}\)-stable quasi-parabolic bundles constructed in [54]. Let \(P^{\boldsymbol{w}}_{(C,D)}(L_{0})\) be the fiber over \(L_{0}\) of the determinant map
\[P^{\boldsymbol{w}}_{(C,D)}\longrightarrow\operatorname{Pic}_{C}^{2g-1};\quad( E,\{l^{(i)}\})\longmapsto\det(E).\]
We set
\[P_{(C,D)}(L_{0})_{0}:=\left\{(E,\{l^{(i)}\})\ \middle|\ \begin{array}{l}\text{$(E,\{l^{(i)}\})$ is a rank $2$ quasi-parabolic bundle over $(C,D)$ such that}\\ \text{(i) $\det(E)\cong L_{0}$,\quad(ii) $E$ is an extension of $L_{0}$ by $\mathcal{O}_{C}$,}\\ \text{(iii) $\dim_{\mathbb{C}}H^{1}(C,E)=0$,\quad(iv) $l^{(i)}_{\text{red}}\not\in\mathcal{O}_{C}|_{t_{i}}\subset\mathbb{P}(E)$ for any $i$,}\\ \text{(v) $(E,\{l^{(i)}\})$ is indecomposable (when $g=0$)}\end{array}\right\}.\]
By Lemma 26, we have an inclusion
\[P_{(C,D)}(L_{0})_{0}\subset P^{\boldsymbol{w}}_{(C,D)}(L_{0}).\]
For \((E,\{l^{(i)}\})\in P_{(C,D)}(L_{0})_{0}\), we have an extension
\[0\longrightarrow\mathcal{O}_{C}\longrightarrow E\longrightarrow L_{0} \longrightarrow 0. \tag{4.3}\]
Since \(\dim_{\mathbb{C}}H^{1}(C,E)=0\), we have that \(\dim_{\mathbb{C}}H^{0}(C,E)=1\). So the injection \(\mathcal{O}_{C}\xrightarrow{\subset}E\) in (4.3) is unique up to a constant.
**Definition 27**.: _Let \((E,\{l^{(i)}\})\in P_{(C,D)}(L_{0})_{0}\). We take an affine open covering \(\{U_{\alpha}\}_{\alpha}\) of \(C\), i.e. \(C=\bigcup_{\alpha}U_{\alpha}\). Let \(\{\varphi^{\operatorname{Ext}}_{\alpha}\}_{\alpha}\) be trivializations \(\varphi^{\operatorname{Ext}}_{\alpha}\colon\mathcal{O}^{\oplus 2}_{U_{\alpha}} \to E|_{U_{\alpha}}\) of the underlying vector bundle \(E\) such that_
1. _the composition_ \[\mathcal{O}_{U_{\alpha}} \longrightarrow\mathcal{O}^{\oplus 2}_{U_{\alpha}}\xrightarrow{ \varphi^{\operatorname{Ext}}_{\alpha}}E|_{U_{\alpha}}\] \[f \longmapsto(f,0)\] _is just the inclusion_ \(\mathcal{O}_{C}\subset E\) _of the extension (_4.3_) for any_ \(\alpha\)_, and_
2. _the image of the composition_ \[\mathcal{O}_{U_{\alpha}} \longrightarrow\mathcal{O}^{\oplus 2}_{U_{\alpha}}\xrightarrow{ \varphi^{\operatorname{Ext}}_{\alpha}}E|_{U_{\alpha}}\longrightarrow E|_{m_{i} [t_{i}]}\] \[f \longmapsto(0,f)\] _generates the submodule_ \(l^{(i)}\subset E|_{m_{i}[t_{i}]}\) _for each_ \(i\) _and_ \(\alpha\) _where_ \(t_{i}\in U_{\alpha}\)_._
Notice that we may take \(\varphi^{\operatorname{Ext}}_{\alpha}\) satisfying the condition (ii) of Definition 27 because \(l^{(i)}_{\text{red}}\not\in\mathcal{O}_{C}|_{t_{i}}\subset\mathbb{P}(E)\) for any \(i\).
Now we define a map
\[P_{(C,D)}(L_{0})_{0}\longrightarrow\mathbb{P}H^{1}(C,L_{0}^{-1}(-D)) \tag{4.4}\]
as follows. Let \(\{\varphi^{\operatorname{Ext}}_{\alpha}\}_{\alpha}\) be the trivializations in Definition 27. We have the transition matrices
\[B_{\alpha\beta}:=(\varphi^{\operatorname{Ext}}_{\alpha}|_{U_{\alpha\beta}})^{ -1}\circ\varphi^{\operatorname{Ext}}_{\beta}|_{U_{\alpha\beta}}\colon \mathcal{O}^{\oplus 2}_{U_{\alpha\beta}}\longrightarrow\mathcal{O}^{\oplus 2}_{U_{ \alpha\beta}}.\]
We represent \(B_{\alpha\beta}\) as a matrix:
\[B_{\alpha\beta}=\begin{pmatrix}1&b^{12}_{\alpha\beta}\\ 0&b^{22}_{\alpha\beta}\end{pmatrix}. \tag{4.5}\]
Remark that \(\{b_{\alpha\beta}^{22}\}_{\alpha\beta}\) is a multiplicative cocycle which defines the fixed line bundle \(L_{0}\). We take a meromorphic coboundary
\[\{b_{\alpha}^{22}\}_{\alpha}\quad\left(\text{where }b_{\alpha\beta}^{22}=\frac{b_{ \alpha}^{22}}{b_{\beta}^{22}}\right) \tag{4.6}\]
of the multiplicative cocycle \(\{b_{\alpha\beta}^{22}\}_{\alpha\beta}\). By using the coboundary \(\{b_{\alpha}^{22}\}_{\alpha}\), we define a cocycle
\[b_{\alpha\beta}^{\text{Bun}}:=b_{\alpha\beta}^{12}b_{\alpha}^{22}, \tag{4.7}\]
which gives a class \([\{b_{\alpha\beta}^{\text{Bun}}\}]\in H^{1}(C,L_{0}^{-1}(-D))\). Then we have a map (4.4):
\[(E,\{l^{(i)}\})\longmapsto\overline{[\{b_{\alpha\beta}^{\text{Bun}}\}]}.\]
### Moduli space of quasi-parabolic connections with fixed trace connection
Now we recall the moduli space of quasi-parabolic connections. We fix an irregular curve with residues \(X=(C,D,\{z_{i}\},\{\boldsymbol{\theta}_{i}\},\boldsymbol{\theta}_{\text{res}})\) defined in Definition 2. Moreover we assume that
\[\sum_{i\in I}\theta_{i,-1}^{-}\neq 0. \tag{4.8}\]
Let \((L_{0},\nabla_{L_{0}}\colon L_{0}\to L_{0}\otimes\Omega_{C}^{1}(D))\) be a rank \(1\) connection on \(C\) with degree \(2g-1\) such that the polar part of \(\nabla_{L_{0}}\) at \(t_{i}\) is \(\operatorname{tr}(\omega_{i}(X))\).
**Definition 28**.: _We say that \((E,\nabla,\lambda,\{l^{(i)}\})\) is a rank \(2\) quasi-parabolic \(\lambda\)-connection over \(X\) with fixed trace connection \((L_{0},\nabla_{L_{0}})\) if_
1. \(E\) _is a rank_ \(2\) _vector bundle on_ \(C\) _with_ \(\det(E)\cong L_{0}\)_,_
2. \(\lambda\in\mathbb{C}\) _and_ \(\nabla\colon E\to E\otimes\Omega_{C}^{1}(D)\) _is a_ \(\lambda\)_-connection, that is,_ \(\nabla(fs)=\lambda s\otimes df+f\nabla(s)\) _for any_ \(f\in\mathcal{O}_{C}\) _and_ \(s\in E\)_, and_
3. \(\nabla(s_{1})\wedge s_{2}+s_{1}\wedge\nabla(s_{2})=\lambda\nabla_{L_{0}}(s_{1} \wedge s_{2})\) _for_ \(s_{1},s_{2}\in E\)_,_
4. \(E|_{m_{i}[t_{i}]}\supset l^{(i)}\supset 0\) _is a filtration by free_ \(\mathcal{O}_{m_{i}[t_{i}]}\)_-modules such that, for any_ \(i\in I\)_,_ * \(E|_{m_{i}[t_{i}]}/l^{(i)}\cong\mathcal{O}_{m_{i}[t_{i}]}\) _and_ \(l^{(i)}\cong\mathcal{O}_{m_{i}[t_{i}]}\)_,_ * \(\nabla|_{m_{i}[t_{i}]}(l^{(i)})\subset l^{(i)}\otimes\Omega_{C}^{1}(D)\)_, and_ * _the image of_ \((E|_{m_{i}[t_{i}]}/l^{(i)})\oplus l^{(i)}\) _under_ \(\operatorname{Gr}_{i}(\nabla)-\lambda\cdot\omega_{i}(X)\) _is contained in_ \[\left((E|_{m_{i}[t_{i}]}/l^{(i)})\oplus l^{(i)}\right)\otimes\Omega_{C}^{1}.\]
_Here \(\operatorname{Gr}_{i}(\nabla)\) is the induced morphism_
\[\operatorname{Gr}_{i}(\nabla)\colon(E|_{m_{i}[t_{i}]}/l^{(i)})\oplus l^{(i)} \longrightarrow\left((E|_{m_{i}[t_{i}]}/l^{(i)})\oplus l^{(i)}\right)\otimes \Omega_{C}^{1}(D).\]
Notice that, if \(\lambda=0\), then \(\nabla\) is an \(\mathcal{O}_{C}\)-morphism, which is called a Higgs field. So \((E,\nabla,\lambda,\{l^{(i)}\})\) is called a (trace free) quasi-parabolic Higgs bundle when \(\lambda=0\). We consider only rank \(2\) quasi-parabolic \(\lambda\)-connections \((E,\nabla,\lambda,\{l^{(i)}\})\) over \(X\) with \((L_{0},\nabla_{L_{0}})\) such that the underlying quasi-parabolic bundle \((E,\{l^{(i)}\})\) is in the moduli space \(P_{(C,D)}(L_{0})_{0}\).
We define the moduli spaces \(\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\) and \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\) as follows:
\[\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}=\left\{\begin{array}{l}(E,\nabla,\lambda,\{l^{(i)}\})\\ \text{quasi-parabolic $\lambda$-connection}\\ \text{over $X$ with trace $(L_{0},\nabla_{L_{0}})$}\end{array}\ \middle|\ \ (E,\{l^{(i)}\})\in P_{(C,D)}(L_{0})_{0}\ \ \right\}\bigg{/}\cong\]
and
\[M_{X}(L_{0},\nabla_{L_{0}})_{0}=\left\{\begin{array}{l}(E,\nabla,\lambda,\{l^{(i )}\})\\ \text{quasi-parabolic $\lambda$-connection}\\ \text{over $X$ with trace $(L_{0},\nabla_{L_{0}})$}\end{array}\right|\begin{array}{l}(E,\{l^{(i )}\})\in P_{(C,D)}(L_{0})_{0}\\ \text{and $\lambda\neq 0$}\end{array}\right\}\bigg{/}\cong.\]
### Maps from the moduli space
Now we describe two maps: the forgetful map \(\pi_{\text{Bun}}\) forgetting connections and the apparent map \(\pi_{\text{App}}\). First we consider the composition
\[\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\longrightarrow P_{(C,D)}(L_{0})_ {0}\longrightarrow\mathbb{P}H^{1}(C,L_{0}^{-1}(-D)).\]
Here the first map is the forgetful map, and the second map is (4.4). We denote by
\[\pi_{\text{Bun}}\colon\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\longrightarrow \mathbb{P}H^{1}(C,L_{0}^{-1}(-D))\]
the composition.
Second we define a map
\[\pi_{\text{App}}\colon\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\longrightarrow \mathbb{P}H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D)) \tag{4.9}\]
as follows. Let \((E,\nabla,\lambda,\{l^{(i)}\})\) be a point on \(\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\). Let \(\{\varphi_{\alpha}^{\text{Ext}}\}_{\alpha}\) be the trivializations in Definition 27. Let \(A_{\alpha}\) be the connection matrix of the \(\lambda\)-connection \(\nabla\) with respect to \(\varphi_{\alpha}^{\text{Ext}}\), that is,
\[\lambda\,\text{d}+A_{\alpha}:=(\varphi_{\alpha}^{\text{Ext}})^{-1}\circ\nabla \circ\varphi_{\alpha}^{\text{Ext}}\colon\mathcal{O}_{U_{\alpha}}^{\oplus 2} \longrightarrow(\Omega_{U_{\alpha}}^{1}(D))^{\oplus 2}.\]
We denote the matrix \(A_{\alpha}\) as follows:
\[A_{\alpha}=\begin{pmatrix}a_{\alpha}^{11}&a_{\alpha}^{12}\\ a_{\alpha}^{21}&a_{\alpha}^{22}\end{pmatrix}. \tag{4.10}\]
By the condition (ii) in Definition 27 and the condition (iv) in Definition 28, the polar part of the connection matrix \(A_{\alpha}\) at \(t_{i}\) is a lower triangular matrix, that is, the Laurent expansion of \(A_{\alpha}\) at \(t_{i}\) is as follows:
\[A_{\alpha}=\begin{pmatrix}\lambda\nu_{i}^{-}&0\\ *&\lambda\nu_{i}^{+}\end{pmatrix}\frac{1}{z_{i}^{m_{i}}}+[\text{ holo. part }]. \tag{4.11}\]
Here \(\nu_{i}^{-},\nu_{i}^{+}\in\Omega_{C}^{1}(D)|_{m_{i}[t_{i}]}\) are defined so that
\[\lambda\cdot\omega_{i}(X)=\begin{pmatrix}\lambda\nu_{i}^{-}&0\\ 0&\lambda\nu_{i}^{+}\end{pmatrix}.\]
By using the coboundary \(\{b_{\alpha}^{22}\}_{\alpha}\) in (4.6), we define local sections
\[a_{\alpha}^{\text{App}}:=a_{\alpha}^{21}(b_{\alpha}^{22})^{-1}, \tag{4.12}\]
which give a class \([\{a_{\alpha}^{\text{App}}\}]\in H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))\). Then we have a map (4.9):
\[(E,\nabla,\lambda,\{l^{(i)}\})\longmapsto\overline{[\{a_{\alpha}^{\text{App} }\}]}.\]
Finally, we have a map
\[(\pi_{\text{App}},\pi_{\text{Bun}})\colon\widetilde{M}_{X}(L_{0},\nabla_{L_{0} })_{0}\longrightarrow\mathbb{P}H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))\times \mathbb{P}H^{1}(C,L_{0}^{-1}(-D)). \tag{4.13}\]
We consider the natural pairing
\[H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))\times H^{1}(C,L_{0}^{-1}(-D)) \longrightarrow H^{1}(C,\Omega_{C}^{1})\cong\mathbb{C}. \tag{4.14}\]
**Lemma 29**.: _Let \((E,\nabla,\lambda,\{l^{(i)}\})\in\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\). Let \(a_{\alpha}^{\text{App}}\) and \(b_{\alpha\beta}^{\text{Bun}}\) be the cocycles in (4.7) and in (4.12), respectively. Then we have_
\[[\{b_{\alpha\beta}^{\text{Bun}}\cdot a_{\beta}^{\text{App}}\}]=\lambda\cdot \sum_{i\in I}\theta_{i,-1}^{-}.\]
_Here the left-hand side is the pairing (4.14), evaluated under the isomorphism \(H^{1}(C,\Omega_{C}^{1})\cong\mathbb{C}\)._
Proof.: Let \(B_{\alpha\beta}\) be the transition function in (4.5). Let \(A_{\alpha}\) be the connection matrix in (4.10). Then we have
\[\lambda\cdot\mathrm{d}B_{\alpha\beta}+A_{\alpha}B_{\alpha\beta}=B_{\alpha \beta}A_{\beta}.\]
By comparing the \((1,1)\)-entries of both sides, we have
\[a_{\alpha}^{11}-a_{\beta}^{11}=b_{\alpha\beta}^{\text{Bun}}\cdot a_{\beta}^{ \text{App}}.\]
By (4.11), the \(1\)-forms \(a_{\alpha}^{11}\) have poles only at the points \(t_{i}\), with residue \(\lambda\theta_{i,-1}^{-}\) there; hence, under the isomorphism \(H^{1}(C,\Omega_{C}^{1})\cong\mathbb{C}\) given by the sum of residues, we have \([\{b_{\alpha\beta}^{\text{Bun}}\cdot a_{\beta}^{\text{App}}\}]=\lambda\cdot \sum_{i}\theta_{i,-1}^{-}\).
Set
\[N_{0}:=\dim_{\mathbb{C}}\mathbb{P}H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))=3g+n -3.\]
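This count follows from Riemann–Roch: by Serre duality, \(H^{1}(C,L_{0}\otimes\Omega_{C}^{1}(D))\cong H^{0}(C,L_{0}^{-1}(-D))^{\vee}=0\) because \(\deg(L_{0}^{-1}(-D))=1-2g-n<0\), so

\[\dim_{\mathbb{C}}H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))=\deg(L_{0}\otimes\Omega_{C}^{1}(D))+1-g=(4g-3+n)+1-g=3g+n-2,\]

and the projectivization has dimension \(3g+n-3\).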
Let us introduce the homogeneous coordinates \(\boldsymbol{a}=(a_{0}:\cdots:a_{N_{0}})\) on \(\mathbb{P}H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))\cong\mathbb{P}_{\boldsymbol {a}}^{N_{0}}\) and the dual coordinates \(\boldsymbol{b}=(b_{0}:\cdots:b_{N_{0}})\) on
\[\mathbb{P}H^{1}(C,L_{0}^{-1}(-D))\cong\mathbb{P}H^{0}(C,L_{0}\otimes\Omega_{C }^{1}(D))^{\vee}\cong\mathbb{P}_{\boldsymbol{b}}^{N_{0}}.\]
Let \(\Sigma\subset\mathbb{P}_{\boldsymbol{a}}^{N_{0}}\times\mathbb{P}_{\boldsymbol {b}}^{N_{0}}\) be the incidence variety whose defining equation is given by \(\sum_{j}a_{j}b_{j}=0\). By Lemma 29, we have that
\[\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\setminus M_{X}(L_{0},\nabla_{L_{ 0}})_{0}\xrightarrow{(\pi_{\text{App}},\pi_{\text{Bun}})}\Sigma.\]
**Remark 30**.: _Loray-Saito (for \(g=0\)) and Matsumoto (for \(g\geq 1\)) discussed the birationality of the map (4.13). They showed the birationality of the map (4.13) when \(D\) is a reduced effective divisor ([42, Theorem 4.3] for \(g=0\) and [44, Theorem 4.5] for \(g\geq 1\)). In these cases, quasi-parabolic connections have only simple poles, but we may apply the arguments in [42, Theorem 4.3] and in [44, Theorem 4.5] to our cases, where quasi-parabolic connections admit generic unramified irregular singular points. So we can reconstruct \((E,\nabla,\lambda,\{l^{(i)}\})\in\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\) from an element of_
\[\mathbb{P}H^{1}(C,L_{0}^{-1}(-D))_{0}\times\mathbb{P}H^{0}(C,L_{0}\otimes \Omega_{C}^{1}(D)).\]
_Here we set_
\[\mathbb{P}H^{1}(C,L_{0}^{-1}(-D))_{0}:=\left\{b\in\mathbb{P}H^{1}(C,L_{0}^{-1} (-D))\ \bigg{|}\begin{array}{l}\text{ The extension $E$ corresponding to $b$}\\ \text{ satisfies $\dim_{\mathbb{C}}H^{1}(C,E)=0$}\end{array}\right\}.\]
_Then we have isomorphisms_
\[\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}\cong\mathbb{P}H^{1}(C,L_{0}^{-1} (-D))_{0}\times\mathbb{P}H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))\]
_and_
\[M_{X}(L_{0},\nabla_{L_{0}})_{0}\cong\mathbb{P}H^{1}(C,L_{0}^{-1}(-D))_{0} \times\mathbb{P}H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))\setminus\Sigma.\]
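_In particular, since both factors are \(N_{0}\)-dimensional, these isomorphisms give the dimension count_

\[\dim_{\mathbb{C}}\widetilde{M}_{X}(L_{0},\nabla_{L_{0}})_{0}=\dim_{\mathbb{C}}M_{X}(L_{0},\nabla_{L_{0}})_{0}=2N_{0}=6g+2n-6.\]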
### Symplectic structure and explicit description
Now we recall the natural symplectic structure on \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\). We define a complex \(\mathcal{F}_{0}^{\bullet}\) for \((E,\frac{1}{\lambda}\nabla,\{l^{(i)}\})\) by
\[\mathcal{F}_{0}^{0} :=\Big{\{}s\in\mathcal{E}nd(E)\ \Big{|}\ \operatorname{tr}(s)=0,\,s|_{m_{i}[t_{i}]}(l^{(i)})\subset l^{(i)}\text{ for any }i\Big{\}}\] \[\mathcal{F}_{0}^{1} :=\Big{\{}s\in\mathcal{E}nd(E)\otimes\Omega_{C}^{1}(D)\ \Big{|}\ \operatorname{tr}(s)=0,\,s|_{m_{i}[t_{i}]}(l^{(i)})\subset l^{(i)}\otimes\Omega_{C}^{1}\text{ for any }i\Big{\}}\] \[\nabla_{\mathcal{F}_{0}^{\bullet}} \colon\mathcal{F}_{0}^{0}\longrightarrow\mathcal{F}_{0}^{1};\quad \nabla_{\mathcal{F}_{0}^{\bullet}}(s)=(\tfrac{1}{\lambda}\nabla)\circ s-s \circ(\tfrac{1}{\lambda}\nabla).\]
We define the following morphism
\[\mathbf{H}^{1}(\mathcal{F}_{0}^{\bullet})\otimes\mathbf{H}^{1}(\mathcal{F}_{ 0}^{\bullet})\longrightarrow\mathbf{H}^{2}(\mathcal{O}_{C}\overset{d}{\to} \Omega_{C}^{1})\cong\mathbb{C} \tag{4.15}\]
as in (3.2). This pairing gives the symplectic form on \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\). We denote by \(\omega_{0}\) the symplectic form.
The maps \(\pi_{\operatorname{App}}\) and \(\pi_{\operatorname{Bun}}\) give coordinates on \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\) (see Remark 30). Now we describe the symplectic structure (4.15) by using the coordinates on \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\). We define a \(1\)-form \(\eta\) on \(\mathbb{P}_{\boldsymbol{a}}^{N_{0}}\times\mathbb{P}_{\boldsymbol{b}}^{N_{0}}\) as follows:
\[\eta:=\left(-\sum_{i}\theta_{i,-1}^{-}\right)\cdot\frac{a_{0}\,db_{0}+a_{1}\,db _{1}+\cdots+a_{N_{0}}\,db_{N_{0}}}{a_{0}b_{0}+a_{1}b_{1}+\cdots+a_{N_{0}}b_{N_{ 0}}}.\]
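For later use, a direct computation gives the exterior derivative of \(\eta\):

\[\mathrm{d}\eta=\left(-\sum_{i}\theta_{i,-1}^{-}\right)\left(\frac{\sum_{j=0}^{N_{0}}\mathrm{d}a_{j}\wedge\mathrm{d}b_{j}}{\sum_{k=0}^{N_{0}}a_{k}b_{k}}-\frac{\Big{(}\sum_{k=0}^{N_{0}}(b_{k}\,\mathrm{d}a_{k}+a_{k}\,\mathrm{d}b_{k})\Big{)}\wedge\Big{(}\sum_{j=0}^{N_{0}}a_{j}\,\mathrm{d}b_{j}\Big{)}}{\Big{(}\sum_{k=0}^{N_{0}}a_{k}b_{k}\Big{)}^{2}}\right).\]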
**Theorem 31**.: _Assume that \(\sum_{i\in I}\theta_{i,-1}^{-}\neq 0\). Let \(\omega_{\boldsymbol{a},\boldsymbol{b}}\) be the \(2\)-form on \(\mathbb{P}_{\boldsymbol{a}}^{N_{0}}\times\mathbb{P}_{\boldsymbol{b}}^{N_{0}}\) defined by \(\omega_{\boldsymbol{a},\boldsymbol{b}}=d\eta\). The pull-back of \(\omega_{\boldsymbol{a},\boldsymbol{b}}\) under the map_
\[M_{X}(L_{0},\nabla_{L_{0}})_{0}\xrightarrow{(\pi_{\operatorname{App}},\pi_{ \operatorname{Bun}})}\mathbb{P}_{\boldsymbol{a}}^{N_{0}}\times\mathbb{P}_{ \boldsymbol{b}}^{N_{0}}\]
_coincides with the symplectic form \(\omega_{0}\) on \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\)._
Proof.: Let \(v,v^{\prime}\in T_{(E,\frac{1}{\lambda}\nabla,\{l^{(i)}\})}M_{X}(L_{0},\nabla _{L_{0}})_{0}\). We have the isomorphism
\[T_{(E,\frac{1}{\lambda}\nabla,\{l^{(i)}\})}M_{X}(L_{0},\nabla_{L_{0}})_{0} \xrightarrow{\cong}\mathbf{H}^{1}(\mathcal{F}_{0}^{\bullet}).\]
Let \(u_{\alpha\beta}(v)\) and \(v_{\alpha}(v)\) be cocycles such that the class \([\{u_{\alpha\beta}(v)\}_{\alpha\beta},\{v_{\alpha}(v)\}_{\alpha}]\) is the image of \(v\) under the isomorphism. We calculate \(u_{\alpha\beta}(v)\) and \(v_{\alpha}(v)\) by using the trivialization \(\{\varphi_{\alpha}^{\operatorname{Ext}}\}_{\alpha}\) as follows:
\[\begin{split} u_{\alpha\beta}(v)&=\varphi_{\beta}^{ \operatorname{Ext}}|_{U_{\alpha\beta}}\circ\left(B_{\alpha\beta}^{-1}v(B_{ \alpha\beta})\right)\circ(\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha \beta}})^{-1}\\ &=\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta}}\circ \begin{pmatrix}0&v(b_{\alpha\beta}^{12})\\ 0&0\end{pmatrix}\circ(\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta} })^{-1}\\ &=\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta}}\circ \begin{pmatrix}0&\frac{v(b_{\alpha\beta}^{\operatorname{Bun}})}{b_{\alpha}^{22}}\\ 0&0\end{pmatrix}\circ(\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta} })^{-1}\end{split} \tag{4.16}\]
and
\[\begin{split} v_{\alpha}(v)&=\varphi_{\alpha}^{ \operatorname{Ext}}\circ v\left(\frac{1}{\lambda}A_{\alpha}\right)\circ( \varphi_{\alpha}^{\operatorname{Ext}})^{-1}\\ &=\varphi_{\alpha}^{\operatorname{Ext}}\circ\begin{pmatrix}v(a_{ \alpha}^{11}/\lambda)&v(a_{\alpha}^{12}/\lambda)\\ v(a_{\alpha}^{21}/\lambda)&v(a_{\alpha}^{22}/\lambda)\end{pmatrix}\circ( \varphi_{\alpha}^{\operatorname{Ext}})^{-1}\\ &=\varphi_{\alpha}^{\operatorname{Ext}}\circ\begin{pmatrix}v(a_{ \alpha}^{11}/\lambda)&v(a_{\alpha}^{12}/\lambda)\\ v(a_{\alpha}^{\operatorname{App}}/\lambda)b_{\alpha}^{22}&v(a_{\alpha}^{22}/ \lambda)\end{pmatrix}\circ(\varphi_{\alpha}^{\operatorname{Ext}})^{-1}.\end{split} \tag{4.17}\]
Here \(\{b_{\alpha}^{22}\}_{\alpha}\) is the coboundary in (4.6). Since we fix the determinant bundle \(L_{0}\), we may assume that the coboundary \(\{b_{\alpha}^{22}\}_{\alpha}\) is independent of the moduli space \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\).
Now we calculate the class
\[[(\{\operatorname{tr}(u_{\alpha\beta}(v)u_{\beta\gamma}(v^{\prime}))\},-\{ \operatorname{tr}\left(u_{\alpha\beta}(v)v_{\beta}(v^{\prime})\right)- \operatorname{tr}\left(v_{\alpha}(v)u_{\alpha\beta}(v^{\prime})\right)\})] \tag{4.18}\]
in \(\mathbf{H}^{2}(\mathcal{O}_{C}\xrightarrow{d}\Omega^{1}_{C})\cong\mathbb{C}\). First we calculate \(u_{\alpha\beta}(v)u_{\beta\gamma}(v^{\prime})\) as follows:
\[u_{\alpha\beta}(v)u_{\beta\gamma}(v^{\prime})\] \[=\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta}}\circ \begin{pmatrix}0&\frac{v(b_{\alpha\beta}^{\operatorname{Bun}})}{b_{\alpha}^{ 22}}\\ 0&0\end{pmatrix}\circ(\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta}}) ^{-1}\circ\varphi_{\gamma}^{\operatorname{Ext}}|_{U_{\alpha\beta}}\circ \begin{pmatrix}0&\frac{v^{\prime}(b_{\beta\gamma}^{\operatorname{Bun}})}{b_{ \beta}^{22}}\\ 0&0\end{pmatrix}\circ(\varphi_{\gamma}^{\operatorname{Ext}}|_{U_{\alpha\beta}}) ^{-1}\] \[=\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta}}\circ \begin{pmatrix}0&\frac{v(b_{\alpha\beta}^{\operatorname{Bun}})}{b_{\alpha}^{ 22}}\\ 0&0\end{pmatrix}B_{\beta\gamma}\begin{pmatrix}0&\frac{v^{\prime}(b_{\beta\gamma }^{\operatorname{Bun}})}{b_{\beta}^{22}}\\ 0&0\end{pmatrix}\circ(\varphi_{\gamma}^{\operatorname{Ext}}|_{U_{\alpha\beta}}) ^{-1}\] \[=\varphi_{\beta}^{\operatorname{Ext}}|_{U_{\alpha\beta}}\circ \begin{pmatrix}0&0\\ 0&0\end{pmatrix}\circ(\varphi_{\gamma}^{\operatorname{Ext}}|_{U_{\alpha\beta}}) ^{-1}=0.\]
So we may take a representative of the class (4.18) so that
\[[-\{\operatorname{tr}\left(u_{\alpha\beta}(v)v_{\beta}(v^{\prime})\right)- \operatorname{tr}\left(v_{\alpha}(v)u_{\alpha\beta}(v^{\prime})\right)\}]\]
is in \(H^{1}(C,\Omega^{1}_{C})\). By using equalities (4.16) and (4.17), we have the following equality
\[\operatorname{tr}\left(u_{\alpha\beta}(v)v_{\beta}(v^{\prime})\right)- \operatorname{tr}\left(v_{\alpha}(v)u_{\alpha\beta}(v^{\prime})\right)=v(b_{ \alpha\beta}^{\operatorname{Bun}})v^{\prime}\left(\frac{a_{\beta}^{ \operatorname{App}}}{\lambda}\right)-v\left(\frac{a_{\alpha}^{\operatorname{App }}}{\lambda}\right)v^{\prime}(b_{\alpha\beta}^{\operatorname{Bun}}). \tag{4.19}\]
We take bases
\[a^{\operatorname{App}(0)},a^{\operatorname{App}(1)},\ldots,a^{\operatorname{App }(N_{0})}\in H^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D))\]
of \(H^{0}(C,L_{0}\otimes\Omega^{1}_{C}(D))\) and
\[[\{b_{\alpha\beta}^{\operatorname{App}(0)}\}],[\{b_{\alpha\beta}^{ \operatorname{App}(1)}\}],\ldots,[\{b_{\alpha\beta}^{\operatorname{App}(N_{0} )}\}]\]
of \(H^{1}(C,L_{0}^{-1}(-D))\) so that these bases give the homogeneous coordinates \((a_{0}:\cdots:a_{N_{0}})\) on \(\mathbb{P}^{N_{0}}_{\boldsymbol{\alpha}}\) and \((b_{0}:\cdots:b_{N_{0}})\) on \(\mathbb{P}^{N_{0}}_{\boldsymbol{b}}\). We may assume that these bases are independent of the moduli space \(M_{X}(L_{0},\nabla_{L_{0}})_{0}\). We set
\[a_{\alpha}^{\operatorname{App}}=a_{0}a^{\operatorname{App}(0)}|_{U_{\alpha}}+ a_{1}a^{\operatorname{App}(1)}|_{U_{\alpha}}+\cdots+a_{N_{0}}a^{\operatorname{App}(N_{0} )}|_{U_{\alpha}}\]
and
\[b_{\alpha\beta}^{\operatorname{Bun}}=b_{0}b_{\alpha\beta}^{\operatorname{App} (0)}+b_{1}b_{\alpha\beta}^{\operatorname{App}(1)}+\cdots+b_{N_{0}}b_{\alpha \beta}^{\operatorname{App}(N_{0})}.\]
By (4.19), we have that
\[\operatorname{tr}\left(u_{\alpha\beta}(v)v_{\beta}(v^{\prime})\right) -\operatorname{tr}\left(v_{\alpha}(v)u_{\alpha\beta}(v^{\prime})\right) =v\left(\sum_{k=0}^{N_{0}}b_{k}b_{\alpha\beta}^{\operatorname{App}(k)} \right)v^{\prime}\left(\frac{\sum_{k=0}^{N_{0}}a_{k}a^{\operatorname{App}(k)}|_{U_ {\alpha}}}{\lambda}\right)\] \[\qquad-v\left(\frac{\sum_{k=0}^{N_{0}}a_{k}a^{\operatorname{App}(k )}|_{U_{\alpha}}}{\lambda}\right)v^{\prime}\left(\sum_{k=0}^{N_{0}}b_{k}b_{ \alpha\beta}^{\operatorname{App}(k)}\right)\] \[=\left(\sum_{k=0}^{N_{0}}v\left(b_{k}\right)b_{\alpha\beta}^{ \operatorname{App}(k)}\right)\left(\sum_{k=0}^{N_{0}}v^{\prime}\left(\frac{a_ {k}}{\lambda}\right)a^{\operatorname{App}(k)}|_{U_{\alpha}}\right)\] \[\qquad-\left(\sum_{k=0}^{N_{0}}v\left(\frac{a_{k}}{\lambda} \right)a^{\operatorname{App}(k)}|_{U_{\alpha}}\right)\left(\sum_{k=0}^{N_{0} }v^{\prime}\left(b_{k}\right)b_{\alpha\beta}^{\operatorname{App}(k)}\right).\]
Since \((b_{0}:\cdots:b_{N_{0}})\) is dual to \((a_{0}:\cdots:a_{N_{0}})\) with respect to the natural pairing
\[H^{0}(C,L_{0}\otimes\Omega_{C}^{1}(D))\times H^{1}(C,L_{0}^{-1}(-D))\longrightarrow H ^{1}(C,\Omega_{C}^{1})\cong\mathbb{C},\]
we have that, at the level of classes under the isomorphism \(H^{1}(C,\Omega_{C}^{1})\cong\mathbb{C}\),
\[\operatorname{tr}\left(u_{\alpha\beta}(v)v_{\beta}(v^{\prime}) \right)-\operatorname{tr}\left(v_{\alpha}(v)u_{\alpha\beta}(v^{\prime})\right)\] \[=\sum_{k=0}^{N_{0}}v\left(b_{k}\right)v^{\prime}\left(\frac{a_{k }}{\lambda}\right)-\sum_{k=0}^{N_{0}}v^{\prime}\left(b_{k}\right)v\left(\frac{ a_{k}}{\lambda}\right).\]
On the other hand, we have that
\[\lambda=\frac{\langle[\{a_{\alpha}^{\operatorname{App}}\}],[\{b_{\alpha\beta}^ {\operatorname{Bun}}\}]\rangle}{-\sum_{i}\theta_{i,-1}^{-}}=\frac{a_{0 }b_{0}+a_{1}b_{1}+\cdots+a_{N_{0}}b_{N_{0}}}{-\sum_{i}\theta_{i,-1}^{-}}.\]
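Substituting \(\sum_{k}a_{k}b_{k}=-\lambda\sum_{i}\theta_{i,-1}^{-}\) into the definition of \(\eta\), the pull-back of \(\eta\) to the moduli space is \(\frac{1}{\lambda}\sum_{j=0}^{N_{0}}a_{j}\,\mathrm{d}b_{j}\), so that

\[\mathrm{d}\eta=\sum_{j=0}^{N_{0}}\mathrm{d}\!\left(\frac{a_{j}}{\lambda}\right)\wedge\mathrm{d}b_{j},\qquad\mathrm{d}\eta(v,v^{\prime})=\sum_{j=0}^{N_{0}}\left(v\!\left(\frac{a_{j}}{\lambda}\right)v^{\prime}(b_{j})-v^{\prime}\!\left(\frac{a_{j}}{\lambda}\right)v(b_{j})\right),\]

which is the negative of the sum computed above.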
Then we have
\[H^{1}(C,\Omega_{C}^{1})\stackrel{{\cong}}{{\longrightarrow}} \mathbb{C}\]
\[[-\{\operatorname{tr}\left(u_{\alpha\beta}(v)v_{\beta}(v^{\prime})\right)- \operatorname{tr}\left(v_{\alpha}(v)u_{\alpha\beta}(v^{\prime})\right)\}] \longmapsto d\eta(v,v^{\prime}).\]
This proves the statement.
## 5. Companion normal forms for an elliptic curve with two poles
In Section 2, we introduced the companion normal form of a rank 2 meromorphic connection under certain assumptions. The purpose of the present section is to detail the case of an elliptic curve with two simple poles, or with an unramified irregular singularity of order 2. The latter case arises by confluence from the first one, up to some modification in the arguments. We will give an explicit description of the companion normal form for an elliptic curve in these cases. Moreover, we will calculate the canonical coordinates introduced in Section 3.5. We first construct the companion normal form \((\mathcal{O}_{C}\oplus(\Omega_{C}^{1}(D))^{-1},\nabla_{0})\). Next we will construct a rank 2 meromorphic connection \((E,\nabla)\) by transforming the companion normal form.
Let \(C\) be the elliptic curve constructed by gluing affine cubic curves
\[U_{0}:=(y_{1}^{2}-x_{1}(x_{1}-1)(x_{1}-\lambda)=0)\quad\text{and}\quad U_{ \infty}:=(y_{2}^{2}-x_{2}(1-x_{2})(1-\lambda x_{2})=0)\]
with the relations \(x_{1}=x_{2}^{-1}\) and \(y_{1}=y_{2}x_{2}^{-2}\). We fix some \(t\in\mathbb{C}\) and set \(D=t_{1}+t_{2}\) where \(t_{1}=(t,s)\) and \(t_{2}=(t,-s)\), so that \(D\) is the positive part of \(\operatorname{div}(x-t)\). Let \(q_{1},q_{2},q_{3}\) be points on \(C\):
\[q_{j}\colon(x_{1},y_{1})=(u_{j},v_{j})\]
for each \(j=1,2,3\). Now we assume that \(u_{j}\not\in\{0,1,\lambda,\infty,t\}\) for any \(j\).
We take trivialization of the line bundle \((\Omega^{1}_{C}(D))^{-1}\) over \(C\) as follows:
\[\mathcal{O}_{U_{0}}\stackrel{{\sim}}{{\longrightarrow}}(\Omega^{ 1}_{C}(D))^{-1}|_{U_{0}};\quad 1\longmapsto\left(\frac{\mathrm{d}x_{1}}{(x_{1}-t)y_{1 }}\right)^{-1} \tag{5.1}\]
and
\[\mathcal{O}_{U_{\infty}}\stackrel{{\sim}}{{\longrightarrow}}( \Omega^{1}_{C}(D))^{-1}|_{U_{\infty}};\quad 1\longmapsto\left(\frac{\mathrm{d}x_{2}}{(1- tx_{2})y_{2}}\right)^{-1}. \tag{5.2}\]
Then the corresponding transition function \(f_{\infty 0}\) is as follows:
\[f_{\infty 0}\colon\mathcal{O}_{U_{0}}|_{U_{0}\cap U_{\infty}} \stackrel{{\sim}}{{\longrightarrow}}\mathcal{O}_{U_{ \infty}}|_{U_{0}\cap U_{\infty}}\] \[1 \longmapsto-\frac{1}{x_{2}}. \tag{5.3}\]
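Let us check (5.3). Using \(x_{1}=x_{2}^{-1}\), \(y_{1}=y_{2}x_{2}^{-2}\), and \(\mathrm{d}x_{1}=-x_{2}^{-2}\,\mathrm{d}x_{2}\), we compute on \(U_{0}\cap U_{\infty}\)

\[\frac{\mathrm{d}x_{1}}{(x_{1}-t)y_{1}}=\frac{-x_{2}^{-2}\,\mathrm{d}x_{2}}{(x_{2}^{-1}-t)\,y_{2}x_{2}^{-2}}=-x_{2}\cdot\frac{\mathrm{d}x_{2}}{(1-tx_{2})y_{2}},\]

so the section in (5.1) is \(-x_{2}^{-1}\) times the section in (5.2), which yields \(f_{\infty 0}(1)=-x_{2}^{-1}\).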
### Definition of a connection \(\nabla_{0}\) on \(\mathcal{O}_{C}\oplus(\Omega^{1}_{C}(D))^{-1}\)
For \(\zeta_{1},\zeta_{2},\zeta_{3}\in\mathbb{C}\), we define \(1\)-forms \(\omega_{12}\), \(\omega_{21}\), and \(\omega_{22}\) as follows:
\[\omega_{12} :=\sum_{j=1}^{3}\frac{\zeta_{j}}{2}\cdot\frac{y_{1}+v_{j}}{x_{1}- u_{j}}\cdot\frac{\mathrm{d}x_{1}}{y_{1}}+\left(\frac{A_{1}+A_{2}y_{1}}{x_{1}-t}+A_{3 }+A_{4}x_{1}\right)\frac{\mathrm{d}x_{1}}{y_{1}}\] \[\omega_{21} :=\frac{1}{x_{1}-t}\frac{\mathrm{d}x_{1}}{y_{1}}\] \[\omega_{22} :=\sum_{j=1}^{3}\frac{1}{2}\cdot\frac{y_{1}+v_{j}}{x_{1}-u_{j}} \cdot\frac{\mathrm{d}x_{1}}{y_{1}}+\left(\frac{B_{1}+B_{2}y_{1}}{x_{1}-t}+B_{ 3}\right)\frac{\mathrm{d}x_{1}}{y_{1}}. \tag{5.4}\]
Here \(A_{1},\ldots,A_{4}\in\mathbb{C}\) and \(B_{1},\ldots,B_{3}\in\mathbb{C}\) are parameters. Notice that \(\omega_{12}\otimes\omega_{21}\) is a global section of \((\Omega^{1}_{C})^{\otimes 2}(2D+B)\) and \(\omega_{22}\) is a global section of \(\Omega^{1}_{C}(D+B+\infty)\).
#### 5.1.1. Fixing the polar parts in the logarithmic case
We start by analyzing the case where \(t\notin\{0,1,\lambda,\infty\}\). In this case, we have \(s\neq 0\), so \(t_{1}\neq t_{2}\). We fix complex numbers \(\theta^{\pm}_{1},\theta^{\pm}_{2}\) such that \(\sum_{i=1}^{2}(\theta^{+}_{i}+\theta^{-}_{i})=-1\), which is called Fuchs' relation. Now we assume that the eigenvalues of the matrix
\[\operatorname{res}_{t_{1}}\begin{pmatrix}0&\omega_{12}\\ \omega_{21}&\omega_{22}\end{pmatrix}\]
are given by \(\theta^{+}_{1},\theta^{-}_{1}\) and the eigenvalues of the matrix
\[\operatorname{res}_{t_{2}}\begin{pmatrix}0&\omega_{12}\\ \omega_{21}&\omega_{22}\end{pmatrix}\]
are given by \(\theta^{+}_{2},\theta^{-}_{2}\). (To be coherent with Definition 2, we should write \(\theta_{1,-1}\) and \(\theta_{2,-1}\) for elements of the Cartan subalgebra, and \(\theta^{\pm}_{1,-1}\) and \(\theta^{\pm}_{2,-1}\) for their eigenvalues; however, we drop the subscript \(-1\) for ease of notation, because there are only poles of order \(1\), so no confusion is possible.) Specifically, these conditions read as
\[\operatorname{res}_{(t,s)}\omega_{12}\cdot\operatorname{res}_{(t,s)}\omega_{2 1}=\theta^{+}_{1}\cdot\theta^{-}_{1},\qquad\operatorname{res}_{(t,-s)}\omega_{ 12}\cdot\operatorname{res}_{(t,-s)}\omega_{21}=\theta^{+}_{2}\cdot\theta^{-}_ {2}, \tag{5.5}\]
and
\[\operatorname{res}_{(t,s)}\omega_{22}=\theta^{+}_{1}+\theta^{-}_{1},\qquad \operatorname{res}_{(t,-s)}\omega_{22}=\theta^{+}_{2}+\theta^{-}_{2}. \tag{5.6}\]
Notice that \(\operatorname{res}_{(u_{j},v_{j})}\omega_{22}=1\) for each \(j\). By the residue theorem, \(\operatorname{res}_{\infty}\omega_{22}=-2\). By the assumptions (5.5) and (5.6), we may determine the parameters \(A_{1},A_{2},B_{1}\), and \(B_{2}\).
**Lemma 32**.: _Let complex numbers \(\theta_{1}^{\pm},\theta_{2}^{\pm}\) satisfying Fuchs' relation be given. Then, there exist unique values of the parameters \(A_{1},A_{2},B_{1}\), and \(B_{2}\) such that (5.5) and (5.6) are fulfilled. Moreover, these parameter values are independent of \(u_{1},u_{2},u_{3}\), \(\zeta_{1},\zeta_{2}\), and \(\zeta_{3}\). So the polar parts of \(\omega_{12},\omega_{21}\), and \(\omega_{22}\) at \(t_{i}\) are independent of \(u_{1},u_{2},u_{3}\), \(\zeta_{1},\zeta_{2}\), and \(\zeta_{3}\)._
Proof.: By the equalities (5.5), we have
\[\frac{A_{1}+A_{2}s}{s}\cdot\frac{1}{s}=\theta_{1}^{+}\cdot\theta_{1}^{-}\quad \text{and}\quad\frac{A_{1}-A_{2}s}{-s}\cdot\frac{1}{-s}=\theta_{2}^{+}\cdot \theta_{2}^{-}.\]
By the equalities in (5.6), we have
\[\frac{B_{1}+B_{2}s}{s}=\theta_{1}^{+}+\theta_{1}^{-}\quad\text{and}\quad\frac{ B_{1}-B_{2}s}{-s}=\theta_{2}^{+}+\theta_{2}^{-}.\]
By these equalities, \(A_{1},A_{2},B_{1}\), and \(B_{2}\) are uniquely determined, and they are independent of \(u_{1},u_{2},u_{3}\), \(\zeta_{1},\zeta_{2}\), and \(\zeta_{3}\). It is then clear that the polar parts of \(\omega_{12},\omega_{21}\), and \(\omega_{22}\) at \(t_{i}\) are also independent of these parameters.
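Explicitly, solving the two pairs of linear equations in the proof yields
\[A_{1}=\frac{s^{2}}{2}\big(\theta_{1}^{+}\theta_{1}^{-}+\theta_{2}^{+}\theta_{2}^{-}\big),\quad A_{2}=\frac{s}{2}\big(\theta_{1}^{+}\theta_{1}^{-}-\theta_{2}^{+}\theta_{2}^{-}\big),\quad B_{1}=\frac{s}{2}\big(\theta_{1}^{+}+\theta_{1}^{-}-\theta_{2}^{+}-\theta_{2}^{-}\big),\quad B_{2}=\frac{1}{2}\sum_{i=1}^{2}(\theta_{i}^{+}+\theta_{i}^{-})=-\frac{1}{2},\]
where the last equality uses Fuchs' relation.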
#### 5.1.2. Fixing the polar part in the irregular case
We now study the situation \(t\in\{0,1,\lambda,\infty\}\). For the sake of concreteness, we let \(t=0\), the other cases being similar. Then \(s=0\) and \(t_{1}=t_{2}\), so the divisor \(D\) is non-reduced of length \(2\). A local holomorphic coordinate of the elliptic curve \(C\) in a neighbourhood of \(t_{1}\) is given by \(y_{1}\).
We fix \(\theta_{-2}^{\pm},\theta_{-1}^{+}\in\mathbb{C}\) so that \(\theta_{-2}^{+}\neq\theta_{-2}^{-}\) and set \(\theta_{-1}^{-}=-1-\theta_{-1}^{+}\). (To be coherent with Definition 2, we should write \(\theta_{1,-2}\) and \(\theta_{1,-1}\) for elements of the Cartan subalgebra, and \(\theta_{1,-2}^{\pm}\) and \(\theta_{1,-1}^{\pm}\) for their eigenvalues; however, we omit the subscript \(1\) for ease of notation, because there is only one singular point, so no confusion is possible.)
**Lemma 33**.: _Fix \(\theta_{-2}^{\pm},\theta_{-1}^{\pm}\) as above. Then, there exist unique values \(A_{1},A_{2},B_{1},B_{2}\in\mathbb{C}\) such that the eigenvalues of_
\[\operatorname{res}\begin{pmatrix}0&\omega_{12}\\ \omega_{21}&\omega_{22}\end{pmatrix}\]
_admit Laurent expansions of the form_
\[\left(\theta_{-2}^{\pm}\frac{1}{y_{1}^{2}}+\theta_{-1}^{\pm}\frac{1}{y_{1}}+O( 1)\right)\otimes\mathrm{d}y_{1}.\]
_Moreover, these parameter values are independent of \(u_{i}\) and \(\zeta_{i}\)._
Proof.: By the inverse function theorem, there exists an analytic open subset \(U\subset\mathbb{C}\) and a holomorphic function \(h\colon U\to\mathbb{C}\) satisfying \(h(0)=0\) such that \(C\) is given by the explicit equation \(x_{1}=h(y_{1}^{2})\). It is obvious that this function \(h\) is independent of the choice of \(u_{i},\zeta_{i}\), and it is easy to see that \(h^{\prime}(0)=\frac{1}{\lambda}\neq 0\). From the defining equation of \(C\) we get
\[\frac{\mathrm{d}x_{1}}{y_{1}}=\frac{2\,\mathrm{d}y_{1}}{3x_{1}^{2}-2(1+\lambda) x_{1}+\lambda},\]
so \(\frac{\mathrm{d}x_{1}}{y_{1}}\) is a holomorphic \(1\)-form around \(t_{1}\). Moreover,
\[\frac{\mathrm{d}x_{1}}{x_{1}y_{1}}=\frac{\mathrm{d}y_{1}}{y_{1}^{2}}g(y_{1}^{2})\]
for some holomorphic function \(g\colon U\to\mathbb{C}\) satisfying \(g(0)=2\). The polar parts of the coefficients can be separated as
\[\omega_{12} =(A_{1}+A_{2}y_{1})\frac{\mathrm{d}x_{1}}{x_{1}y_{1}}+O(1)=2(A_{1}+ A_{2}y_{1})\frac{\mathrm{d}y_{1}}{y_{1}^{2}}+O(1)\] \[\omega_{21} =2\frac{\mathrm{d}y_{1}}{y_{1}^{2}}+O(1)\] \[\omega_{22} =(B_{1}+B_{2}y_{1})\frac{\mathrm{d}x_{1}}{x_{1}y_{1}}+O(1)=2(B_{1 }+B_{2}y_{1})\frac{\mathrm{d}y_{1}}{y_{1}^{2}}+O(1).\]
Now, the sum of the eigenvalues must be
\[(\theta_{-2}^{+}+\theta_{-2}^{-})\frac{1}{y_{1}^{2}}+(\theta_{-1}^{+}+\theta_{ -1}^{-})\frac{1}{y_{1}}.\]
These conditions determine
\[B_{1}=\frac{1}{2}(\theta_{-2}^{+}+\theta_{-2}^{-}),\qquad B_{2}=\frac{1}{2}( \theta_{-1}^{+}+\theta_{-1}^{-})=-\frac{1}{2}.\]
Moreover, we have
\[-\omega_{12}\omega_{21}=-4(A_{1}+A_{2}y_{1})\frac{\left(\mathrm{d}y_{1}\right) ^{\otimes 2}}{y_{1}^{4}}+O\left(\frac{1}{y_{1}^{2}}\right).\]
On the other hand, the product of the eigenvalues must have the expansion (up to a global factor \(\left(\mathrm{d}y_{1}\right)^{\otimes 2}\))
\[\theta_{-2}^{+}\theta_{-2}^{-}\frac{1}{y_{1}^{4}}+(\theta_{-2}^{+}\theta_{-1}^ {-}+\theta_{-2}^{-}\theta_{-1}^{+})\frac{1}{y_{1}^{3}}.\]
These conditions then determine
\[A_{1}=-\frac{1}{4}\theta_{-2}^{+}\theta_{-2}^{-},\qquad A_{2}=-\frac{1}{4}(\theta_{-2}^{+}\theta_{-1}^{-}+\theta_{-2}^{-}\theta_{-1}^{+}).\]
This finishes the proof.
#### 5.1.3. Construction of the connection
We define
\[\beta\colon(\Omega_{C}^{1}(D))^{-1}\longrightarrow\Omega_{C}^{1} (D+B) (\mathcal{O}_{C}\text{-morphism})\] \[\delta\colon(\Omega_{C}^{1}(D))^{-1}\longrightarrow(\Omega_{C}^{ 1}(D))^{-1}\otimes\Omega_{C}^{1}(D+B) (\text{connection})\] \[\gamma\colon\mathcal{O}_{C}\longrightarrow(\Omega_{C}^{1}(D))^{ -1}\otimes\Omega_{C}^{1}(D) (\mathcal{O}_{C}\text{-morphism})\]
by using the trivializations (5.1) and (5.2) of \((\Omega_{C}^{1}(D))^{-1}\) as follows:
\[\beta=\begin{cases}\omega_{12}\colon\mathcal{O}_{U_{0}}\to\mathcal{O}_{U_{0}}\otimes\Omega_{C}^{1}(D+B)|_{U_{0}}\\ \mathrm{id}\circ\omega_{12}\circ f_{\infty 0}^{-1}\colon\mathcal{O}_{U_{\infty}}\to\mathcal{O}_{U_{\infty}}\otimes\Omega_{C}^{1}(D+B)|_{U_{\infty}},\end{cases}\]
\[\delta=\begin{cases}\mathrm{d}+\omega_{22}\colon\mathcal{O}_{U_{0}}\to \mathcal{O}_{U_{0}}\otimes\Omega_{C}^{1}(D+B)|_{U_{0}}\\ \mathrm{d}+f_{\infty 0}\circ\omega_{22}\circ f_{\infty 0}^{-1}+f_{\infty 0} \circ\mathrm{d}f_{\infty 0}^{-1}\colon\mathcal{O}_{U_{\infty}}\to\mathcal{O}_{U_{\infty}}\otimes \Omega_{C}^{1}(D+B)|_{U_{\infty}},\end{cases}\]
\[\gamma:=\begin{cases}\omega_{21}\colon\mathcal{O}_{U_{0}}\to\mathcal{O}_{U_{0}} \otimes\Omega_{C}^{1}(D+B)|_{U_{0}}\\ f_{\infty 0}\circ\omega_{21}\circ\mathrm{id}\colon\mathcal{O}_{U_{\infty}}\to \mathcal{O}_{U_{\infty}}\otimes\Omega_{C}^{1}(D+B)|_{U_{\infty}}.\end{cases}\]
Here \(f_{\infty 0}\) is the transition function of \((\Omega_{C}^{1}(D))^{-1}\) described in (5.3). Notice that
\[f_{\infty 0}\circ\omega_{22}\circ f_{\infty 0}^{-1}+f_{\infty 0}\circ\mathrm{d}f_{ \infty 0}^{-1}=\omega_{22}+\frac{\mathrm{d}x_{2}}{x_{2}},\]
which is holomorphic at \(\infty\in C\), since we have \(\operatorname{res}_{\infty}\omega_{22}=-2\). We define a connection as follows:
\[\nabla_{0}:=\mathrm{d}+\begin{pmatrix}0&\beta\\ \gamma&\delta\end{pmatrix}:\mathcal{O}_{C}\oplus(\Omega^{1}_{C}(D))^{-1} \longrightarrow\left(\mathcal{O}_{C}\oplus(\Omega^{1}_{C}(D))^{-1}\right) \otimes\Omega^{1}_{C}(D+B), \tag{5.7}\]
which is the companion normal form. Remark that
\[\operatorname{res}_{q_{j}}(\nabla_{0})=\begin{pmatrix}0&\zeta_{j}\\ 0&1\end{pmatrix}\]
for \(j=1,2,3\).
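This can be checked directly: since \(v_{j}\neq 0\), the function \(x_{1}-u_{j}\) is a local coordinate at \(q_{j}\), and only the \(j\)-th summands of \(\omega_{12}\) and \(\omega_{22}\) have a pole there, giving
\[\operatorname{res}_{q_{j}}\omega_{12}=\frac{\zeta_{j}}{2}\cdot\frac{v_{j}+v_{j}}{v_{j}}=\zeta_{j},\qquad\operatorname{res}_{q_{j}}\omega_{21}=0,\qquad\operatorname{res}_{q_{j}}\omega_{22}=\frac{1}{2}\cdot\frac{v_{j}+v_{j}}{v_{j}}=1.\]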
**Lemma 34**.: _The condition that \(\nabla_{0}\) has apparent singular points at \(q_{1},q_{2},q_{3}\) imposes 3 linear conditions on \(A_{3},A_{4},B_{3}\) in terms of the spectral data and the \(((u_{j},v_{j}),\zeta_{j})\); we can uniquely determine \(A_{3},A_{4},B_{3}\) from these conditions if, and only if, we have_
\[\det\begin{pmatrix}1&u_{1}&\zeta_{1}\\ 1&u_{2}&\zeta_{2}\\ 1&u_{3}&\zeta_{3}\end{pmatrix}\neq 0. \tag{5.8}\]
Proof.: This is just Lemma 7 specialized to the present elliptic case with 2 poles. We set
\[C_{j}=\sum_{j^{\prime}\in\{1,2,3\}\setminus\{j\}}\frac{\zeta_{j^{\prime}}- \zeta_{j}}{2}\cdot\frac{v_{j}+v_{j^{\prime}}}{u_{j}-u_{j^{\prime}}}+\frac{A_{ 1}+A_{2}v_{j}-\zeta_{j}(B_{1}+B_{2}v_{j})-\zeta_{j}^{2}}{u_{j}-t}. \tag{5.9}\]
We denote by \(((a_{j})_{j},(b_{j})_{j},(c_{j})_{j})\) the \(3\times 3\)-matrix
\[((a_{j})_{j},(b_{j})_{j},(c_{j})_{j})=\begin{pmatrix}a_{1}&b_{1}&c_{1}\\ a_{2}&b_{2}&c_{2}\\ a_{3}&b_{3}&c_{3}\end{pmatrix}.\]
The condition that \(q_{1},q_{2},q_{3}\) are apparent singularities means that
\[((1)_{j},(u_{j})_{j},(-\zeta_{j})_{j})\begin{pmatrix}A_{3}\\ A_{4}\\ B_{3}\end{pmatrix}=-\begin{pmatrix}C_{1}\\ C_{2}\\ C_{3}\end{pmatrix}. \tag{5.10}\]
By Cramer's rule, the parameters \(A_{3},A_{4},B_{3}\) of the family of connections \(\nabla_{0}\) are uniquely determined as
\[A_{3} =-\frac{\det((C_{j})_{j},(u_{j})_{j},(\zeta_{j})_{j})}{\det(((1)_ {j},(u_{j})_{j},(\zeta_{j})_{j}))}\qquad A_{4}=-\frac{\det((1)_{j},(C_{j})_{j},(\zeta_{j})_{j})}{\det((1)_{j},(u_{j})_{j},(\zeta_{j})_{j})}\] \[B_{3} =\frac{\det((1)_{j},(u_{j})_{j},(C_{j})_{j})}{\det((1)_{j},(u_{j} )_{j},(\zeta_{j})_{j})},\]
if and only if (5.8) holds.
**Lemma 35**.: _We have:_
\[\det\begin{pmatrix}1&u_{1}&\zeta_{1}\\ 1&u_{2}&\zeta_{2}\\ 1&u_{3}&\zeta_{3}\end{pmatrix}=0\]
_if, and only if, \(E\) is not stable._
Proof.: The vanishing of the determinant means that \(\zeta_{j}=\sigma(q_{j})\) for some global section \(\sigma\in H^{0}(C,\Omega^{1}_{C}(D))\). In other words, the quasi-parabolic structure on \(E_{0}\), given over each \(q_{j}\) by the eigenvector corresponding to the eigenvalue \(1\), lies on a subbundle \((\Omega^{1}_{C}(D))^{-1}\subset E_{0}\). After elementary transformations at each \(q_{j}\), we get \(L\subset E\) with \(\deg(L)=1\) (in fact \(L=\det(E)\)).
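Concretely, the two-dimensional space \(H^{0}(C,\Omega^{1}_{C}(D))\) is spanned by
\[\frac{\mathrm{d}x_{1}}{(x_{1}-t)y_{1}}\quad\text{and}\quad\frac{x_{1}\,\mathrm{d}x_{1}}{(x_{1}-t)y_{1}},\]
so that, in the trivialization (5.1), a section \(\sigma\) corresponds to a function \(a+bx_{1}\), and the condition \(\sigma(q_{j})=\zeta_{j}\) for \(j=1,2,3\) reads \(\zeta_{j}=a+bu_{j}\); such \((a,b)\) exists exactly when the determinant above vanishes.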
### Definition of a rank 2 vector bundle \(E\)
We set
\[\tilde{U}_{0}:=U_{0}\setminus\{q_{1},q_{2},q_{3}\}\quad\text{and}\quad\tilde{U}_{ \infty}:=U_{\infty}\setminus\{q_{1},q_{2},q_{3}\}.\]
We take analytic open subsets \(\tilde{U}_{q_{j}}\)\((j=1,2,3)\) of \(C\) such that \(q_{j}\in\tilde{U}_{q_{j}}\) and \(\tilde{U}_{q_{j}}\) are small enough. In particular, \((u_{j},-v_{j})\not\in\tilde{U}_{q_{j}}\). On \(\tilde{U}_{q_{j}}\), the apparent singular point \(q_{j}\) is defined by \(x_{1}-u_{j}=0\). We have an open covering \((\tilde{U}_{k})_{k\in\{0,\infty,q_{1},q_{2},q_{3}\}}\) of \(C\). We define transition functions \(B_{k_{1}k_{2}}\)\((k_{1},k_{2}\in\{0,\infty,q_{1},q_{2},q_{3}\})\) as follows:
\[B_{0q_{j}}:=\begin{pmatrix}1&\frac{\zeta_{j}}{x_{1}-u_{j}}\\ 0&\frac{1}{x_{1}-u_{j}}\end{pmatrix}:\mathcal{O}_{\tilde{U}_{q_{j}}}^{\oplus 2 }|_{\tilde{U}_{0}\cap\tilde{U}_{q_{j}}}\stackrel{{\sim}}{{ \longrightarrow}}\mathcal{O}_{\tilde{U}_{0}}^{\oplus 2}|_{\tilde{U}_{0}\cap\tilde{U}_{q_{j}}};\] \[B_{0\infty}:=\begin{pmatrix}1&0\\ 0&-x_{2}\end{pmatrix}:\mathcal{O}_{\tilde{U}_{\infty}}^{\oplus 2}|_{\tilde{U}_{0} \cap\tilde{U}_{\infty}}\stackrel{{\sim}}{{\longrightarrow}} \mathcal{O}_{\tilde{U}_{0}}^{\oplus 2}|_{\tilde{U}_{0}\cap\tilde{U}_{\infty}}.\]
Then we have a vector bundle
\[E=\left((\tilde{U}_{k})_{k\in\{0,\infty,q_{1},q_{2},q_{3}\}},\ (B_{k_{1}k_{2}})_{k_{1},k_{2}\in\{0,\infty,q_{1},q_{2},q_{3}\}}\right),\]
where \(E\) is trivial on each \(\tilde{U}_{k}\) and the transition function from \(\tilde{U}_{k_{2}}\) to \(\tilde{U}_{k_{1}}\) is \(B_{k_{1}k_{2}}\).
### Definition of a connection \(\nabla\) on \(E\)
We define matrices \(A_{0},A_{q_{j}},A_{\infty}\) as follows:
\[A_{0} :=\begin{pmatrix}0&\omega_{12}\\ \omega_{21}&\omega_{22}\end{pmatrix}, A_{\infty}:=\begin{pmatrix}0&-x_{2}\omega_{12}\\ -\frac{\omega_{21}}{x_{2}}&\omega_{22}+\frac{\mathrm{d}x_{2}}{x_{2}}\end{pmatrix},\] \[A_{q_{j}} :=\begin{pmatrix}\omega_{11}^{(j)}&\frac{\omega_{12}^{(j)}}{x_{1} -u_{j}}\\ (x_{1}-u_{j})\omega_{21}&\omega_{22}^{(j)}\end{pmatrix}.\]
The \(1\)-forms \(\omega_{12}\), \(\omega_{21}\), and \(\omega_{22}\) are defined in (5.4). The \(1\)-forms \(\omega_{11}^{(j)}\), \(\omega_{12}^{(j)}\), and \(\omega_{22}^{(j)}\) are defined as follows:
\[\omega_{11}^{(j)} =-\frac{\zeta_{j}}{x_{1}-t}\cdot\frac{\mathrm{d}x_{1}}{y_{1}},\] \[\omega_{12}^{(j)} =\sum_{j^{\prime}\in\{1,2,3\}\setminus\{j\}}\frac{\zeta_{j^{ \prime}}-\zeta_{j}}{2}\cdot\frac{y_{1}+v_{j^{\prime}}}{x_{1}-u_{j^{\prime}}} \cdot\frac{\mathrm{d}x_{1}}{y_{1}}\] \[\qquad+\left(\frac{A_{1}+A_{2}y_{1}-\zeta_{j}(B_{1}+B_{2}y_{1})- \zeta_{j}^{2}}{x_{1}-t}+A_{3}+A_{4}x_{1}-\zeta_{j}B_{3}\right)\frac{\mathrm{d} x_{1}}{y_{1}},\] \[\omega_{22}^{(j)} =\frac{1}{2}\cdot\frac{-y_{1}+v_{j}}{x_{1}-u_{j}}\cdot\frac{ \mathrm{d}x_{1}}{y_{1}}+\sum_{j^{\prime}\in\{1,2,3\}\setminus\{j\}}\frac{1}{2 }\cdot\frac{y_{1}+v_{j^{\prime}}}{x_{1}-u_{j^{\prime}}}\cdot\frac{\mathrm{d}x _{1}}{y_{1}}\] \[\qquad+\left(\frac{B_{1}+B_{2}y_{1}}{x_{1}-t}+B_{3}+\frac{\zeta_{ j}}{x_{1}-t}\right)\frac{\mathrm{d}x_{1}}{y_{1}}.\]
**Proposition 36**.:
* _The_ \((1,2)\)_-entry of_ \(A_{q_{j}}\) _is a section of_ \(\Omega_{C}^{1}(D)|_{\tilde{U}_{q_{j}}}\) _for each_ \(j=1,2,3\)_._
* _We define a local connection on each_ \(\tilde{U}_{k}\) \((k\in\{0,\infty,q_{1},q_{2},q_{3}\})\) _by_ \[\begin{cases}\mathrm{d}+A_{0}\colon\mathcal{O}_{\tilde{U}_{0}}^{\oplus 2}\longrightarrow\mathcal{O}_{\tilde{U}_{0}}^{\oplus 2}\otimes\Omega_{C}^{1}(D)|_{\tilde{U}_{0}}&\text{on }\tilde{U}_{0}\\ \mathrm{d}+A_{q_{j}}\colon\mathcal{O}_{\tilde{U}_{q_{j}}}^{\oplus 2}\longrightarrow\mathcal{O}_{\tilde{U}_{q_{j}}}^{\oplus 2}\otimes\Omega_{C}^{1}(D)|_{\tilde{U}_{q_{j}}}&\text{on }\tilde{U}_{q_{j}}\\ \mathrm{d}+A_{\infty}\colon\mathcal{O}_{\tilde{U}_{\infty}}^{\oplus 2}\longrightarrow\mathcal{O}_{\tilde{U}_{\infty}}^{\oplus 2}\otimes\Omega_{C}^{1}(D)|_{\tilde{U}_{\infty}}&\text{on }\tilde{U}_{\infty}.\end{cases}\]
_Then we can glue these local connections. So we have a global connection \(\nabla\colon E\to E\otimes\Omega^{1}_{C}(D)\) on \(E\)._
Proof.: Since \(A_{3},A_{4},B_{3}\) are determined so that these parameters satisfy the condition (5.10), we have
\[\omega^{(j)}_{12}|_{q_{j}}=\left(C_{j}+A_{3}+A_{4}u_{j}-\zeta_{j}B_{3}\right) \frac{\mathrm{d}x_{1}|_{q_{j}}}{v_{j}}=0.\]
Here, \(C_{j}\) is as in (5.9). So \(\frac{\omega^{(j)}_{12}}{x_{1}-u_{j}}\) has no pole at \(q_{j}\) for each \(j=1,2,3\). Since we have
\[B^{-1}_{k_{1}k_{2}}A_{k_{1}}B_{k_{1}k_{2}}+B^{-1}_{k_{1}k_{2}}\,\mathrm{d}B_{k _{1}k_{2}}=A_{k_{2}}\]
for each \(k_{1},k_{2}\in\{0,\infty,q_{1},q_{2},q_{3}\}\), the connection \(\nabla\) acting on \(E\) is defined globally.
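For instance, for \(k_{1}=0\) and \(k_{2}=q_{j}\), a direct computation with \(B_{0q_{j}}\) gives
\[B_{0q_{j}}^{-1}A_{0}B_{0q_{j}}+B_{0q_{j}}^{-1}\,\mathrm{d}B_{0q_{j}}=\begin{pmatrix}-\zeta_{j}\omega_{21}&\dfrac{\omega_{12}-\zeta_{j}\omega_{22}-\zeta_{j}^{2}\omega_{21}}{x_{1}-u_{j}}\\ (x_{1}-u_{j})\omega_{21}&\omega_{22}+\zeta_{j}\omega_{21}-\dfrac{\mathrm{d}x_{1}}{x_{1}-u_{j}}\end{pmatrix},\]
whose entries are exactly \(\omega_{11}^{(j)}\), \(\frac{\omega_{12}^{(j)}}{x_{1}-u_{j}}\), \((x_{1}-u_{j})\omega_{21}\), and \(\omega_{22}^{(j)}\).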
**Remark 37**.: _By Definition 13 in Section 3.3, we have trivializations of \(E\). On \(C\setminus\{t_{1},t_{2}\}\), the trivializations in Definition 13 coincide with the trivializations described in the present section. The trivialization in Definition 13 at \(t_{i}\)\((i=1,2)\) is defined so that the residue matrix (respectively, the polar part in the non-reduced case) is a diagonal matrix. On the other hand, in the trivializations described in the present section, the residue matrix at \(t_{i}\)\((i=1,2)\) (respectively, the polar part) is not a diagonal matrix. The reason for diagonalizing the residue matrix at \(t_{i}\)\((i=1,2)\) there is that the corresponding description of the variation (3.11) satisfies the compatibility conditions of the quasi-parabolic structure in \(\mathcal{F}^{0}\) and \(\mathcal{F}^{1}\) of (3.1). Here, however, we are interested in the behavior of the connection \(\nabla\) around \(q_{j}\)\((j=1,2,3)\), so we do not diagonalize the residue matrices at \(t_{i}\)\((i=1,2)\) (respectively, the polar part when \(D\) is non-reduced)._
### Canonical coordinates
We will calculate the canonical coordinates introduced in Section 3.5. For the transition functions \(B_{k_{1}k_{2}}\)\((k_{1},k_{2}\in\{0,\infty,q_{1},q_{2},q_{3}\})\) of \(E\), we have transition functions of \(\det(E)\) as follows:
\[\det(B_{0q_{j}})=\frac{1}{x_{1}-u_{j}}\colon\mathcal{O}_{\tilde{U}_{q_{j}}}|_ {\tilde{U}_{0}\cap\tilde{U}_{q_{j}}}\stackrel{{\sim}}{{\longrightarrow}} \mathcal{O}_{\tilde{U}_{0}}|_{\tilde{U}_{0}\cap\tilde{U}_{q_{j}}};\]
\[\det(B_{0\infty})=-x_{2}\colon\mathcal{O}_{\tilde{U}_{\infty}}|_{\tilde{U}_{0} \cap\tilde{U}_{\infty}}\stackrel{{\sim}}{{\longrightarrow}} \mathcal{O}_{\tilde{U}_{0}}|_{\tilde{U}_{0}\cap\tilde{U}_{\infty}}.\]
So we have a cocycle \((\det(B_{k_{1}k_{2}}))_{k_{1},k_{2}\in\{0,\infty,q_{1},q_{2},q_{3}\}}\), which gives a class of \(H^{1}(C,\mathcal{O}^{*}_{C})\). We have
\[\mathrm{d}\log(\det(B_{0q_{j}}))=-\frac{\mathrm{d}x_{1}}{x_{1}-u_{j}}\quad \text{and}\quad\mathrm{d}\log(\det(B_{0\infty}))=\frac{\mathrm{d}x_{2}}{x_{2 }},\]
and these \(1\)-forms give a class of \(H^{1}(C,\Omega^{1}_{C})\). We denote by \(c_{1}\) and \(\mathbf{\Omega}(D,c_{1})\) the class of \(H^{1}(C,\Omega^{1}_{C})\) and the total space of the twisted cotangent bundle corresponding to \(c_{1}\), respectively. We have the following description of \(\mathrm{tr}(\nabla)\):
\[\mathrm{tr}(\nabla)=\begin{cases}\mathrm{d}+\omega_{22}\colon\mathcal{O}_{\tilde{U}_{0}}\longrightarrow\mathcal{O}_{\tilde{U}_{0}}\otimes\Omega^{1}_{C}(D)|_{\tilde{U}_{0}}&\text{on }\tilde{U}_{0}\\ \mathrm{d}+\omega^{(j)}_{11}+\omega^{(j)}_{22}\colon\mathcal{O}_{\tilde{U}_{q_{j}}}\longrightarrow\mathcal{O}_{\tilde{U}_{q_{j}}}\otimes\Omega^{1}_{C}(D)|_{\tilde{U}_{q_{j}}}&\text{on }\tilde{U}_{q_{j}}\\ \mathrm{d}+\omega_{22}+\frac{\mathrm{d}x_{2}}{x_{2}}\colon\mathcal{O}_{\tilde{U}_{\infty}}\longrightarrow\mathcal{O}_{\tilde{U}_{\infty}}\otimes\Omega^{1}_{C}(D)|_{\tilde{U}_{\infty}}&\text{on }\tilde{U}_{\infty}.\end{cases}\]
Notice that we have
\[\omega^{(j)}_{11}+\omega^{(j)}_{22}=\omega_{22}+\mathrm{d}\log(\det(B_{0q_{j}} )),\text{ and }\]
\[\omega_{22}+\frac{\mathrm{d}x_{2}}{x_{2}}=\omega_{22}+\mathrm{d}\log(\det(B_{0 \infty})).\]
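Both identities are instances of taking the trace in the gauge-transformation rule: since \(\operatorname{tr}(B^{-1}AB)=\operatorname{tr}(A)\) and \(\operatorname{tr}(B^{-1}\,\mathrm{d}B)=\mathrm{d}\log(\det(B))\) (Jacobi's formula), we have
\[\operatorname{tr}\big(B_{k_{1}k_{2}}^{-1}A_{k_{1}}B_{k_{1}k_{2}}+B_{k_{1}k_{2}}^{-1}\,\mathrm{d}B_{k_{1}k_{2}}\big)=\operatorname{tr}(A_{k_{1}})+\mathrm{d}\log(\det(B_{k_{1}k_{2}})).\]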
So these connection matrices of \(\operatorname{tr}(\nabla)\) give an explicit global section of \(\mathbf{\Omega}(D,c_{1})\to C\). We consider a section of \(\mathbf{\Omega}(D,c_{1})|_{\tilde{U}_{q_{j}}}\to\tilde{U}_{q_{j}}\)
\[\frac{\zeta_{j}\operatorname{d}\!x_{1}}{(x_{1}-t)y_{1}}+\omega_{11}^{(j)}+ \omega_{22}^{(j)}.\]
For this section on \(\tilde{U}_{q_{j}}\), we define \(p_{j}\) (\(j=1,2,3\)) by
\[p_{j}=\operatorname{res}_{q_{j}}\left(\frac{\zeta_{j}}{x_{1}-u_{j}}\cdot\frac{ \operatorname{d}\!x_{1}}{(x_{1}-t)y_{1}}\right)+\operatorname{res}_{q_{j}} \left(\frac{\omega_{11}^{(j)}+\omega_{22}^{(j)}}{x_{1}-u_{j}}\right).\]
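Since \(x_{1}-u_{j}\) is a local coordinate at \(q_{j}\), the first residue evaluates to
\[\operatorname{res}_{q_{j}}\left(\frac{\zeta_{j}}{x_{1}-u_{j}}\cdot\frac{\mathrm{d}x_{1}}{(x_{1}-t)y_{1}}\right)=\frac{\zeta_{j}}{(u_{j}-t)v_{j}},\]
which is the first term of the formula for \(p_{j}\) below.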
Then we have a map
\[(E,\nabla)\longmapsto(u_{1},u_{2},u_{3},\zeta_{1},\zeta_{2},\zeta_{3}) \longmapsto(u_{1},u_{2},u_{3},p_{1},p_{2},p_{3}),\]
where
\[p_{j}=\frac{\zeta_{j}}{(u_{j}-t)v_{j}}-\frac{K^{\prime}(u_{j})}{4v_{j}^{2}}+\sum_{j^{\prime}\in\{1,2,3\}\setminus\{j\}}\frac{1}{2}\cdot\frac{v_{j}+v_{j^{\prime}}}{u_{j}-u_{j^{\prime}}}\cdot\frac{1}{v_{j}}+\left(\frac{B_{1}+B_{2}v_{j}}{u_{j}-t}+B_{3}\right)\frac{1}{v_{j}}.\]
Here we set \(K(x_{1}):=x_{1}(x_{1}-1)(x_{1}-\lambda)\). Note that \(B_{1}\) and \(B_{2}\) are determined by Lemma 32, while \(B_{3}\) is determined by Lemma 34 and depends on \(\zeta_{1},\zeta_{2}\), and \(\zeta_{3}\). The symplectic structure is \(\sum_{j=1}^{3}\operatorname{d}\!p_{j}\wedge\operatorname{d}\!u_{j}\) by Theorem 20.
## 6. Canonical coordinates revisited and another proof of birationality
In this section, we will give another proof of Proposition 17. For simplicity, we will consider the cases where \(D\) is a reduced effective divisor. Let \((E,\nabla)\in M^{0}_{X}\) be a connection on a fixed irregular curve \(X=(C,D,\{z_{i}\}_{i\in I},\{\theta_{i}\}_{i\in I},\theta_{res})\) with genericity conditions as before.
We set \(D=t_{1}+\cdots+t_{n}\) and the connection is given by
\[\nabla\colon E\longrightarrow E\otimes\Omega^{1}_{C}(D).\]
In this section, we assume that \(g=g(C)\geq 1\) and \(n\geq 1\) as in the previous sections. Moreover if \(g=g(C)=1\), we assume that \(n\geq 2\).
Note that we have the unique extension
\[0\longrightarrow\mathcal{O}_{C}\longrightarrow E\longrightarrow L_{0}\longrightarrow 0 \tag{6.1}\]
with \(L_{0}=\det(E)\). Moreover for \((E,\nabla)\in M^{0}_{X}\) we have \(\deg L_{0}=2g-1\) and \(\dim_{\mathbb{C}}H^{0}(C,E)=1\). Then we can define apparent singularities \(q_{1},\ldots,q_{N}\in C\) where \(N=4g-3+n\). Since \(\deg\Omega^{1}_{C}(D)=2g-2+n>2g-2\) and \(\deg L_{0}=2g-1\geq 1\), we see that \(\dim_{\mathbb{C}}H^{0}(C,\Omega^{1}_{C}(D))=g-1+n\geq 2\). We can choose \(\gamma\in H^{0}(C,\Omega^{1}_{C}(D))\) and \(s\in H^{0}(C,L_{0})\) whose zeros are given by
\[\{\gamma=0\}=\{c_{1},\ldots,c_{2g-2+n}\}\quad\text{and}\quad\{s=0\}=\{u_{1}, \ldots,u_{2g-1}\}.\]
We assume the following genericity conditions:
1. \(u_{i_{1}}\neq u_{i_{2}}\) (for \(i_{1}\neq i_{2}\)), and \(c_{k_{1}}\neq c_{k_{2}}\) (for \(k_{1}\neq k_{2}\));
2. \(\{u_{1},\ldots,u_{2g-1}\}\cap\{c_{1},\ldots,c_{2g-2+n}\}=\emptyset\);
3. \(\{q_{1},\ldots,q_{N}\}\cap\{u_{1},\ldots,u_{2g-1},c_{1},\ldots,c_{2g-2+n}\}=\emptyset\).
Set
\[U_{0}=C\setminus\{u_{1},\dots,u_{2g-1},c_{1},\dots,c_{2g-2+n}\}.\]
Moreover we take a small analytic neighborhood \(U_{i}\) of \(u_{i}\) for \(1\leq i\leq 2g-1\) and \(U_{2g-1+k}\) of \(c_{k}\) for \(1\leq k\leq 2g-2+n\). For \(i=1,\dots,4g-3+n\), we can identify \(U_{i}\) with a unit disc \(\Delta=\{z\in\mathbb{C}\mid|z|<1\}\) with the origin corresponding to \(u_{i}\) (\(1\leq i\leq 2g-1\)) and \(c_{i-2g+1}\) (\(2g\leq i\leq 4g-3+n\)). We can assume that \(U_{i_{1}}\cap U_{i_{2}}=\emptyset\) for \(i_{1}\neq i_{2}\), \(i_{1},i_{2}\geq 1\). Note that since \(U_{0}\) is an affine variety and \(U_{0}\cap U_{i}\cong\Delta\setminus\{0\}\) for \(i=1,\dots,4g-3+n\), the covering \(C=U_{0}\cup U_{1}\cup\dots\cup U_{4g-3+n}\) gives a Stein covering of \(C\). For \(0\leq i\leq 4g-3+n\), we have nonzero sections \(\boldsymbol{e}_{1}^{(i)}\in\mathcal{O}_{U_{i}},\boldsymbol{e}_{2}^{(i)}\in(L_{0})_{|U_{i}}\) giving a trivialization of \(E\) on each \(U_{i}\):
\[E_{|U_{i}}\simeq\mathcal{O}_{U_{i}}\boldsymbol{e}_{1}^{(i)}\oplus\mathcal{O}_{U_{i}}\boldsymbol{e}_{2}^{(i)}.\]
Moreover we have a transition matrix \(H_{0i}\) on \(U_{0}\cap U_{i}\) of the form
\[H_{0i}=\begin{pmatrix}1&h_{0i}\\ 0&g_{0i}\end{pmatrix} \tag{6.2}\]
satisfying
\[(\boldsymbol{e}_{1}^{(i)},\boldsymbol{e}_{2}^{(i)})=(\boldsymbol{e}_{1}^{(0) },\boldsymbol{e}_{2}^{(0)})H_{0i}=(\boldsymbol{e}_{1}^{(0)},h_{0i}\boldsymbol {e}_{1}^{(0)}+g_{0i}\boldsymbol{e}_{2}^{(0)}). \tag{6.3}\]
Here \(\{h_{0i}\}_{i}\in\operatorname{Ext}^{1}(L_{0},\mathcal{O}_{C})\cong H^{1}(C, L_{0}^{-1})\) corresponds to the extension class of (6.1) and \(\{g_{0i}\}_{i}\in H^{1}(C,\mathcal{O}_{C}^{*})\) gives the transition function of \(L_{0}=\det(E)\). With these trivializations we have connection matrices \(A^{(i)}\):
\[\nabla(\boldsymbol{e}_{1}^{(i)},\boldsymbol{e}_{2}^{(i)})=(\boldsymbol{e}_{ 1}^{(i)},\boldsymbol{e}_{2}^{(i)})A^{(i)} \tag{6.4}\]
of the form
\[A^{(i)}=\begin{pmatrix}a_{11}^{(i)}\gamma_{i}&a_{12}^{(i)}\gamma_{i}\\ a_{21}^{(i)}\gamma_{i}&a_{22}^{(i)}\gamma_{i}\end{pmatrix}. \tag{6.5}\]
Here \(a_{kl}^{(i)}\in\Gamma(U_{i},\mathcal{O}_{U_{i}})\) and \(\gamma_{i}\in\Gamma(U_{i},\Omega_{U_{i}}^{1}(D))\). We set \(\gamma_{0}=\gamma_{|U_{0}}\) as above.
From (6.3) and (6.4), we can verify the following
**Lemma 38**.: _For \(1\leq i\leq 4g-3+n\), on \(U_{0}\cap U_{i}\), we have_
\[A^{(i)}=H_{0i}^{-1}A^{(0)}H_{0i}+H_{0i}^{-1}\operatorname{d}H_{0i}. \tag{6.6}\]
_Specifically, we have the following identities:_
\[a_{21}^{(i)}\gamma_{i}=a_{21}^{(0)}\gamma_{0}g_{0i}^{-1};\quad\text{and} \tag{6.7}\]
\[a_{22}^{(i)}\gamma_{i}=a_{22}^{(0)}\gamma_{0}+a_{21}^{(0)}\gamma_{0}h_{0i}g_{0 i}^{-1}+\frac{\operatorname{d}g_{0i}}{g_{0i}}. \tag{6.8}\]
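These identities follow from the triangular shape (6.2) of \(H_{0i}\): since
\[H_{0i}^{-1}=\begin{pmatrix}1&-h_{0i}g_{0i}^{-1}\\ 0&g_{0i}^{-1}\end{pmatrix},\]
computing the second row of \(H_{0i}^{-1}A^{(0)}H_{0i}+H_{0i}^{-1}\,\mathrm{d}H_{0i}\) gives exactly (6.7) and (6.8).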
The identity (6.7) shows that \(a_{21}^{(i)}\gamma_{i}\) defines a section of \(H^{0}(C,\Omega^{1}_{C}(D)\otimes L_{0})\), and the zeros of this section are nothing but the apparent singularities \(q_{1},\dots,q_{N}\). Evaluating the identity (6.8) at \(q_{j}\) (\(j=1,\dots,N\)), where \(a_{21}^{(0)}\gamma_{0}\) vanishes, we then have
\[(a_{22}^{(i)}\gamma_{i})_{q_{j}}=(a_{22}^{(0)}\gamma_{0})_{q_{j}}+\left(\frac {\operatorname{d}g_{0i}}{g_{0i}}\right)_{q_{j}} \tag{6.9}\]
Noting that the cohomology class of the cocycle \(\left\{\frac{\operatorname{d}g_{0i}}{g_{0i}}\right\}_{i}\) corresponds to \(c_{d}=c_{1}(L_{0})\), from (6.9), we have the following
**Proposition 39**.: _The data \((E,\nabla)\in M^{0}_{X}\) defines \(N\) points \((q_{j},\tilde{p}_{j})\), \(1\leq j\leq N\), on the total space of \(\mathbf{\Omega}(D,c_{d})\) by the formula_
\[\tilde{p}_{j}=(a^{(0)}_{22}\gamma_{0})_{q_{j}}\in\Omega^{1}_{C}(D,c_{d})_{|q_{j}} \tag{6.10}\]
The above definition of \(\tilde{p}_{j}\) does not depend on the choice of the sections \(s\in H^{0}(C,L_{0})\) and \(\gamma\in H^{0}(C,\Omega^{1}_{C}(D))\) and defines the same map as in Definition 16:
\[f_{\mathrm{App}}\colon M^{0}_{X}\longrightarrow\mathrm{Sym}^{N}(\mathbf{ \Omega}(D,c_{d})). \tag{6.11}\]
Now we take a local coordinate near \(q_{j}\), which we also denote by \(q_{j}\), and we write \(\gamma=c(q_{j})\,\mathrm{d}q_{j}\) for some local holomorphic function \(c(q_{j})\). Then we have
\[\tilde{p}_{j}=p_{j}\,\mathrm{d}q_{j}\]
with
\[p_{j}=a^{(0)}_{22}(q_{j})c(q_{j}).\]
As we have proved in Theorem 20, the map \(f_{\mathrm{App}}\) is symplectic.
### From a connection to a Higgs field
Keeping the notation, let us consider the section \(s\in H^{0}(C,L_{0})\) as before, and set \(s^{(0)}=s\). Taking a trivialization of \(L_{0|U_{i}}\) over \(U_{i}\), we have a holomorphic function \(s^{(i)}\in\Gamma(U_{i},\mathcal{O}_{U_{i}})\) such that
\[s^{(0)}=g_{0i}s^{(i)}.\]
Note that \(s^{(i)}\) has a zero at \(u_{i}\in U_{i}\) for \(1\leq i\leq 2g-1\). Set \(D(s)=u_{1}+\cdots+u_{2g-1}\). We can show the following
**Lemma 40**.: _There exists a connection_
\[\nabla_{1}\colon E\longrightarrow E\otimes\Omega^{1}_{C}(D(s))\]
_such that for each \(0\leq i\leq N=4g-3+n\), on \(U_{i}\) it has the form_
\[\nabla^{(i)}_{1}=\mathrm{d}+S^{(i)}=\mathrm{d}+\begin{pmatrix}0&-\frac{\beta_ {i}}{s^{(i)}}\\ 0&-\frac{\mathrm{d}s^{(i)}}{s^{(i)}}\end{pmatrix}\]
_with respect to the trivialization \((\boldsymbol{e}^{(i)}_{1},\boldsymbol{e}^{(i)}_{2})\). Here \(\beta_{i}\in\Gamma(U_{i},\Omega^{1}_{U_{i}})\)._
Proof.: Since \(s^{(0)}=g_{0i}s^{(i)}\), one has
\[\frac{\mathrm{d}s^{(0)}}{s^{(0)}}=\frac{\mathrm{d}g_{0i}}{g_{0i}}+\frac{ \mathrm{d}s^{(i)}}{s^{(i)}}\]
in \(U_{0i}=U_{0}\cap U_{i}\). The compatibility condition for connection matrices \(S^{(i)}\) is
\[S^{(i)}=H^{-1}_{0i}S^{(0)}H_{0i}+H^{-1}_{0i}\,\mathrm{d}H_{0i}. \tag{6.12}\]
The right hand side of (6.12) is
\[\begin{pmatrix}0&-g_{0i}\frac{\beta_{0}}{s^{(0)}}+h_{0i}\left(\frac{\mathrm{d }s^{(0)}}{s^{(0)}}-\frac{\mathrm{d}g_{0i}}{g_{0i}}\right)+\mathrm{d}h_{0i}\\ 0&-\frac{\mathrm{d}s^{(0)}}{s^{(0)}}+\frac{\mathrm{d}g_{0i}}{g_{0i}}\end{pmatrix} \tag{6.13}\]
Since \(\{h_{0i}\}_{i}\) is a class in \(H^{1}(C,L_{0}^{-1})\) and \(s\in H^{0}(C,L_{0})\), the family \(\{s^{(i)}h_{0i}\}_{i}\) defines a class in \(H^{1}(C,\mathcal{O}_{C})\). Then, by Hodge theory, the class of \(\{\mathrm{d}(s^{(i)}h_{0i})\}_{i}\) in \(H^{1}(C,\Omega^{1}_{C})\) vanishes, so there exist \(\beta_{i}\in\Gamma(U_{i},\Omega^{1}_{U_{i}})\) such that
\[\mathrm{d}(s^{(i)}h_{0i})=\beta_{0}-\beta_{i}.\]
We choose such \(\beta_{i}\)'s in the formula for \(S^{(i)}\). Then we have
\[\mathrm{d}h_{0i}=-h_{0i}\frac{\mathrm{d}s^{(i)}}{s^{(i)}}+g_{0i}\frac{\beta_{0}}{ s^{(0)}}-\frac{\beta_{i}}{s^{(i)}}.\]
Then the right hand side of (6.13) becomes
\[\begin{pmatrix}0&-\frac{\beta_{i}}{s^{(i)}}\\ 0&-\frac{\mathrm{d}s^{(i)}}{s^{(i)}}\end{pmatrix}\]
as desired.
For any \((E,\nabla)\in M^{0}_{X}\), the difference
\[\nabla-\nabla_{1}\colon E\longrightarrow E\otimes\Omega^{1}_{C}(D+D(s))\]
defines an \(\mathcal{O}_{C}\)-homomorphism, that is, a rational Higgs field on \(E\). We reprove Proposition 17.
**Theorem 41**.: _For generic \((E,\nabla)\in M^{0}_{X}\), the point \((q_{j},\tilde{p}_{j})_{j=1,\dots,N}\in\mathrm{Sym}^{N}(\boldsymbol{\Omega}(D, c_{d}))\) determines \((E,\nabla)\). So the map \(f_{\mathrm{App}}\) is birational._
Proof.: Consider the Higgs field
\[\Phi=\Phi_{\nabla}=\nabla-\nabla_{1}\colon E\longrightarrow E\otimes\Omega^ {1}_{C}(D+D(s))\]
where \(D=t_{1}+\dots+t_{n}\) and \(D(s)=u_{1}+\dots+u_{2g-1}\) as in the notation above. We assume that the set of apparent singularities \(q_{1},\dots,q_{N}\) of \((E,\nabla)\) is disjoint from \(D\) and \(D(s)\). We will consider the characteristic curve of \(\Phi\). On \(U_{i}\), we have
\[\Phi_{i}=A^{(i)}-S^{(i)}=\begin{pmatrix}\tilde{a}_{11}&\tilde{a}_{12}+\frac{ \beta_{i}}{s^{(i)}}\\ \tilde{a}_{21}&\tilde{a}_{22}+\frac{\mathrm{d}s^{(i)}}{s^{(i)}}\end{pmatrix}.\]
The characteristic curve \(C_{s}\) can be defined in the total space \(\boldsymbol{\Omega}(D+D(s))\) of the line bundle \(\Omega^{1}_{C}(D+D(s))\) by
\[C_{s}:x^{2}-b_{1}x-b_{2}=0\]
with \(b_{i}\in H^{0}(C,(\Omega^{1}_{C}(D+D(s)))^{\otimes i})\), and \(x\) the canonical section. The dimension of the family of spectral curves is thus given by
\[\dim H^{0}(C,\Omega^{1}_{C}(D+D(s)))+\dim H^{0}(C,(\Omega^{1}_{C }(D+D(s)))^{\otimes 2}) = N+1-g+2N+1-g\] \[= 3N+2-2g=3(4g-3+n)+2-2g\] \[= 10g-7+3n.\]
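Here both dimensions follow from the Riemann–Roch theorem: \(\deg\Omega^{1}_{C}(D+D(s))=(2g-2+n)+(2g-1)=4g-3+n=N>2g-2\), so for \(i=1,2\) we get
\[\dim H^{0}\big(C,(\Omega^{1}_{C}(D+D(s)))^{\otimes i}\big)=iN+1-g.\]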
Then \(\Phi\) is constrained by the following conditions.
1. At \(t_{i},i=1,\dots,n\), \(\Phi\) has eigenvalues fixed by data \(X\). These impose \(2n-1\) conditions because of the Fuchs relation.
2. At \(u_{k}\), \(k=1,\dots,2g-1\), take a local coordinate \(z_{k}\) such that \(z_{k}(u_{k})=0\). Then \(\Phi\) has the following form near \(z_{k}=0\) \[\Phi=\begin{pmatrix}0&\frac{\beta_{i}(0)}{z_{k}}\\ 0&\frac{\mathrm{d}z_{k}}{z_{k}}\end{pmatrix}+\text{holomorphic}.\] Then the eigenvalues of the residue matrix are \(0,1\), and \(\beta_{i}(0)\) gives a restriction on \(C_{s}\). In total, this gives \(3\times(2g-1)\) conditions.
3. At \(q_{j},j=1,\ldots,N\), the points \(\tilde{a}_{22}(q_{j})+\frac{ds^{(i)}}{s^{(i)}}(q_{j})=\tilde{p}_{j}+c_{j}\in \mathbf{\Omega}(D+D(s))\) lie on the characteristic curve \(C_{s}\). These give \(N=4g-3+n\) conditions.
For generic choice of \(q_{1},\cdots,q_{N}\) and \(s\in H^{0}(C,L_{0})\), we can see using the method of Lemma 7 and Proposition 17 that these conditions are independent, so we obtain a total of
\[2n-1+3(2g-1)+(4g-3+n)=10g-7+3n\]
conditions, so these determine the spectral curve \(C_{s}\). Now the divisor \(\mu=\sum_{j=1}^{N}(\tilde{p}_{j}+c_{j})+\sum_{k=1}^{2g-1}(1_{k})\) determines the rank \(1\) sheaf \(\mathcal{O}_{C_{s}}(\mu)\), where \((1_{k})\in C_{s}\) denotes the point over \(u_{k}\) corresponding to the eigenvalue \(1\) of the residue of \(\Phi\) at \(u_{k}\). Then \((\pi\colon C_{s}\longrightarrow C,\mathcal{O}_{C_{s}}(\mu))\) determines \((E,\Phi)\) uniquely by [3, Proposition 3.6]. Hence \(E\) and \(\nabla=\Phi+\nabla_{1}\) are determined uniquely.
|
2303.00131 | A Low-Complexity Solution to Sum Rate Maximization for IRS-assisted
SWIPT-MIMO Broadcasting | This paper focuses on the fundamental problem of maximizing the achievable
weighted sum rate (WSR) at information receivers (IRs) in an intelligent
reflecting surface (IRS) assisted simultaneous wireless information and power
transfer system under a multiple-input multiple-output (SWIPT-MIMO) setting,
subject to a quality-of-service (QoS) constraint at the energy receivers (ERs).
Notably, due to the coupling between the transmit precoding matrix and the
passive beamforming vector in the QoS constraint, the formulated non-convex
optimization problem is challenging to solve. We first decouple the design
variables in the constraints following a penalty dual decomposition method, and
then apply an alternating gradient projection algorithm to achieve a stationary
solution to the reformulated optimization problem. The proposed algorithm
nearly doubles the WSR compared to that achieved by a block-coordinate descent
(BCD) based benchmark scheme. At the same time, the complexity of the proposed
scheme grows linearly with the number of IRS elements while that of the
benchmark scheme is proportional to the cube of the number of IRS elements. | Vaibhav Kumar, Anastasios Papazafeiropoulos, Muhammad Fainan Hanif, Le-Nam Tran, Mark F. Flanagan | 2023-02-28T23:31:43Z | http://arxiv.org/abs/2303.00131v1 | # A Low-Complexity Solution to Sum Rate Maximization for IRS-assisted SWIPT-MIMO Broadcasting
###### Abstract
This paper focuses on the fundamental problem of maximizing the achievable weighted sum rate (WSR) at information receivers (IRs) in an intelligent reflecting surface (IRS) assisted simultaneous wireless information and power transfer system under a multiple-input multiple-output (SWIPT-MIMO) setting, subject to a quality-of-service (QoS) constraint at the energy receivers (ERs). Notably, due to the coupling between the transmit precoding matrix and the passive beamforming vector in the QoS constraint, the formulated non-convex optimization problem is challenging to solve. We first decouple the design variables in the constraints following a penalty dual decomposition method, and then apply an alternating gradient projection algorithm to achieve a stationary solution to the reformulated optimization problem. The proposed algorithm nearly doubles the WSR compared to that achieved by a block-coordinate descent (BCD) based benchmark scheme. At the same time, the complexity of the proposed scheme grows linearly with the number of IRS elements while that of the benchmark scheme is proportional to the cube of the number of IRS elements.
Intelligent reflecting surface, MIMO, SWIPT, energy harvesting, penalty dual decomposition.
## I Introduction
The advancement in meta-materials technology has led to the development of intelligent reflecting surfaces (IRSs), which are foreseen as a groundbreaking hardware technology for beyond-fifth-generation (B5G) and sixth-generation (6G) wireless communications systems [1]. Recent research has shown promising advantages of IRSs in supporting energy-efficient, high-speed communication while also supporting massive connectivity. In parallel, simultaneous wireless information and power transfer (SWIPT) is another appealing technology to cater to the energy requirements of low-powered Internet-of-Things (IoT) devices [2, 3]. In recent years, a significant research effort has been made towards investigating the benefits of IRSs in SWIPT-aided wireless communications systems, especially to improve the power transfer efficiency and to increase the operational range of energy receivers (ERs) [4].
In this context, one of the early works on IRS-aided SWIPT considered the problem of maximizing the weighted received sum power at the ERs subject to a signal-to-interference-plus-noise ratio (SINR) constraint at the information receivers (IRs) in an IRS-assisted SWIPT multiple-input single-output (MISO) system [5]. Similarly, the fundamental problem of weighted sum rate (WSR) maximization (at the IRs) in an IRS-assisted SWIPT multiple-input multiple-output (MIMO) system, subject to a minimum weighted sum harvested power constraint (at the ERs) was considered in [6]. It is important to note that the beamforming optimization problems in IRS-assisted systems are challenging to solve in general, due to the coupling of the design variables in the objective and/or constraint(s). Although alternating optimization (AO) based schemes are one of the most popular approaches to tackle such problems in IRS-assisted communications, a near-optimal solution is not guaranteed if the design variables are coupled in the constraints (see [7] and the references therein).
It is well-known that the problem of WSR maximization in a SWIPT-MIMO system is similar to that of WSR maximization in a MIMO system subject to one or more interference constraints (e.g., underlay spectrum sharing MIMO systems). Therefore, for the WSR maximization problem in the IRS-assisted SWIPT-MIMO system, the authors in [6] followed the approach of WSR maximization proposed in [8] and [9]. In particular, to obtain the optimal transmit precoding matrices (TPMs) and the passive beamforming vector at the IRS that jointly maximize the WSR, a block-coordinate descent (BCD) method was used in [6]. It is interesting to note that the shortcomings (in terms of performance and computational complexity) of the BCD-based beamforming design approach for the IRS-assisted MIMO underlay spectrum sharing system were highlighted in [10], where the authors also proposed a high-performance and low-complexity solution using a penalty dual decomposition based alternating gradient projection (PDDAGP) method. Motivated by this observation, in this paper we propose the PDDAGP method for optimal beamforming design in the IRS-aided SWIPT-MIMO system, which results in a significantly higher WSR than that achieved by the BCD-based approach, and also incurs a notably lower complexity compared to the benchmark scheme.
_Notations:_ Bold uppercase and lowercase letters respectively denote matrices and vectors. For a complex-valued
matrix \(\mathbf{X}\), the (ordinary) transpose, conjugate transpose, trace, determinant, and Frobenius norm are denoted by \(\mathbf{X}^{\mathsf{T}}\), \(\mathbf{X}^{\mathsf{H}}\), \(\mathsf{tr}(\mathbf{X})\), \(|\mathbf{X}|\), and \(\|\mathbf{X}\|\), respectively. The absolute value of a complex number \(x\) is denoted by \(|x|\). The vector space of all complex-valued matrices of size \(M\times N\) is denoted by \(\mathbb{C}^{M\times N}\). Using \(\mathsf{vec}_{\mathsf{d}}(\mathbf{X})\) we denote a column vector formed from the elements on the main diagonal of \(\mathbf{X}\). For a vector \(\mathbf{x}\), \(\mathsf{diag}(\mathbf{x})\) denotes a square diagonal matrix whose main diagonal has the same elements as those of \(\mathbf{x}\). The complex-valued gradient of a function \(f(\cdot)\) with respect to (w.r.t.) \(\mathbf{X}^{*}\) is denoted by \(\nabla_{\mathbf{X}}f(\cdot)\), where \(\mathbf{X}^{*}\) represents the complex conjugate of \(\mathbf{X}\), and the Euclidean projection of \(\mathbf{x}\) onto the set \(\mathcal{X}\) is defined by \(\Pi_{\mathcal{X}}\{\mathbf{x}\}\triangleq\operatorname*{argmin}_{\hat{\mathbf{x}}\in\mathcal{X}}\|\mathbf{x}-\hat{\mathbf{x}}\|\). The expectation operation is denoted by \(\mathbb{E}\{\cdot\}\). The identity and zero matrices are respectively represented by \(\mathbf{I}\) and \(\mathbf{0}\), and \(\sqrt{-1}\) is represented by \(\iota\).
## II System Model and Problem Formulation
Similar to [6], we consider an IRS-assisted SWIPT-MIMO system consisting of one base station (BS), \(M_{\mathrm{I}}\) IRs, \(M_{\mathrm{E}}\) ERs, and one passive IRS. It is assumed that the BS is equipped with \(N_{\mathrm{B}}\) antennas, each of the IRs and ERs is equipped with \(N_{\mathrm{I}}\) and \(N_{\mathrm{E}}\) antennas, respectively, and the IRS consists of \(N_{\mathrm{S}}\) reflecting elements. The sets of indices for IRs and ERs are respectively denoted by \(\mathcal{M}_{\mathrm{I}}\triangleq\{1,2,\ldots,M_{\mathrm{I}}\}\) and \(\mathcal{M}_{\mathrm{E}}\triangleq\{1,2,\ldots,M_{\mathrm{E}}\}\). The channel matrices for BS-IRS, BS-\(m^{\mathrm{th}}\) IR, BS-\(\ell^{\mathrm{th}}\) ER, IRS-\(m^{\mathrm{th}}\) IR, and IRS-\(\ell^{\mathrm{th}}\) ER are respectively denoted by \(\mathbf{H}_{\mathrm{S}}\in\mathbb{C}^{N_{\mathrm{S}}\times N_{\mathrm{B}}}\), \(\mathbf{H}_{m\mathrm{I}}\in\mathbb{C}^{N_{\mathrm{I}}\times N_{\mathrm{B}}}\), \(\mathbf{H}_{\ell\mathrm{E}}\in\mathbb{C}^{N_{\mathrm{E}}\times N_{\mathrm{B}}}\), \(\mathbf{G}_{m\mathrm{I}}\in\mathbb{C}^{N_{\mathrm{I}}\times N_{\mathrm{S}}}\) and \(\mathbf{G}_{\ell\mathrm{E}}\in\mathbb{C}^{N_{\mathrm{E}}\times N_{\mathrm{S}}}\). The IRS passive beamforming vector is denoted by \(\boldsymbol{\phi}=[\phi_{1},\phi_{2},\ldots,\phi_{N_{\mathrm{S}}}]^{\mathsf{T}}\in\mathbb{C}^{N_{\mathrm{S}}\times 1}\), where \(\phi_{n_{\mathrm{S}}}\triangleq\exp\left(\iota\theta_{n_{\mathrm{S}}}\right)\) and \(\theta_{n_{\mathrm{S}}}\in[0,2\pi),\forall n_{\mathrm{S}}\in\mathcal{N}_{\mathrm{S}}\triangleq\{1,2,\ldots,N_{\mathrm{S}}\}\).1 We assume the availability of perfect instantaneous channel state information (CSI) at the BS for all of the wireless links.2 The signal vector transmitted from the BS is given by \(\mathbf{w}=\sum_{m\in\mathcal{M}_{\mathrm{I}}}\mathbf{F}_{m}\mathbf{s}_{m},\) where \(\mathbf{s}_{m}\in\mathbb{C}^{\min\{N_{\mathrm{B}},N_{\mathrm{I}}\}\times 1}\) is the signal vector intended for the \(m^{\mathrm{th}}\) IR, and \(\mathbf{F}_{m}\in\mathbb{C}^{N_{\mathrm{B}}\times\min\{N_{\mathrm{B}},N_{\mathrm{I}}\}}\) is the transmit precoding matrix (TPM) corresponding to \(\mathbf{s}_{m}\). We assume that \(\mathbb{E}\{\mathbf{s}_{m}\mathbf{s}_{m}^{\mathsf{H}}\}=\mathbf{I}\) and \(\mathbb{E}\{\mathbf{s}_{m}\mathbf{s}_{m^{\prime}}^{\mathsf{H}}\}=\mathbf{0}\)\(\forall m\neq m^{\prime}\in\mathcal{M}_{\mathrm{I}}\). The signal vector received at the \(m^{\mathrm{th}}\) IR is given by \(\mathbf{y}_{m\mathrm{I}}=(\mathbf{H}_{m\mathrm{I}}+\mathbf{G}_{m\mathrm{I}}\boldsymbol{\Phi}\mathbf{H}_{\mathrm{S}})\mathbf{w}+\mathbf{n}_{m\mathrm{I}}\), where \(\boldsymbol{\Phi}\triangleq\mathsf{diag}(\boldsymbol{\phi})\), and \(\mathbf{n}_{m\mathrm{I}}\in\mathbb{C}^{N_{\mathrm{I}}\times 1}\sim\mathcal{CN}(\mathbf{0},\sigma_{m\mathrm{I}}^{2}\mathbf{I})\) is the additive white Gaussian noise (AWGN) vector at the \(m^{\mathrm{th}}\) IR. Similarly, the received signal vector at the \(\ell^{\mathrm{th}}\) ER is given by \(\mathbf{y}_{\ell\mathrm{E}}=(\mathbf{H}_{\ell\mathrm{E}}+\mathbf{G}_{\ell\mathrm{E}}\boldsymbol{\Phi}\mathbf{H}_{\mathrm{S}})\mathbf{w}+\mathbf{n}_{\ell\mathrm{E}}\), where \(\mathbf{n}_{\ell\mathrm{E}}\in\mathbb{C}^{N_{\mathrm{E}}\times 1}\sim\mathcal{CN}(\mathbf{0},\sigma_{\ell\mathrm{E}}^{2}\mathbf{I})\) is the AWGN vector at the \(\ell^{\mathrm{th}}\) ER.
For the rest of this paper, we consider \(\sigma_{m\mathrm{I}}^{2}=\sigma_{\ell\mathrm{E}}^{2}=\sigma^{2},\forall m\in \mathcal{M}_{\mathrm{I}},\ell\in\mathcal{M}_{\mathrm{E}}\). Also, with a slight abuse of notation, we define \(\mathbf{H}_{\mathrm{S}}\leftarrow\mathbf{H}_{\mathrm{S}}/\sigma\), \(\mathbf{H}_{m\mathrm{I}}\leftarrow\mathbf{H}_{m\mathrm{I}}/\sigma\) and \(\mathbf{H}_{\ell\mathrm{E}}\leftarrow\mathbf{H}_{\ell\mathrm{E}}/\sigma\); this normalization step will mitigate potential numerical issues caused by dealing with extremely small values. We further define \(\mathbf{Z}_{m}\triangleq\mathbf{H}_{m\mathrm{I}}+\mathbf{G}_{m\mathrm{I}} \boldsymbol{\Phi}\mathbf{H}_{\mathrm{S}}\) and \(\boldsymbol{\Xi}_{\ell}\triangleq\mathbf{H}_{\ell\mathrm{E}}+\mathbf{G}_{\ell \mathrm{E}}\boldsymbol{\Phi}\mathbf{H}_{\mathrm{S}}\). Therefore, the instantaneous achievable rate at the \(m^{\mathrm{th}}\) IR is given by
\[R_{m}(\mathbf{X},\boldsymbol{\phi})=\ln\big{|}\mathbf{I}+\mathbf{Z}_{m}\mathbf{X}_{m}\mathbf{Z}_{m}^{\mathsf{H}}\mathbf{B}_{m}^{-1}\big{|}=\ln|\mathbf{A}_{m}|-\ln|\mathbf{B}_{m}|, \tag{1}\]
where \(\mathbf{X}\triangleq\{\mathbf{X}_{m}\}_{m\in\mathcal{M}_{\mathrm{I}}}\), \(\mathbf{X}_{m}\triangleq\mathbf{F}_{m}\mathbf{F}_{m}^{\mathsf{H}}\) (this is the transmit covariance matrix), \(\mathbf{A}_{m}\triangleq\mathbf{I}+\mathbf{Z}_{m}\mathbf{\Sigma}\mathbf{Z}_{m}^{\mathsf{H}}\), \(\mathbf{\Sigma}\triangleq\sum_{k\in\mathcal{M}_{\mathrm{I}}}\mathbf{X}_{k}\), \(\mathbf{B}_{m}\triangleq\mathbf{I}+\mathbf{Z}_{m}\mathbf{\Sigma}_{m}\mathbf{Z}_{m}^{\mathsf{H}}\) (this is the interference-plus-noise covariance matrix), and \(\mathbf{\Sigma}_{m}\triangleq\mathbf{\Sigma}-\mathbf{X}_{m}\); the second equality in (1) holds since \(\mathbf{A}_{m}=\mathbf{B}_{m}+\mathbf{Z}_{m}\mathbf{X}_{m}\mathbf{Z}_{m}^{\mathsf{H}}\). The total harvested power at the \(\ell^{\mathrm{th}}\) ER is given by \(P_{\ell\mathrm{H}}(\mathbf{X},\boldsymbol{\phi})=\eta\,\mathsf{tr}\big{(}\mathbf{\Xi}_{\ell}\mathbf{\Sigma}\mathbf{\Xi}_{\ell}^{\mathsf{H}}\big{)}\), where \(0<\eta\leq 1\) is the energy harvesting efficiency. Therefore, a WSR maximization problem for the IRS-assisted SWIPT-MIMO system can be formulated as follows:
\[\operatorname*{maximize}_{\mathbf{X},\boldsymbol{\phi}} \big{\{}R_{\mathrm{sum}}\big{(}\mathbf{X},\boldsymbol{\phi}\big{)} \triangleq\sum\nolimits_{m\in\mathcal{M}_{\mathrm{I}}}\omega_{m}R_{m}( \mathbf{X},\boldsymbol{\phi})\big{\}} \tag{2a}\] \[\mathrm{subject\ to} P_{\mathrm{H}}\big{(}\mathbf{X},\boldsymbol{\phi}\big{)}\geq 1,\] (2b) \[\mathrm{tr}\left(\mathbf{\Sigma}\right)\leq P_{\mathrm{B}},\] (2c) \[|\phi_{n_{\mathrm{S}}}|=1\ \forall n_{\mathrm{S}}\in\mathcal{N}_{ \mathrm{S}}. \tag{2d}\]
In (2), \(\omega_{m}\) denotes the rate weighting factor for the \(m^{\mathrm{th}}\) IR, and \(P_{\mathrm{B}}\) in (2c) denotes the transmit power budget at the BS.
It is important to note that the design variables are decoupled in the constraints in (4), and the coupling exists only in the objective function, i.e., \(\mathcal{R}_{\mu,\rho}(\mathbf{X},\mathbf{\phi},\tau)\).
Before proposing a low-complexity and high-performance algorithm to obtain a stationary solution to (4), we derive closed-form expressions for \(\nabla_{\mathbf{X}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\mathbf{\phi},\tau)\) and \(\nabla_{\mathbf{\phi}}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau\big{)}\). One can easily note that \(\nabla_{\mathbf{X}}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau\big{)}=\big{\{}\nabla_{\mathbf{X}_{m}}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau\big{)}\big{\}}_{m\in\mathcal{M}_{\mathrm{I}}}\). A closed-form expression for \(\nabla_{\mathbf{X}_{m}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\mathbf{\phi},\tau)\) is given in the following theorem.
**Theorem 1**: _A closed-form expression for \(\nabla_{\mathbf{X}_{m}}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau \big{)}\) is given by \(\nabla_{\mathbf{X}_{m}}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau \big{)}=\sum_{k\in\mathcal{M}_{1}}\omega_{k}\nabla_{\mathbf{X}_{m}}R_{k}\big{(} \mathbf{X},\mathbf{\phi}\big{)}+\big{\{}\mu+(1/\rho)f\big{(}\mathbf{X},\mathbf{\phi}, \tau\big{)}\nabla_{\mathbf{X}_{m}}P_{\text{H}}(\mathbf{X},\mathbf{\phi})\), where_
\[\nabla_{\mathbf{X}_{m}}R_{k}\big{(}\mathbf{X},\mathbf{\phi}\big{)}=\begin{cases} \mathbf{Z}_{m}^{\mathsf{H}}\mathbf{B}_{m}^{-1/2}\mathbf{C}_{m}^{-1}\mathbf{B }_{m}^{-1/2}\mathbf{Z}_{m},\text{ if }m=k\\ \mathbf{Z}_{m}^{\mathsf{H}}\Big{(}\bar{\mathbf{B}}_{m,k}^{-1/2}\bar{\mathbf{C}}_ {m,k}^{-1/2}\bar{\mathbf{B}}_{m,k}^{-1/2}\Big{)}\\ -\bar{\mathbf{B}}_{m,k}^{-1/2}\bar{\mathbf{C}}_{m,k}^{-1}\bar{\mathbf{B}}_{m,k} ^{-1/2}\Big{)}\mathbf{Z}_{k},\text{ otherwise,}\end{cases}\]
\(\mathbf{C}_{m}\triangleq\mathbf{I}+\mathbf{B}_{m}^{-1/2}\mathbf{Z}_{m}\mathbf{ X}_{m}\mathbf{Z}_{m}^{\mathsf{H}}\mathbf{B}_{m}^{-1/2}\), \(\bar{\mathbf{B}}_{m,k}\triangleq\mathbf{I}+\mathbf{Z}_{k}\mathbf{\Sigma}_{m} \mathbf{Z}_{k}^{\mathsf{H}}\), \(\mathbf{C}_{m,k}\triangleq\mathbf{I}+\mathbf{B}_{m,k}^{-1/2}\mathbf{Z}_{k} \mathbf{X}_{m}\mathbf{Z}_{k}^{\mathsf{H}}\mathbf{B}_{m,k}^{-1}\), \(\bar{\mathbf{B}}_{m,k}\triangleq\mathbf{I}+\mathbf{Z}_{k}\mathbf{Z}_{m,k} \mathbf{Z}_{k}^{\mathsf{H}}\), \(\bar{\mathbf{B}}_{m,k}\triangleq\mathbf{\Sigma}_{m}-\mathbf{X}_{k}\), \(\bar{\mathbf{C}}_{m,k}\triangleq\mathbf{I}+\bar{\mathbf{B}}_{m,k}^{-1/2} \mathbf{Z}_{k}\mathbf{X}_{m}\mathbf{Z}_{k}^{\mathsf{H}}\bar{\mathbf{B}}_{m,k }^{-1/2}\), and \(\nabla_{\mathbf{X}_{m}}P_{\text{H}}(\mathbf{X},\mathbf{\phi})=(\eta/\bar{P}_{\text{ H}})\sum_{l\in\mathcal{M}_{\text{E}}}\alpha_{l}\mathbf{\Xi}_{l}^{\mathsf{H}}\mathbf{\Xi}_{l}\)._
See Appendix A.
Next, we obtain a closed-form expression for the complex-valued gradient of \(\mathcal{R}_{\mu,\rho}(\mathbf{X},\mathbf{\phi},\tau)\) w.r.t. \(\mathbf{\phi}\).
**Theorem 2**: _A closed-form expression for \(\nabla_{\mathbf{\phi}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\mathbf{\phi},\tau)\) is given by \(\nabla_{\mathbf{\phi}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\mathbf{\phi},\tau)=\sum_{m\in\mathcal{M}_{\mathrm{I}}}\omega_{m}\nabla_{\mathbf{\phi}}R_{m}(\mathbf{X},\mathbf{\phi})+\big{\{}\mu+(1/\rho)f(\mathbf{X},\mathbf{\phi},\tau)\big{\}}\nabla_{\mathbf{\phi}}P_{\text{H}}\big{(}\mathbf{X},\mathbf{\phi}\big{)}\), where \(\nabla_{\mathbf{\phi}}R_{m}(\mathbf{X},\mathbf{\phi})=\mathsf{vec}_{\mathsf{d}}\big{(}\mathbf{G}_{m\mathrm{I}}^{\mathsf{H}}\mathbf{D}_{m}\mathbf{H}_{\mathrm{S}}^{\mathsf{H}}\big{)}\), \(\mathbf{D}_{m}\triangleq\mathbf{A}_{m}^{-1}\mathbf{Z}_{m}\mathbf{\Sigma}-\mathbf{B}_{m}^{-1}\mathbf{Z}_{m}\mathbf{\Sigma}_{m}\), and \(\nabla_{\mathbf{\phi}}P_{\text{H}}\big{(}\mathbf{X},\mathbf{\phi}\big{)}=(\eta/\bar{P}_{\text{H}})\sum_{\ell\in\mathcal{M}_{\text{E}}}\alpha_{\ell}\,\mathsf{vec}_{\mathsf{d}}\big{(}\mathbf{G}_{\ell\mathrm{E}}^{\mathsf{H}}\mathbf{\Xi}_{\ell}\mathbf{\Sigma}\mathbf{H}_{\mathrm{S}}^{\mathsf{H}}\big{)}\)._
See Appendix B.
We now propose the PDDAGP algorithm, shown in **Algorithm 1**, to attain a high-performance solution to (4). We define \(\mathcal{X}\triangleq\big{\{}\{\mathbf{X}_{m}\}_{m\in\mathcal{M}_{\mathrm{I}}}\,\big{|}\,\text{(2c)}\big{\}}\) as the feasible set of transmit covariance matrices \(\mathbf{X}\). Similarly, \(\mathbf{\Theta}\triangleq\big{\{}\mathbf{\phi}|(\text{2d})\big{\}}\) is defined as the feasible set for the design variable \(\mathbf{\phi}\). In **Algorithm 1**, we use AO to iteratively update the variables \(\mathbf{X}\) and \(\mathbf{\phi}\). In the \(r^{\text{th}}\) iteration, to update \(\mathbf{X}^{(r)}\) for a given \(\mathbf{\phi}^{(r)}\), we ascend in the direction of \(\nabla_{\mathbf{X}}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X}^{(r)},\mathbf{\phi}^{(r)},\tau^{(r)}\big{)}\) with step size \(\delta_{\mathbf{X}}\), and then project the resulting point onto the set \(\mathcal{X}\) (see lines 4 and 5). After obtaining \(\mathbf{X}^{(r+1)}\), we update \(\mathbf{\phi}^{(r)}\) by ascending in the direction of \(\nabla_{\mathbf{\phi}}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X}^{(r+1)},\mathbf{\phi}^{(r)},\tau^{(r)}\big{)}\) using the step size \(\delta_{\mathbf{\phi}}\), and then project the resultant vector onto \(\mathbf{\Theta}\) to obtain \(\mathbf{\phi}^{(r+1)}\) (lines 6 and 7). Next, following the constraint \(\tau\geq 0\) in (4), we obtain \(\tau^{(r+1)}\) as shown in line 8. The inner loop in **Algorithm 1** converges when \(\Big{[}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X}^{(r+1)},\mathbf{\phi}^{(r+1)},\tau^{(r+1)}\big{)}-\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X}^{(r)},\mathbf{\phi}^{(r)},\tau^{(r)}\big{)}\Big{]}/\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X}^{(r)},\mathbf{\phi}^{(r)},\tau^{(r)}\big{)}<\epsilon\). Once the inner loop achieves convergence, we update the Lagrange multiplier \(\mu\) and penalty multiplier \(\rho\) as given in lines 10 and 11, respectively, and repeat the entire process. The outer loop converges when \(\Big{[}\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X}^{(r+1)},\mathbf{\phi}^{(r+1)},\tau^{(r+1)}\big{)}-R_{\mathrm{sum}}\big{(}\mathbf{X}^{(r+1)},\mathbf{\phi}^{(r+1)}\big{)}\Big{]}/\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X}^{(r+1)},\mathbf{\phi}^{(r+1)},\tau^{(r+1)}\big{)}<\epsilon\). Note that projection onto \(\mathcal{X}\) follows the standard water-filling solution, and projection onto \(\mathbf{\Theta}\) is a simple scaling operation (see [10, eqn. (6)] for details). Moreover, appropriate values of \(\delta_{\mathbf{X}}\) and \(\delta_{\mathbf{\phi}}\) can be obtained using the backtracking line search scheme discussed in [10, eqn. (8)]. Following the arguments in [12], it can be proved that when convergence is achieved, the stationary solution of (4) becomes a stationary solution to (2). Moreover, the convergence of the proposed PDDAGP algorithm can be readily proved following the line of argument in [10, Sec. III-C].
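For instance, since \(\mathbf{\Theta}\) is the set of unit-modulus vectors, the projection onto \(\mathbf{\Theta}\) acts elementwise; assuming \(\phi_{n_{\mathrm{S}}}\neq 0\) for all \(n_{\mathrm{S}}\in\mathcal{N}_{\mathrm{S}}\), it admits the closed form
\[\Pi_{\mathbf{\Theta}}\{\boldsymbol{\phi}\}=\big{[}\phi_{1}/|\phi_{1}|,\ \phi_{2}/|\phi_{2}|,\ \ldots,\ \phi_{N_{\mathrm{S}}}/|\phi_{N_{\mathrm{S}}}|\big{]}^{\mathsf{T}},\]
i.e., each entry is simply scaled back onto the unit circle.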
The complexities of computing \(\mathbf{Z}_{m},\forall m\in\mathcal{M}_{\mathrm{I}}\) and \(\mathbf{\Xi}_{\ell},\forall\ell\in\mathcal{M}_{\mathrm{E}}\) are respectively given by \(\mathcal{O}\big{(}M_{\mathrm{I}}N_{\mathrm{B}}N_{\mathrm{S}}\big{(}1+N_{\mathrm{I}}\big{)}\big{)}\) and \(\mathcal{O}\big{(}M_{\mathrm{E}}N_{\mathrm{B}}N_{\mathrm{S}}N_{\mathrm{E}}\big{)}\). Similarly, given \(\mathbf{X}^{(r)}\), \(\mathbf{\phi}^{(r)}\), \(\tau^{(r)}\), \(\mathbf{Z}_{m},\forall m\in\mathcal{M}_{\mathrm{I}}\) and \(\mathbf{\Xi}_{\ell},\forall\ell\in\mathcal{M}_{\mathrm{E}}\), the complexity of obtaining \(\mathbf{X}^{(r+1)}\) is given by \(\mathcal{O}\big{(}M_{\mathrm{E}}N_{\mathrm{E}}^{3}+M_{\mathrm{I}}^{3}N_{\mathrm{B}}^{3}+2M_{\mathrm{I}}^{2}N_{\mathrm{B}}^{2}N_{\mathrm{I}}+2M_{\mathrm{I}}^{2}N_{\mathrm{B}}N_{\mathrm{I}}^{2}+12M_{\mathrm{I}}^{2}N_{\mathrm{I}}^{3}+2M_{\mathrm{I}}N_{\mathrm{B}}N_{\mathrm{I}}^{2}-8M_{\mathrm{I}}N_{\mathrm{I}}^{3}\big{)}\). Analogously, the computational complexity of obtaining \(\mathbf{\phi}^{(r+1)}\) is given by \(\mathcal{O}\big{(}M_{\mathrm{I}}\big{\{}N_{\mathrm{I}}^{3}+2N_{\mathrm{I}}N_{\mathrm{B}}\big{(}N_{\mathrm{B}}+N_{\mathrm{I}}\big{)}+N_{\mathrm{I}}N_{\mathrm{B}}N_{\mathrm{S}}+N_{\mathrm{I}}N_{\mathrm{S}}\big{\}}+M_{\mathrm{E}}\big{(}N_{\mathrm{B}}^{2}N_{\mathrm{S}}+N_{\mathrm{E}}N_{\mathrm{B}}N_{\mathrm{S}}+N_{\mathrm{S}}N_{\mathrm{E}}\big{)}\big{)}\). Note that since the complexity of projection operations will be comparatively smaller, we have neglected the associated terms. Finally, the complexity of computing \(\tau^{(r+1)}\) is \(\mathcal{O}\big{(}M_{\mathrm{E}}N_{\mathrm{B}}N_{\mathrm{E}}\big{(}N_{\mathrm{S}}+N_{\mathrm{B}}N_{\mathrm{S}}+N_{\mathrm{E}}\big{)}\big{)}\). Therefore, the overall per-iteration complexity of **Algorithm 1** is given by \(\mathcal{O}\big{(}M_{\mathrm{E}}N_{\mathrm{B}}\big{(}N_{\mathrm{B}}N_{\mathrm{E}}N_{\mathrm{S}}+N_{\mathrm{B}}N_{\mathrm{S}}+N_{\mathrm{E}}^{2}+2N_{\mathrm{E}}N_{\mathrm{S}}\big{)}+M_{\mathrm{E}}N_{\mathrm{E}}\big{(}N_{\mathrm{E}}^{2}+N_{\mathrm{S}}\big{)}+M_{\mathrm{I}}N_{\mathrm{I}}N_{\mathrm{B}}\big{(}2M_{\mathrm{I}}N_{\mathrm{B}}+2M_{\mathrm{I}}N_{\mathrm{I}}+2M_{\mathrm{I}}N_{\mathrm{B}}+4N_{\mathrm{I}}+2N_{\mathrm{S}}\big{)}+12M_{\mathrm{I}}^{2}N_{\mathrm{I}}^{3}+M_{\mathrm{I}}\big{(}N_{\mathrm{B}}N_{\mathrm{S}}-7N_{\mathrm{I}}^{3}+N_{\mathrm{I}}N_{\mathrm{S}}\big{)}\big{)}\). Since a practical deployment of an IRS-aided communication system is expected to involve a very large number of reflecting elements, it is expected that \(N_{\mathrm{S}}\gg\max\big{\{}N_{\mathrm{B}},N_{\mathrm{I}},N_{\mathrm{E}},M_{\mathrm{I}},M_{\mathrm{E}}\big{\}}\), and therefore, the per-iteration complexity of **Algorithm 1** is well approximated by \(\mathcal{O}\big{(}N_{\mathrm{S}}\big{(}M_{\mathrm{E}}N_{\mathrm{E}}N_{\mathrm{B}}\big{(}2+N_{\mathrm{B}}\big{)}+2M_{\mathrm{I}}N_{\mathrm{I}}N_{\mathrm{B}}\big{)}\big{)}\), which is _linear_ in \(N_{\mathrm{S}}\). This establishes the fact that the proposed PDDAGP algorithm is much more suitable for large-scale IRS-assisted SWIPT-MIMO systems in rapidly changing environments, compared to the BCD-based algorithm in [6] whose complexity grows with the _third power_ of \(N_{\mathrm{S}}\).
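As a rough back-of-the-envelope illustration of this scaling (plugging in the system dimensions adopted in the sequel, i.e., \(N_{\mathrm{B}}=4\), \(N_{\mathrm{I}}=N_{\mathrm{E}}=2\), \(M_{\mathrm{I}}=2\), and \(M_{\mathrm{E}}=4\)), the dominant per-iteration term of **Algorithm 1** evaluates to
\[N_{\mathrm{S}}\big{(}M_{\mathrm{E}}N_{\mathrm{E}}N_{\mathrm{B}}(2+N_{\mathrm{B}})+2M_{\mathrm{I}}N_{\mathrm{I}}N_{\mathrm{B}}\big{)}=224\,N_{\mathrm{S}},\]
so doubling \(N_{\mathrm{S}}\) merely doubles the cost of the proposed algorithm, whereas the cost of a scheme that scales with \(N_{\mathrm{S}}^{3}\) would grow roughly eightfold.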
## IV Numerical Results and Discussion
In this section, we present numerical results to establish the performance superiority of the proposed PDDAGP algorithm over the BCD-based scheme of [6]. Similar to [6], we consider that the BS is located at \((0~{}\mathrm{m},0~{}\mathrm{m})\), the IRS is located at \((5~{}\mathrm{m},2~{}\mathrm{m})\), the IRs are uniformly and randomly distributed inside a circle of radius \(4~{}\mathrm{m}\) centered at \((400~{}\mathrm{m},0~{}\mathrm{m})\), and the ERs are uniformly and randomly distributed inside a circle of radius \(1~{}\mathrm{m}\) centered at \((x_{\mathrm{E}},0~{}\mathrm{m})\). The path loss and small-scale fading models for all of the wireless links also follow [6]. Furthermore, we assume \(P_{\mathrm{th}}=0.2~{}\mathrm{mW}\), \(M_{\mathrm{I}}=2\), \(M_{\mathrm{E}}=4\), \(\omega_{m}=1~{}\forall m\in\mathcal{M}_{\mathrm{I}}\), \(\alpha_{\ell}=1~{}\forall\ell\in\mathcal{M}_{\mathrm{E}}\), \(\eta=0.5\), \(N_{\mathrm{B}}=4\), \(N_{\mathrm{I}}=N_{\mathrm{E}}=2\), \(N_{\mathrm{S}}=100\), \(x_{\mathrm{E}}=5~{}\mathrm{m}\), \(\kappa=0.1\), \(\epsilon=10^{-3}\), a noise power spectral density of -160 dBm/Hz, and a total channel bandwidth of 1 MHz, _unless stated otherwise_. The initial values are set as \(\tau^{(0)}=0\), \(\mu^{(0)}=0\), \(\rho^{(0)}=0\), \(\mathbf{X}^{(0)}=\mathbf{0}\), and \(\boldsymbol{\phi}^{(0)}=[1,1,\ldots,1]^{\mathsf{T}}\). In Figs. 2-4, the average WSR is obtained over 100 random locations and independent small-scale fading realizations. Moreover, the numbers (in dBm) in the legends of Figs. 3 and 4 correspond to the value of \(P_{\mathrm{B}}\).
Fig. 1 shows a representative sample convergence result for the proposed PDDAGP algorithm, where each iteration corresponds to lines 3-9 in **Algorithm 1**. Following the arguments in [10, Sec. III-C], it can be proved that for a given \((\mu,\rho)\), **Algorithm 1** generates a non-decreasing sequence of \(\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau\big{)}\). This fact is also evident in the figure. Once the inner loop in **Algorithm 1** converges, we update the Lagrange multiplier \(\mu\) and decrease the value of the penalty parameter \(\rho\). Due to a stricter penalty, \(\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau\big{)}\) drops suddenly (as seen in the figure when \(\rho\) changes) and then, for the new \((\mu,\rho)\), the sequence \(\mathcal{R}_{\mu,\rho}\big{(}\mathbf{X},\mathbf{\phi},\tau\big{)}\) increases again. This whole process is repeated until the constraints in (2b)-(2d) are satisfied, which in turn nullifies the impact of the penalty in (3), resulting in the convergence of the algorithm.
The impact of the number of IRS elements on the average WSR is shown in Fig. 2. With an increased number of elements, the IRS creates highly directed beams toward IRs and ERs, which results in an increase in the WSR. However, in contrast to the BCD-based approach of [6], the proposed algorithm enjoys the following benefits: (i) the constraint set is relaxed, since the constraint in (2b) is absorbed into the objective in (4), and (ii) the design variables \(\mathbf{X}\) and \(\mathbf{\phi}\) are decoupled in the remaining constraints. These benefits result in a larger beamforming gain compared to the BCD-based approach. For the particular setting in this paper, the beamforming gain of the proposed PDDAGP-based method nearly doubles the average WSR compared to that achieved via the BCD-based approach.
In Fig. 3, we show the effect of increasing the weighted harvested power requirement (\(P_{\mathrm{th}}\)) on the average WSR for the proposed PDDAGP-based algorithm, and compare its performance with that of the BCD-based algorithm proposed in [6]. As the value of \(P_{\mathrm{th}}\) increases, the QoS constraints at the ERs become more demanding. This calls for a significant part of the beams from the BS and the IRS to be steered toward the ERs, resulting in a decrease in the WSR at the IRs. However, owing to the relaxed constraints and decoupled optimization variables, the proposed PDDAGP algorithm results in superior beamforming designs compared to the BCD-based algorithm.
Fig. 4 shows the effect of the location of ERs on the WSR of the IRs. As the value of \(x_{\mathrm{E}}\) increases (which increases the distance between the BS and ERs), the average channel quality of the BS-ER and IRS-ER links degrades, resulting in a challenging QoS constraint at the ERs. This in turn results in a decreased WSR at the IRs due to reasons similar to those discussed in the preceding paragraph. However, the proposed PDDAGP algorithm significantly outperforms the BCD-based benchmark solution. This also indicates that for given \(P_{\mathrm{th}}\) and \(R_{\mathrm{sum}}(\mathbf{X},\boldsymbol{\theta})\), the proposed algorithm helps to increase the operating distance of the ERs, i.e., it allows the ERs to be located further from the BS, compared to that facilitated by the BCD-based scheme.
## V Conclusion
In this paper, we investigated the fundamental problem of WSR maximization at the IRs in an IRS-assisted SWIPT-MIMO system, subject to satisfying a total weighted harvested power constraint at the ERs. For the formulated non-convex optimization problem, we proposed the PDDAGP algorithm, which is shown to outperform the BCD-based benchmark solution. Numerical results confirmed that the proposed algorithm attains a notably higher WSR, and also increases the operating range of ERs for a given target WSR and target weighted harvested power, compared to the BCD-based benchmark solution. The complexity of the proposed algorithm was shown to be a linear function of the number of IRS elements, while that of the benchmark solution scales with the third power of the number of reflecting elements of the IRS.
## Appendix A Proof of Theorem 1
Using (3), it is straightforward to note that \(\nabla_{\mathbf{X}_{m}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\boldsymbol{\phi},\tau)=\sum_{k\in\mathcal{M}_{\mathrm{I}}}\omega_{k}\nabla_{\mathbf{X}_{m}}R_{k}\big{(}\mathbf{X},\boldsymbol{\phi}\big{)}-\big{\{}\mu+\frac{1}{\rho}f(\mathbf{X},\boldsymbol{\phi},\tau)\big{\}}\nabla_{\mathbf{X}_{m}}f(\mathbf{X},\boldsymbol{\phi},\tau)\). For the case when \(m=k\), using (1), \(\nabla_{\mathbf{X}_{m}}R_{k}(\mathbf{X},\boldsymbol{\phi})=\nabla_{\mathbf{X}_{m}}R_{m}(\mathbf{X},\boldsymbol{\phi})\) is given by \(\nabla_{\mathbf{X}_{m}}R_{m}(\mathbf{X},\boldsymbol{\phi})=\nabla_{\mathbf{X}_{m}}\big{(}\ln|\mathbf{A}_{m}|-\ln|\mathbf{B}_{m}|\big{)}=\nabla_{\mathbf{X}_{m}}\ln\big{|}\mathbf{I}+\mathbf{B}_{m}^{-1/2}\mathbf{Z}_{m}\mathbf{X}_{m}\mathbf{Z}_{m}^{\mathsf{H}}\mathbf{B}_{m}^{-1/2}\big{|}=\mathbf{Z}_{m}^{\mathsf{H}}\mathbf{B}_{m}^{-1/2}\mathbf{C}_{m}^{-1}\mathbf{B}_{m}^{-1/2}\mathbf{Z}_{m}\), where the last equality follows from [13, eqns. (6.195) and (6.200)-(6.207)], and \(\mathbf{C}_{m}\triangleq\mathbf{I}+\mathbf{B}_{m}^{-1/2}\mathbf{Z}_{m}\mathbf{X}_{m}\mathbf{Z}_{m}^{\mathsf{H}}\mathbf{B}_{m}^{-1/2}\). Similarly, for the case when \(m\neq k\), we have \(\nabla_{\mathbf{X}_{m}}R_{k}(\mathbf{X},\boldsymbol{\phi})=\nabla_{\mathbf{X}_{m}}\ln\big{|}\mathbf{I}+\bar{\mathbf{B}}_{k,m}^{-1/2}\mathbf{Z}_{k}\mathbf{X}_{m}\mathbf{Z}_{k}^{\mathsf{H}}\bar{\mathbf{B}}_{k,m}^{-1/2}\big{|}-\nabla_{\mathbf{X}_{m}}\ln\big{|}\mathbf{I}+\mathbf{B}_{k,m}^{-1/2}\mathbf{Z}_{k}\mathbf{X}_{m}\mathbf{Z}_{k}^{\mathsf{H}}\mathbf{B}_{k,m}^{-1/2}\big{|}=\mathbf{Z}_{k}^{\mathsf{H}}\bar{\mathbf{B}}_{k,m}^{-1/2}\bar{\mathbf{C}}_{k,m}^{-1}\bar{\mathbf{B}}_{k,m}^{-1/2}\mathbf{Z}_{k}-\mathbf{Z}_{k}^{\mathsf{H}}\mathbf{B}_{k,m}^{-1/2}\mathbf{C}_{k,m}^{-1}\mathbf{B}_{k,m}^{-1/2}\mathbf{Z}_{k}\), where \(\bar{\mathbf{B}}_{k,m}\triangleq\mathbf{I}+\sum_{n\in\mathcal{M}_{\mathrm{I}}\setminus\{m\}}\mathbf{Z}_{k}\mathbf{X}_{n}\mathbf{Z}_{k}^{\mathsf{H}}\), \(\bar{\mathbf{C}}_{k,m}\triangleq\mathbf{I}+\bar{\mathbf{B}}_{k,m}^{-1/2}\mathbf{Z}_{k}\mathbf{X}_{m}\mathbf{Z}_{k}^{\mathsf{H}}\bar{\mathbf{B}}_{k,m}^{-1/2}\), \(\mathbf{B}_{k,m}\triangleq\mathbf{I}+\sum_{n\in\mathcal{M}_{\mathrm{I}}\setminus\{k,m\}}\mathbf{Z}_{k}\mathbf{X}_{n}\mathbf{Z}_{k}^{\mathsf{H}}\), and \(\mathbf{C}_{k,m}\triangleq\mathbf{I}+\mathbf{B}_{k,m}^{-1/2}\mathbf{Z}_{k}\mathbf{X}_{m}\mathbf{Z}_{k}^{\mathsf{H}}\mathbf{B}_{k,m}^{-1/2}\). Following a similar line of argument, \(\nabla_{\mathbf{X}_{m}}f(\mathbf{X},\boldsymbol{\phi},\tau)=-\nabla_{\mathbf{X}_{m}}P_{\mathrm{th}}(\mathbf{X},\boldsymbol{\phi})=-(\eta/\hat{P}_{\mathrm{th}})\sum_{\ell\in\mathcal{M}_{\mathrm{E}}}\alpha_{\ell}\mathbf{\Xi}_{\ell}^{\mathsf{H}}\mathbf{\Xi}_{\ell}\). Combining the closed-form expressions for \(\nabla_{\mathbf{X}_{m}}R_{k}(\mathbf{X},\boldsymbol{\phi})\) and \(\nabla_{\mathbf{X}_{m}}f(\mathbf{X},\boldsymbol{\phi},\tau)\), we obtain the closed-form expression for \(\nabla_{\mathbf{X}_{m}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\boldsymbol{\phi},\tau)\) as given in _Theorem 1_. This concludes the proof.
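The \(m=k\) log-determinant gradient identity above is easy to verify numerically. The sketch below (with random data and arbitrary dimensions, purely as a sanity check) compares the claimed closed form against a central finite difference of \(\ln|\mathbf{I}+\mathbf{B}^{-1}\mathbf{Z}\mathbf{X}\mathbf{Z}^{\mathsf{H}}|\) along a Hermitian direction.

```python
import numpy as np

rng = np.random.default_rng(0)
nr, nt = 3, 4
Z = rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))
A = rng.standard_normal((nr, nr)) + 1j * rng.standard_normal((nr, nr))
B = A @ A.conj().T + nr * np.eye(nr)     # Hermitian positive definite
X = 0.5 * np.eye(nt)                     # Hermitian PSD "transmit covariance"

def f(Xh):
    # ln|I + B^{-1/2} Z X Z^H B^{-1/2}| in the equivalent, numerically
    # stable form ln det(B + Z X Z^H) - ln det(B)
    return (np.linalg.slogdet(B + Z @ Xh @ Z.conj().T)[1]
            - np.linalg.slogdet(B)[1])

w, U = np.linalg.eigh(B)
B_isqrt = U @ np.diag(1.0 / np.sqrt(w)) @ U.conj().T       # Hermitian B^{-1/2}
C = np.eye(nr) + B_isqrt @ Z @ X @ Z.conj().T @ B_isqrt
G = Z.conj().T @ B_isqrt @ np.linalg.inv(C) @ B_isqrt @ Z  # claimed gradient

D = rng.standard_normal((nt, nt))
D = D + D.T                                                # Hermitian direction
h = 1e-6
fd = (f(X + h * D) - f(X - h * D)) / (2 * h)
print(fd, np.trace(G @ D).real)                            # the two should agree
```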
## Appendix B Proof of Theorem 2
Using (3), it can be noted that \(\nabla_{\boldsymbol{\phi}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\boldsymbol{\phi},\tau)=\sum_{m\in\mathcal{M}_{\mathrm{I}}}\omega_{m}\nabla_{\boldsymbol{\phi}}R_{m}\big{(}\mathbf{X},\boldsymbol{\phi}\big{)}-\big{\{}\mu+\frac{1}{\rho}f(\mathbf{X},\boldsymbol{\phi},\tau)\big{\}}\nabla_{\boldsymbol{\phi}}f(\mathbf{X},\boldsymbol{\phi},\tau)\). To obtain \(\nabla_{\boldsymbol{\phi}}R_{m}\big{(}\mathbf{X},\boldsymbol{\phi}\big{)}\), we first write \(\nabla_{\boldsymbol{\phi}}R_{m}\big{(}\mathbf{X},\boldsymbol{\phi}\big{)}=\nabla_{\boldsymbol{\phi}}\big{(}\ln|\mathbf{A}_{m}|\big{)}-\nabla_{\boldsymbol{\phi}}\big{(}\ln|\mathbf{B}_{m}|\big{)}\). Next, we have \(\nabla_{\boldsymbol{\phi}}\ln|\mathbf{A}_{m}|=\mathrm{tr}\left\{\mathbf{A}_{m}^{-1}\sum_{k\in\mathcal{M}_{\mathrm{I}}}\mathbf{Z}_{m}\mathbf{X}_{k}\nabla_{\boldsymbol{\phi}}\big{(}\mathbf{Z}_{m}^{\mathsf{H}}\big{)}\right\}=\sum_{k\in\mathcal{M}_{\mathrm{I}}}\mathrm{tr}\left\{\mathbf{G}_{m}^{\mathsf{H}}\mathbf{A}_{m}^{-1}\mathbf{Z}_{m}\mathbf{X}_{k}\mathbf{H}_{\mathrm{S}}^{\mathsf{H}}\nabla_{\boldsymbol{\phi}}\big{(}\boldsymbol{\Phi}^{\mathsf{H}}\big{)}\right\}\). Similarly, we can obtain \(\nabla_{\boldsymbol{\phi}}\ln|\mathbf{B}_{m}|=\sum_{k\in\mathcal{M}_{\mathrm{I}}\setminus\{m\}}\mathrm{tr}\left\{\mathbf{G}_{m}^{\mathsf{H}}\mathbf{B}_{m}^{-1}\mathbf{Z}_{m}\mathbf{X}_{k}\mathbf{H}_{\mathrm{S}}^{\mathsf{H}}\nabla_{\boldsymbol{\phi}}\big{(}\boldsymbol{\Phi}^{\mathsf{H}}\big{)}\right\}\). Using the definition of the complex-valued gradient, [13, eqn. (6.153)], together with the preceding expressions yields \(\nabla_{\boldsymbol{\phi}}R_{m}\big{(}\mathbf{X},\boldsymbol{\phi}\big{)}=\text{vec}_{\mathrm{d}}\left\{\mathbf{G}_{m}^{\mathsf{H}}\mathbf{D}_{m}\mathbf{H}_{\mathrm{S}}^{\mathsf{H}}\right\}\), where \(\mathbf{D}_{m}\triangleq\mathbf{A}_{m}^{-1}\mathbf{Z}_{m}\mathbf{\Sigma}-\mathbf{B}_{m}^{-1}\mathbf{Z}_{m}\mathbf{\Sigma}_{m}\). Analogously, it can be shown that \(\nabla_{\boldsymbol{\phi}}f(\mathbf{X},\boldsymbol{\phi},\tau)=-(\eta/\hat{P}_{\mathrm{th}})\sum_{\ell\in\mathcal{M}_{\mathrm{E}}}\alpha_{\ell}\text{vec}_{\mathrm{d}}\big{(}\mathbf{G}_{\ell\mathrm{E}}^{\mathsf{H}}\mathbf{\Xi}_{\ell}\mathbf{Z}_{\ell}\mathbf{H}_{\mathrm{S}}^{\mathsf{H}}\big{)}\). With the help of these arguments, we obtain the closed-form expression for \(\nabla_{\boldsymbol{\phi}}\mathcal{R}_{\mu,\rho}(\mathbf{X},\boldsymbol{\phi},\tau)\) as given in _Theorem 2_. This completes the proof.
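The reduction of trace terms against a diagonal \(\boldsymbol{\Phi}\) to \(\text{vec}_{\mathrm{d}}\{\cdot\}\) terms can also be checked numerically. Assuming \(\text{vec}_{\mathrm{d}}\{\mathbf{M}\}\) denotes the vector of diagonal entries of \(\mathbf{M}\), the sketch below verifies that \(\mathrm{tr}\{\mathbf{M}\boldsymbol{\Phi}^{\mathsf{H}}\}=\sum_{i}[\mathbf{M}]_{ii}\,\phi_{i}^{*}\), so the Wirtinger gradient with respect to \(\boldsymbol{\phi}^{*}\) is exactly \(\text{vec}_{\mathrm{d}}\{\mathbf{M}\}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, n))     # unit-modulus IRS phases

lhs = np.trace(M @ np.diag(phi).conj().T)           # tr{ M Phi^H }
rhs = np.vdot(phi, np.diag(M))                      # sum_i conj(phi_i) * M_ii
print(np.allclose(lhs, rhs))                        # True: gradient = vec_d{M}
```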
|
2306.00089 | A Data-Driven Computational Model for Engineered Cardiac Microtissues | Engineered heart tissues (EHTs) present a potential solution to some of the
current challenges in the treatment of heart disease; however, the development
of mature, adult-like cardiac tissues remains elusive. Mechanical stimuli have
been observed to improve whole-tissue function and cardiomyocyte (CM)
maturation, although our ability to fully utilize these mechanisms is hampered,
in part, by our incomplete understanding of the mechanobiology of EHTs. In this
work, we leverage the experimental data produced by a mechanically tunable
experimental setup to generate tissue-specific computational models of EHTs.
Using imaging and functional data, our modeling pipeline generates models with
tissue-specific ECM and myofibril structure, allowing us to estimate CM active
stress. We use this experimental and modeling pipeline to study different
mechanical environments, where we contrast the force output of the tissue with
the computed active stress of CMs. We show that the significant differences in
measured experimental forces can largely be explained by the levels of
myofibril formation achieved by the CMs in the distinct mechanical
environments, with active stress showing more muted variations across
conditions. The presented model also enables us to dissect the relative
contributions of myofibrils and extracellular matrix to tissue force output, a
task difficult to address experimentally. These results highlight the
importance of tissue-specific modeling to augment EHT experiments, providing
deeper insights into the mechanobiology driving EHT function. | Javiera Jilberto, Samuel J. DePalma, Jason Lo, Hiba Kobeissi, Lani Quach, Emma Lejeune, Brendon M. Baker, David Nordsletten | 2023-05-31T18:06:36Z | http://arxiv.org/abs/2306.00089v1 | # A Data-Driven Computational Model for Engineered Cardiac Microtissues
###### Abstract
Engineered heart tissues (EHTs) present a potential solution to some of the current challenges in the treatment of heart disease; however, the development of mature, adult-like cardiac tissues remains elusive. Mechanical stimuli have been observed to improve whole-tissue function and cardiomyocyte (CM) maturation, although our ability to fully utilize these mechanisms is hampered, in part, by our incomplete understanding of the mechanobiology of EHTs. In this work, we leverage the experimental data produced by a mechanically tunable experimental setup to generate tissue-specific computational models of EHTs. Using imaging and functional data, our modeling pipeline generates models with tissue-specific ECM and myofibril structure, allowing us to estimate CM active stress. We use this experimental and modeling pipeline to study different mechanical environments, where we contrast the force output of the tissue with the computed active stress of CMs. We show that the significant differences in measured experimental forces can largely be explained by the levels of myofibril formation achieved by the CMs in the distinct mechanical environments, with active stress showing more muted variations across conditions. The presented model also enables us to dissect the relative contributions of myofibrils and extracellular matrix to tissue force output, a task difficult to address experimentally. These results highlight the importance of tissue-specific modeling to augment EHT experiments, providing deeper insights into the mechanobiology driving EHT function.
keywords: computational modeling, engineered heart tissues, cardiac biomechanics, mechanobiology
## 1 Introduction
The development of engineered heart tissues (EHTs) for use in regenerative therapies, drug testing, and disease modeling has the potential to improve the life expectancy of millions of people that suffer from cardiac disease [1]. Current state-of-the-art EHTs are manufactured using cardiomyocytes (CMs) derived from induced pluripotent stem cells (iPSCs) and scaffolds that mimic the structure and mechanics of native cardiac tissue [2]. However, iPSC-CM maturation in current EHT platforms remains a challenge [3]. This is evidenced by the lack of hallmark attributes of mature CMs, such as myofibril alignment, protein expression, calcium handling, and electrophysiological response, among others [4]. Biophysical stimuli, such as electrical pacing [5; 6] and mechanical loading [7; 8; 9], have been shown to enhance iPSC-CM maturation in different EHT platforms. However, the underlying mechanisms driving these observations remain incompletely understood [10], preventing scientists from efficiently optimizing the application of these techniques.
Several _in-vitro_ studies have attempted to elucidate the impact of different mechanical microenvironmental perturbations on the maturity of iPSC-CMs. For example, Leonard et al. [8] showed that increasing the resistance against which the EHTs contract increases force generation. Similarly, Bliley et al. [9] showed that EHTs grown under passive stretch evolve into more mature tissues. Furthermore, other studies have shown that culturing CMs on hydrogels with a modulus similar to that of the healthy adult myocardium enhances electro-mechanical activity [11; 12]. DePalma et al. expanded upon this, showing that iPSC-CMs adapt their contractile behavior in response to ECM mechanics on synthetic, fibrous matrices that better mimic the anisotropic mechanics of the cardiac ECM [7]. This and other work by Allen et al. [13] also showed that increasing the anisotropy of the fibrous matrix results in more aligned myofibrils. These studies highlight the impact of different mechanical cues on EHT formation and function. However, the inherent variability of EHTs - from their formation, maturation, ECM properties and structure, myofibril formation, etc. - cloud our interpretation of each mechanical parameter's relevance and the underlying mechanobiological drivers.
A way of tackling this complex task and providing insight into the multifaceted differences in EHTs is through biomechanical modeling. Biomechanical models have been used to understand the mechanics of _in-vitro_ systems, helping decipher the mechanobiology behind the alignment of cells [14; 15], the biomechanics of microtissue failure due to necking [16], the cell force transmission through fibrous substrates [17; 18], and mechanics at cell-to-cell junctions [19]. These models enable the examination of experimental conditions - as well as exploration through _in silico_ testing whereby scenarios that would be virtually impossible to construct experimentally can be studied. Biomechanical models - particularly at the whole-organ level - have further
enabled the integration and assimilation of imaging and patient data [20; 21; 22; 23], providing pipelines for generating _patient-specific models_ describing cardiac function. These pipelines enable the integration of realistic structure/function and provide a platform for understanding the significance of these factors on localized mechanics, such as strain or stress.
To bypass many of the uncertainties associated with EHTs and delve into the underlying impact of structure and function, we propose the integration of a novel EHT platform with _tissue-specific_ computational models. In this study, we leverage the experimental data obtained in the fibroTUG platform [24], which uses electrospinning techniques to generate synthetic, fibrous matrices with defined alignment and material characteristics. Imaging provides detailed information on the ECM and myofiber architecture on a tissue-by-tissue basis. Using image-based modeling and data assimilation, we create _in-silico_ twins of individual fibroTUGs and explore how different factors of the ECM and myofibril structure impact localized cellular function and the resultant force measures commonly reported in the literature. Through this integrated experimental-computational approach, we observe that the resultant forces of EHTs can be substantially biased by ECM alignment and mechanobiological factors driving myofibril formation and function.
The paper is structured as follows: in the methods section, we detail the process of creating the computational model from the experimental data. In the results section, we validate our model by comparing the simulation results to image-based algorithms and use experimental and non-experimental conditions to analyze the role of the different mechanical variables in the iPSC-CMs stress and force output relationship in EHTs. We follow with a discussion of the model results and how we can use computational modeling approaches to explore the mechanobiology of these tissues.
## 2 Materials and Methods
### Experimental Setup
FibroTUG microtissues were fabricated as described in DePalma et al. (2023) [24] (see also Fig. 1A). Briefly, dextran vinyl sulfone (DVS) fibers were electrospun onto an array of polydimethylsiloxane (PDMS) posts attached to a rotating mandrel. The bending stiffness of the posts \(k_{\mathrm{p}}\) can be tuned by altering the geometry of the posts and is measured experimentally, as described in DePalma et al. (2023) [24]. By varying the rotational velocity of the mandrel, the alignment of the fibers can also be controlled, resulting in aligned fibrous matrices that exhibit high anisotropy or random matrices that are more locally isotropic [7]. The DVS fibers present between posts were stabilized by exposing them to UV light. Upon hydration, the unstabilized fibers dissolve, leaving only the fibers suspended between two posts. Secondary crosslinking in
solutions with varying concentrations of photoinitiator (LAP) resulted in matrices of varying stiffness (higher LAP results in stiffer matrices). The stiffness of the matrix was characterized by indenting the matrices incrementally, measuring the resulting post-deflection (and thus the post-force \(F_{\mathrm{p}}\)), and then calculating the global strain of the matrix (Fig. 1B). Further, images of the indented DVS fibers were taken to record their organization and pair it with their force response. This setup allows us to tune and control the post stiffness (soft, \(k_{\mathrm{p}}=0.41\) N/m; stiff, \(k_{\mathrm{p}}=1.2\) N/m), fiber alignment (aligned or random), and fibrous matrix stiffness (soft, LAP\(=0.1\) mg/mL; stiff, LAP\(=5.0\) mg/mL), parameters that determine the mechanical environment where the iPSC-CMs develop [24]. While many permutations are possible, in this paper we studied three: soft fibers/soft posts, stiff fibers/soft posts, and soft fibers/stiff posts, for both aligned and random matrices, leading to a total of six conditions.
After defining matrix conditions, purified cultures of iPSC-CMs were seeded onto the fibroTUG and cultured in this environment for seven days. On day 7, time-lapse videos of the microtissue's spontaneous contractions were acquired (see Video S1). These videos were processed to obtain post-displacement curves as seen in Fig. 1C. Finally, immunofluorescence staining was used to image cell nuclei, titin, and the DVS fibers (see Fig. 1D).
### Image Processing
This subsection details the process of extracting information from the DVS fiber and titin images (Fig. 2) to obtain quantities that characterize the structure of the matrix and the myofibril network that can then be projected into a 2D fibroTUG model.
_DVS fibers_. The processing starts by creating a mask of the fibers that is used to compute the local fiber density \(\rho_{\mathrm{f}}\), alignment \(\mathbf{f}_{0}\), and dispersion \(\kappa_{\mathrm{f}}\) as shown in Fig. 2A. The methods used to compute these quantities are detailed in Supplementary Information Section S1. The fiber density \(\rho_{f}\) takes values between 0 (no fibers) and 1 (fiber), allowing us to define the mechanical presence of fibers. The fiber alignment vector, \(\mathbf{f}_{0}\), provides the direction of the local stiffness anisotropy in the direction of the fibers. Finally, the dispersion \(\kappa_{\mathrm{f}}\) is a parameter used in continuum models [25] to represent regions where fibers follow a distribution around a mean vector. In our case, it allows the local stiffness to move from anisotropic (no dispersion, \(\kappa_{\mathrm{f}}=0\)) to isotropic (\(\kappa_{\mathrm{f}}=0.5\)) depending on the local fibers.
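As a rough illustration of how such fields can be extracted, the sketch below estimates a density, orientation, and dispersion-like map from a binary fiber mask using a classic image structure tensor. This is an assumed implementation shown for illustration; the actual procedure used in the paper is the one detailed in Supplementary Section S1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fiber_fields(mask, sigma=4.0):
    img = gaussian_filter(mask.astype(float), 1.0)
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    # Smoothed structure-tensor components
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # Dominant fiber orientation: perpendicular to the mean gradient direction
    theta = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy) + np.pi / 2.0
    # Coherence in [0, 1] (1 = perfectly aligned); mapped to a dispersion-like
    # value in [0, 0.5], mirroring the kappa_f convention described above
    lam = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    trJ = Jxx + Jyy
    coh = np.divide(lam, trJ, out=np.zeros_like(lam), where=trJ > 1e-12)
    kappa = 0.5 * (1.0 - coh)
    rho = gaussian_filter(mask.astype(float), sigma)      # local fiber density
    return rho, theta, kappa

mask = np.zeros((128, 128)); mask[::8, :] = 1.0           # toy horizontal fibers
rho_f, theta_f, kappa_f = fiber_fields(mask)
```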
_Titin_. Myofibrils were identified via immunofluorescence imaging of iPSC-CMs containing a titin-GFP reporter, which allows for the visualization of the sarcomeres' Z-discs [26]. When images of titin were available for the whole domain, tissue-specific fields describing the structure of the myofibril network were computed. This was done following a similar strategy to the processing of the DVS fibers, where the titin
images were used to find the myofibril density \(\rho_{\rm m}\), alignment \({\bf m}_{0}\), and dispersion \(\kappa_{\rm m}\) (Fig. 2B). The steps to compute these quantities from the images are presented in Supplementary Information Section S2. The myofibril density \(\rho_{\rm m}\) (taking values from 0 to 1) allows us to define the contractile regions. The alignment vector \({\bf m}_{0}\) defines the direction of the contraction and the dispersion \(\kappa_{\rm m}\) activates an isotropic contraction in regions of unorganized myofibrils.
To obtain the visualization of the titin in the whole tissue shown in Fig. 1D, several smaller images were stitched together since a high magnification is needed to have a clear view of the Z-discs. This is a slow process, and since the main objective was to quantify myofibril structural characteristics, we decided to accelerate the process by only imaging the center of the tissues and statistically characterizing the myofibril alignment and density. Given the six different experimental conditions, the myofibril orientation was analyzed and compiled for all the images available (\(N\geq 7\) per condition). A Von Mises distribution [25] was fit to the histogram of angles (measured relative to the post-to-post direction, see Supplementary Information Section S2.1 for more details). The distribution is characterized by a parameter \(\xi\) (assuming the mean is zero), with high values of \(\xi\) indicating myofibrils oriented preferentially in the post-to-post direction. The resulting probability density functions (PDFs) and the data histograms are shown in Fig. 3A. The myofibril density was computed for these partial images and then a mean value was computed per condition (see Fig. 3B). The mean density values were then normalized by the aligned, soft fibers/soft post condition value. Table 1 shows the final density values used. Given a fiber network and geometry, we used the probability fitting and the density measurements specific to each condition to generate computational myofibril fields that followed these two parameters on top of the fiber fields. The procedure to create these fields is shown in Supplementary Information Section S3. One important thing to notice is that whenever this approach was taken, no myofibril dispersion was considered (\(\kappa_{\rm m}=0\)), as it is difficult to generate computationally in a meaningful way. The impact of these considerations was studied in Sections S3.1 and S3.2 of the Supplementary Information.
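For reference, the concentration-parameter fit can be sketched as follows, using scipy's generic maximum-likelihood fit with the mean fixed at zero (the post-to-post axis). The `angles` array here is synthetic stand-in data; in the paper the angles come from the titin images (Supplementary Section S2.1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
angles = stats.vonmises.rvs(kappa=3.07, size=2000, random_state=rng)  # stand-in

# Only the concentration xi is estimated; loc (mean) and scale are fixed.
xi, loc, scale = stats.vonmises.fit(angles, floc=0, fscale=1)
print(f"fitted concentration xi = {xi:.2f}")  # large xi -> tight alignment
```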
### Biomechanical Model
To model microtissue mechanics, we used a constrained mixture continuum mechanics framework [27; 28; 29]. Due to the thinness of the fibroTUG tissues (\(\sim 12\mu m\) compared to the \(\sim 400\mu m\) in length), we consider the tissue domain \(\Omega\subset\mathbb{R}^{2}\). The boundary at the post is denoted by \(\Gamma_{p}\). The reference coordinates of a given point in \(\Omega\) are denoted by \({\bf X}\). Under internal and external loads, this point moves to a deformed position \({\bf x}={\bf X}+{\bf u}\), where \({\bf u}\) is the displacement field. The deformation gradient tensor \({\bf F}=\nabla{\bf u}+{\bf I}\) describes the deformation of the material with respect to the reference coordinates and \(J=\det{\bf F}\) the relative volume
change [28]. We further define \(\mathbf{C}=\mathbf{F}^{T}\mathbf{F}\) as the right Cauchy-Green deformation tensor. Constitutive relations are often expressed in terms of the invariants of \(\mathbf{C}\) [30]. In this work, we considered the following,
\[I_{1}=\mathbf{C}:\mathbf{I},\qquad\bar{I}_{1}=J^{-2/3}I_{1},\]
where \(\bar{I}_{1}\) is the isochoric version of \(I_{1}\)[31]. Furthermore, invariants describing the deformation in the fiber and myofibril direction are,
\[I_{\rm 4f}=\mathbf{C}:\mathbf{f}_{0}\otimes\mathbf{f}_{0},\qquad I_{\rm 4m}= \mathbf{C}:\mathbf{m}_{0}\otimes\mathbf{m}_{0},\]
and these are further modified to account for local dispersion \(\kappa_{\rm f}\), \(\kappa_{\rm m}\)[25, 32],
\[I_{\rm 4f}^{*}=\kappa_{\rm f}I_{1}+(1-2\kappa_{\rm f})I_{\rm 4f},\qquad I_{\rm 4 m}^{*}=\kappa_{\rm m}I_{1}+(1-2\kappa_{\rm m})I_{\rm 4m}.\]
When the fiber dispersion \(\kappa_{\rm f}=0\), \(I_{\rm 4f}^{*}=I_{\rm 4f}\), i.e., the local response of \(I_{\rm 4f}^{*}\) is fully anisotropic, whereas when \(\kappa_{\rm f}=0.5\), \(I_{\rm 4f}^{*}=I_{1}/2\), and the material behaves locally as a fully isotropic material. The same is true for the myofibril invariant \(I_{\rm 4m}^{*}\).
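For reference, a minimal sketch of these invariant computations for a given 2D deformation gradient is shown below; the numerical values are arbitrary examples.

```python
import numpy as np

def invariants(F, f0, kappa_f):
    J = np.linalg.det(F)
    C = F.T @ F                          # right Cauchy-Green tensor
    I1 = np.trace(C)
    I1_bar = J ** (-2.0 / 3.0) * I1      # isochoric I1
    I4f = f0 @ C @ f0                    # squared stretch along f0
    I4f_star = kappa_f * I1 + (1.0 - 2.0 * kappa_f) * I4f
    return I1, I1_bar, I4f, I4f_star

F = np.array([[1.10, 0.05],
              [0.00, 0.95]])             # example 2D deformation gradient
f0 = np.array([1.0, 0.0])                # unit fiber direction
print(invariants(F, f0, kappa_f=0.1))
print(invariants(F, f0, kappa_f=0.5))    # fully isotropic limit: I4f* = I1/2
```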
As mentioned in Section 2.1, a fibroTUG tissue consists of two components, DVS fibers and iPSC-CMs. We assumed the two components work in parallel and, therefore, the strain energy density of the tissue \(\Psi\) is given by the sum of the energy of the fibers \(\Psi_{ECM}\) and the iPSC-CMs \(\Psi_{CM}\),
\[\Psi=\Psi_{ECM}+\Psi_{CM}, \tag{1}\]
The strain energy function \(\Psi_{\rm ECM}\) is given by the strain energy density describing the fiber mechanics \(\Psi_{\rm f}\) and a term that delivers numerical stability in the areas where there are no fibers \(\Psi_{\rm st}\),
\[\Psi_{ECM}=\Psi_{\rm f}(\mathbf{C};\rho_{\rm f},\mathbf{f}_{0},\kappa_{\rm f })+\Psi_{\rm st}(\mathbf{C}).\]
The first term corresponds to a modified neofiber material law [33] and integrates the structural information obtained from the DVS fiber images,
\[\Psi_{\rm f}(\mathbf{C};\rho_{\rm f},\mathbf{f}_{0},\kappa_{\rm f })=\rho_{\rm f}\left(\frac{C_{1}}{4}(I_{\rm 4f}^{*}-1)^{2}+\frac{C_{2}}{2}(\bar{I}_{1}-2) \right), \tag{2}\]
where \(C_{1}\) is the stiffness in the fiber direction and \(C_{2}\) is the isotropic stiffness. This formulation was chosen since the experimental data from the passive stretching of the fibers (Fig. 1B) showed a close-to-linear behavior that is well captured by this material law. The stabilization term is,
\[\Psi_{\rm st}=\frac{K}{2}\left[(J-1)^{2}+(\ln J)^{2}\right]+\mu\,{\rm tr}({\bf E }^{2}). \tag{3}\]
The first term inside the brackets penalizes volumetric changes, while the second corresponds to the deviatoric term of a Saint-Venant Kirchhoff material. The parameters \(K=10^{-3}\) kPa and \(\mu=10^{-2}\) kPa are chosen to be \(\ll C_{1}\), so the mechanical response of the ECM is dominated by \(\Psi_{\rm f}\). A sensitivity analysis confirmed that these parameters have a minimal effect on the results (see Section S6 of the Supplementary Information for more details).
The contractile iPSC-CMs were modeled by a passive component representing the bulk stiffness of the cells (\(\Psi_{\rm c}\)) and an active contraction component (\(\Psi_{\rm a}\)),
\[\Psi_{\rm CM}=\Psi_{\rm c}+\Psi_{\rm a}. \tag{4}\]
The passive component is modeled using the 2D version of the compressible Neohookean material law presented in Pascon et al. (2019) [34], which has the following strain energy density function,
\[\Psi_{\rm c}=\frac{K_{c}}{2}[\ln J]^{2}+\frac{\mu_{\rm c}}{2}\left(I_{1}-2-2 \ln J\right). \tag{5}\]
We use \(\mu_{\rm c}=2\) kPa, which is close to values derived from stress-strain curves obtained from isolated iPSC-CMs [35]. Because of the 2D nature of the EHTs, the compressibility modulus was assumed negligible (i.e., \(K_c=0\)), reflecting the ease with which cells deform out-of-plane under in-plane compression.
The active component is given by,
\[\Psi_{a}=\int_{0}^{I_{4m}^{*}}\eta\rho_{\rm m}\phi(s)\ {\rm d}s. \tag{6}\]
Here, \(\phi\) is a function that models the length-dependent behavior of cardiomyocytes, and it is taken from [36] (see Supplementary Information Section S4 for more details). The parameter \(\eta\) controls the time-varying magnitude of the iPSC-CM activation. Note that we assume the activation only occurs where there are myofibrils (hence the multiplication by \(\rho_{\rm m}\)), but the passive response of the iPSC-CMs is present everywhere in the tissue. This is because iPSC-CM nuclei appear evenly distributed in the images of the tissues, but only some develop myofibrils.
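A pointwise evaluation of the full mixture energy in Eqs. (1)-(6) can be sketched as follows. Note that the length-dependence \(\phi(s)\) of [36] is not reproduced here; the sketch assumes \(\phi\equiv 1\) as a placeholder, and the values of \(C_{2}\), \(\eta\), and the structural fields are illustrative, not from the paper.

```python
import numpy as np

def psi_total(F, f0, m0, rho_f, rho_m, kf, km, C1, C2, eta,
              K=1e-3, mu=1e-2, mu_c=2.0):
    J = np.linalg.det(F)
    C = F.T @ F
    E = 0.5 * (C - np.eye(2))
    I1 = np.trace(C)
    I1_bar = J ** (-2.0 / 3.0) * I1
    I4f_star = kf * I1 + (1 - 2 * kf) * (f0 @ C @ f0)
    I4m_star = km * I1 + (1 - 2 * km) * (m0 @ C @ m0)
    psi_f = rho_f * (C1 / 4 * (I4f_star - 1) ** 2 + C2 / 2 * (I1_bar - 2))   # Eq. (2)
    psi_st = K / 2 * ((J - 1) ** 2 + np.log(J) ** 2) + mu * np.trace(E @ E)  # Eq. (3)
    psi_c = mu_c / 2 * (I1 - 2 - 2 * np.log(J))          # Eq. (5) with K_c = 0
    psi_a = eta * rho_m * I4m_star                       # Eq. (6), with phi = 1
    return psi_f + psi_st + psi_c + psi_a

F = np.array([[0.92, 0.00],
              [0.03, 1.04]])
e1 = np.array([1.0, 0.0])
print(psi_total(F, e1, e1, rho_f=0.8, rho_m=0.6, kf=0.1, km=0.0,
                C1=3.84, C2=0.4, eta=1.0))
```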
The total Cauchy stress \(\mathbf{\sigma}\) is computed from \(\Psi\) using,
\[\mathbf{\sigma}=J^{-1}\frac{\partial\Psi}{\partial\mathbf{F}}\mathbf{F}^{T}, \tag{7}\]
and, under a quasi-static regime, the stress balance is given by,
\[\nabla\cdot\mathbf{\sigma}=0. \tag{8}\]
Throughout the paper, we assess the mechanics of the tissues using the active stress and the strain in the myofibril direction,
\[\sigma_{\mathrm{a,m}}=\mathbf{\sigma}_{\mathrm{a}}:\mathbf{m}\otimes\mathbf{m}, \qquad\varepsilon_{\mathrm{m}}=\frac{1}{2}\left(\mathbf{C}-\mathbf{I}\right): \mathbf{m}\otimes\mathbf{m}. \tag{9}\]
where \(\mathbf{\sigma}_{\mathrm{a}}=J^{-1}\frac{\partial\Psi_{\mathrm{a}}}{\partial \mathbf{F}}\mathbf{F}^{T}\).
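Since \(\Psi\) is available pointwise, Eq. (7) (and the active stress \(\mathbf{\sigma}_{\mathrm{a}}\)) can be evaluated or verified numerically by differentiating the energy with respect to \(\mathbf{F}\). The sketch below does this with central finite differences for the passive cell energy of Eq. (5), and also evaluates the myofibril strain of Eq. (9); it is a numerical check, not the finite-element solver used in the paper.

```python
import numpy as np

def cauchy_stress(psi, F, h=1e-6):
    """sigma = J^{-1} dPsi/dF F^T, with dPsi/dF by central differences."""
    P = np.zeros_like(F)                 # first Piola-Kirchhoff stress, dPsi/dF
    for i in range(2):
        for j in range(2):
            dF = np.zeros_like(F)
            dF[i, j] = h
            P[i, j] = (psi(F + dF) - psi(F - dF)) / (2 * h)
    return (P @ F.T) / np.linalg.det(F)

mu_c = 2.0                               # kPa, passive cell stiffness (see text)
psi_c = lambda F: mu_c / 2 * (np.trace(F.T @ F) - 2
                              - 2 * np.log(np.linalg.det(F)))

F = np.array([[1.15, 0.00],
              [0.05, 0.90]])
m = np.array([1.0, 0.0])                 # myofibril direction
sigma = cauchy_stress(psi_c, F)
eps_m = 0.5 * (m @ (F.T @ F) @ m - 1.0)  # strain in the myofibril direction
print(sigma, m @ sigma @ m, eps_m)
```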
### Data Assimilation
Once the structure of each tissue is defined, the only unknowns remaining in the system are the material parameters of the constitutive law of the fibers, \(C_{1}\) and \(C_{2}\), and of the active component of the iPSC-CMs, \(\eta\). For simplicity, we considered these parameters to be constant within a fibroTUG. To find these tissue-specific values, we assimilate the functional data shown in Fig. 1B-C using a parameter identification strategy introduced in [20]. Briefly, this technique integrates material parameters into the set of state variables, enabling the addition of constraints to match measured data. This method allows solving for both the displacement field and additional material parameters in the same forward simulation. In our case, we divide the assimilation into two steps. First, we use the data from the fiber indentation test (Fig. 1B) to find the stiffness parameters of the fibers, and then we use the post-displacement trace (Fig. 1C) to find the active parameter, \(\eta\). The corresponding boundary value problem equations are described in Section S5 of the Supplementary Information.
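As a heavily simplified stand-in for this assimilation step, one can wrap a black-box forward simulation in a standard least-squares fit of \(\eta\) to the measured post-displacement trace, as sketched below. The `forward_model` surrogate is hypothetical; the paper instead augments the state variables following [20] and solves for the parameters within the forward simulation itself.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_model(eta, t):
    """Hypothetical surrogate for the FE contraction simulation: returns a
    predicted post displacement for a given activation magnitude eta."""
    return eta * np.sin(np.pi * t) ** 2

t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
u_meas = 2.3 * np.sin(np.pi * t) ** 2 + 0.01 * rng.standard_normal(t.size)

fit = least_squares(lambda p: forward_model(p[0], t) - u_meas, x0=[1.0])
print(f"identified eta = {fit.x[0]:.3f}")   # ~2.3 for this synthetic trace
```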
## 3 Results
### Model validation
The objective of this first set of simulations was to assess the ability of the proposed pipeline and model to capture fibroTUG mechanics. To do so, three complete datasets of the soft fibers/soft post condition
were analyzed. The fiber stiffness was set to the mean value found for the fibrous samples of this condition, \(C_{1}=3.84\) kPa (see Section 3.2). By design, the post-displacement (and, therefore, the post-force) data were exactly matched by the simulations as shown in Fig. 4A. The data assimilation enabled us to identify the parameter \(\eta\) that reproduces the post-displacement and force curves. Fig. 4B shows the mean \(\sigma_{\text{a,m}}\) at each time point for each tissue. To assess the ability of the model to capture local deformations, we validated the results of the simulations against the results from MicroBundleCompute, an image tracking software developed specifically for measuring the internal displacements of contracting EHTs [37]. For the three tissues, correlation plots were calculated between the predicted displacements of the simulation and the measured displacements from the video. Fig. 4C and D show the measured and predicted displacement fields in the post-to-post direction (Y) and the perpendicular direction (X) for one of the three datasets. Fig. 4E and F show the displacement's correlation in Y and X, respectively. The \(R^{2}\) parameter for the correlation in the Y direction is always greater than 0.89, while in the X direction, a positive correlation is observed, with \(R^{2}\) values close to 0.4.
### Active stress prediction for experimental conditions
In this section, we focus on understanding the effect of different mechanical environments on the iPSC-CMs' ability to exert contractile stress. Images of five fibrous matrices for each condition were processed as described in 2.2 (with the exception of the random, soft fibers/soft post, where only four matrices with the paired indentation test were available). The fiber stiffness for each available matrix was identified using the indentation experimental data as detailed in 2.4. As expected, the mean fiber stiffness for the stiff matrix cases (mean \(C_{1}=19.47/C_{1}=24.66\) kPa for aligned/random matrices) is about four times stiffer than the soft matrix case (mean \(C_{1}=3.84/C_{1}=4.26\) kPa for aligned/random matrices). More detailed results of this process can be found in Supplementary Information Section S8. We computationally generated five myofibril fields on top of each matrix following the probabilistic characterization shown in Fig. 3. Further, five representative samples (i.e., with a mean and standard deviation similar to the whole set) of post-displacement curves were used as input in the active data assimilation step. This gives us \(N=125\) _in-silico_ models per experimental condition (except for the random, soft fibers/soft post condition, where \(N=100\)).
Fig. 5A-C shows the results for aligned matrices, including bar plots for the post-force and the mean active stress \(\sigma_{\text{a,m}}\), and the \(\sigma_{\text{a,m}}\) field at maximum contraction. Fig. 5D-F shows the same for random matrices. Both \(F_{\text{p}}\) and \(\sigma_{\text{a,m}}\) follow the same trends when comparing condition to condition. For instance, in the aligned case, the magnitudes of both \(F_{\text{p}}\) and \(\sigma_{\text{a,m}}\) are highest in the soft fibers/soft post
condition and lowest in the case with stiff fibers. However, the relative differences between conditions in \(\sigma_{\rm a,m}\) are lower compared to \(F_{\rm p}\). For example, the force output drops from \(3.01\,\mu\)N in the soft fibers/soft post case (grey bars in Fig. 5A) to only \(1.19\,\mu\)N in the case with stiffer fibers (red bars in Fig. 5A), which corresponds to a -60.5% relative change. In contrast, the maximum \(\sigma_{\rm a,m}\) falls from 2.45 to 1.55 kPa between these two cases, only a -36.8% variation.
### Isolating the effect of mechanical variations
To investigate the origin of the \(F_{\rm p}/\sigma_{\rm a,m}\) relative differences, we performed several simulations outside the parameter range of current experimental conditions. Specifically, we studied variations in post stiffness, fiber stiffness, fiber alignment, and myofibril density. To do so, we took a single post-force curve (Fig. 6A) and computed the active stress needed to generate that force output given a certain parameter set. In this experiment, higher values of \(\sigma_{\rm a,m}\) indicate that the tissue is less efficient in transmitting cell stress to force output, since the force output was held constant across all conditions. We performed this test using five random matrices and five aligned matrices with five computationally generated myofibril fields each (Fig. 6B). The alignment of myofibrils was taken from the soft post/soft fibers case (gray curves in Fig. 3A). The stiffness in the fiber direction was set to \(C_{1}=3.84\) and \(C_{1}=19.47\) kPa for the soft fiber and the stiff fiber case, respectively.
First, we considered a constant and uniform myofibril density \(\rho_{\rm m}=1\). Results of this experiment are shown in Fig. 6C for aligned matrices and Fig. 6E for random matrices. On aligned matrices, the active stress necessary to generate the force output represented in Fig. 6A was \(2.12,2.60,1.95\) kPa for the soft fibers/soft post, stiff fibers, and stiff post conditions, respectively. Similar relative trends are observed for the random case, although with higher values compared to their aligned counterparts. Second, to study the effect of myofibril density, we considered the case where \(\rho_{\rm m}=\rho_{\rm data}\), which is the specific density computed from the data (shown in Table 1). Fig. 6D and F show the results of these simulations. The magnitude of \(\sigma_{\rm a,m}\) is elevated in those cases with lower \(\rho_{\rm m}\), when compared with the \(\rho_{\rm m}=1\) simulations, for both aligned and random matrices.
Finally, to understand the reason behind the performance of random matrices compared to aligned matrices, we investigated the relative importance of myofibril alignment and fiber alignment in the transmission of iPSC-CM stress to post-force. To do so, we simulated mixed, synthetic scenarios where aligned myofibril fields (\(\xi=3.07\)) were generated on top of random fiber fields (Rf-Am) and vice-versa, random myofibrils (\(\xi=1.67\)) on aligned fibers (Af-Rm) as shown in Fig. 7A. These cases were compared with the experimental scenarios, aligned fibers with aligned myofibrils (Af-Am) and random fibers with random myofibrils (Rf-Rm), which correspond to the bar plots shown in Fig. 6C,E. We performed the same experiment as in
the previous section with these two mixed cases. Results for the soft fibers/soft post case are shown in Fig. 7B. The results for the other conditions (with stiffer fibers or stiffer posts) are very similar and are shown in Supplementary Information Section S9. The difference in \(\sigma_{\text{a,m}}\) between the experimental conditions Af-Am and Rf-Rm is 0.846 kPa. When random myofibrils are used on top of aligned fibers (Af-Rm), that difference is reduced to 0.731 kPa (-13.6%). When myofibrils are aligned in random matrices (Rf-Am), the reduction in \(\sigma_{\text{a,m}}\) is similar (-16.7%). The highest variation is produced by changing the matrix alignment. For example, the difference is reduced by 86.4% when we fix the myofibril alignment to be aligned and change the matrix alignment (i.e., from Af-Am to Rf-Am).
## 4 Discussion
In this work, we developed a pipeline to generate data-driven computational models of EHTs. By combining comprehensive experimental data with biomechanical computational models, we are able to model EHT mechanics and compute metrics to estimate iPSC-CM function. The pipeline developed allows us to model the explicit fiber structure and myofibril network and to infer the mechanical properties of these components from functional experimental data. This is facilitated by the use of the fibroTUG platform, which provides great control over the mechanical environment and enables obtaining detailed imaging and functional data [24]. The biomechanical computational model augments the analysis and conclusions obtained from the experiments. It also enables testing conditions that are not experimentally feasible, allowing us to decipher the effect of the different system variables on the tissue mechanics. Below, we discuss the results of the different simulations performed using the model.
### Model Validation
For the three validation datasets, the correlation between the displacements predicted by the simulations and the measurements of the MicroBundleCompute software in the post-to-post direction is strong, with high \(R^{2}\) values (mean 0.938) and slopes close to 1 (0.967, 1.047, 0.970 for the three tissues, see Fig. 4E). The correlations in the perpendicular direction are also positive but with slopes further from 1 (1.495, 0.694, 0.623 for the three tissues) and a mean \(R^{2}=0.372\) (see Fig. 4F). The decrease in correlation in the X direction is explained by several factors. First and foremost, the videos are taken using brightfield imaging [24], which mainly shows the fibers' deformations. Our modeling approach uses a constrained mixture framework, where the kinematics of fibers and cells are homogenized, meaning that the computed displacements represent the average displacement of both these components. Furthermore, the displacements in the X direction are
smaller and harder to track as demonstrated by a simple exercise where two people track different features across the contraction cycle. This test showed a decrease in the measured displacement correlation, from \(R^{2}=0.9\) in the Y direction to \(R^{2}=0.72\) in the X direction (see Supplementary Information Section S7 for more details). For these reasons, we believe that imaging-to-modeling comparison is not direct, but it does allow us to confirm that our model is meaningfully capturing the main features of fibroTUG kinematics.
### Active stress prediction for experimental conditions
In Fig. 5, the computational pipeline was used to assess the mechanics of fibroTUGs under different mechanical environments. Both \(F_{\mathrm{p}}\) and \(\sigma_{\mathrm{a,m}}\) show significant differences between the conditions (Fig. 5A-B,D-E). Since \(\sigma_{\mathrm{a,m}}\) is a value that accounts for the structure of all the tissue components, the differences in this value indicate that modifying the mechanical environment where the cells develop will influence their contractile maturity. These differences are also captured by \(F_{\mathrm{p}}\), but the relative differences are influenced by a multitude of factors, including fiber mechanics, myofibrillar density and alignment, and contractile maturity. This shows that the force output reflects several variables of the system, not only the iPSC-CM active stress. This highlights the importance of giving proper context to force output, which is often treated as a direct surrogate of iPSC-CM stress [6; 8; 38].
### Isolating the effect of mechanical variations
One key advantage of using computational models to study EHT mechanics is their flexibility to study scenarios that are difficult, if not impossible, to create _in-vitro_. These simulations can shed light on the influence of different variables of a system by allowing us to change their values individually and measure the variation in an output of interest. We performed controlled simulations where one mechanical variable was modified at a time to assess the changes in the \(F_{\mathrm{p}}/\sigma_{\mathrm{a,m}}\) relationship. In Fig. 6C and E, we can see that when \(\rho_{\mathrm{m}}=1\), changing the ECM stiffness and the boundary conditions has a slight impact on the capacity of the tissue to translate cell active stress into force output. For example, for stiffer fibers, more of the work done by the iPSC-CMs is lost in tugging the less deformable matrix. Conversely, when the post is stiffer, the magnitude of \(\sigma_{\mathrm{a,m}}\) is slightly lower, corresponding to a more efficient transduction of myofibrillar stress to the total output force.
When \(\rho_{\mathrm{m}}=\rho_{\mathrm{data}}\), the conditions with lower myofibril formation need to generate higher values of \(\sigma_{\mathrm{a,m}}\). This is not surprising because there will be less myofibril area to generate the same force, which means that the existing sarcomeres need to generate more stress to compensate. However, our model allows us to measure that effect and compare it with the effect of other mechanical variations. The observed changes due
to poor myofibril formation are much higher than those observed due to post or fiber stiffness (seen as the change of bar plot magnitudes between Fig. 6C,E and Fig. 6D,F). Furthermore, the cases that have lower \(\rho_{\rm m}\) (and that need higher \(\sigma_{\rm a,m}\) to generate the same force) correlate with the cases that have lower force output experimentally (Fig. 5A-B). This indicates that myofibril density is one of the most important mechanical parameters explaining EHT force output. The importance of myofibril formation has also been observed in single-cell models [39].
When evaluating the effect of fiber alignment, we can see that the magnitude of \(\sigma_{\rm a,m}\) is higher in random than in aligned matrices. This makes sense, as in these cases, both fibers and myofibrils are less aligned in the post-to-post direction (see, for example, the PDFs in Fig. 3A). Here, a lot of the work performed by the iPSC-CMs is lost pulling in the transverse direction. In Fig. 7, we decouple the alignment of fibers and myofibrils with the intention of understanding the relative importance of myofibril alignment versus fiber alignment. The results show that aligning the myofibrils on top of a random matrix (Rf-Am) or vice-versa (Af-Rm) only explains a small part of the difference between the all-aligned (Af-Am) and the all-random condition (Rf-Rm). For this reason, we conclude that, _for the range of observed variations in myofibril alignment_, the matrix structure has a larger effect on the force output. It could be that other mechanical environments generate greater myofibril disarray, making this parameter more dominant.
The results of the model complement other biomarkers studied in this platform nicely [24]. For example, the aligned soft fibers/soft post condition showed a higher proportion of connexin 43, more mature forms of myosin (MLC-2v), and lower beats per minute, to mention a few. Our model shows that this condition and the aligned soft fibers/stiff post are the most efficient in transmitting force. However, the case with soft posts presents much higher myofibril strains (see Fig. 6C), which has been shown to be important in iPSC-CM maturation [40]. These observations show the benefits of using a combined computational-experimental approach to understand the different aspects involved in EHT function and how they relate to iPSC-CM maturation.
### Limitations
The computational generation of myofibril fields enabled the flexibility to explore non-experimental scenarios to understand the importance of myofibril alignment. However, generating myofibril networks that are representative of real sarcomere organization is challenging. We performed several tests to assess our approach using the validation datasets. We tested the importance of including myofibril dispersion \(\kappa_{\rm m}\) and studied the prediction error induced when using computationally generated myofibril fields instead of image-based ones. Detailed results of these tests are shown in Supplementary Information Section S3.
Briefly, we showed that not including \(\kappa_{\rm m}\) mainly impacts the deformations in X, with these simulations showing less necking than expected. However, the active stress prediction remains very similar. The test of the artificial myofibril fields showed that the sarcomere strain values are the most affected quantity, but the active stress prediction is, again, very similar to the one computed using image-generated myofibril fields. The results of these tests show that we can use our proposed method to correctly assess changes in \(\sigma_{\rm a,m}\).
A few model assumptions were not based on the experimental data. The passive stiffness of the iPSC-CMs, isolated from the fibers, was not measured in our experiments. This parameter is tricky to measure, as it is known that the cell will change its stiffness during development [41] and that this value also depends on the temperature and the state of its contractile apparatus [42]. Therefore, for simplicity, we considered the passive stiffness of the cells to be 2 kPa, a value similar to the one measured by Ballan et al. [35] in iPSC-CMs, and lower than what is usually measured in mature CMs [43]. We further assumed that the passive response of the cell is isotropic. No studies were found on the anisotropy of iPSC-CMs, and since the iPSC-CMs considered in this study are still immature (thus, the cytoskeleton is less organized), we believe that our assumptions of an isotropic passive response, less stiff than in adult CMs, are reasonable.
Another modeling assumption is that all iPSC-CMs have a synchronous activation and that their contraction can be modeled using a single \(\eta\) parameter. This means that the spatial heterogeneity observed in the \(\sigma_{\rm a,m}\) fields in Fig. 5C,F is the product of only the length-dependent function \(\phi\) in Eq. (6). The simultaneous activation assumption is supported by the small size of these tissues and by the fact that the experimentally measured calcium transients did not show spatial differences [24]. However, it is possible (and probably expected) that not all the iPSC-CMs in one tissue have the same maturity, which could be modeled by having an \(\eta\) field instead of a single scalar value. For example, instead of only using the post-displacement/force as a constraint, we could use the displacement measurements from the MicroBundleCompute software to find a local \(\eta\). Different strategies to do so will be explored in the future. Nevertheless, we believe that using a single \(\eta\) parameter allows us to compute an average activation and that the overall regional differences will not affect the conclusions of this paper.
Finally, we modeled the fibroTUG in 2D, which also forces us to consider a material with low compressibility. This decision is based on the low thickness-to-length ratio (\(\sim 3\%\)) and on the fact that only 2D projected images were available for the fibers, myofibrils, and displacement tracking. Future studies will aim to generate a pipeline to reconstruct the full tissue geometry from 3D stacks obtained from high-magnification imaging techniques and assess the validity of the 2D model.
## 5 Conclusions
This work introduces a data-driven biomechanical computational model of EHTs. The implemented pipeline leverages the experimental data of the fibroTUG platform, which was introduced specifically to study iPSC-CMs function due to mechanics. With the model, we are able to measure the iPSC-CM active stress that replicates the experimental observations. Unlike the tissue force output, this value is a direct measure of iPSC-CM contractile function. We used the model to assess the effects of different mechanical environments on iPSC-CMs stress generation. The results followed the same trends as the force output, indicating maturation differences, but with lower relative differences. This highlights that EHT force generation is the product of different variables besides iPSC-CM contraction. We explore this fact with our model and show that myofibril density is one of the key factors explaining the experimental differences observed. The developed framework opens the door to many other applications in the cardiac tissue engineering field, and it shows how a combination of _in-silico_ with _in-vitro_ approaches can help us better understand the mechanobiology of EHTs.
## 6 Acknowledgements
JJ acknowledges the support of ANID and Fulbright Chile through the Becas BIO program. DN acknowledges funding from the Engineering and Physical Sciences Research Council Healthcare Technology Challenge Award (EP/R003866/1). BMB acknowledges the support from the National Science Foundation through CBET-2033654. All authors acknowledge the support from the National Science Foundation through the Nanosystems Engineering Research Center for Directed Multiscale Assembly of Cellular Metamaterials with Nanoscale Precision (CELL-MET, EEC-1647837).
| Condition | Aligned Fibers: Density | Aligned Fibers: Normalized Density | Random Fibers: Density | Random Fibers: Normalized Density |
| --- | --- | --- | --- | --- |
| Soft Fibers / Soft Post | 0.643 \(\pm\) 0.104 | 1.000 | 0.395 \(\pm\) 0.180 | 0.613 |
| Stiff Fibers / Soft Post | 0.444 \(\pm\) 0.126 | 0.690 | 0.391 \(\pm\) 0.060 | 0.608 |
| Soft Fibers / Stiff Post | 0.593 \(\pm\) 0.128 | 0.922 | 0.563 \(\pm\) 0.119 | 0.875 |

Table 1: Results of the myofibril density characterization (mean \(\pm\) standard deviation) for the different experimental conditions. The normalized density is the mean density of each case divided by the aligned soft fibers/soft post condition mean density (i.e., divided by 0.643).
Figure 1: Experimental setup and data output. (A) Diagram of FibroTUG manufacturing process showing the electrospun fibers, the selective crosslinking, and the cell seeding process. (B) Force vs. strain curves obtained by performing an indentation test in the fibers only. (C) Post displacement traces obtained by tracking the post position in videos of the contractile EHTs. (D) Immunofluorescence images of the DVS fibers and the titin protein that indicates the location of Z-discs.
Figure 2: Image processing results for (A) the DVS fibers showing (from left to right) fiber density (\(\rho_{\rm f}\)), fiber alignment (shown as the absolute angle with respect to the post-to-post direction, \(|\theta_{\rm f}|\)), and fiber dispersion \(\kappa_{\rm f}\), and (B) the titin images showing myofibril density \(\rho_{\rm m}\), myofibril alignment (as the absolute angle with respect to the post-to-post direction, \(|\theta_{\rm m}|\)), and myofibril dispersion \(\kappa_{\rm m}\).
Figure 3: Myofibril characterization. (A) Histograms and their respective Von Mises fit for each condition studied. (B) Example image of the density computation for the different cases. The bar plots at the right show the density value computed for all the images available. All data presented as mean \(\pm\) standard deviation.
Figure 4: Results of the pipeline for three complete datasets. Each color represents a different tissue. (A) Post displacement data (dots) and model (line). (B) Simulated active stress. (C) MicroBundleCompute displacement outputs (left) and simulation results (right) in the post-to-post direction Y. (D) Measured displacements (left) and simulation results in the X direction. (E) Correlation between the tracking and the simulation displacements in Y and (F) in X. (C) and (D) correspond to the case pictured in cyan.
Figure 5: Results at maximum contraction for the experimental conditions of interest. (A) Force output, (B) mean \(\sigma_{\text{a,m}}\), and (C) \(\sigma_{\text{a,m}}\) field at maximum contraction for aligned fibrous matrix. (D) Force output, (E) mean \(\sigma_{\text{a,m}}\), and (F) \(\sigma_{\text{a,m}}\) field at maximum contraction for random fibrous matrix. All data presented as mean \(\pm\) standard deviation. * \(p<0.0001\) by unpaired t-tests.
Figure 6: Impact of different mechanical conditions on the post-force/active stress relationship. (A) Post-force and post-displacement for each condition. Note that the force is kept the same for all conditions, while the post-displacement varies in the case of stiff posts. (B) Procedure to generate _in-silico_ samples (see Supplementary Information Section S3 for more details). An aligned and a random matrix were built from the images, and myofibril fields were computationally created using the corresponding PDF. Results for an aligned condition (C) with \(\rho_{\mathrm{m}}=1\), and (D) with \(\rho_{\mathrm{m}}\) equal to the specific density obtained from data for each case (\(\rho_{\mathrm{data}}\)). (E) and (F) show the results for random matrices with \(\rho_{\mathrm{m}}=1\) and \(\rho_{\mathrm{m}}=\rho_{\mathrm{data}}\). The values below the bar plots in (D) and (F) correspond to the mean density used in these cases (see Table 1). All data presented as mean \(\pm\) standard deviation. * \(p<0.0001\) by unpaired t-tests.
Figure 7: Procedure for the myofibril vs. fiber alignment test. (A) Simulated conditions with aligned fibers and random myofibrils (Af-Rm), and vice-versa (Rf-Am): _in-silico_ tissues are generated using fibrous matrices built from images and the PDFs describing myofibril alignment. These simulations are compared with the aligned (Af-Am) and random (Rf-Rm) experimental conditions. (B) Mean \(\sigma_{\text{a,m}}\) at maximum contraction for the Af-Am, Af-Rm, Rf-Am, Rf-Rm simulations using soft fibers/soft posts. All data presented as mean \(\pm\) standard deviation. * \(p<0.0001\) by unpaired t-tests. |
2309.08250 | Optimization of Rank Losses for Image Retrieval | In image retrieval, standard evaluation metrics rely on score ranking, e.g.
average precision (AP), recall at k (R@k), normalized discounted cumulative
gain (NDCG). In this work we introduce a general framework for robust and
decomposable rank loss optimization. It addresses two major challenges for
end-to-end training of deep neural networks with rank losses:
non-differentiability and non-decomposability. Firstly we propose a general
surrogate for the ranking operator, SupRank, that is amenable to stochastic
gradient descent. It provides an upper bound for rank losses and ensures robust
training. Secondly, we use a simple yet effective loss function to reduce the
decomposability gap between the averaged batch approximation of ranking losses
and their values on the whole training set. We apply our framework to two
standard metrics for image retrieval: AP and R@k. Additionally we apply our
framework to hierarchical image retrieval. We introduce an extension of AP, the
hierarchical average precision $\mathcal{H}$-AP, and optimize it as well as the
NDCG. Finally we create the first hierarchical landmarks retrieval dataset. We
use a semi-automatic pipeline to create hierarchical labels, extending the
large scale Google Landmarks v2 dataset. The hierarchical dataset is publicly
available at https://github.com/cvdfoundation/google-landmark. Code will be
released at https://github.com/elias-ramzi/SupRank. | Elias Ramzi, Nicolas Audebert, Clément Rambour, André Araujo, Xavier Bitot, Nicolas Thome | 2023-09-15T08:51:30Z | http://arxiv.org/abs/2309.08250v1 | # Optimization of Rank Losses for Image Retrieval
###### Abstract
In image retrieval, standard evaluation metrics rely on score ranking, _e.g._ average precision (AP), recall at k (R@k), normalized discounted cumulative gain (NDCG). In this work we introduce a general framework for robust and decomposable rank loss optimization. It addresses two major challenges for end-to-end training of deep neural networks with rank losses: non-differentiability and non-decomposability. Firstly we propose a general surrogate for the ranking operator, SupRank, that is amenable to stochastic gradient descent. It provides an upper bound for rank losses and ensures robust training. Secondly, we use a simple yet effective loss function to reduce the decomposability gap between the averaged batch approximation of ranking losses and their values on the whole training set. We apply our framework to two standard metrics for image retrieval: AP and R@k. Additionally we apply our framework to hierarchical image retrieval. We introduce an extension of AP, the hierarchical average precision \(\mathcal{H}\)-AP, and optimize it as well as the NDCG. Finally we create the first hierarchical landmarks retrieval dataset. We use a semi-automatic pipeline to create hierarchical labels, extending the large-scale Google Landmarks v2 dataset. The hierarchical dataset is publicly available at github.com/cvdfoundation/google-landmark. Code will be released at github.com/elias-ramzi/SupRank.
Image Retrieval, Ranking, Average Precision, Hierarchical Ranking, Hierarchical Average Precision, Non-Decomposable
## I Introduction
Image retrieval (IR) is a major task in computer vision. The goal is to retrieve "similar" images to a query in a database. In modern computer vision this is achieved by learning a space of image representation, _i.e._ embeddings, where "similar" images are close to each other.
The performances of IR systems are often measured using ranking-based metrics, _e.g._ average precision (AP), recall rate at k (R@k), Normalized Discounted Cumulative Gain (NDCG). These metrics penalize retrieving non-relevant images before other remaining relevant images.
Although these metrics are suited for image retrieval, their use for training deep neural networks is limited. They have two main drawbacks: i) they are not amenable to stochastic gradient descent (SGD) and thus cannot be used directly to train deep neural networks (DNN), ii) they are not decomposable.
There is a rich literature on proxy losses for the task of image retrieval, using triplet losses [1, 2, 3, 4, 5, 6, 7, 8, 9] or cross-entropy based losses [10, 11, 12, 13, 14, 15]. There has also been extensive work on making rank losses amenable to gradient descent [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28]. These either create coarse upper bounds of the target metric, or tighter approximations that loosen the upper bound property, which affects final performances.
During rank loss training, the loss averaged over batches generally underestimates its value on the whole training dataset, which we refer to as the _decomposability gap_. In image retrieval, attempts to circumvent the problem involve _ad hoc_ methods based on hard batch sampling strategies [5, 29, 30], storing all training representations/scores [31, 32] or using larger batches [24, 25, 28], leading to complex models with a large computation or memory overhead.
The core of our approach is a unified framework, illustrated in Fig. 1 and detailed in Sec. III, to optimize rank losses for both hierarchical and standard image retrieval. Specifically, we propose a smooth approximation of the rank which is amenable to SGD and is an upper bound on the true rank, which leads to smooth losses that are upper bounds of the true losses. At training time, we additionally introduce a novel objective to reduce the non-decomposability of smooth rank losses without the need to increase the batch size.
Our framework for end-to-end training of DNNs is illustrated in Fig. 1. Using a DNN \(f_{\theta}\), we encode both the query and the rest of the images in the batch. Optimizing the rank loss supports the correct (partial) ordering within a batch, based on our surrogate of the rank, SupRank. Optimizing the decomposability loss encourages the positives to be ranked before negative items, even those that are not present in the batch. Both losses are amenable to gradient descent, which makes it possible to update the model parameters with SGD.
Our framework can be used to optimize rank losses for both hierarchical and non-hierarchical image retrieval. We first show how to instantiate our framework for non-hierarchical image retrieval by optimizing two ranking-based metrics, namely AP and R@k. We show the importance of the two components of our framework in ablation studies. Using our AP surrogate, we achieve state-of-the-art image retrieval performances across 3 datasets and 3 neural network architectures.
In a second instantiation we focus on hierarchical image retrieval [33, 34, 35]. Because metrics used to evaluate fine-grained image retrieval rely on binary labels, _i.e._ similar or dissimilar, they are unable to take into account the severity of the errors. This leads methods that optimize these metrics to lack robustness: when they make errors, they tend to make severe ones. Hierarchical image retrieval can be used to mitigate this issue by taking into account non-binary similarity between labels. We introduce the hierarchical average precision, \(\mathcal{H}\)-AP, a new metric that extends the AP to non-binary settings. Using our optimization framework, we exhibit how optimizing \(\mathcal{H}\)-AP and the well-known NDCG leads to competitive results on fine-grained image retrieval metrics, while outperforming both binary methods and hierarchical baselines by large margins when considering hierarchical metrics.
Finally we introduce the first hierarchical landmarks retrieval dataset, \(\mathcal{H}\)-GLDv2, extending the well-known Google Landmarks v2 retrieval dataset (GLDv2) [36]. While landmarks retrieval has been one of the most popular domains in image retrieval, it lacked a hierarchical dataset. \(\mathcal{H}\)-GLDv2 is a large-scale dataset with \(1.4\)m images and three levels of hierarchy: \(100\)k unique landmarks, \(78\) supercategories, and \(2\) top-level categories (human-made and natural landmarks). The labels are publicly available at github.com/cvdfoundation/google-landmark.
Initial results of our work have been presented in [37, 35]. In this
work, we unify the methods from these two papers into a framework for the optimization of rank losses, naturally supporting both standard and hierarchical image retrieval problems. Additionally, we include more comprehensive experiments, to consider different decomposability objectives, apply our framework to the recent R@k loss [28] and optimize the NDCG in the hierarchical setting. Finally, in this work we introduce the first hierarchical image retrieval dataset in the domain of landmarks, which is incorporated for a more comprehensive benchmarking of our method.
## II Related work
### _Image Retrieval proxy losses_
The Image Retrieval community has designed several families of methods to optimize metrics such as AP and R@k. Methods that rely on triplet-wise losses, like pair losses [1, 2, 3], triplet losses [4, 5, 6], or larger tuplets [7, 8, 9] learn comparison relations between instances. These metric learning methods optimize a very coarse upper bound on AP and need complex post-processing and tricks to be effective. Other methods using proxies have been introduced to lower the computational complexity of triplet based training [10, 11, 12, 13, 14, 15]: they learn jointly a deep model and weight matrix that represent proxies using a cross-entropy based loss. Proxies are approximations of the original data points that should belong to their neighborhood.
### _Rank loss approximations_
Studying smooth rank surrogate losses has a long history. One option for training with rank losses is to design smooth upper bounds. Seminal works based on structural SVMs [16, 17], with extensions to speed up the “loss-augmented inference” [18] or to adapt to weak supervision [19], were designed to optimize AP. Generic black-box combinatorial solvers have been introduced [20] and applied to AP optimization [32]. To overcome the brittleness of AP with respect to small score variations, an _ad hoc_ perturbation is applied to positive and negative scores during training. These methods provide elegant AP upper bounds, but are generally coarse AP approximations.
Other approaches rely on designing smooth approximations of the rank function. This is done in soft-binning techniques [21, 22, 23, 24, 25] by using a smoothed discretization of similarity scores. Other approaches explicitly approximate the non-differentiable rank function using neural networks [26], or with a sum of sigmoid functions in the Smooth-AP approach [27] or the more recent Smooth-Recall loss [28]. These approaches enable accurate surrogates by providing tight and smooth approximations of the rank function. However, they do not guarantee that the resulting loss is an upper bound on the true loss. The SupRank introduced in this work is based on a smooth approximation of the rank function leading to an upper bound on the true loss, making our approach both accurate and robust.
### _Decomposability in AP optimization_
Batch training is mandatory in deep learning. However, the non-decomposability of AP is a severe issue, since it yields an inconsistent AP gradient estimator.
Non-decomposability is related to sampling informative constraints in simple AP surrogates, _e.g._ triplet losses, since the constraints' cardinality on the whole training set is prohibitive. This has been addressed by efficient batch sampling [29, 30, 38] or by selecting informative constraints within mini-batches [7, 39, 40, 30]. In the cross-batch memory technique [31], the authors assume a slow drift of the learned representations, allowing them to store past representations and perform global mining in pair-based deep metric learning.
In AP optimization, the non-decomposability has essentially been addressed by a brute-force increase of the batch size [20, 24, 25, 28]. This incurs a significant computation and memory overhead, generally involving a two-step approach that first computes the AP loss and subsequently re-computes activations and back-propagates gradients. In contrast, our loss does not add
Fig. 1: Illustration of our unified framework which supports both hierarchical and non-hierarchical cases. We use a deep neural network \(f_{\theta}\) to embed images. We then optimize its weights in an end-to-end manner using two losses: 1) we optimize the ranking-based evaluation metric using an upper bound approximation of the rank, \(\text{rank}_{s}^{-}\), as described in Sec. III-B, enforcing the batch’s positive embeddings to have higher cosine similarity with the query than the batch’s negatives; 2) we reduce the decomposability gap, \(DG\), of rank losses using a decomposability loss as described in Sec. III-C, that supports that positives have higher similarity with the query than all negatives even outside the batch.
any overhead and enables good performances for AP optimization even with small batches.
### _Hierarchical predictions and metrics_
There has been a recent regain of interest in Hierarchical Classification (HC) [41, 42, 43], to learn robust models that make “better mistakes” [42]. However, HC is evaluated in the _closed set_ setting, _i.e._ train and test classes are the same, whereas hierarchical image retrieval considers the _open set_ paradigm, where train and test classes are distinct, to better evaluate the generalization abilities of learned models.
The Information Retrieval community uses datasets where documents can be more or less relevant depending on the query [44, 45]. The quality of their retrieval engines is quantified using ranking-based metrics such as the NDCG [46, 47]. Several works have investigated how to optimize the NDCG, _e.g._ using pairwise losses [48] or smooth surrogates [49, 50, 51, 52]. These works however focus on NDCG and come without theoretical guarantees: the surrogates are approximations of the NDCG but not _lower bounds_, _i.e._ their maximization does not imply improved performances during inference. An additional drawback is that NDCG does not relate easily to average precision [53], the most common metric in image retrieval. Fortunately, some work has been done to extend AP to a graded setting where relevance between instances is not binary [54, 55]. The graded Average Precision of [54] is the closest to our work as it leverages SoftRank for direct optimization of non-binary relevance, although it has significant shortcomings: there is no guarantee that the SoftRank surrogate actually minimizes the graded AP, and it requires annotating datasets with pairwise relevances, which is impractical for large-scale settings in image retrieval.
Recently, the authors of [33] introduced three new hierarchical benchmarks datasets for image retrieval, in addition to a novel hierarchical loss CSL. CSL extends proxy-based triplet losses to the hierarchical setting. However, this method faces the same limitation as triplet losses: minimizing CSL does not explicitly optimize a well-behaved hierarchical evaluation metric, _e.g._\(\mathcal{H}\)-AP. We show experimentally that our method significantly outperforms CSL [33] both on hierarchical metrics and AP-level evaluations.
### _Hierarchical datasets_
Hierarchical trees are available for a large number of datasets, such as CUB-200-2011 [56], Cars196 [57], InShop [58], Stanford Online Products [59], and notably _large-scale_ ones such as iNaturalist [60], the three DyML datasets [33] and Imagenet [61]. Hierarchical labels are also less difficult to obtain than fine-grained ones, since hierarchical relations can be semi-automatically obtained by grouping fine-grained labels. This was previously done by [43] or by using the large lexical database Wordnet [62], _e.g._ for Imagenet in [61] and for the SUN database in [63]. In the same spirit, we introduce for the first time a hierarchical dataset for the landmark instance retrieval problem: \(\mathcal{H}\)-GLDv2. We extend the well-known Google Landmarks Dataset v2 [36] with hierarchical labels using a semi-automatic pipeline, leveraging category labels mined from Wikimedia Commons and substantial manual cleaning.
## III Smooth and decomposable rank losses
### _Preliminaries_
Let us consider a retrieval set \(\Omega=\{\mathbf{x}_{j}\}_{j\in\llbracket 1;N\rrbracket}\) composed of \(N\) elements, and a set of \(M\) queries \(\mathcal{Q}\). For each query \(\mathbf{q}_{i}\), each element in \(\Omega\) is assigned a relevance \(\operatorname{rel}(\mathbf{x}_{j},\mathbf{q}_{i})\in\mathbb{R}\)[44], such that \(\operatorname{rel}(\mathbf{x}_{j},\mathbf{q}_{i})>0\) (resp. \(\operatorname{rel}(\mathbf{x}_{j},\mathbf{q}_{i})=0\)) if \(\mathbf{x}_{j}\) is relevant (resp. irrelevant) with respect to \(\mathbf{q}_{i}\). For the standard image retrieval discussed in Sec. IV, \(\operatorname{rel}(\mathbf{x}_{j},\mathbf{q}_{i})=1\) if \(x_{j}\) and \(q_{i}\) share the same fine-grained label and \(0\) otherwise. In the hierarchical image retrieval setting \(\operatorname{rel}(\mathbf{x}_{j},\mathbf{q}_{i})\) models more complex pairwise relevance discussed in Sec. V. Positive relevance defines the set of positives for a query, _i.e._\(\Omega^{+}_{i}:=\{\mathbf{x}_{j}\in\Omega|\operatorname{rel}(\mathbf{x}_{j},\mathbf{q}_{i}) >0\}\). Instances with a relevance of \(0\) are the negatives, _i.e._\(\Omega^{-}_{i}:=\{\mathbf{x}_{j}\in\Omega|\operatorname{rel}(\mathbf{x}_{j},\mathbf{q}_{i} )=0\}\).
For each \(\mathbf{x}_{j}\in\Omega\), we compute its embedding \(\mathbf{v}_{\mathbf{j}}\in\mathbb{R}^{d}\). To do so we use a neural network \(f_{\mathbf{\theta}}\) parameterized by \(\mathbf{\theta}\): \(\mathbf{v}_{\mathbf{j}}:=f_{\mathbf{\theta}}(\mathbf{x}_{j})\). In the embedding space \(\mathbb{R}^{d}\), we compute the cosine similarity score between each query \(\mathbf{q}_{i}\) and each element in \(\Omega\): \(s(\mathbf{q}_{i},\mathbf{x}_{j})=\mathbf{v}_{\mathbf{q}_{i}}^{T}\mathbf{v}_{\mathbf{j}}/(\|\mathbf{v}_{\mathbf{q}_{i}}\|\,\|\mathbf{v}_{\mathbf{j}}\|)\).
During training, our goal is to optimize, for each query \(\mathbf{q}_{i}\), the model parameters \(\mathbf{\theta}\) such that the ranking, _i.e._ decreasing order of cosine similarity, matches the ground truth ranking, _i.e._ decreasing order of relevances. More precisely, we optimize a ranking-based metric \(0\leq\mathcal{M}_{i}\leq 1\) that penalizes inversion between positive instances and negative ones. The target loss is averaged over all queries:
\[\mathcal{L}_{\mathcal{M}}(\mathbf{\theta})=1-\frac{1}{M}\sum_{i=1}^{M}\mathcal{M}_{ i}(\mathbf{\theta}) \tag{1}\]
As previously mentioned, there are two main challenges with SGD optimization of rank losses: i) they are not differentiable with respect to \(\mathbf{\theta}\), and ii) they do not linearly decompose into batches. We propose to address both issues: we introduce a robust differentiable ranking surrogate, SupRank (Sec. III-B), and add a decomposable objective (Sec. III-C) to improve rank losses' behavior in a batch setting. Our final **RO**bust and **D**ecomposable (ROD) loss \(\mathcal{L}_{\text{ROD-}\mathcal{M}}\) combines a differentiable surrogate loss of a target ranking-based metric, \(\mathcal{L}_{\text{Sup-}\mathcal{M}}\), and the decomposable objective \(\mathcal{L}_{\text{DG}}\) with a linear combination, weighted by the hyper-parameter \(\lambda\):
\[\mathcal{L}_{\text{ROD-}\mathcal{M}}(\mathbf{\theta})=(1-\lambda)\cdot\mathcal{L}_{ \text{Sup-}\mathcal{M}}(\mathbf{\theta})+\lambda\cdot\mathcal{L}_{\text{DG}}^{*}( \mathbf{\theta}) \tag{2}\]
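To make the combination in Eq. (2) concrete, below is a minimal PyTorch sketch of one training step with the combined objective. The callables `sup_metric_loss` and `dg_loss` stand for the surrogate and decomposability losses instantiated in the following sections; the function name `rod_training_step` and the default \(\lambda=0.1\) are our own illustrative choices.

```python
import torch

def rod_training_step(model, images, labels, sup_metric_loss, dg_loss,
                      optimizer, lam=0.1):
    """One SGD step on the combined ROD loss of Eq. (2).

    sup_metric_loss and dg_loss map (embeddings, labels) to scalar losses;
    lam (lambda) is a placeholder value, to be tuned per dataset.
    """
    embeddings = torch.nn.functional.normalize(model(images), dim=1)
    loss = (1 - lam) * sup_metric_loss(embeddings, labels) \
         + lam * dg_loss(embeddings, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```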
### _SupRank: smooth approximation of the rank_
The non-differentiability in rank losses comes from the ranking operator, which can be viewed as counting the number of instances that have a similarity score greater than the considered instance1, _i.e._:
Footnote 1: For the sake of readability we drop in the following the dependence on \(\mathbf{\theta}\) for the rank, _i.e._\(\operatorname{rank}(k):=\operatorname{rank}(k,\mathbf{\theta})\) and on the query for the similarity, _i.e._\(s_{j}:=s(q_{i},x_{j})\).
\[\operatorname{rank}(k)=\underbrace{1+\sum_{j\in\Omega_{i}^{+}\setminus\{k\}}H(s_{j}-s_{k})}_{\operatorname{rank}^{+}(k)}+\underbrace{\sum_{j\in\Omega_{i}^{-}}H(s_{j}-s_{k})}_{\operatorname{rank}^{-}(k)} \tag{3}\]

where \(H\) is the Heaviside (step) function, with \(H(t)=1\) if \(t\geq 0\) and \(0\) otherwise. Note that for both \(\operatorname{rank}^{+}(k)\) and \(\operatorname{rank}^{-}(k)\) in Eq. (3), \(k\) is always a positive instance, _i.e._ in \(\Omega^{+}\), while \(x_{j}\) ranges over \(\Omega^{-}\) for \(\operatorname{rank}^{-}\) and over \(\Omega^{+}\) for \(\operatorname{rank}^{+}\).
From Eq. (3) it becomes clear that the rank is non-amenable to gradient descent optimization due to the Heaviside (step) function \(H\) (see Fig. 2a), whose derivatives are either zero or undefined.
**SupRank** To provide rank losses amenable to SGD, we introduce a smooth approximation of the rank function. We propose a different behavior between rank\({}^{+}(k)\) and rank\({}^{-}(k)\) in Eq. (3) by defining two functions \(H^{+}\) and \(H^{-}\). For rank\({}^{+}(k)\), we keep the Heaviside function, _i.e._ \(H^{+}=H\) (see Fig. 2a). This ignores rank\({}^{+}(k)\) in gradient-based ranking optimization; it has been observed in other works that optimizing rank\({}^{-}\) is sufficient [64]. For rank\({}^{-}(k)\) we want a smooth surrogate \(H^{-}\) for \(H\) that is both amenable to SGD and an upper bound on the Heaviside function. We define the following \(H^{-}\), illustrated in Fig. 2b, that satisfies both:
\[H^{-}(t)\!=\!\begin{cases}\sigma(\frac{t}{\tau})&\text{if}\,t\!\leq\!0\\ \sigma(\frac{t}{\tau})\!+\!0.5&\text{if}\,t\!\in\![0;\!\delta]&\text{with}\, \delta\!\geq\!0\\ \rho\!\cdot\!(t\!-\!\delta)\!+\!\sigma(\frac{t}{\tau})\!+\!0.5&\text{if}\,t\!>\! \delta\end{cases} \tag{4}\]
where \(\sigma\) is the sigmoid function (Fig. 2c), \(\delta\), \(\tau\) and \(\rho\) are hyper-parameters. \(\delta\) is chosen such that the sigmoidal part of \(H^{-}\) reaches the saturation regime and is fixed for the rest of the paper (see supplementary Sec. A-C). We keep \(\tau\) as in [27] and study the robustness to \(\rho\) in Sec. VII-A4.
From \(H^{-}\) in Eq. (4), we define the following rank surrogate that can be used plug-and-play for rank losses optimization:
\[\operatorname{rank}_{s}^{-}(k)=\sum_{j\in\Omega_{i}^{-}}H^{-}(s_{j}-s_{k}) \tag{5}\]
**SupRank has two main features:**
\(\blacktriangleright\)**1** **Surrogate losses based on SupRank are upper bounds of the true losses**, since \(H^{-}\) in Eq. (4) is an upper bound of the step function (Fig. 2b). This is an important property, since it ensures that the model keeps training until the correct ranking is obtained. It is worth noting that existing smooth rank approximations in the literature [21, 24, 25, 27] do not fulfill this property.
\(\blacktriangleright\)**2** **SupRank brings training gradients until the correct ranking plus a margin is fulfilled.** When the ranking is incorrect, an instance with a lower relevance \(\mathbf{x_{j}}\) is ranked before an instance of higher relevance \(\mathbf{x_{k}}\), thus \(s_{j}\!>\!s_{k}\) and \(H^{-}(s_{j}\!-\!s_{k})\) in Eq. (4) has a non-zero derivative. We use a sigmoid to have a large gradient when \(s_{j}\!-\!s_{k}\) is small. To overcome vanishing gradients of the sigmoid for large values \(s_{j}\!-\!s_{k}\), we use a linear function ensuring constant \(\rho\) derivative. When the ranking is correct (\(s_{j}\!<\!s_{k}\)), we enforce robustness by imposing a margin parameterized by \(\tau\) (sigmoid in Eq. (4)). This margin overcomes the brittleness of rank losses, which vanish as soon as the ranking is correct [20, 22, 24].
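A minimal PyTorch sketch of \(H^{-}\) (Eq. (4)) and of the resulting SupRank surrogate \(\operatorname{rank}_{s}^{-}\) (Eq. (5)) is given below; the default values of \(\tau\), \(\rho\) and \(\delta\) are illustrative placeholders, not the tuned values of the paper.

```python
import torch

def h_minus(t, tau=0.01, rho=100.0, delta=0.05):
    """Smooth upper bound H^- on the Heaviside step (Eq. (4)).

    Sigmoid for t <= 0 (margin on correctly ranked pairs), shifted sigmoid
    on (0, delta], then a linear part of slope rho so that gradients never
    vanish on wrongly ranked pairs. Default values are placeholders.
    """
    sig = torch.sigmoid(t / tau)
    return torch.where(t <= 0, sig,
                       torch.where(t <= delta, sig + 0.5,
                                   rho * (t - delta) + sig + 0.5))

def sup_rank_neg(s_pos, s_neg, tau=0.01, rho=100.0, delta=0.05):
    """SupRank surrogate rank_s^-(k) of Eq. (5), for all positives at once.

    s_pos: (P,) scores of the positives, s_neg: (N,) scores of the negatives.
    """
    diff = s_neg.unsqueeze(0) - s_pos.unsqueeze(1)  # (P, N) entries s_j - s_k
    return h_minus(diff, tau, rho, delta).sum(dim=1)
```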
### _Decomposable rank losses_
As illustrated in Eq. (1), rank losses decompose linearly between queries \(\mathbf{q_{i}}\), but do not between retrieved instances. We therefore focus our analysis of the non-decomposability on a single query. For a retrieval set \(\Omega\) of \(N\) elements, we consider \(\{\mathcal{B}_{b}\}_{b\in\{1:K\}}\) batches of size B, such that \(N/B\!=\!K\!\in\!\mathbb{N}\). Let \(\mathcal{M}_{b}(\mathbf{\theta})\) be the metric \(\mathcal{M}\) in batch \(b\) for a query, we define the "decomposability gap" \(DG\) as:
\[DG(\mathbf{\theta})\!=\!\frac{1}{K}\!\sum_{b=1}^{K}\!\mathcal{M}_{b}(\mathbf{\theta})\! -\!\mathcal{M}(\mathbf{\theta}) \tag{6}\]
\(DG\) in Eq. (6) is a direct measure of the non-decomposability of any metric \(\mathcal{M}\) (illustrated for AP in Sec. A-A). Our motivation here is to decrease \(DG\), _i.e._ to have the average metric over the batches as close as possible to the metric computed over the whole training set. To this end, we use an additional objective during training that aims at reducing the non-decomposability.
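The following NumPy toy example illustrates \(DG\) of Eq. (6) on a single query: the AP averaged over two batches overestimates the AP on the whole set, so the batch estimator is biased. The scores, labels and batch split below are made up purely for illustration.

```python
import numpy as np

def average_precision(scores, labels):
    """AP for one query; labels are 1 (positive) / 0 (negative)."""
    lab = labels[np.argsort(-scores)]
    ranks = np.arange(1, len(lab) + 1)
    return float((lab * np.cumsum(lab) / ranks).sum() / lab.sum())

# Toy retrieval set: positives and negatives with interleaved scores.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
labels = np.array([1, 0, 1, 0, 1, 0, 1, 0])

global_ap = average_precision(scores, labels)             # ~0.71
batch_aps = [average_precision(scores[idx], labels[idx])  # ~0.83 each
             for idx in (np.array([0, 1, 4, 5]), np.array([2, 3, 6, 7]))]
dg = np.mean(batch_aps) - global_ap                       # DG ~0.12 > 0
print(f"global AP={global_ap:.3f}, mean batch AP={np.mean(batch_aps):.3f}, DG={dg:.3f}")
```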
**Pair-based decomposability loss** We use the following decomposability loss \(\mathcal{L}_{\text{DG}}\) that was first introduced in ROADMAP [37], and used in other work [65] to reduce the non-decomposability of ranking losses:
\[\mathcal{L}_{\text{DG}}(\mathbf{\theta})=\frac{1}{|\Omega^{+}|}\sum_{\mathbf{x_{j}}\in\Omega^{+}}[\alpha-s_{j}]_{+}+\frac{1}{|\Omega^{-}|}\sum_{\mathbf{x_{j}}\in\Omega^{-}}[s_{j}-\beta]_{+} \tag{7}\]
where \([x]_{+}=\max(0,x)\). \(\mathcal{L}_{\text{DG}}\) is a pair-based loss [2], which we revisit in our context to "calibrate" the scores between mini-batches. Intuitively, the fact that the positive (resp. negative) scores are above (resp. below) a threshold \(\alpha\) (resp. \(\beta\)) in the mini-batches makes \(\mathcal{M}_{b}\) closer to \(\mathcal{M}\), which we support with an analysis in Sec. A-B.
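A possible PyTorch implementation of Eq. (7) for one query is sketched below; the threshold values \(\alpha\) and \(\beta\) are illustrative placeholders.

```python
import torch

def l_dg_pairs(scores, labels, alpha=0.9, beta=0.6):
    """Pair-based decomposability loss of Eq. (7) for one query.

    Calibrates scores across mini-batches: positives pushed above alpha,
    negatives pushed below beta. Threshold values are placeholders.
    """
    pos, neg = scores[labels == 1], scores[labels == 0]
    return torch.relu(alpha - pos).mean() + torch.relu(neg - beta).mean()
```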
**Proxy-based decomposability loss** In HAPPIER [35] we used the following proxy-based loss as the decomposability objective:
\[\mathcal{L}_{\text{DG}}^{*}(\theta)=-\log\left(\frac{\exp(v^{\top}p_{y}/\eta)}{\sum_{p_{z}\in\mathcal{Z}}\exp(v^{\top}p_{z}/\eta)}\right), \tag{8}\]
Fig. 2: Proposed surrogates for the Heaviside (step) function: \(H^{+}(x)\) in Fig. 2a and \(H^{-}(x)\) in Fig. 2b. Using \(H^{-}\) in Eq. (5) leads to smooth rank losses that upper-bound the true ones. In addition, \(H^{-}(x)\) back-propagates gradients until the correct ranking is satisfied, in contrast to the sigmoid used in [27] (Fig. 2c).
where \(p_{y}\) is the normalized proxy corresponding to the fine-grained class of the embedding \(v\), \(\mathcal{Z}\) is the set of proxies, and \(\eta\) is a temperature scaling parameter. \(\mathcal{L}_{\text{DG}}^{*}\) is a classification-based proxy loss [11] that imposes a margin between instances and the proxies. \(\mathcal{L}_{\text{DG}}^{*}\) thus has a similar effect to \(\mathcal{L}_{\text{DG}}\) on the decomposability of rank losses. In our experiments we show that both decomposability losses improve rank loss optimization.
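Below is a sketch of Eq. (8) in PyTorch, written as a normalized-softmax classification loss over learnable proxies; the temperature value is an illustrative placeholder.

```python
import torch
import torch.nn.functional as F

def l_dg_proxy(embeddings, labels, proxies, eta=0.05):
    """Proxy-based decomposability loss of Eq. (8), normalized-softmax form.

    proxies: learnable (C, d) matrix, one proxy per fine-grained class;
    labels: class indices; eta is a placeholder temperature.
    """
    v = F.normalize(embeddings, dim=1)
    p = F.normalize(proxies, dim=1)
    # Cross-entropy on cosine logits = -log softmax at the true proxy.
    return F.cross_entropy(v @ p.t() / eta, labels)
```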
## IV Instantiation to standard image retrieval
In this section we apply the framework described previously to standard image retrieval where \(\operatorname{rel}(x,q)\in\{0,1\}\). Specifically we show how to directly optimize two metrics that are widely used in the image retrieval community, _i.e._ AP and R@k.
### _Application to Average Precision_
The average precision measures the quality of a ranking by penalizing inversion between positives and negatives. It strongly penalizes inversion at the top of the ranking. It is defined for each query \(q_{i}\) as follows:
\[\text{AP}_{i}=\frac{1}{|\Omega_{i}^{+}|}\sum_{k\in\Omega_{i}^{+}}\frac{\text{ rank}^{+}(k)}{\text{rank}(k)} \tag{9}\]
The overall AP loss \(\mathcal{L}_{\text{AP}}\) is averaged over all queries:
\[\mathcal{L}_{\text{AP}}(\boldsymbol{\theta})=1-\frac{1}{M}\sum_{i=1}^{M} \text{AP}_{i}(\boldsymbol{\theta}) \tag{10}\]
Using our surrogate of the rank, SupRank, we define the following AP surrogate loss:
\[\mathcal{L}_{\text{Sup-AP}}(\boldsymbol{\theta})=1-\frac{1}{M}\sum_{i=1}^{M} \frac{1}{|\Omega_{i}^{+}|}\sum_{k\in\Omega_{i}^{+}}\frac{\text{rank}^{+}(k)}{ \text{rank}^{+}(k)+\text{rank}^{-}_{s}(k)} \tag{11}\]
Finally we equip the AP surrogate loss with the \(\mathcal{L}_{\text{DG}}\) loss to support the decomposability of the AP, yielding our **RO**bust **A**nd **D**eco**M**posable **A**verage **P**recision:
\[\mathcal{L}_{\text{ROADMAP}}(\boldsymbol{\theta})=(1-\lambda)\cdot\mathcal{L} _{\text{Sup-AP}}(\boldsymbol{\theta})+\lambda\cdot\mathcal{L}_{\text{DG}}( \boldsymbol{\theta}) \tag{12}\]
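A sketch of \(\mathcal{L}_{\text{Sup-AP}}\) (Eq. (11)) for a single query, reusing `sup_rank_neg` from the SupRank sketch of Sec. III-B: \(\operatorname{rank}^{+}\) uses the exact Heaviside (no gradient flows through it), while negatives go through the smooth upper bound, so the loss upper-bounds \(1-\text{AP}\).

```python
import torch

def sup_ap_loss(scores, labels, tau=0.01, rho=100.0, delta=0.05):
    """1 - Sup-AP of Eq. (11) for a single query.

    rank^+ uses the exact Heaviside (H^+ = H); negatives go through
    sup_rank_neg (the smooth upper bound of Sec. III-B).
    """
    s_pos, s_neg = scores[labels == 1], scores[labels == 0]
    # rank^+(k): number of positives scored at least s_k (k included).
    rank_pos = (s_pos.unsqueeze(0) >= s_pos.unsqueeze(1)).float().sum(dim=1)
    rank_neg = sup_rank_neg(s_pos, s_neg, tau, rho, delta)
    return 1 - (rank_pos / (rank_pos + rank_neg)).mean()
```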
### _Application to the Recall at k_
Another metric often used in image retrieval is the recall rate at k. In the image retrieval community it is often defined as:
\[\text{R@k}=\frac{1}{M}\sum_{i=1}^{M}\mathbbm{1}\big{(}\text{at least one positive element in top-}k\big{)} \tag{13}\]
However in the literature the recall is most often defined as:
\[\text{TR@k}=\frac{1}{M}\sum_{i=1}^{M}\frac{\#\,\text{positive elements in top-}k}{\min(k,\#\,\text{positive elements})} \tag{14}\]
It was shown in [28] that the TR@k can be written similarly to other ranking-based metrics, _i.e._ using the rank, for each query \(q_{i}\) as:
\[\text{TR@k}=\frac{1}{M}\sum_{i=1}^{M}\frac{1}{\min(|\Omega_{i}^{+}|,k)}\sum_ {p\in\Omega_{i}^{+}}H(k-\text{rank}(p)) \tag{15}\]
Using the expression of Eq. (15) and SupRank we can derive a surrogate loss function for the recall for a single query as:
\[\mathcal{L}_{\text{Sup-R@k}}=1-\frac{1}{\min(|\Omega^{+}|,k)}\sum_{p\in\Omega ^{+}}\sigma(\frac{k-(\text{rank}^{+}(p)+\text{rank}^{-}_{s}(p))}{\tau^{*}}) \tag{16}\]
The authors of [28] aggregate several recall levels in their loss, which we follow, _i.e._ \(\mathcal{L}_{\text{Sup-R@}\mathcal{K}}=\frac{1}{|\mathcal{K}|}\sum_{k\in\mathcal{K}}\mathcal{L}_{\text{Sup-R@k}}\); this is necessary to provide enough gradient signal to all positive items. To train \(\mathcal{L}_{\text{Sup-R@k}}\), it is also necessary to approximate the Heaviside function a second time, using a sigmoid with temperature factor \(\tau^{*}\). We combine it with \(\mathcal{L}_{\text{DG}}\), yielding the resulting differentiable and decomposable R@k loss:
\[\mathcal{L}_{\text{ROD-R@k}}=(1-\lambda)\cdot\mathcal{L}_{\text{Sup-R@k}}+ \lambda\cdot\mathcal{L}_{\text{DG}} \tag{17}\]
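A sketch of the averaged Sup-R@k loss (Eq. (16), aggregated over recall levels as above, without the \(\mathcal{L}_{\text{DG}}\) term) for one query; the recall levels and the temperatures below are illustrative placeholders, and `sup_rank_neg` is the SupRank sketch of Sec. III-B.

```python
import torch

def sup_recall_loss(scores, labels, ks=(1, 4, 16), tau_star=0.01,
                    tau=0.01, rho=100.0, delta=0.05):
    """Sup-R@k of Eq. (16), averaged over the recall levels in ks.

    The outer Heaviside of Eq. (15) is relaxed by a sigmoid with
    temperature tau_star; all hyper-parameter values are placeholders.
    """
    s_pos, s_neg = scores[labels == 1], scores[labels == 0]
    # Exact rank^+ (Heaviside among positives, k included in the count).
    rank_pos = (s_pos.unsqueeze(0) >= s_pos.unsqueeze(1)).float().sum(dim=1)
    rank = rank_pos + sup_rank_neg(s_pos, s_neg, tau, rho, delta)
    losses = [1 - torch.sigmoid((k - rank) / tau_star).sum() / min(len(s_pos), k)
              for k in ks]
    return torch.stack(losses).mean()
```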
## V Instantiation to Hierarchical Image Retrieval
Standard metrics (_e.g._ AP or R@k) are only defined for binary, _i.e._ _fine-grained_, labels: an image is negative if it is not strictly similar to the query. These metrics are by design unable to take into account the severity of the mistakes. To mitigate this issue we propose to optimize a new ranking-based metric, \(\mathcal{H}\)-AP, introduced in Sec. V-A, that extends AP beyond binary labels, as well as the standard NDCG in Sec. V-B.
**Additional training context** We assume that we have access to a hierarchical tree defining semantic similarities between concepts as in Fig. 3. For a query \(\boldsymbol{q}\), we partition the set of retrieved instances into \(L+1\) disjoint subsets \(\big{\{}\Omega^{(l)}\big{\}}_{l\in[0,L]}\). \(\Omega^{(L)}\) is the subset of the most similar instances to the query (_i.e._ fine-grained level): for \(L=3\) and a "Lada #2" query (purple), \(\Omega^{(3)}\) are the images of the same "Lada #2" (green) in Fig. 3. The set \(\Omega^{(l)}\) for \(l<L\) contains instances with smaller relevance with respect to the query: \(\Omega^{(2)}\) in Fig. 3 is the set of "Lada" that are not "Lada #2" (blue) and \(\Omega^{(1)}\) is the set of "Cars" that are not "Lada" (orange). We also define \(\Omega^{-}:=\Omega^{(0)}\) as the set of negative instances, _i.e._ the set of vehicles that are not "Cars" (in red) in Fig. 3 and \(\Omega^{+}=\bigcup_{l=1}^{L}\Omega^{(l)}\). Given a query \(q\), we use this partition to define the relevance of \(k\in\Omega^{(l)}\), \(\operatorname{rel}(k):=\operatorname{rel}(x_{k},q)\).
Fig. 3: We leverage a hierarchical tree representing the semantic similarities between concepts to produce more robust ranking.
### _Hierarchical Average Precision_
We propose an extension of AP that leverages non-binary labels. To do so, we extend \(\text{rank}^{+}\) to the hierarchical case with a hierarchical \(\text{rank}^{+}\), \(\mathcal{H}\text{-rank}^{+}\):
\[\mathcal{H}\text{-rank}^{+}(k)\!=\!\operatorname{rel}(k)\!+\!\sum_{j\in\Omega^{ +}}\min(\operatorname{rel}(k)\!,\!\operatorname{rel}(j))\!\cdot\!H(s_{j}\!-\!s _{k})\;. \tag{18}\]
Intuitively, \(\min(\operatorname{rel}(k),\,\operatorname{rel}(j))\) corresponds to seeking the closest ancestor shared by instance \(k\) and \(j\) with the query in the hierarchical tree. As illustrated in Fig. 4, \(\mathcal{H}\text{-rank}^{+}\) induces a smoother penalization for instances that do not share the same fine-grained label as the query but still share some coarser semantics, which is not the case for \(\text{rank}^{+}\).
From \(\mathcal{H}\text{-rank}^{+}\) in Eq. (18) we define the Hierarchical Average Precision, \(\mathcal{H}\text{-AP}\):
\[\mathcal{H}\text{-AP}\!=\!\frac{1}{\sum_{k\in\Omega^{+}}\!\operatorname{rel} (k)}\!\sum_{k\in\Omega^{+}}\!\frac{\mathcal{H}\text{-rank}^{+}(k)}{\text{rank} (k)} \tag{19}\]
Eq. (19) extends the AP to non-binary labels. We replace \(\text{rank}^{+}\) by our hierarchical rank \(\mathcal{H}\text{-rank}^{+}\) and the term \(|\Omega^{+}|\) is replaced by \(\sum_{k\in\Omega^{+}}\operatorname{rel}(k)\) for proper normalization (both representing the "sum of positives", see more details in Sec. B-B1).
\(\mathcal{H}\text{-AP}\) extends the desirable properties of the AP. It evaluates the quality of a ranking by: i) penalizing inversions of instances that are not ranked in decreasing order of relevances with respect to the query, ii) giving stronger emphasis to inversions that occur at the top of the ranking. Finally, we can observe that, by this definition, \(\mathcal{H}\text{-AP}\) is equal to the AP in the binary setting (\(L\!=\!1\)). This makes \(\mathcal{H}\text{-AP}\) a _consistent generalization_ of AP (details in Sec. B-B2).
#### V-A1 Relevance function design
The relevance \(\operatorname{rel}(k)\) defines how "similar" an instance \(k\!\in\!\Omega^{(l)}\) is to the query \(q\). While \(\operatorname{rel}(k)\) might be given as input in information retrieval datasets [66, 67], we need to define it based on the hierarchical tree in our case. We want to enforce the constraint that the relevance decreases when going up the tree, _i.e._\(\operatorname{rel}(k)\!>\!\operatorname{rel}(k^{\prime})\) for \(k\!\in\!\Omega^{(l)}\), \(k^{\prime}\!\in\!\Omega^{(l^{\prime})}\) and \(l\!>\!l^{\prime}\). To do so, we assign a total weight of \((l/L)^{\alpha}\) to each semantic level \(l\), where \(\alpha\!\in\!\mathbb{R}^{+}\) controls the decrease rate of similarity in the tree. For example for \(L\!=\!3\) and \(\alpha\!=\!1\), the total weights for each level are \(1\), \(\frac{2}{3}\), \(\frac{1}{3}\) and \(0\). The instance relevance \(\operatorname{rel}(k)\) is normalized by the cardinal of \(\Omega^{(l)}\):
\[\operatorname{rel}(k)=\frac{(l/L)^{\alpha}}{|\Omega^{(l)}|}\quad\text{if }k\in\Omega^{(l)} \tag{20}\]
We set \(\alpha=1\) in Eq. (20) for the \(\mathcal{H}\)-AP metric and in our main experiments. Setting \(\alpha\) to larger values supports better performances on fine-grained levels, as their relevances relatively increase. This variant is discussed in Sec. VII-C. Other definitions of the relevance are possible; _e.g._, an interesting option enables recovering a weighted sum of APs, denoted as \(\sum w\text{AP}:=\sum_{l=1}^{L}w_{l}\cdot\text{AP}^{(l)}\) (supplementary Sec. B-B3), _i.e._ the weighted sum of APs is a particular case of \(\mathcal{H}\)-AP.
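For evaluation, \(\mathcal{H}\)-AP can be computed directly from Eqs. (18) to (20); a minimal NumPy sketch for one query is given below, where `levels[i]` is the semantic level of instance \(i\) (0 for negatives, \(L\) for fine-grained matches) and ties in scores are assumed away. With binary levels (\(L=1\)) it reduces to the standard AP.

```python
import numpy as np

def h_ap(scores, levels, L, alpha=1.0):
    """H-AP of Eq. (19) for one query (evaluation only, distinct scores).

    levels[i] in {0, ..., L} is the semantic level of instance i
    (0 = negative, L = fine-grained match); relevances follow Eq. (20).
    """
    counts = np.bincount(levels, minlength=L + 1)
    rel = np.where(levels > 0,
                   (levels / L) ** alpha / np.maximum(counts[levels], 1), 0.0)
    rel = rel[np.argsort(-scores)]  # sort by decreasing similarity
    total = 0.0
    for k in np.flatnonzero(rel > 0):
        rank = k + 1                # rank(k): 1-based position in the sorting
        h_rank_pos = rel[k] + np.minimum(rel[:k], rel[k]).sum()  # Eq. (18)
        total += h_rank_pos / rank
    return total / rel.sum()
```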
#### V-A2 Hierarchical Average Precision Training for Pertinent Image Retrieval
We define our surrogate loss to optimize \(\mathcal{H}\text{-AP}\):
\[\mathcal{L}_{\text{Sup-}\mathcal{H}\text{-AP}}=1-\frac{1}{M}\sum_{i=1}^{M}\frac{1}{\sum_{k\in\Omega^{+}_{i}}\operatorname{rel}(k)}\sum_{k\in\Omega^{+}_{i}}\frac{\mathcal{H}\text{-rank}^{+}(k)}{\operatorname{rank}^{+}(k)+\operatorname{rank}_{s}^{-}(k)} \tag{21}\]
Note that in the hierarchical case \(\operatorname{rank}^{-}(k)\) is the number of instances of relevance \(<\operatorname{rel}(k)\), meaning that it may contain images that are similar to some extent to the query. Finally our ranking loss, **H**ierarchical **A**verage **P**recision training for **P**ertinent **I**mag**E** **R**etrieval (HAPPIER), is obtained by adding \(\mathcal{L}^{*}_{\text{DG}}\):
\[\mathcal{L}_{\text{HAPPIER}}=(1-\lambda)\cdot\mathcal{L}_{\text{Sup-}\mathcal{H}\text{-AP}}+\lambda\cdot\mathcal{L}^{*}_{\text{DG}} \tag{22}\]
### _Application to the NDCG_
The NDCG [46, 47] is a common metric in the information retrieval community. The NDCG is defined using a relevance that is not required to be binary:
\[\text{DCG}_{i}=\sum_{k\in\Omega^{+}_{i}}\frac{\operatorname{rel}(k)}{\log_{2}(1+\operatorname{rank}(k))},\qquad\operatorname{iDCG}_{i}=\max_{\text{ranking}}\text{DCG}_{i},\qquad\text{NDCG}=\frac{1}{M}\sum_{i=1}^{M}\frac{\text{DCG}_{i}}{\operatorname{iDCG}_{i}} \tag{23}\]
We choose the following relevance function for the NDCG: \(\operatorname{rel}(k)=2^{l}-1\) if \(k\in\Omega^{(l)}\). Using the exponentiation is a standard procedure in information retrieval [47], as it puts more emphasis on instances of higher relevance. We then use our SupRank surrogate, as for the other rank losses, to approximate the DCG and thus the NDCG:
\[\text{DCG}_{i,s}=\sum_{k\in\Omega^{+}_{i}}\frac{\operatorname{rel}(k)}{\log_{2}(1+\operatorname{rank}^{+}(k)+\operatorname{rank}_{s}^{-}(k))},\qquad\mathcal{L}_{\text{Sup-NDCG}}=1-\frac{1}{M}\sum_{i=1}^{M}\frac{\text{DCG}_{i,s}}{\operatorname{iDCG}_{i}} \tag{24}\]
Note that once again our surrogate loss, \(\mathcal{L}_{\text{Sup-NDCG}}\), is an upper bound on the true loss \(1\!-\!\text{NDCG}\). Finally our training loss is:
\[\mathcal{L}_{\text{ROD-NDCG}}\!=\!(1\!-\!\lambda)\!\cdot\!\mathcal{L}_{\text{Sup- NDCG}}\!+\!\lambda\!\cdot\!\mathcal{L}^{*}_{\text{DG}} \tag{25}\]
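A sketch of \(\mathcal{L}_{\text{Sup-NDCG}}\) (Eq. (24)) for one query, reusing `h_minus` from Sec. III-B: instances of relevance at least \(\operatorname{rel}(k)\) are counted with the exact Heaviside, those of strictly lower relevance go through SupRank, so the loss upper-bounds \(1-\text{NDCG}\). Hyper-parameter defaults are placeholders.

```python
import torch

def sup_ndcg_loss(scores, levels, tau=0.01, rho=100.0, delta=0.05):
    """1 - NDCG_s of Eq. (24) for one query, with rel(k) = 2^l - 1."""
    rel = 2.0 ** torch.as_tensor(levels, dtype=torch.float32) - 1
    dcg = scores.new_zeros(())
    for k in torch.nonzero(rel > 0).flatten():
        # rank^+: exact Heaviside over instances of relevance >= rel(k),
        # k itself included in the count.
        rank_p = (scores[rel >= rel[k]] >= scores[k]).float().sum()
        # rank_s^-: SupRank over instances of strictly lower relevance.
        rank_n = h_minus(scores[rel < rel[k]] - scores[k], tau, rho, delta).sum()
        dcg = dcg + rel[k] / torch.log2(1 + rank_p + rank_n)
    ideal, _ = torch.sort(rel[rel > 0], descending=True)
    idcg = (ideal / torch.log2(2 + torch.arange(len(ideal)).float())).sum()
    return 1 - dcg / idcg
```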
Fig. 4: Given a “Lada #2” query, the top inversion is less severe than the bottom one. Indeed, on the top row instance \(1\) is semantically closer to the query (it is a “Lada”) than instance \(3\) on the bottom row, whose closest common ancestor with the query, “Cars”, is farther in the hierarchical tree of Fig. 3. This is why \(\mathcal{H}\text{-rank}^{+}(2)\) is greater on the top row (\(5/3\)) than on the bottom row (\(4/3\)).
## VI Hierarchical Landmark dataset
One of the most popular domains for image retrieval research is that of human-made and natural landmarks [36, 68, 70, 71]. In this work, we introduce for the first time a hierarchical dataset in this domain: \(\mathcal{H}\)-GLDv2, building on top of the Google Landmarks Dataset v2 (GLDv2) [36], which is the largest and most diverse landmark dataset. In the following, we present our process to semi-automatically annotate GLDv2 with an initial scraping of hierarchical labels from Wikimedia Commons, and a 2-step post-processing of the supercategories. We illustrate some of the created groups in Figs. 5b to 5d. These hierarchical labels are released under the CC BY 4.0 license.
### _Scraping Wikimedia Commons_
The landmarks from GLDv2 are sourced from Wikimedia Commons, the world's largest crowdsourced collection of landmark photos. After careful inspection, we find that many of the landmarks in GLDv2 can be associated to supercategories by leveraging the "Instance of" annotations available in Wikimedia Commons - see Fig. 5a. Out of the original \(203k\) landmarks in GLDv2-train, we were able to scrape supercategories for \(129.1k\). For the \(101k\) landmarks in GLDv2-index, we were able to scrape supercategories for \(68.1k\). A lightweight manual cleaning process was then applied to remove landmarks assigned to more than one supercategory and those with irrelevant supercategories (_e.g._, supercategories named "Wikimedia category" or "Wikimedia disambiguation page"). Approximately \(0.25\)% of landmarks end up being removed in this process, leading to a total number of selected landmarks of \(128.8k\) and \(67.9k\) for the train and index dataset splits, respectively. The number of unique scraped supercategories is \(5.7k\).
### _Post-processing supercategories_
The scraped supercategories are noisy and do not have the same level of granularity, _e.g._ "church building" _v.s._ "church building (1172-1954)". To mitigate this issue, we perform a two-step post-processing after scraping to obtain the final supercategories.
1. **K-means clustering:** We first encode all the labels using the CLIP [72] textual encoder. We perform a k-means on the latent representations. This initial clustering reveals prominent categories, _e.g._ "Church", "Castle", _etc._ (see the sketch after this list).
2. **Manual verification:** We manually assess the obtained clusters based on the scraped label names. We create semantic groups by dividing the k-means clusters into sub-clusters. This leads to \(78\) supercategories that we further group into human-made and natural landmarks. Two expert annotators comprehensively reviewed the final clusters manually and filtered them to produce a high-quality dataset.
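A sketch of step 1 under stated assumptions: the paper does not specify the exact CLIP variant, so the `ViT-B/32` model, the toy label names and the cluster count below are illustrative; it relies on the open-source `clip` and scikit-learn packages.

```python
import clip
import torch
from sklearn.cluster import KMeans

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)  # CLIP variant is assumed

# A few scraped supercategory names (toy examples).
labels = ["church building", "castle", "lake", "waterfall", "bridge"]
with torch.no_grad():
    feats = model.encode_text(clip.tokenize(labels).to(device))
    feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-norm embeddings

# Cluster the text embeddings; clusters are then split and reviewed manually.
assign = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    feats.float().cpu().numpy())
print(dict(zip(labels, assign)))
```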
### _Discussion and limitations_
\(\mathcal{H}\)-GLDv2 is a large-scale dataset; we were thus not able to manually check all images, so the dataset may contain some noise. We release along with \(\mathcal{H}\)-GLDv2 the scraped labels to allow further work on the "supercategories". Another difficulty of \(\mathcal{H}\)-GLDv2 is the ambiguity of some supercategories. For instance, the bottom image of Fig. 5c is labeled as "Bridge"; however, it could also be labeled as "River", another supercategory. Finally, there is an imbalance between supercategories that comes from the classes represented in GLDv2 [36]. We report first results of models trained on our \(\mathcal{H}\)-GLDv2 dataset in Sec. VII-C3.
## VII Experiments
### _Standard image retrieval_.
In this section we compare our methods in the standard image retrieval setup, _i.e._ \(\operatorname{rel}(x_{i},x_{j})\in\{0,1\}\), and report fine-grained metrics. We use publicly available implementations of all baselines and run all experiments under the same settings. We use a ResNet-50 backbone with average pooling, a normalization layer without affine parameters and a projection head that reduces the dimension from \(2048\) to \(512\). We use a batch size of \(256\) by sampling 4 images per class and the hierarchical sampling of [24] for SOP, with resolution \(224\times 224\), standard data augmentation (random resized crop, horizontal flipping), the Adam optimizer (learning rate \(5\cdot 10^{-5}\) on SOP and \(1\cdot 10^{-5}\) on iNaturalist, with cosine decay), and train for 100 epochs.
#### VII-A1 Comparison to AP approximations
In Tab. I, we compare ROADMAP to AP loss approximations including soft-binning approaches Fast-AP [24] and SoftBin-AP [25], the generic solver BlackBox-AP [32], and the smooth rank approximation [27]. We observe that ROADMAP outperforms all the current AP approximations by a large margin. The gains are especially pronounced on the large-scale dataset iNaturalist.
#### VII-A2 Ablation study.
To investigate more in-depth the impact of the two components of our framework, we perform ablation studies in Tab. II. We show the improvements against Smooth-AP [27] and Smooth-R@k [28] when replacing the sigmoid by SupRank (Eq. (5)),
Fig. 5: Fig. 5a depicts the "Instance of" annotations (within red rectangles), from which we collect hierarchical landmark labels, _e.g._ _lake_, _waterfall_, _mosque_. Figs. 5b to 5d illustrate some of the supercategories of our \(\mathcal{H}\)-GLDv2 dataset.
and the use of \(\mathcal{L}_{\text{DG}}\) (Eq. (7)) or \(\mathcal{L}_{\text{DG}}^{*}\) (Eq. (8)). We can see that both \(\mathcal{L}_{\text{Sup-AP}}\) and \(\mathcal{L}_{\text{Sup-R@k}}\) consistently improve performances over the baselines: +0.5pt mAP@R on SOP and +1pt mAP@R on iNaturalist for both Sup-AP and Sup-R@k. Both \(\mathcal{L}_{\text{DG}}\) and \(\mathcal{L}_{\text{DG}}^{*}\) improve over the smooth surrogates, with strong gains on iNaturalist, _e.g._ \(\mathcal{L}_{\text{DG}}^{*}\) improves by +2.9pt R@1 over Sup-AP and +3.7pt R@1 over Sup-R@k. This is because the batch vs. dataset size ratio \(\frac{B}{N}\) is tiny (\(\sim 8\cdot 10^{-4}\ll 1\)), making the decomposability gap in Eq. (6) huge. On SOP, \(\mathcal{L}_{\text{DG}}\) and \(\mathcal{L}_{\text{DG}}^{*}\) work similarly; however, on iNat \(\mathcal{L}_{\text{DG}}^{*}\) performs far better than \(\mathcal{L}_{\text{DG}}\). In the following we choose to keep only \(\mathcal{L}_{\text{DG}}^{*}\).
#### VII-A3 Analysis on decomposability
The decomposability gap depends on the batch size (Eq. (6)). To illustrate this, we monitor in Fig. 6 the relative improvement when adding \(\mathcal{L}_{\text{DG}}^{*}\) to \(\mathcal{L}_{\text{Sup-AP}}\) as the batch size decreases. We can see that the relative improvement becomes larger as the batch size gets smaller. This confirms our intuition that the decomposability loss \(\mathcal{L}_{\text{DG}}^{*}\) has a stronger effect on smaller batch sizes, for which the AP estimation is noisier and \(DG\) larger. This is critical on the large-scale dataset iNaturalist, where the batch AP at usual batch sizes is a very poor approximation of the global AP.
In Tab. III we compare ROADMAP to the cross-batch memory [31] (XBM), which is used to reduce the gap between batch AP and global AP. We use XBM with a batch size of 128, storing the whole dataset, and use the setup described previously otherwise. ROADMAP outperforms XBM both on SOP and iNaturalist, with gains more pronounced on iNaturalist: +12.5pt R@1 and +11pt mAP@R. \(\mathcal{L}_{\text{DG}}^{*}\) allows us to train models even with smaller batches.
#### VII-A4 ROADMAP hyper-parameters
We demonstrate the robustness of our framework to hyper-parameters in Fig. 7. Firstly, Fig. 7a illustrates the complementarity between the two terms of \(\mathcal{L}_{\text{ROADMAP}}\). For \(0<\lambda<1\), \(\mathcal{L}_{\text{ROADMAP}}\) outperforms both \(\mathcal{L}_{\text{Sup-AP}}\) and \(\mathcal{L}_{\text{DG}}^{*}\). While we use \(\lambda=0.1\) in our experiments, hyper-parameter tuning could yield better results, _e.g._ with \(\lambda=0.3\), \(\mathcal{L}_{\text{ROADMAP}}\) reaches 72.1 R@1 _v.s._ the 71.8 R@1 reported in Tab. I. Secondly, Fig. 7b shows the influence of the slope \(\rho\) that controls the linear regime in \(H^{-}\). As shown in Fig. 7b, the improvement is important and stable for \(\rho\in[10,100]\). Note that \(\rho>1\) already improves the results compared to \(\rho=0\) in [27]. There is a decrease when \(\rho\gg 10^{3}\), probably due to the high gradient that takes over the signal for correctly ranked samples.
### _Comparison to state-of-the-art_
In this section, we compare our AP approximation method, ROADMAP, to state-of-the-art methods on SOP, CUB, and iNaturalist. We use ROADMAP with a memory [31] to virtually increase the batch size. Note that using a batch memory is less computationally expensive than methods such as [28], which trade computational time for memory footprint by using two forward passes. We apply ROADMAP on both a convolutional backbone, ResNet-50 with GeM pooling [68] and layer normalization, and Vision Transformer models [77], DeiT-S [78] (Imagenet-1k pre-trained as in [76]) and ViT-B (Imagenet-21k pre-trained as in [28]). For convolutional backbones, we keep the standard images of size \(224\times 224\) for both training and inference on SOP and iNaturalist, while for CUB we follow more recent settings [15, 6] and use images of size \(256\times 256\). Vision Transformer experiments use images of size \(224\times 224\).
In Tab. IV, using convolutional backbones, ROADMAP outperforms most state-of-the-art methods when evaluated at different (standard) R@k. As ROADMAP directly optimizes the evaluation metrics, it outperforms metric learning and classification-based methods, _e.g._ +1.4pt R@1 on SOP compared to Triplet SCT [6] or +1.9pt R@1 on SOP _v.s._ ProxyNCA++ [15]. ROADMAP also outperforms R@k [28], with +1.2pt R@1 on SOP and +1.3pt R@1 on iNaturalist. This is impressive as R@k [28] uses a strong setup, _i.e._ a batch size of \(4096\) and Similarity mixup. On the small-scale dataset CUB, our method is competitive with methods such as ProxyNCA++ at the same embedding size of 512.
Finally, we show that ROADMAP also improves Vision Transformers for image retrieval. With DeiT-S, ROADMAP outperforms [76] on both SOP and CUB by +1pt R@1; this again
\begin{table}
\begin{tabular}{l c c c c} \hline\hline
 & \multicolumn{2}{c}{SOP} & \multicolumn{2}{c}{iNaturalist} \\
Method & R@1 & mAP@R & R@1 & mAP@R \\ \hline
Fast-AP [24] & 77.8 & 50.5 & 59.9 & 24.0 \\
SoftBin-AP [25] & 79.7 & 52.7 & 63.6 & 25.4 \\
BlackBox-AP [32] & 80.0 & 53.1 & 52.3 & 15.2 \\
Smooth-AP [27] & 80.9 & 54.3 & 67.3 & 26.5 \\ \hline
**ROADMAP** & **81.9** & **55.7** & **71.8** & **29.5** \\ \hline\hline
\end{tabular}
\end{table} TABLE I: Comparison between ROADMAP and state-of-the-art AP ranking-based methods.

\begin{table}
\begin{tabular}{l l l|c c c c} \hline\hline
 & & & \multicolumn{2}{c}{SOP} & \multicolumn{2}{c}{iNaturalist} \\
Method & rank & \(DG\) & R@1 & mAP@R & R@1 & mAP@R \\ \hline
Smooth-AP & sigmoid & ✗ & 80.9 & 54.3 & 67.3 & 26.5 \\
Sup-AP & SupRank & ✗ & 81.2 & 54.8 & 68.9 & 27.5 \\
ROADMAP & SupRank & \(\mathcal{L}_{\text{DG}}\) & 81.7 & **55.7** & 69.1 & 27.6 \\
ROADMAP & SupRank & \(\mathcal{L}_{\text{DG}}^{*}\) & **81.9** & **55.7** & **71.8** & **29.5** \\ \hline\hline
\end{tabular}
\end{table} TABLE II: Ablation study of SupRank and of the decomposability losses \(\mathcal{L}_{\text{DG}}\) and \(\mathcal{L}_{\text{DG}}^{*}\) (AP-based variants).

\begin{table}
\begin{tabular}{l c c c c} \hline\hline
 & \multicolumn{2}{c}{SOP} & \multicolumn{2}{c}{iNaturalist} \\
Method & R@1 & mAP@R & R@1 & mAP@R \\ \hline
XBM [31] & 80.6 & 54.9 & 59.3 & 18.5 \\
**ROADMAP** & **81.9** & **55.7** & **71.8** & **29.5** \\ \hline\hline
\end{tabular}
\end{table} TABLE III: Comparison between XBM [31] and ROADMAP equipped with memory.

Fig. 6: Relative increase of mAP@R _v.s._ batch size when adding \(\mathcal{L}_{\text{DG}}^{*}\) to \(\mathcal{L}_{\text{Sup-AP}}\).

Fig. 7: Robustness to hyper-parameters on iNaturalist.
shows the interest of directly optimizing the metrics rather than the pair loss of [31] used in [76]. With ViT-B, ROADMAP outperforms [28] by +0.4pt R@1 and +1.2pt R@1 on SOP and iNaturalist respectively. We attribute this to the fact that our loss is an actual upper bound of the metric, in addition to our decomposability loss.
### _Hierarchical Results_
In this section, we show results in the hierarchical setting, using the hierarchical labels described in the additional training context of Sec. V. We report results using the experimental setting of Sec. VII-A. In addition to the hierarchical metrics NDCG and \(\mathcal{H}\)-AP, we report the ASI, which is defined in Sec. C-A1.
On Tab. V, we show that HAPPIER significantly outperforms methods trained on the fine-grained level only, with gains over the best performing methods of +16.4pt \(\mathcal{H}\)-AP on SOP, +13pt on iNat-base and +10.7pt on iNat-full. HAPPIER also exhibits significant gains compared to hierarchical methods. On \(\mathcal{H}\)-AP, HAPPIER has important gains on all datasets (_e.g._ +6.3pt on SOP, +4.2pt on iNat-base over the best competitor), but also on ASI and NDCG. This shows the strong generalization of the method on standard metrics. Compared to the recent CSL loss [33], we observe a consistent gain over all metrics and datasets, _e.g._ +6pt on \(\mathcal{H}\)-AP, +8pt on ASI and +2.6pt on NDCG on SOP. This shows the benefits of optimizing a well-behaved hierarchical metric compared to an ad-hoc proxy method.
Furthermore, we can see that HAPPIER performs on par with the best methods for standard image retrieval when considering fine-grained metrics. HAPPIER has 81.0 R@1 on SOP _v.s._ 81.4 R@1 for NCA++, and even performs slightly better on iNat-base with 70.7 R@1 _vs._ 70.2 R@1 for NSM. Finally, our variant HAPPIER\({}_{\text{F}}\) with \(\alpha>1\) (Sec. V-A1) performs as expected (\(\alpha\) is 5 on SOP and 3 on iNat-base/full): it is a strong method for fine-grained image retrieval, and still outperforms standard methods on hierarchical metrics.
#### VII-C1 Detailed evaluation
HAPPIER performs well on the overall hierarchical metrics because it performs well at _all_ hierarchical levels. We illustrate this in Tab. VI, which reports the different methods' performances at every semantic hierarchy level of iNat-full. We evaluate HAPPIER and HAPPIER\({}_{\text{F}}\). HAPPIER optimizes the overall hierarchical performance, while HAPPIER\({}_{\text{F}}\) is meant to be optimal at the fine-grained level without sacrificing coarser levels. The satisfactory behavior and the two optimal regimes of HAPPIER and HAPPIER\({}_{\text{F}}\) are confirmed on iNat-full: HAPPIER gives the best results on coarser levels (from "Class"), while being very close to the best results on finer ones. HAPPIER\({}_{\text{F}}\) gives the best results at the finest levels, even outperforming very competitive fine-grained baselines. HAPPIER also outperforms CSL [33] on all semantic levels, _e.g._ +5pt on the fine-grained AP ("Species") and +3pt on the coarsest AP ("Kingdom"). We show the detailed evaluation on SOP and iNat-base in Sec. C-A3.
#### VII-C2 Model analysis
We showcase the behavior and the robustness of HAPPIER when varying its hyper-parameters. Fig. 8a studies the impact of \(\alpha\) in the relevance of Eq. (20). \(\alpha\) controls the balance of relevance weight allocated to each level: increasing \(\alpha\) puts more emphasis on the fine-grained levels, whereas decreasing it gives a more equal contribution to all levels. This is illustrated in Fig. 8a: increasing \(\alpha\) improves the AP at the fine-grained level on iNat-base. Fig. 8a shows that one can use \(\alpha\) to obtain a range of performances for desired applications.
We measure in Fig. 8b the impact of \(\lambda\) weighting \(\mathcal{L}_{\text{Sup-}\mathcal{H}\text{-AP}}\) and \(\mathcal{L}_{\text{DG}}^{*}\) in HAPPIER: we observe a stable increase in \(\mathcal{H}\)-AP for \(0<\lambda<0.5\) compared to optimizing only \(\mathcal{L}_{\text{Sup-}\mathcal{H}\text{-AP}}\), while a drop in performance is observed for \(\lambda>0.5\). This shows the complementarity of \(\mathcal{L}_{\text{Sup-}\mathcal{H}\text{-AP}}\) and \(\mathcal{L}_{\text{DG}}^{*}\), and how, when combined, HAPPIER reaches its best performance.
#### VII-C3 Hierarchical landmark results
In this section we report first results on our \(\mathcal{H}\)-GLDv2 dataset. We run all experiments under
\begin{table}
\begin{tabular}{l l l|c c c|c c c|c c c c} \hline \hline & & & \multicolumn{3}{c}{SOP} & \multicolumn{3}{c}{CUB} & \multicolumn{3}{c}{iNaturalist} \\ & & Method & dim & 1 & 10 & 100 & 1 & 2 & 4 & 8 & 1 & 4 & 16 & 32 \\ \hline \multirow{6}{*}{\begin{tabular}{} \end{tabular} } & Triplet SH [5] & 512 & 72.7 & 86.2 & 93.8 & 63.6 & 74.4 & 83.1 & 90.0 & 58.1 & 75.5 & 86.8 & 90.7 \\ & MS [9] & 512 & 78.2 & 90.5 & 96.0 & 65.7 & 77.0 & 86.3 & 91.2 & - & - & - & - \\ & SEC [73] & 512 & 78.7 & 90.8 & 96.6 & 68.8 & 79.4 & 87.2 & 92.5 & - & - & - & - \\ & HORDE [74] & 512 & 80.1 & 91.3 & 96.2 & 66.8 & 77.4 & 85.1 & 91.0 & - & - & - & - \\ & XBM [31] & 128 & 80.6 & 91.6 & 96.2 & 65.8 & 75.9 & 84.0 & 89.9 & - & - & - & - \\ & Triplet SCT [6] & 512/64 & 81.9 & 92.6 & 96.8 & 57.7 & 69.8 & 79.6 & 87.0 & - & - & - & - \\ \hline \multirow{6}{*}{\begin{tabular}{} \end{tabular} } & ProxyNCA [10] & 512 & 73.7 & - & - & 49.2 & 61.9 & 67.9 & 72.4 & 61.6 & 77.4 & 87.0 & 90.6 \\ & ProxyGML [14] & 512 & 78.0 & 90.6 & 96.2 & 66.6 & 77.6 & 86.4 & - & - & - & - & - \\ & NSoftmax [11] & 512 & 78.2 & 90.6 & 96.2 & 61.3 & 73.9 & 83.5 & 90.0 & - & - & - & - \\ & NSoftmax [11] & 2048 & 79.5 & 91.5 & 96.7 & 65.3 & 76.7 & 85.4 & 91.8 & - & - & - & - \\ & Cross-Entropy [75] & 2048 & 81.1 & 91.7 & 96.3 & 69.2 & 79.2 & 86.9 & 91.6 & - & - & - & - \\ & ProxyNCA++ [15] & 512 & 80.7 & 92.0 & 96.7 & 69.0 & 79.8 & 87.3 & 92.7 & - & - & - & - \\ & ProxyNCA++ [15] & 2048 & 81.4 & 92.4 & 96.9 & **72.2** & **82.0** & **89.2** & **93.5** & - & - & - & - \\ \hline \multirow{6}{*}{\begin{tabular}{} \end{tabular} } & FastAP [24] & 512 & 76.4 & 89.0 & 95.1 & - & - & - & - & 60.6 & 77.0 & 87.2 & 90.6 \\ & Blackbox [32] & 512 & 78.6 & 90.5 & 96.0 & 64.0 & 75.3 & 84.1 & 90.6 & 62.9 & 79.4 & 88.7 & 91.7 \\ \cline{1-1} & SmoothAP [27] & 512 & 80.1 & 91.5 & 96.6 & - & - & - & - & 67.2 & 81.8 & 90.3 & 93.1 \\ \cline{1-1} & R@@1 & 512 & 82.8 & 92.9 & 97.0 & - & - & - & - & 71.2 & 84.0 & 91.3 & 93.6 \\ \cline{1-1} & R@+ iNatFix [28] & 512 & 82.1 & 92.8 & 97.0 & - & - & - & - & 71.8 & 84.7 & 91.9 & 94.3 \\ \cline{1-1} & **ROADMAP (ours)** & 512 & **83.3** & **93.6** & **97.4** & 69.4 & 79.4 & 87.2 & 92.1 & **73.1** & **85.7** & **92.7** & **94.8** \\ \hline \multirow{6}{*}{
\begin{tabular}{} \end{tabular} } & IRTR\({}_{\text{F}}\)[76] & 384 & 84.2 & 93.7 & 97.3 & 76.6 & 85.0 & 91.1 & 94.3 & - & - & - & - \\ \cline{1-1} & **ROADMAP (ours)** & 384 & **85.2** & **94.5** & **97.9** & **77.6** & **86.2** & **91.6** & **95.0** & **74.7** & **86.9** & **93.4** & **9 \\ \hline \hline \end{tabular} \end{table}
We run all experiments under the same settings: we use a ResNet-101 with GeM pooling and initialize a linear projection with a PCA [25]. We use a batch size of 256 and train for \(\sim\!55\mathrm{k}\) steps with Adam and a learning rate of \(10^{-5}\), decayed using a cosine schedule. We report the mAP@100 [36] and the hierarchical metrics \(\mathcal{H}\)-AP, ASI, and NDCG.
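A minimal PyTorch sketch of this training configuration follows; it is a sketch under the stated settings, not the exact pipeline, and the PCA initialization and omitted loss are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet101


class GeM(nn.Module):
    """Generalized-mean pooling over the spatial dimensions."""

    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):  # x: (B, C, H, W)
        return x.clamp(min=self.eps).pow(self.p).mean(dim=(-2, -1)).pow(1.0 / self.p)


class RetrievalNet(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        backbone = resnet101(weights="IMAGENET1K_V1")
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.pool = GeM()
        self.proj = nn.Linear(2048, dim)  # PCA-initialized in the paper [25]

    @torch.no_grad()
    def init_proj_with_pca(self, feats):  # feats: (N, 2048), with N >= output dim
        _, _, v = torch.pca_lowrank(feats, q=self.proj.out_features)
        self.proj.weight.copy_(v.T)  # rows = principal axes
        self.proj.bias.zero_()

    def forward(self, x):
        return F.normalize(self.proj(self.pool(self.features(x))), dim=-1)


model = RetrievalNet(dim=512)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=55_000)
# training loop (batch size 256, ~55k steps) with the HAPPIER loss omitted
```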
In Tab. VII we report the results of ROADMAP and HAPPIER _v.s._ other fine-grained and hierarchical methods. Tab. VII once again demonstrates the benefit of our AP surrogate: ROADMAP and HAPPIER\({}_{\text{F}}\) perform best on the fine-grained metric mAP@100. Furthermore, HAPPIER obtains the best hierarchical results: it outperforms ROADMAP by +2.8pt \(\mathcal{H}\)-AP and +8.8pt ASI, and it outperforms CSL by +2.6pt \(\mathcal{H}\)-AP.
#### VIII-B4 Qualitative experiments
We qualitatively assess HAPPIER, including an analysis of the embedding space and visualizations of HAPPIER's retrievals.
**t-SNE: organization of the embedding space:** In Figs. (a) and (b), we use t-SNE [80, 79] to show how HAPPIER learns a well-organized embedding space on SOP (\(L=2\)). We plot the mean vector of each fine-grained class and assign colors based on the coarse level. We compare the t-SNE embedding of a baseline (Smooth-AP [27]) in Fig. (a) with that of HAPPIER in Fig. (b). No clear coarse-level clusters are visible in Fig. (a), whereas the quality of the hierarchical clusters formed in Fig. (b) is readily apparent.
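A minimal sketch of how such a visualization can be produced, assuming per-image embeddings and integer fine/coarse labels are available (names are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def tsne_class_means(emb, fine, coarse, perplexity=30):
    """emb: (N, D) embeddings; fine, coarse: (N,) integer labels."""
    classes = np.unique(fine)
    means = np.stack([emb[fine == c].mean(axis=0) for c in classes])
    # the hierarchy is a tree, so each fine class has a single coarse parent
    colors = np.array([coarse[fine == c][0] for c in classes])
    xy = TSNE(n_components=2, perplexity=perplexity, init="pca").fit_transform(means)
    plt.scatter(xy[:, 0], xy[:, 1], c=colors, cmap="tab10", s=8)
    plt.show()
```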
**Controlled errors on iNat-base:** Finally, Figs. (c) and (d) showcase errors of HAPPIER _v.s._ a fine-grained baseline (Smooth-AP) on iNat-base. Fig. (c) illustrates how a model trained with HAPPIER makes less severe mistakes than a model trained only on the fine-grained level. Fig. (d) shows an example where both models fail to retrieve the correct fine-grained instances; however, the model trained with HAPPIER retrieves images that are semantically more similar to the query. This shows the robustness of HAPPIER's ranking.
## VIII Conclusion
In this work we have introduced a general framework for rank loss optimization. It tackles two issues of rank loss optimization: 1) non-differentiability, using smooth upper-bound rank approximations, and 2) non-decomposability, using an additional objective. We apply our framework both to fine-grained image retrieval, by optimizing the AP and R@k, and to hierarchical image retrieval, by optimizing the NDCG and the introduced \(\mathcal{H}\)-AP. We show that our framework outperforms other rank loss surrogates
\begin{table}
\begin{tabular}{l c c c c|c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c|}{SOP} & \multicolumn{4}{c|}{iNat-base} & \multicolumn{4}{c}{iNat-full} \\ \cline{2-13} & R@1 & AP & \(\mathcal{H}\)-AP & ASI & NDCG & R@1 & AP & \(\mathcal{H}\)-AP & ASI & NDCG & R@1 & AP & \(\mathcal{H}\)-AP & ASI & NDCG \\ \hline \multirow{3}{*}{Triplet SH [5]} & 79.8 & 59.6 & 42.2 & 22.4 & 78.8 & 66.3 & 33.3 & 39.5 & 63.7 & 91.5 & 66.3 & 33.3 & 36.1 & 59.2 & 89.8 \\ & NSM [11] & 81.3 & 61.3 & 42.8 & 21.1 & 78.3 & 70.2 & 37.6 & 38.0 & 51.6 & 88.9 & 70.2 & **37.6** & 33.3 & 51.7 & 88.2 \\ & NNAC+ [15] & 81.4 & 61.7 & 43.0 & 21.5 & 78.4 & 67.3 & 35.2 & 39.5 & 57.0 & 90.1 & 67.3 & 35.2 & 35.3 & 55.7 & 89.0 \\ & Smooth-AP [27] & 80.9 & 60.8 & 42.9 & 20.6 & 78.2 & 67.3 & 35.2 & 41.3 & 64.2 & 91.9 & 67.3 & 35.2 & 37.2 & 60.1 & 90.1 \\ \hline \(\Sigma\)TL\({}_{\text{SH}}\)[5] & 78.3 & 57.6 & 53.1 & 53.3 & 89.2 & 54.7 & 21.3 & 44.0 & 87.4 & 96.4 & 52.9 & 19.7 & 39.9 & 85.5 & 92.0 \\ & \(\Sigma\)NSM [11] & 79.4 & 58.4 & 50.4 & 49.7 & 87.0 & 69.5 & 37.5 & 47.9 & 75.8 & 94.4 & 67.2 & 36.1 & 46.9 & 74.2 & **93.8** \\ & NNC++ [15] & 76.3 & 54.5 & 49.5 & 52.8 & 87.8 & 64.2 & 35.4 & 48.9 & 78.7 & 95.0 & 67.4 & 36.3 & 44.7 & 74.3 & 92.6 \\ & CSL [33] & 79.4 & 58.0 & 52.8 & 57.9 & 88.1 & 62.9 & 30.2 & 50.1 & **89.3** & 96.7 & 59.9 & 30.4 & 45.1 & 84.9 & 93.0 \\ \hline \multicolumn{13}{l}{**ROD-NDCG (ours)**} & 80.5 & 59.6 & 58.3 & 65.0 & 91.1 & 70.7 & 35.9 & 53.1 & 87.8 & 96.6 & 71.2 & 36.7 & 44.8 & 81.1 & 93.1 \\ \hline \multicolumn{13}{l}{**HAPPIER (ours)**} & 81.0 & 60.4 & **59.4** & **65.9** & **91.5** & 70.7 & 36.7 & **54.3** & **89.3** & **96.9** & 70.2 & 36.0 & **47.9** & **87.2** & **93.8** \\ \multicolumn{13}{l}{**HAPPIER\({}_{\text{F}}\) (ours)**} & **81.8** & **62.2** & 52.0 & 45.9 & 86.5 & **71.6** & **37.8** & 43.2 & 87.0 & 96.6 & **71.4** & **37.6** & 40.1 & 80.0 & 93.5 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Comparison of HAPPIER on SOP and iNat-base/full. Best results in **bold**, second best underlined.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & mAP@100 & \(\mathcal{H}\)-AP & ASI & NDCG \\ \hline SoftBin [25] & 39.0 & 35.2 & 74.6 & 94.4 \\ Smooth-AP [27] & 42.5 & 37.3 & 76.9 & 94.7 \\ R@k [28] & 41.6 & 36.8 & 77.1 & 94.7 \\
**ROADMAP** & 42.9 & 37.0 & 75.0 & 94.4 \\ \hline \hline CSL [33] & 37.5 & 36.2 & **85.4** & **95.7** \\
**HAPPIER** & 41.6 & **38.8** & 83.8 & **95.7** \\
**HAPPIER\({}_{\text{F}}\)** & **43.7** & 38.3 & 77.5 & 94.8 \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Comparison of ROADMAP and HAPPIER _v.s._ baselines on \(\mathcal{H}\)-GLDv2.
Figure 8: Impact on iNat-base of (a) \(\alpha\) in Eq. (20), which sets the relevance of \(\mathcal{H}\)-AP, and (b) the \(\lambda\) hyper-parameter on HAPPIER's results.
TABLE VI: Comparison of HAPPIER _v.s._ fine-grained methods and CSL on iNat-full. Metrics are reported for all 7 semantic levels.
on several standard fine-grained and hierarchical image retrieval benchmarks, including the hierarchical landmark dataset introduced in this work. We also show that our framework achieves state-of-the-art results for fine-grained image retrieval.
## Acknowledgment
This work was done under a grant from the AHEAD ANR program (ANR-20-THIA-0002) and had access to HPC resources of IDRIS under the allocation ADJ101102645 made by GENCI.
|
2309.07163 | Systematic Review of Experimental Paradigms and Deep Neural Networks for
Electroencephalography-Based Cognitive Workload Detection | This article summarizes a systematic review of the electroencephalography
(EEG)-based cognitive workload (CWL) estimation. The focus of the article is
twofold: identify the disparate experimental paradigms used for reliably
eliciting discrete and quantifiable levels of cognitive load and the specific
nature and representational structure of the commonly used input formulations
in deep neural networks (DNNs) used for signal classification. The analysis
revealed a number of studies using EEG signals in their native representation of
a two-dimensional matrix for offline classification of CWL. However, only a few
studies adopted an online or pseudo-online classification strategy for
real-time CWL estimation. Further, only a couple of interpretable DNNs and a
single generative model had been employed for cognitive load detection at the
time of this review. More often than not, researchers were using DNNs as
black-box type models. In conclusion, DNNs prove to be valuable tools for
classifying EEG signals, primarily due to the substantial modeling power
provided by the depth of their network architecture. It is further suggested
that interpretable and explainable DNN models must be employed for cognitive
workload estimation since existing methods are limited in the face of the
non-stationary nature of the signal. | Vishnu KN, Cota Navin Gupta | 2023-09-11T14:27:22Z | http://arxiv.org/abs/2309.07163v1 | Systematic Review of Experimental Paradigms and Deep Neural Networks for Electroencephalography-Based Cognitive Workload Detection
###### Abstract
This article summarizes a systematic review of electroencephalography (EEG)-based cognitive workload (CWL) estimation. The focus of the article is twofold: identify the disparate experimental paradigms used for reliably eliciting discrete and quantifiable levels of cognitive load, and the specific nature and representational structure of the commonly used input formulations in the deep neural networks (DNNs) used for signal classification. The analysis revealed a number of studies using EEG signals in their native representation of a two-dimensional matrix for offline classification of CWL. However, only a few studies adopted an online or pseudo-online classification strategy for real-time CWL estimation. Further, only a couple of interpretable DNNs and a single generative model had been employed for cognitive load detection at the time of this review. More often than not, researchers were using DNNs as black-box models. In conclusion, DNNs prove to be valuable tools for classifying EEG signals, primarily due to the substantial modeling power provided by the depth of their network architecture. It is further suggested that interpretable and explainable DNN models be employed for cognitive workload estimation, since existing methods are limited in the face of the non-stationary nature of the signal.
Cognitive Workload, Mental Workload, Deep Neural Networks, Deep Learning, Electroencephalogram
## I Introduction
Brain-Computer Interfaces (BCIs) are often employed to facilitate Human-Machine Interactions (HMIs), such as those involving autonomous or semi-autonomous transportation vehicles or heavy industrial machinery. Operational aspects of these environments demand situational awareness, optimal allocation of attentional resources, and sustained vigilance from the operator due to their safety-critical nature [1]. These cognitive resource demands induce a load on the mental faculties of the human operator, and this operational load has been termed cognitive (mental) workload (CWL). The cognitive resources demanded by an operational task may vary from very low (underload) at times to extremely high (overload) in adverse operational situations. Both high and low CWL may adversely affect the interaction, reducing both the machine's and the operator's performance, which may result in catastrophes and cost human lives [2]. Therefore, accurate real-time estimation of the task-induced workload and the user's cognitive state is critical for an adaptive, automated functional protocol in real-world HMIs such as piloting an aircraft, driving an automobile, or operating heavy construction machinery. BCIs are envisioned to bridge the gap between humans and machines by providing a bio-digital interface between the two [3, 4].
The general definition of CWL is the ratio between the cognitive resources demanded by the task and the available cognitive resources that a user can allocate against the task's demands [5]. Several ways of measuring task-induced CWL exist [6]. The field has traditionally used self-reported (subjective) measures to estimate the cognitive workload experienced by a user, in addition to reaction time on a secondary task (a behavioral measure). These methods hinder primary task execution and are therefore unsuitable for real-time estimation [7]. The adoption of neurophysiological signals such as the electroencephalogram (EEG) has increased, since they can provide an objective, direct, passive, and real-time estimation of the cognitive resources demanded by the task [8].
Electroencephalographic (EEG) signals originate from a noisy nonlinear system and have traditionally been considered challenging to decode [9]. Nevertheless, EEG is still an appropriate signal for CWL estimation [10], since it is a low-cost and portable acquisition modality with high temporal resolution. However, this neuroimaging modality comes with a unique set of challenges. The high dimensionality of the EEG signal has always compelled feature extraction from the time-domain signal, followed by dimensionality reduction [9, 11]. Other than physiological artifacts [12], the presence of unrelated neural activity could be the primary reason that EEG signals are highly variable across the multiple sessions of a subject and across different subjects performing the same task. Almost all state-of-the-art BCI protocols need extensive calibration for reliable classification performance at the levels typically required by consumer BCI applications [13]. These challenges necessitate careful experimental design and extensive signal processing before conducting statistical analysis, so that it is possible to correlate the EEG signal with an observed behavioral phenotype.
CWL can be elicited using numerous tasks and may be detected as changes in the signal power of various frequency bands of the EEG. Many studies have independently verified characteristic changes in EEG sub-band oscillations during different levels of workload [14]. Alpha oscillations are characteristic of the wakeful state [15], since they relate to sensory perception and mediate sensory suppression mechanisms during selective attention [16]. Additionally, CWL can be measured using active or passive measures and tasks [17, 18]; a wide variety of these EEG-based measures have been reviewed in [19]. Passive BCIs (pBCIs) do not employ a covarying subjective or behavioral measure of CWL. The envisioned pBCI is a bio-digital interface that provides an implicit mode of communication with a computer-controlled machine [8] by automatically detecting neurophysiological signals of specific intentions and translating this brain activity into machine-executable commands [20].
Identifying and segregating neural activity of interest from the rest of the signal is central to a BCI protocol, but the technical challenges significantly hamper practical signal classification [21]. Recent advances in deep neural networks (DNNs) have shown promise in objectively assessing CWL levels from electroencephalogram signals [22, 23]. Deep learning (DL) algorithms can learn from the characteristically weak EEG signals, eliminating the need for feature extraction [24] and, in some cases, signal pre-processing [25]. DNNs possess superior pattern recognition abilities over traditional machine learning algorithms (MLAs), since they can leverage the parametric depth of the network while learning, enabling them to recognize the relevant features directly from the EEG signals despite the non-stationarity. In several EEG-based BCI paradigms, DNNs have surpassed the performance of traditional MLAs [25]. Notwithstanding some sparse success in the field of CWL estimation [26], DNNs currently perform worse than the state-of-the-art SVM classifiers [22, 27]. However, it is worth noting that [28] showed that DNNs can achieve performance on par with traditional classifiers using relatively small EEG datasets.

VKN was associated with the Indian Institute of Technology, Guwahati, Assam, India ([email protected]). CNG was associated with the Indian Institute of Technology, Guwahati, Assam, India ([email protected]).
The current limitations of EEG-based CWL detection, and thus the evident broad research gaps, can be identified as:
1. Inter-session/subject variations in signal features notwithstanding the same stimulus being used for eliciting a given activity,
2. Non-stationarity of EEG signals and the need for signal features that deliver optimal classification performance for a given task,
3. Lack of models explaining the sustained inter-subject similarity in neural activity despite significant intra-subject signal variability [29], and
4. A consequent lack of consensus on the classification algorithm and the most appropriate signal feature.
Cognitive workload measures are currently widely used in aviation [30], automobile [31], and certain BCI applications. The field uses both laboratory- and real-world-based paradigms. These experimental paradigms, DNN-based detection methods, and their application domains are reviewed in this article. Though many reviews exist on the topic [24, 30, 32, 2], a systematic review focusing on EEG-based cognitive workload estimation using deep learning algorithms is absent, and this review is intended to fill that gap in knowledge.
## II Literature Search
### _Research Questions_
The central topics of this review, the keywords for article retrieval, and the PRISMA flow chart of article selection are depicted in Fig. 1. We identified 64 articles that satisfied all of the set criteria. These articles are systematically evaluated against predefined critical constructs, expressed as the following questions:
1. What are the paradigm designs used to elicit different cognitive states? Are there any domain-specific cognitive states and task-design trends?
2. What are the DNNs employed for cognitive load detection? What are the preferred network architectures, input formulations, and features used for CWL detection?
## III Results
### _Experimental Paradigm for CWL Induction_
Many experimental paradigms are prevalently used in CWL research to elicit graded levels of cognitive load, varying from a basic binary distinction to as many as seven levels. The most finely graded workload levels were provided by the Automated Cabin Air Management System (AutoCAMS) task [33]. This experimental paradigm simulates the micro-world of a space flight but is a generic operator paradigm [34] in which the subject monitors gauge levels and makes real-time decisions. The task's general similarity to many operational scenarios, including industrial process control, has inspired many studies; about 10% of all the studies reviewed here use it. AutoCAMS simulates graded levels by varying the number of subsystems and automation failures. The maximum observed in this survey is seven levels of cognitive load, as found in [35]. Most of the studies employing the AutoCAMS task used three or more categories. It is a computer simulation aimed at modeling adaptive operational environments. Additionally, different types of flight simulations were used to generate four [36, 37, 38] and five [39, 40] levels of workload. Further, the Multi-Attribute Task Battery (MATB) [41] is also a generic operator paradigm like AutoCAMS; it simulates the generic operations a pilot performs while flying an aircraft. The task battery consists of multiple tasks to be performed simultaneously in a scheduled manner, which determines the induced workload levels. MATB has been widely used for eliciting two [42] or three [28] workload levels.
Furthermore, the Simultaneous Task Capacity (SIMKAP) [43, 44, 45] and N-back [46] tasks have been used to produce up to three workload levels. The SIMKAP task is a multitasking paradigm, and a few open-source datasets collected with it are available online. The N-back task consists of presenting a series of numbers or shapes to the subject on a screen; the subject reacts to each stimulus by evaluating whether the current element is the same as the one that appeared n steps earlier, hence the name. Other paradigms used to elicit two levels of workload include working memory tasks, mental arithmetic tasks, construction activities, and learning tasks, to name a few. Apart from these standard tasks, some studies have used in-house tasks [47, 48, 49] for eliciting cognitive workload [50].
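For concreteness, a minimal sketch of the n-back target structure described above (parameters are illustrative):

```python
import random

def nback_stream(n: int, length: int, alphabet: str = "ABCDEF"):
    """Generate a random stimulus stream and mark the n-back matches."""
    stream = [random.choice(alphabet) for _ in range(length)]
    is_target = [i >= n and stream[i] == stream[i - n] for i in range(length)]
    return stream, is_target

stream, targets = nback_stream(n=2, length=20)  # 2-back: match the item two steps earlier
```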
#### Iii-A1 Cognitive and Operative Paradigms
A recent review classified workload-inducing tasks into 'cognitive paradigms' and 'operative paradigms' [24] (Fig. 1), in which studies using operative paradigms specifically intended their research as a direct industrial application, unlike cognitive paradigms, which are controlled laboratory experiments focused on theoretical aspects of cognition and the cognitive workload construct. This analysis follows the same taxonomical classification when presenting results. Most articles retrieved for this survey implemented operative paradigms for inducing cognitive workload (67%), while the rest used cognitive paradigms (33%). This inference is based on Fig. 2, where the pie charts illustrate the prevalence of the experimental paradigms used to elicit cognitive workload. The prevalent cognitive paradigms encountered in this study are N-back [51, 52, 53, 54], Sternberg working memory [55, 39], mental arithmetic (MA) [56, 57], and SIMKAP [58, 59], while general flight simulation [60, 61, 62], driving simulations [63, 64, 65], MATB [28, 42, 66], and AutoCAMS [67, 68, 69, 70, 71, 72] are the prominent tasks categorized as operative paradigms. Overall, operative paradigms were encountered more often than cognitive paradigms.
Following the logic of the previous synthesis, a further sub-division of these experimental paradigms is proposed, based on the objective of the cognitive load-inducing task and the orientation of the research. This classification of experimental tasks demarcates the orientation and application of the intended experiment: whether the construct under study is the human agent and their continually varying cognitive states, or an operational aspect of the machine that may bring about characteristic cognitive states in the user upon encounter. It may help identify the specific context of the cognitive workload problem and help formulate it into a suitable experimental setting for the intended application. The four taxonomical classes are the 'operator paradigm,' the 'operation paradigm,' the 'user paradigm,' and the 'brain paradigm.' This taxonomy is depicted in Fig. 1D. The results summarized in this section are depicted in Fig. 2.
#### Iii-A2 Operation and Operator Paradigms
The operator paradigms simulate the general characteristics of an HMI, focusing on the operator, while the operation paradigms simulate one particular aspect of a given HMI to examine the corresponding functional states of the human agent. About 47% of all the articles surveyed use an operator paradigm, while about 20% employed an operation paradigm. Within the operator paradigms, flight simulations that mimic the typical operational environment of a pilot were used the most (30%). Monotonous automobile driving (19%) and the generic operator paradigm AutoCAMS (19%) were the next most prevalent. Further, MATB was used by 16% of the studies. Other operator paradigms are driving in varying traffic conditions [73], construction activities [74, 75], and learning tasks [76]; together they constituted about 16% of the operator paradigms encountered in this survey. Contrary to the operator paradigms, operation paradigms focused on inducing a specific cognitive state in response to a particular operational sequence or event, such as lane deviation. The most prevalent experimental task within the operation paradigms was the lane-deviation task (46%), where a lane-perturbation event is followed by monotonous automobile driving and the operator's reaction time is regressed against the driver's cognitive state. Other operation paradigms encountered in this survey are driving distraction [77], remote piloting of aerial vehicles [78], specific flight sequences [36, 37, 38], robot-assisted surgery [79], and construction activity [80], together constituting about half of the operation paradigms (54%).
#### Iii-A3 Brain and User Paradigms
User paradigms focus on user skills or specific attributes of the user, like multitasking ability or language proficiency, while brain paradigms focus on cognition-related aspects such as working memory (WM) and engagement. About 18% of all the reviewed articles induce workload with a user paradigm, while brain paradigms were used by 15% of all the articles. The prevalent user paradigm was MA (46%), where the subject continuously performs difficult numerical calculations to elicit binary workload levels, followed by SIMKAP (38%), where several sub-tasks are performed simultaneously to elicit graded workload levels. Other user paradigms include visuomotor tracking tasks [81], where a visual stimulus is tracked while it moves across a screen, and language incongruency tasks, where ambiguous pronouns elicit higher workload levels. These tasks focus on the user and their response to a generic BCI protocol.
Further, within the brain paradigms, the N-back task (46%) was the prevalent choice; the rest (54%) were several types of WM tasks and other in-house WM paradigms. These tasks focused on specific aspects of cognition, such as WM, attention, arousal, and vigilance.
#### Iii-A4 Experimental Environment
Usually, computer-based simulations were used to set up the task environments in both cognitive and operative paradigms. MATB and AutoCAMS are two computer-based operator paradigms extensively used for eliciting cognitive load; they resemble the typical operational environment of an aircraft pilot and of a generic industrial process controller, respectively. Unlike computer-based simulations, some deep-learning studies used EEG signals acquired from real-world vehicle operation scenarios [50, 82]. However, simulated task conditions are the norm in the field. Typically, these tasks are implemented in an augmented- or virtual-reality engine or a computer-based simulator. Though cognitive paradigms were created using only computer-based simulations, the task environments of operative paradigms were set up much more diversely.
In the automobile industry, augmented reality (AR) 'full-driving simulators' are less of an industry standard and are typically used only in research settings. Full-driving simulators consist of a real car mounted on a Stewart platform that can simulate motion with six degrees of freedom, together with a surrounding projected display. AR environments are the industry standard for pilot training in the aviation sector. These systems are known as 'full-flight simulators' and vary in the degree of realism they offer. On the high end, data collected using full-flight simulators were encountered in this survey [61, 83]. On the lower end, studies used a simple setup such as mounting the pilot's chair on a Stewart platform [39, 84] and providing a computer-based projected display. Additionally, virtual reality (VR) engines and head-mounted displays were used only for conducting construction activity paradigms [74].
These AR and VR systems are a good trade-off between real-life situations and controlled laboratory environments and can be
Figure 1: A) The PRISMA protocol followed in this review. B) The tree diagram of topics covered in this review. C) The set of keywords identified for each branch of the topic tree. The synonyms of a concept are joined using OR blocks, and the different sub-themes are joined using AND blocks to construct the final search string. D) The proposed taxonomical classification of CWL experimental paradigms.
extremely useful for researching real-life paradigms that are often dangerous, such as the lane-keeping task. However, these costly systems are not easily available, whereas computer-based simulations are accessible to everyone. Moreover, many studies have validated computer-based operator paradigms like AutoCAMS and MATB, and a plethora of datasets collected using these tasks already exists; they can therefore be used for comparing the fidelity of detection methodologies.
### Cognitive States Induced by CWL
The cognitive state of arousal, characterized by attention and engagement, is achieved when workload levels are optimal. Cognitive states and varying degrees of workload were seen across all the studies reviewed here. It was enquired whether any experimental paradigm was preferentially used to elicit a given cognitive state, and it was observed that specific cognitive states were induced by domain-specific experimental paradigms. AutoCAMS and MATB were particularly suited to generating highly graded CWL levels due to their highly modular nature. Notably, different workload levels were elicited by 47% of the studies reviewed here. The states of attention and engagement have been explored by 10% of the studies. Overload fatigue was examined by about 16% of the studies, while underload fatigue was explored by 18%. Further, WM was explored by about 9% of the studies. These results are described in Fig. 2G.
Moreover, underload fatigue was mostly explored in automobile paradigms, since detecting drowsy states is a popular domain-specific industrial need. On the other hand, operational fatigue was mostly explored in aviation paradigms. Apart from specific flight sequence simulations, only AutoCAMS was used to elicit operational exhaustion and overload fatigue. It is interesting to note that only brain paradigms explored WM, only operation paradigms explored underload fatigue (drowsiness), and only operator paradigms induced operational fatigue, while all types of paradigms explored attention, engagement, and multitasking abilities.
### Deep Neural Networks for CWL Detection
There were mainly two kinds of studies that used a DNN for CWL detection: those that treated the model as a black box [28, 79] and those that reasoned out the architecture and pipeline [58]. Other studies modified parts of the architecture and pipeline to suit the specific problem of EEG-based CWL detection [45, 51]. Most networks were implemented offline, and only two studies were found to explicitly use an online pipeline [56, 81], while others [56, 74, 85] employed a pseudo-online analysis. Further, one study implemented a CWL detection system on a smartphone.
Some studies (about 25%) introduced additional DL mechanisms, such as attention [86], residual identity connections [54], or multi-paths [42, 45], to endow the network with additional modeling power. Within this group, residual connections, as popularized by ResNets, were the prevalent (29%) choice [75, 87], followed by the attention mechanism [45, 54, 86] with about 17% prevalence. Residual connections are generally used to mitigate the vanishing gradient problem. Attention, on the other hand, enables the network to focus only on the parts of the input relevant to the problem at hand, and is a method known to improve both computational burden and performance. Attention has been used for feature selection in some studies, where it is employed before the network input layer, but most studies employed this mechanism within their network. In the latter case, the output of several deep learning layers containing high-dimensional features is transformed by weighted multiplication to determine its contribution to each prediction. Further, about 17% of the studies reviewed here used both ensemble learning [70, 71] and transfer learning [69, 87]. Ensemble learning trains several classifiers on subsets of the data and aggregates information from all of them when making a prediction. The method is advantageous in the case of EEG, since the signals are highly variable across sessions and subjects; one such study used an ensemble of AE networks to mitigate the cross-subject variability of EEG signals. It was enquired whether any preference exists for particular DNNs in different application domains; it was observed that most networks have been used in all types of experimental paradigms, and no such preference exists.
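A minimal sketch of this kind of within-network weighting, written as a generic attention pooling rather than any specific study's module:

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Scores each step of a feature sequence and aggregates by weighted sum."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):  # h: (batch, steps, dim) features from earlier layers
        weights = torch.softmax(self.score(h), dim=1)  # (batch, steps, 1)
        return (weights * h).sum(dim=1)  # contribution-weighted summary per prediction
```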
Figure 2: The pie chart (B) describes the percentage of articles using each task for collecting data. The pie charts have been organized to mimic the taxonomical bifurcation of cognitive and operative paradigms, with color-coded categories. The sub-charts show the distribution of operator (A) / user paradigms (F), and operation (E) / brain paradigms (C). The sizes of the slices signify the prevalence within each paradigm category. D) Depicts the application domains of CWL research. G) The bar charts depict the prevalence of the different cognitive states encountered in this study; they are color-coded to reflect the generalizability of the study, i.e., whether it is a cross-session/subject/task analysis.
#### Iii-C1 Network Architecture
Convolutional Neural Networks (CNNs) are the most prevalent network of choice (29%), presumably due to their success in computer vision. The availability of plug-and-play architectures has been credited in at least one of the studies as motivation for using a CNN for cognitive workload detection [79]. The generalizability of CNNs in recognizing spatial patterns from data structured as a 2D/3D matrix might be another reason for the choice [88]. Recurrent Neural Networks (RNNs) were the next most prevalent architecture (24%); their use was explicitly motivated by the recurrent nature of the network and its known capability of modeling temporal dependencies [28, 42, 54].
Hybrid Neural Networks (Hybrids) and Auto-Encoders (AEs) were used by about 15% and 12%, respectively, of the studies reviewed in this survey. The hybrid networks consisted only of CNN-RNN combinations; hybrids involving other networks or algorithms were not found in this systematic survey. Other architectures encountered are Multi-Layer Perceptrons / Artificial Neural Networks (MLP/ANN) [39, 63] (9%), Deep Belief Networks (DBNs) [68, 89] (8%), Generative Adversarial Networks (GANs) [90] (2%), and Graph Neural Networks (GNNs) [73] (2%). The prevalence of neural networks is given in Fig. 3A.
#### Iii-C2 Signal Feature Extraction
Popular features used in cognitive workload research can be categorized into six groups: _spectral_, _nonlinear_, _temporal_, _spatial_, _statistical_, and _others_. Most studies used a combination of features from these groups, and only a few chose a single type of feature. Since studies used combinations of features, each article was counted separately against each feature. About 72% of the studies reviewed here used a feature extraction step before modeling EEG with a DNN, while about 23% eliminated the feature extraction step and directly fed the EEG signals to the DNN for analysis. However, within the studies that used no specific feature extraction step, most employed some signal filtering or artifact reduction method to clean the signal, and very few studies directly used the raw EEG signal as input to the DNN [65].
Within the studies that employed a feature extraction step, about 54% extracted various spectral features from the EEG signals. This usually involved calculating the power spectral density using various methods, including Fourier and discrete wavelet transforms. Specifically, the theta, alpha, and beta frequency bands were extracted by most studies, since they are known to be the most relevant bands for CWL detection [14]. Some studies used all frequency sub-bands; however, most studies eliminate gamma at the pre-processing stage by applying a low-pass filter that excludes gamma oscillations.
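A minimal sketch of such a band-power extraction, using Welch's method (the band edges and sampling rate are common conventions, not those of any particular study):

```python
import numpy as np
from scipy.signal import welch

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg: np.ndarray, fs: int) -> np.ndarray:
    """eeg: (channels, samples). Returns (channels, n_bands) band powers."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    powers = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        powers.append(np.trapz(psd[..., mask], freqs[mask], axis=-1))
    return np.stack(powers, axis=-1)

x = np.random.randn(32, 10 * 256)   # synthetic: 32 channels, 10 s at 256 Hz
features = band_powers(x, fs=256)   # (32, 3) theta/alpha/beta powers
```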
Nonlinear features, such as various entropy-based measures, were the next most prevalent feature type (15%). Their use was motivated by the nonlinear behavior of EEG [29]; the authors expected entropy measures to contribute significantly to classification performance, and [28] found that their RNN performed slightly worse when nonlinear features were not given to the network. Most notably, approximate entropy [28], Shannon entropy [68, 69], spectral entropy [69], nonlinear functional connectivity features [52], and mutual information [91] were used by the articles reviewed in this analysis. Generally, nonlinear features were fused or concatenated with other feature types before being fed to the network, as in [28], while [52, 91] trained their classifiers exclusively on nonlinear features. Further, some studies (11%) used statistical measures of the EEG signal, such as the mean, variance, and kurtosis, for training their networks. All statistical features were extracted by [76]; however, most studies that used statistical features typically extracted the mean, variance, skewness, and kurtosis. Additionally, most studies concatenate all statistical features of interest before feeding them as the final input to the network.
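Minimal sketches of two of these feature families, spectral entropy (a nonlinear measure) and the usual statistical moments; the normalizations shown are one common convention, not necessarily the one used in the cited studies:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis, skew

def spectral_entropy(eeg: np.ndarray, fs: int) -> np.ndarray:
    """Shannon entropy of the normalized PSD, computed per channel."""
    _, psd = welch(eeg, fs=fs, axis=-1)
    p = psd / psd.sum(axis=-1, keepdims=True)
    return -(p * np.log2(p + 1e-12)).sum(axis=-1)

def statistical_features(eeg: np.ndarray) -> np.ndarray:
    """Per-channel mean, variance, skewness, and kurtosis, concatenated."""
    return np.concatenate(
        [eeg.mean(-1), eeg.var(-1), skew(eeg, axis=-1), kurtosis(eeg, axis=-1)]
    )
```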
About 10% of studies used temporal features other than the time-domain signal, such as auto-regressive coefficients [89] and moving averages. All temporally varying features except the time-varying frequency content (a spectral feature) and the time-domain signal itself (no feature) are grouped into this category. Features from the remaining groups, combined, were found in about 8% of the studies. Among them, one study explored the use of fractals [63], while two studies explored functional connectivity [52] and graph features. A recent review exhaustively lists the popular EEG features typically used in signal classification [11].
It was further enquired whether any feature was preferentially
Figure 3: D) The chart depicts the prevalence of DNNs encountered in this survey. A) The plot describes the percentage of different features used by the studies reviewed here. E) This bar chart depicts the input formulations used by the networks; 1D feature vectors were clearly the preferred input. C) The chart depicts the paradigm-specific choice of networks, grouped according to the proposed taxonomization; it is clearly seen that there is no preference of network for any of the tasks. G) The preference of a network for a given input formulation is plotted. B) The preference of features among networks is plotted in this bar chart.
employed for a given DNN, to see whether the network architecture, in addition to the complexity of the signal, necessitates feature extraction. It was observed that spectral features, along with nonlinear measures, were used for all DNN architectures; since these features have strong theoretical foundations in EEG analysis, it is unsurprising that all networks used them as input features. However, GNNs and GANs were not found using these two measures, possibly due to their architectures. It was also observed that DBNs and AEs did not use temporal features or the time-domain signal, as they exclusively preferred a concatenated feature vector. This analysis is depicted in Figs. 3A and 3B.
#### Iii-C3 Network Input Formulation
There were three main categories of input formulations in this analysis: the feature vector, the image matrix, and the EEG matrix. The feature vector is usually a concatenation of all the features in a format suitable for the employed DNN, while the image matrix is single-channel or multi-channel image-like data created from the EEG signals using various signal transformation methods. The EEG matrix contains the multi-channel signals in their native two-dimensional (2D) form. Most studies that used multiple categories of features concatenated these into a suitable feature vector. Overall, feature vectors were the most used input formulation, followed by image matrices and EEG matrices. AEs, DBNs, and ANNs exclusively used feature vectors as input, while CNNs were never trained using only feature vectors. This result is depicted in Fig. 3E.
Within the feature-vector category, there were 1-dimensional (1D) and 2D feature vectors, where the 1D feature vector is usually a concatenation of all the features extracted from the data without any specific sequential relationship amongst its elements, like the ones used in [28, 67]. Only spectral and nonlinear features were concatenated in [28, 69, 92] to create 1D or 2D vectors, while statistical features were concatenated in addition to the above in [50, 68]. Additionally, feature vectors were created using the spectral power density in different bands and a CWL index known as the fatigue index [37]. The 2D feature vectors were also concatenations of features, except when the whole spectral decomposition matrix was used.
Within the image category, single-channel images (2D) [55, 61, 80] and multi-channel images (3D or higher) [75, 90, 93] were used as input to the networks. These images were mostly created by transforming the time-domain EEG signal into the spectral domain. Many variations of image-like data were created from EEG signals using some topographical projection and interpolation to transform the EEG data into a multi-channel or single-channel image; these methods mainly differed in the feature extraction step and in the transformation used for the mapping [94, 95]. Certain studies employ the spectral density at each electrode location within a given time window to produce a series of images, like a brain power map [37]. Some studies created an image-like representation by concatenating the spectral decomposition matrices of different frequency bands into the multiple channels of an image [53], suggesting that the EEG-image is a general feature that may be used for EEG signal analysis.
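A minimal sketch of one such topographical mapping, interpolating per-channel band powers onto a 2D grid; the projected electrode coordinates and grid size are assumptions:

```python
import numpy as np
from scipy.interpolate import griddata

def eeg_image(band_power: np.ndarray, xy: np.ndarray, size: int = 32) -> np.ndarray:
    """band_power: (channels,) values for one band; xy: (channels, 2) electrode
    positions projected onto the unit square. Returns a (size, size) image."""
    gx, gy = np.mgrid[0:1:size * 1j, 0:1:size * 1j]
    return griddata(xy, band_power, (gx, gy), method="cubic", fill_value=0.0)

xy = np.random.rand(32, 2)     # hypothetical projected electrode coordinates
theta = np.random.rand(32)     # per-channel theta-band power
image = eeg_image(theta, xy)   # stacking several bands yields a multi-channel image
```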
The EEG matrix was directly given as input to a model under the assumption that DNNs can leverage their depth to model the inherently noisy EEG signals. Time-domain input was given mostly to RNNs and occasionally to CNNs. Some studies used EEG signals assuming them to be 2D images [79, 85]. This, however, is not entirely supported by the assumptions of the CNN models employed, since the arrangement of channels (the rows of the matrix) does not follow any pattern resembling their spatial locations on the scalp. Some studies used a 1-dimensional (1D) EEG vector, while others concatenated multiple 2D frames into a 3-dimensional (3D) EEG matrix. Some CNNs can perform depth-wise, channel-wise, separable, and/or dilated convolutions and are thus adapted to process temporal dependencies.
It was further enquired whether the DNNs favor any specific type of input formulation and whether any consideration should be given when creating the input vector for a given network. Time-domain inputs were used exclusively by MLPs, CNNs, RNNs, and their hybrids. Time-domain input is appropriate for an RNN; for the others, insensitivity to temporal dependencies may prevent the time-domain signal from being useful to the network. It was also observed that image-like representations were only used for CNNs and their hybrids. These results are summarized in Fig. 3G.
#### Iii-C4 Generalizability of the Network
The least generalizable model is the subject-specific model, which has been explored by 27% of the studies; many of these studies recorded EEG from only one session per subject. Cross-session models mark the next level of generalizability, and about 10% of studies explored such a detection strategy. About 58% of the studies reviewed proposed cross-subject classifiers, which suggests a high level of generalizability across subjects and sessions. However, it is notable that most studies pooled multiple subjects/sessions under simplistic assumptions when training, and did not consider the nonlinear statistical shifts present in EEG signals from multiple sessions and subjects. The generalizability of the cognitive states and the DNN detectors is depicted in Fig. 2G.
The highest level of generalizability was achieved by around 4% of the studies, as they built models to recognize workload levels across different tasks [48, 54]. These classifiers may have accurately estimated universal discriminatory features of different cognitive workload levels. However, it is still unclear what the significant contributing factors are behind the predictions and decisions made by these networks; very few studies [58, 96] have interpreted the networks' latent representations and attempted explainable or interpretable deep learning.
## IV Discussion
The principal motivation for this systematic literature analysis was to identify the most suitable methods for elicitation (experimental paradigms) and detection (EEG-based DNNs) specific to the different application domains of CWL research. This analysis found no specific trends in architectural choice or training strategy according to the tasks or the targeted cognitive states, as expected. However, clear patterns were present regarding the types of features and the data structures used for training a DNN, as described in the results section.
Deliberations on the limitations of DNN-based detection lead to generalizability, as overfitting is an ever-present concern for any DNN, and the peculiarities of EEG data only aggravate the issue. Some studies have built subject-specific classifiers, since EEG is known to have nonlinear statistical shifts across different subjects; these can be considered the least generalizable models. Additionally, since EEG is a non-stationary signal across multiple sessions of a single subject, cross-session detection of cognitive workload is a challenging problem; this might be caused by the fact that the number of recording sessions in typical EEG datasets does not offer enough data for the network to capture variations across sessions. Most deep learning pipelines use a cross-subject strategy to train the network. This trend may be attributed to the typically low sample sizes of EEG data, which would not offer sufficient samples from a single subject to train a DNN; notably, most studies did not employ any mathematical transformation to bring the signals from multiple subjects into a shared database and instead pooled them indiscriminately. Therefore, it can be suggested that existing DNNs
can already perform cross-subject classification, suggesting they offer sufficient generalizability to model users' cognitive workload levels. Further, some studies attempted cross-task classification of workload levels using the same DNN [54, 97, 98]. The performance of these networks suggests that cognitive workload levels elicited by different tasks may produce similar neural responses and can be detected using a deep neural network. In summary, DNNs offer sufficient generalizability to be employed for CWL-level detection across subjects and tasks, provided they are trained with sufficiently heterogeneous data.
A key issue identified in this survey is that of an appropriate input formulation. CNNs are particularly good at learning spatial relationships in a 2D matrix representation of data. However, since the EEG channels (matrix rows) are not arranged according to the spatial locations of the corresponding electrodes on the scalp, the EEG matrix does not adequately represent the spatial relationships between the channels. CNNs assume that the input data has spatial dependencies; thus, the DNN cannot capture scalp spatial information when the native EEG matrix is presented to the network. Therefore, further experimental controls need to be defined when employing CNNs directly on EEG matrices. One suggestion is to randomly permute the locations of the EEG channels in the matrix representation and cross-validate the model, as sketched below. Further consideration of the input formulations for RNNs suggests that feeding a concatenated feature vector to an RNN is problematic, since RNNs assume that the elements of the input sequence share temporal dependencies. Therefore, concatenating temporally unrelated features into a feature vector is unjustifiable in the case of an RNN. This issue was correctly identified by [28], though they used a set of temporally uncorrelated spectral and nonlinear features concatenated into a 1D vector.
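A minimal sketch of the suggested channel-permutation control (data shapes are illustrative):

```python
import numpy as np

def permuted_channel_views(X: np.ndarray, rng: np.random.Generator, n_perms: int = 5):
    """Yield copies of X (trials, channels, samples) with channel rows shuffled,
    to test whether a CNN's score depends on the arbitrary channel ordering."""
    for _ in range(n_perms):
        order = rng.permutation(X.shape[1])
        yield X[:, order, :], order

rng = np.random.default_rng(0)
X = np.random.randn(100, 32, 512)  # synthetic trials
for X_perm, order in permuted_channel_views(X, rng):
    pass  # train/evaluate the CNN on X_perm; compare scores across permutations
```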
Another key issue identified relates to the subjects of CWL experiments. In laboratory paradigms, the subjects are mostly graduate students. Aviation and automobile paradigms possessed larger variability, since professionals were used as subjects. However, all other paradigms predominantly use university students, presumably because of availability. In most cases, subjects tend to be in the below-40 age bracket, yet cognitive workload is known to change with age. Therefore, one suggestion this review puts forth is to include older and younger individuals alike in the subject pool.
Though many have deliberated on the core problems of EEG-based cognitive state detection, solutions to these fundamental problems remain elusive. This article postulates that deep neural networks offer promising solutions to the challenges of EEG-based cognitive workload detection, such as automatic feature extraction and signal variability. Further, it is hypothesized that DNNs (using transfer learning) can overcome the domain statistical shifts in EEG data across different sessions and subjects without sophisticated data-pooling techniques [28], given that the training set is sufficiently large and heterogeneous. Furthermore, few online classifiers that may be useful in a practical BCI were found, though some studies validated their frameworks using pseudo-online DNN designs, and only one study implemented its CWL framework on a smartphone. These findings suggest that extensive research on real-time frameworks is needed to determine whether DNNs are a viable computing solution for real-time cognitive state detection in online BCI protocols.
### Cognitive Load Continuum
The bibliometric data presented in this article suggest that two central themes are dealt with in the studies reviewed here: the overload or underload of cognitive resources, and the resulting drowsiness or distraction, respectively. Further, this systematic review theorizes a proposition termed 'the cognitive load continuum,' in which all the disparate cognitive states and associated workload levels are expressed as a function of cognitive workload demand and the cognitive resources available for allocation, using the existing multiple-resource theory [5]. Transient neurophysiological changes that lead up to a certain cognitive state, like that of fatigue, can be modeled as state-transitory causes and effects in this framework. The proposition is graphically described in Fig. 4A.
In an operational context, these operator functional states (OFS) can vary continually due to task-related affairs. The optimal OFS is hypothesized to be an unstable equilibrium in its cognitive landscape. Furthermore, sub-optimal OFS can result from being under cognitive load for a prolonged period, which can be termed fatigue. There are two types of fatigue. Fatigue from cognitive overload, or operational exhaustion, leads to sub-optimal operator performance as the attentional resources are depleted due to physiological fatigue. Fatigue from cognitive underload, or drowsiness, also leads to sub-optimal operator performance as the attentional resources are reduced due to mind wandering or preoccupation with sleep. This relationship is depicted in Fig. 4B.
## V Conclusion
The general operator paradigm was simulated using AutoCAMS and MATB with highly graded workload levels. Further, it was observed that specific paradigms were used for eliciting certain cognitive states, though a wide variety of tasks were used for eliciting graded or binary workload levels. Notably, drowsiness and underload fatigue were explored more by automobile driving tasks, while operational exhaustion and overload fatigue were explored more often
Figure 4: A) The extreme ends of this scale are the extremes of cognition, the lower end being an unconscious state without any perception, cognition, or action. B) The curve depicted is the cognitive demand-allocation curve, on which the operator brain state achieves an unstable equilibrium of optimal cognitive load and delivers maximum performance; on either side of this curve, the operator performance decreases.
|
2309.11701 | Dimension of Pinned Distance Sets for Semi-Regular Sets | We prove that if $E\subseteq \R^2$ is analytic and $1<d < \dim_H(E)$, there
are ``many'' points $x\in E$ such that the Hausdorff dimension of the pinned
distance set $\Delta_x E$ is at least $d\left(1 -
\frac{\left(D-1\right)\left(D-d\right)}{2D^2+\left(2-4d\right)D+d^2+d-2}\right)$,
where $D = \dim_P(E)$. In particular, we prove that $\dim_H(\Delta_x E) \geq
\frac{d(d-4)}{d-5}$ for these $x$, which gives the best known lower bound for
this problem when $d \in (1, 5-\sqrt{15})$. We also prove that there exists
some $x\in E$ such that the packing dimension of $\Delta_x E$ is at least
$\frac{12 -\sqrt{2}}{8\sqrt{2}}$. Moreover, whenever the packing dimension of
$E$ is sufficiently close to the Hausdorff dimension of $E$, we show the pinned
distance set $\Delta_x E$ has full Hausdorff dimension for many points $x\in
E$; in particular the condition is that $D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}$.
We also consider the pinned distance problem between two sets $X, Y\subseteq
\R^2$, both of Hausdorff dimension greater than 1. We show that if either $X$
or $Y$ has equal Hausdorff and packing dimensions, the pinned distance
$\Delta_x Y$ has full Hausdorff dimension for many points $x\in X$. | Jacob B. Fiedler, D. M. Stull | 2023-09-21T00:58:35Z | http://arxiv.org/abs/2309.11701v1 | # Dimension of pinned distance sets for semi-regular sets
###### Abstract.
We prove that if \(E\subseteq\mathbb{R}^{2}\) is analytic and \(1<d<\dim_{H}(E)\), there are "many" points \(x\in E\) such that the Hausdorff dimension of the pinned distance set \(\Delta_{x}E\) is at least \(d\left(1-\frac{(D-1)(D-d)}{2D^{2}+(2-4d)D+d^{2}+d-2}\right)\), where \(D=\dim_{P}(E)\). In particular, we prove that \(\dim_{H}(\Delta_{x}E)\geq\frac{d(d-4)}{d-5}\) for these \(x\), which gives the best known lower bound for this problem when \(d\in(1,5-\sqrt{15})\). We also prove that there exists some \(x\in E\) such that the packing dimension of \(\Delta_{x}E\) is at least \(\frac{12-\sqrt{2}}{8\sqrt{2}}\). Moreover, whenever the packing dimension of \(E\) is sufficiently close to the Hausdorff dimension of \(E\), we show the pinned distance set \(\Delta_{x}E\) has full Hausdorff dimension for many points \(x\in E\); in particular the condition is that \(D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\).
We also consider the pinned distance problem between two sets \(X,Y\subseteq\mathbb{R}^{2}\), both of Hausdorff dimension greater than \(1\). We show that if either \(X\) or \(Y\) has equal Hausdorff and packing dimensions, the pinned distance \(\Delta_{x}Y\) has full Hausdorff dimension for many points \(x\in X\).
The first author was supported in part by NSF DMS-2037851 and NSF DMS-2246906. Both authors are grateful to the American Institute of Mathematics for hosting the workshop _Effective methods in measure and dimension_, which was the genesis of this collaboration.
A closely related problem is to prove lower bounds on the Hausdorff or packing dimension of \(\Delta_{x}E\) for "many" \(x\in E\), given \(d=\dim_{H}(E)\). Restricting consideration to \(\mathbb{R}^{2}\) for the remainder of the paper, we now discuss a few previous bounds of this type. For \(d\in(1,\frac{5}{4})\), Liu proved that \(\dim_{H}(\Delta_{x}E)>\frac{4d}{3}-\frac{2}{3}\) in [9]. Shmerkin proved a better bound for \(d\) not much greater than \(1\), namely that \(\dim_{H}(\Delta_{x}E)\geq\frac{2}{3}+\frac{1}{42}\)[19]. The second author improved the best known bound for \(d\) not much larger than \(1\), proving that \(\dim_{H}(\Delta_{x}E)\geq\frac{d}{4}+\frac{1}{2}\)[20].
Complementing the bounds cited in the previous paragraph, Du, Ou, Ren and Zhang proved a bound for \(d\)_less_ than the dimension threshold of Falconer's conjecture [3]. In \(\mathbb{R}^{2}\), they showed \(\sup_{x\in E}\dim_{H}(\Delta_{x}E)\geq\frac{5d}{3}-1\). As for packing dimension, in [7], Shmerkin and Keleti proved that
\[\dim_{P}(\Delta_{x}E)\geq\frac{1}{4}\left(1+d+\sqrt{3d(2-d)}\right).\]
Finally, we note that Shmerkin proved that for sets \(E\) which are _regular_ in the sense that \(\dim_{H}(E)=\dim_{P}(E)\), so long as \(\dim_{H}(E)>1\), for most \(x\) the pinned distance set has full dimension, i.e., \(\dim_{H}(\Delta_{x}E)=1\) [18].1
Footnote 1: Observe that this regularity is weaker than Ahlfors-David regularity, which Orponen considered in [15]. Throughout the remainder of the paper, by regular, we mean the more general notion.
Our work makes a number of improvements to the pinned distance problem in the plane. First, we are able to prove a dimensional lower bound which takes into account the packing dimension of \(E\).
**Theorem 1**.: _Let \(E\subseteq\mathbb{R}^{2}\) be analytic such that \(1<d<\dim_{H}(E)\). Then there is a subset \(F\subseteq E\) of full dimension such that_
\[\dim_{H}(\Delta_{x}E)\geq d\left(1-\frac{\left(D-1\right)\left(D-d\right)}{2D ^{2}+\left(2-4d\right)D+d^{2}+d-2}\right),\]
_for all \(x\in F\), where \(D=\dim_{P}(E)\). In particular, \(\dim_{H}(E\setminus F)\leq d<\dim_{H}(E)\). Furthermore, if_
\[D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2},\]
_then \(\dim_{H}(\Delta_{x}E)=1\)._
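For a quick numerical reading of the semi-regularity condition (our arithmetic, not part of the theorem): since every \(E\subseteq\mathbb{R}^{2}\) has \(\dim_{P}(E)\leq 2\), the condition holds automatically once

\[\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}>2,\qquad\text{i.e., }d>\frac{5+\sqrt{5}}{3+\sqrt{5}}=\frac{5-\sqrt{5}}{2}\approx 1.382,\]

so for such \(d\) the pinned distance sets of any planar analytic set have full Hausdorff dimension for many pins. The condition is nonvacuous for every \(d>1\), since the threshold exceeds \(d\) by \(\frac{(1+\sqrt{5})(d-1)}{2}\), but this allowed gap between \(D\) and \(d\) shrinks to \(0\) as \(d\to 1^{+}\).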
Note that the second part of the theorem significantly generalizes Shmerkin's result for regular sets \(E\subseteq\mathbb{R}^{2}\): \(E\) need not be fully regular for most of its pinned distance sets to have full dimension. Instead, it only needs to have sufficiently close Hausdorff and packing dimensions, a form of "semi-regularity". As a corollary of this theorem, we obtain the following improvement over the second author's previous bound.
**Corollary 2**.: _Let \(E\subseteq\mathbb{R}^{2}\) be analytic such that \(1<d<\dim_{H}(E)\). Then there is a subset \(F\subseteq E\) of full dimension such that_
\[\dim_{H}(\Delta_{x}E)\geq\frac{d(d-4)}{d-5},\]
_for all \(x\in F\)._
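To see where the expression \(\frac{d(d-4)}{d-5}\) comes from, one can substitute the trivial planar value \(D=2\) into Theorem 1 (a sanity check on our part; formally one also needs that \(D=2\) is the worst case). The denominator becomes

\[2D^{2}+(2-4d)D+d^{2}+d-2\Big|_{D=2}=d^{2}-7d+10=(d-2)(d-5),\]

while the numerator becomes \((D-1)(D-d)|_{D=2}=2-d=-(d-2)\), so that

\[d\left(1-\frac{2-d}{(d-2)(d-5)}\right)=d\left(1+\frac{1}{d-5}\right)=\frac{d(d-4)}{d-5}.\]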
Additionally, we can bound the dimension of the pinned distance sets in terms of only the packing dimension, as below.
**Corollary 3**.: _Let \(E\subseteq\mathbb{R}^{2}\) be analytic such that \(\dim_{H}(E)>1\). Then, for all \(x\in E\) outside a set of (Hausdorff) dimension one,_
\[\dim_{H}(\Delta_{x}E)\geq\frac{D+1}{2D},\]
_where \(D=\dim_{P}(E)\)._
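For a rough sense of scale (our arithmetic, not a new claim): the bound \(\frac{D+1}{2D}=\frac{1}{2}+\frac{1}{2D}\) decreases in \(D\), equals \(1\) at \(D=1\), and at the trivial planar value \(D=2\) gives

\[\frac{D+1}{2D}\Big|_{D=2}=\frac{3}{4},\]

so any planar analytic set with \(\dim_{H}(E)>1\) has \(\dim_{H}(\Delta_{x}E)\geq\frac{3}{4}\) for the stated pins.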
This corollary turns out to be useful in establishing the following improvement on Shmerkin and Keleti's packing dimension bound.2
Footnote 2: More precisely, we use its effective analog.
**Theorem 4**.: _Let \(E\subseteq\mathbb{R}^{2}\) be analytic such that \(\dim_{H}(E)>1\). Then there exists some \(x\in E\) such that,_
\[\dim_{P}(\Delta_{x}E)\geq\frac{12-\sqrt{2}}{8\sqrt{2}}\approx 0.9356.\]
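Simplifying the constant (just rationalizing the denominator):

\[\frac{12-\sqrt{2}}{8\sqrt{2}}=\frac{12\sqrt{2}-2}{16}=\frac{6\sqrt{2}-1}{8}\approx 0.9356.\]

For comparison, the Keleti-Shmerkin bound \(\frac{1}{4}\left(1+d+\sqrt{3d(2-d)}\right)\) evaluates to \(\frac{1}{4}(2+\sqrt{3})\approx 0.933\) at \(d=1\), so Theorem 4 improves on it for \(d\) close to \(1\).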
However, our results are more general than the above. We are able to prove essentially the same bounds in the case that our pinned points \(x\) lie in some set \(X\) and we consider the set of distances from \(x\) to some analytic set \(Y\). Theorem 1 is itself an immediate corollary of the following more general theorem.
**Theorem 5**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be analytic such that \(1<d_{y}=\dim_{H}(Y)\) and \(D_{y}=\dim_{P}(Y)\). Let \(X\subseteq\mathbb{R}^{2}\) be such that \(1<d_{x}<\dim_{H}(X)\) and \(D_{x}=\dim_{P}(X)\). Then there is some \(F\subseteq X\) of full dimension such that_
\[\dim_{H}(\Delta_{x}Y)\geq d\left(1-\frac{(D-1)\left(D-d\right)}{2D^{2}+\left(2-4d\right)D+d^{2}+d-2}\right),\]
_for all \(x\in F\), where \(d=\min\{d_{x},d_{y}\}\) and \(D=\max\{D_{x},D_{y}\}\). In particular, \(\dim_{H}(X\setminus F)\leq d_{x}<\dim_{H}(X)\). Furthermore, if_
\[D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2},\]
_then \(\dim_{H}(\Delta_{x}Y)=1\)._
The second portion of this theorem amounts to a semi-regularity condition on both \(X\) and \(Y\). Our work also shows that we can achieve full dimension pinned distance sets when all that we require of \(X\) is \(\dim_{H}(X)>1\), at the cost of a somewhat more strict semi-regularity condition on \(Y\).
**Theorem 6**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be analytic with \(\dim_{H}(Y)>1\) and \(\dim_{P}(Y)<2\dim_{H}(Y)-1\). Let \(X\subseteq\mathbb{R}^{2}\) be any set such that \(\dim_{H}(X)>1\). Then for all \(x\in X\) outside a set of (Hausdorff) dimension one,_
\[\dim_{H}(\Delta_{x}Y)=1.\]
Finally, we are able to show that regularity in just the pin set \(X\) is good enough to imply that typical pinned distance sets have dimension \(1\).
**Theorem 7**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be analytic with \(\dim_{H}(Y)>1\). Let \(X\subseteq\mathbb{R}^{2}\) be any set such that \(\dim_{H}(X)=\dim_{P}(X)>1\). Then there is a subset \(F\subseteq X\) such that,_
\[\dim_{H}(\Delta_{x}Y)=1,\]
_for all \(x\in F\). Moreover, \(\dim_{H}(X\setminus F)<\dim_{H}(X)\)._
Now, we outline the structure of the paper. This paper employs recently developed "effective" methods in the study of dimension. These methods connect Hausdorff and packing dimension to Kolmogorov complexity through _point-to-set principles_, and thus open classical problems up to attack by tools from computability theory. Section 2 covers the necessary preliminaries from computability theory, as well as a few less introductory but crucial lemmas.
Section 3 deals with the proof of an effective projection theorem for points \(x\) which is primarily used to bound the growth of the complexity of \(|x-y|\) in the next section. In order to obtain this projection theorem, we need to perform a certain partitioning argument on the interval \([1,r]\) considering the complexity function \(K_{r}(x)\). Partitioning \([1,r]\) into smaller intervals so that the complexity function has useful properties on each interval, or even just certain intervals, is a recurring idea throughout this paper.
Section 4 is the main thrust of the argument on the effective side. The idea is to now partition \([1,r]\) so that \(K_{r}(|x-y|)\) either grows at an optimal rate of \(1\) or grows at an average rate at least equal to the average growth rate of \(K_{r}(y)\) on each interval. First, in section 4.2, we construct a partition which only uses the first kind of interval; this does not require the application of the projection theorem and will thus be essential in proving Theorem 6.3 Section 4.3 details the construction of a more general partition that achieves better bounds using the projection theorem, and section 4.4 sums over this partition to obtain the effective analog of Theorem 5. This effective analog is a bound on the complexity of the distance at _every_ precision, and in section 4.5 we use it as a basis to obtain improved bounds at certain well-chosen precisions; this yields the effective analog of Theorem 4.
Footnote 3: Recall that Theorem 6 was more or less independent of the dimension of \(X\), so long as it is greater than \(1\). This is why we do not use the projection theorem here, as it concerns the pin \(x\) and its effective dimension.
Section 5 is where we perform the reductions to the classical results. Essentially, we have to show that sets of the given dimensions always have points \(x\) and \(y\) with the desired algorithmic properties, which then imply the bounds of our effective theorems hold for certain distances. Performing these reductions yields Theorem 4, Theorem 5, and Theorem 6.
Section 6, wherein the goal is to prove Theorem 7, is more self-contained. The idea is that if the point \(x\) is regular in the sense of having equal effective Hausdorff and effective packing dimensions, we get an essentially optimal effective projection theorem. This allows us to take intervals as long as we want when partitioning \(|x-y|\), which makes it straightforward to establish the bound of \(1\). A complication when performing the reduction to the classical result is that regular sets do not necessarily contain sufficiently many regular points, so we need a variant of the projection theorem that holds for \(x\)'s that are _almost_ regular. As a consequence, we cannot take arbitrarily long intervals when partitioning \(|x-y|\), but as \(\dim_{H}(Y)>d_{y}>1\), we do not need arbitrarily long intervals to get the bound of \(1\). Thus, the reduction goes through.
_Remark_.: For readers who want a relatively straightforward demonstration of the main ideas of this paper, we suggest considering starting with section 6. The case of almost regular \(x\), while it does not follow from the previous sections, has a similar structure to sections 3-5 without as many of the complications.
## 2. Preliminaries
### Kolmogorov complexity and effective dimension
The _conditional Kolmogorov complexity_ of binary string \(\sigma\in\{0,1\}^{*}\) given a binary string \(\tau\in\{0,1\}^{*}\) is the length of the shortest program \(\pi\) that will output \(\sigma\) given \(\tau\) as input. Formally, the conditional Kolmogorov complexity of \(\sigma\) given \(\tau\) is
\[K(\sigma\mid\tau)=\min_{\pi\in\{0,1\}^{*}}\left\{\ell(\pi):U(\pi,\tau)=\sigma \right\}\,,\]
where \(U\) is a fixed universal prefix-free Turing machine and \(\ell(\pi)\) is the length of \(\pi\). Any \(\pi\) that achieves this minimum is said to _testify_ to, or be a _witness_ to, the value \(K(\sigma\mid\tau)\). The _Kolmogorov complexity_ of a binary string \(\sigma\) is \(K(\sigma)=K(\sigma\mid\lambda)\), where \(\lambda\) is the empty string. We can easily extend these definitions to other finite data objects, e.g., vectors in \(\mathbb{Q}^{n}\), via standard binary encodings. See [8] for details.
The _Kolmogorov complexity_ of a point \(x\in\mathbb{R}^{m}\) at _precision_\(r\in\mathbb{N}\) is the length of the shortest program \(\pi\) that outputs a _precision_-\(r\) rational estimate for \(x\). Formally, this is
\[K_{r}(x)=\min\left\{K(p)\,:\,p\in B_{2^{-r}}(x)\cap\mathbb{Q}^{m}\right\}\,,\]
where \(B_{\varepsilon}(x)\) denotes the open ball of radius \(\varepsilon\) centered on \(x\). Note that this implies that the Kolmogorov complexity of a point is non-decreasing in precision. The _conditional Kolmogorov complexity_ of \(x\) at precision \(r\) given \(y\in\mathbb{R}^{n}\) at precision \(s\in\mathbb{N}\) is
\[K_{r,s}(x\mid y)=\max\left\{\,\min\{K(p\mid q)\,:\,p\in B_{2^{-r}}(x)\cap \mathbb{Q}^{m}\}\,:\,q\in B_{2^{-s}}(y)\cap\mathbb{Q}^{n}\right\}.\]
When the precisions \(r\) and \(s\) are equal, we abbreviate \(K_{r,r}(x\mid y)\) by \(K_{r}(x\mid y)\). As a matter of notational convenience, if we are given a non-integral positive real as a precision parameter, we will always round up to the next integer. Thus \(K_{r}(x)\) denotes \(K_{\lceil r\rceil}(x)\) whenever \(r\in(0,\infty)\).
A basic property, proven by Case and J. Lutz [1], shows that the growth rate of the Kolmogorov complexity of a point is essentially bounded by the dimension of the ambient space. Since this paper concerns \(\mathbb{R}^{2}\), we will frequently use this in the form that, for any \(\varepsilon>0\) and all sufficiently large \(s\),
\[K_{r+s}(x)\leq K_{r}(x)+2s+\varepsilon s.\]
We may _relativize_ the definitions in this section to an arbitrary oracle set \(A\subseteq\mathbb{N}\). We will frequently consider the complexity of a point \(x\in\mathbb{R}^{n}\)_relative to a point_\(y\in\mathbb{R}^{m}\), i.e., relative to an oracle set \(A_{y}\) that encodes the binary expansion of \(y\) in a standard way. We then write \(K_{r}^{y}(x)\) for \(K_{r}^{A_{y}}(x)\). Oracle access to the _entire_ binary expansion of a point is no less useful than conditional access to that binary expansion only up to a certain precision. Thus, we note that, for every \(x\in\mathbb{R}^{n}\) and \(y\in\mathbb{R}^{m}\),
\[K_{s,r}(x\mid y)\geq K_{s}^{y}(x)-O(\log r)-O(\log s), \tag{1}\]
for every \(s,r\in\mathbb{N}\).
One of the most useful properties of Kolmogorov complexity is that it obeys the _symmetry of information_. That is, for every \(\sigma,\tau\in\{0,1\}^{*}\),
\[K(\sigma,\tau)=K(\sigma)+K(\tau\mid\sigma,K(\sigma))+O(1)\,.\]
We also have the more technical lemmas detailing versions of symmetry of information that hold for Kolmogorov complexity in \(\mathbb{R}^{n}\). Lemma 8 was proved in the second author's work [13].
**Lemma 8** ([13]).: _For every \(m,n\in\mathbb{N}\), \(x\in\mathbb{R}^{m}\), \(y\in\mathbb{R}^{n}\), and \(r,s\in\mathbb{N}\) with \(r\geq s\),_
1. \(|K_{r}(x\mid y)+K_{r}(y)-K_{r}(x,y)|\leq O(\log r)+O(\log\log|y|)\,.\)__
2. \(|K_{r,s}(x\mid x)+K_{s}(x)-K_{r}(x)|\leq O(\log r)+O(\log\log|x|)\,.\)__
A consequence of Lemma 8 is the following.
**Lemma 9** ([13]).: _Let \(m,n\in\mathbb{N}\), \(x\in\mathbb{R}^{m}\), \(z\in\mathbb{R}^{n}\), \(\varepsilon>0\) and \(r\in\mathbb{N}\). If \(K_{r}^{x}(z)\geq K_{r}(z)-O(\log r)\), then the following hold for all \(s\leq r\)._
1. \(K_{s}^{x}(z)\geq K_{s}(z)-O(\log r)\,.\)__
2. \(K_{s,r}(x\mid z)\geq K_{s}(x)-O(\log r)\,.\)__
J. Lutz [10] initiated the study of effective dimensions (also known as _algorithmic dimensions_) by effectivizing Hausdorff dimension using betting strategies called _gales_, which generalize martingales. Mayordomo showed that effective Hausdorff dimension can be characterized using Kolmogorov complexity [14]. In this paper, we use this characterization as a definition. The _effective Hausdorff dimension_ of a point \(x\in\mathbb{R}^{n}\) is
\[\dim(x)=\liminf_{r\to\infty}\frac{K_{r}(x)}{r}\,.\]
The _effective packing dimension_ of a point \(x\in\mathbb{R}^{n}\) is
\[\operatorname{Dim}(x)=\limsup_{r\to\infty}\frac{K_{r}(x)}{r}\,.\]
We can relativize both definitions, so that the effective Hausdorff and packing dimension _with respect to an oracle_\(A\subseteq\mathbb{N}\) are
\[\dim^{A}(x)=\liminf_{r\to\infty}\frac{K_{r}^{A}(x)}{r}\text{ and }\operatorname{ Dim}^{A}(x)=\limsup_{r\to\infty}\frac{K_{r}^{A}(x)}{r}\]
### The Point-to-Set Principle
The _point-to-set principle_ shows that the Hausdorff and packing dimension of a set can be characterized by the effective Hausdorff and effective packing dimension of its individual points. Specifically, J. Lutz and N. Lutz [11] showed the following for arbitrary subsets of \(\mathbb{R}^{n}\).
**Theorem 10** (Point-to-set principle [11]).: _Let \(n\in\mathbb{N}\) and \(E\subseteq\mathbb{R}^{n}\). Then_
\[\dim_{\mathrm{H}}(E) =\min_{A\subseteq\mathbb{N}}\sup_{x\in E}\dim^{A}(x).\] \[\dim_{\mathrm{P}}(E) =\min_{A\subseteq\mathbb{N}}\sup_{x\in E}\operatorname{Dim}^{A}( x).\]
Stated as above, it is clear that Hausdorff and packing dimension are in a certain respect dual to each other. The only difference is a limit inferior versus a limit superior for the individual points. This immediately implies that the packing dimension of a set is no less than its Hausdorff dimension.
The general point-to-set principle is extremely useful, but for some applications, we would like to either remove the oracle, or at least be able to say something about which oracles achieve the minimum. The first point-to-set principle for Hausdorff
dimension, which holds for a restricted class of sets, was implicitly proven by Hitchcock [6] and J. Lutz [10].
A set \(E\subseteq\mathbb{R}^{n}\) is _effectively compact relative to \(A\)_ if the set of finite open covers of \(E\) by rational balls is computably enumerable relative to \(A\).4 We will use the fact that every compact set is effectively compact relative to some oracle.
Footnote 4: The balls are rational in the sense that the coordinates of the centers and the radii are all rational numbers, which allows us to identify each ball by a finite string.
**Theorem 11** ([6, 10]).: _Let \(E\subseteq\mathbb{R}^{n}\) and \(A\subseteq\mathbb{N}\) be such that \(E\) is effectively compact relative to \(A\). Then_
\[\dim_{\mathrm{H}}(E)=\sup_{x\in E}\dim^{A}(x)\,.\]
_Remark_.: We only state this restricted point-to-set principle for Hausdorff dimension because it is known that it fails for packing dimension, see [2]. Informally, this can be seen to occur because effective compactness and Hausdorff dimension both deal with covers, whereas it is hard to convert the covers we have from effective compactness into usable information about packings.
In order to apply Theorem 11 to the pinned distance sets, we need the following fact of computable analysis.
**Observation 12**.: _Let \(E\subseteq\mathbb{R}^{2}\) be a compact set and let \(A\subseteq\mathbb{N}\) be an oracle relative to which \(E\) is effectively compact. Then, for every \(x\in\mathbb{R}^{2}\), \(\Delta_{x}E\) is effectively compact relative to \((x,A)\)._
### Helpful lemmas
In this section, we recall several lemmas which were introduced by Lutz and Stull [13, 12] and which will be used throughout the paper. Note that these lemmas each relativize with the addition of an oracle \(A\). The first lemma shows that the precision to which we can compute \(e\), given \(x,w\) such that \(p_{e}x=p_{e}w\), depends linearly on the distance between \(x\) and \(w\).
**Lemma 13** ([13]).: _Let \(z\in\mathbb{R}^{2}\), \(e\in S^{1}\), and \(r\in\mathbb{N}\). Let \(w\in\mathbb{R}^{2}\) such that \(p_{e}z=p_{e}w\) up to precision \(r\).5 Then_
Footnote 5: This lemma was originally proven without the “up to precision \(r\)” qualifier, but we rephrase like this to match the form we will use the lemma in. The generalization is essentially immediate, because in this case there will be some sufficiently close point to \(w\) with _exactly_ the same projection as \(z\), indistinguishable from \(w\) at precision \(r\).
\[K_{r}(w)\geq K_{t}(z)+K_{r-t,r}(e\mid z)-O(\log r)\,,\]
_where \(t:=-\log|z-w|\)._
We will commonly need to lower the complexity of points at specified positions. The following lemma shows that conditional complexity gives a convenient way to do this.
**Lemma 14** ([13]).: _Let \(z\in\mathbb{R}^{2}\), \(\eta\in\mathbb{Q}_{+}\), and \(r\in\mathbb{N}\). Then there is an oracle \(D=D(r,z,\eta)\) with the following properties._
1. _For every_ \(t\leq r\)_,_ \[K_{t}^{D}(z)=\min\{\eta r,K_{t}(z)\}+O(\log r)\,.\]
_._
2. _For every_ \(m,t\in\mathbb{N}\) _and_ \(y\in\mathbb{R}^{m}\)_,_ \[K_{t,r}^{D}(y\mid z)=K_{t,r}(y\mid z)+O(\log r)\,,\] _and_ \[K_{t}^{z,D}(y)=K_{t}^{z}(y)+O(\log r)\,.\]
3. _If_ \(B\subseteq\mathbb{N}\) _satisfies_ \(K_{r}^{B}(z)\geq K_{r}(z)-O(\log r)\)_, then_ \[K_{r}^{B,D}(z)\geq K_{r}^{D}(z)-O(\log r)\,.\]
4. _For every_ \(t\in\mathbb{N}\)_,_ \(u\in\mathbb{R}^{n},w\in\mathbb{R}^{m}\)__ \[K_{r,t}(u\mid w)\leq K_{r,t}^{D}(u\mid w)+K_{r}(z)-\eta r+O(\log r)\,.\]
_In particular, this oracle \(D\) encodes \(\sigma\), the lexicographically first time-minimizing witness to \(K(z\!\upharpoonright\!r\mid\!z\!\upharpoonright\!s)\), where \(s=\max\{t\leq r\,:\,K_{t-1}(z)\leq\eta r\}\)._
The final lemma in this section is a crucial tool at several points of the argument. Under certain conditions, it lets us lower bound the complexity growth of the \(|x-y|\) by the complexity growth of \(y\) on particular intervals.
**Lemma 15**.: _Suppose that \(x,y\in\mathbb{R}^{2}\), \(t<r\in\mathbb{N}\), and \(\eta,\varepsilon\in\mathbb{Q}_{+}\) satisfy the following conditions._
1. \(K_{r}(y)\leq\left(\eta+\frac{\varepsilon}{2}\right)r\)_._
2. _For every_ \(w\in B_{2^{-t}}(y)\) _such that_ \(|x-y|=|x-w|\)_,_ \[K_{r}(w)\geq\eta r+\min\{\varepsilon r,r-s-\varepsilon r\}\,,\] _where_ \(s=-\log|y-w|\leq r\)_._
_Then for every oracle set \(A\subseteq\mathbb{N}\),_
\[K_{r,t}^{A,x}(y\mid y)\leq K_{r,t}^{A,x}(|x-y|\mid y)+3\varepsilon r+K( \varepsilon,\eta)+O(\log r)\,.\]
## 3. Projection theorem
The main goal of this section is to prove the following projection theorem:
**Theorem 16**.: _Let \(x\in\mathbb{R}^{2}\), \(e\in\mathcal{S}^{1}\), \(\varepsilon\in\mathbb{Q}^{+}\), \(C\in\mathbb{N}\), \(A\subseteq\mathbb{N}\), and \(t,r\in\mathbb{N}\). Suppose that \(r\) is sufficiently large, and that the following hold._
1. \(1<d\leq\dim^{A}(x)\leq\operatorname{Dim}^{A}(x)\leq D\)_._
2. \(t\geq\frac{d(2-D)}{2}r\)_._
3. \(K_{s}^{x,A}(e)\geq s-C\log s\)_, for all_ \(s\leq t\)_._
_Then_
\[K_{r}^{A}(x\,|\,p_{e}x,e)\leq\max\{\frac{D-1}{D}(dr-t)+K_{r}^{A}(x)-dr,K_{r}^{ A}(x)-r\}+\varepsilon r.\]
This projection theorem has somewhat more restrictive hypotheses than the projection theorem of [20]. Namely, depending on \(D\), we may have a rather large lower bound on \(t\). However, when we need to apply this projection theorem in the next section, if \(t\) is smaller than the above, it is easy to deduce the desired result without reference to this theorem. We will begin by introducing some definitions and tools from [20] which will be of use in proving the modified projection theorem.
### Projection preliminaries
We need to consider \(K^{A}_{s}(x)\) as a function of \(s\). For convenience, we define \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) to be the piece-wise linear function which agrees with \(K^{A}_{s}(x)\) on the integers such that
\[f(a)=f(\lfloor a\rfloor)+(a-\lfloor a\rfloor)(f(\lceil a\rceil)-f(\lfloor a\rfloor)),\]
for any non-integer \(a\). Note that \(f\) is non-decreasing since \(K^{A}_{r}(x)\) is, and, for every \(a<b\in\mathbb{N}\),
\[f(b)-f(a)\leq 2(b-a)+O(\log\lceil b\rceil).\]
That is, the maximal growth rate of \(f\) on large intervals is about \(2\).
There are several special kinds of intervals on which we can bound the complexity of projections.
* An interval \([a,b]\) is called _teal_ if \(f(b)-f(c)\leq b-c\) for every \(a\leq c\leq b\).
* An interval \([a,b]\) is called _yellow_ if \(f(c)-f(a)\geq c-a\) for every \(a\leq c\leq b\).
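As a toy illustration (ours, not from [20]): suppose \(f(s)=2s\) for \(s\leq a\) and \(f(s)=2a\) for \(a\leq s\leq b\). Then every subinterval of \([0,a]\) is yellow, every subinterval of \([a,b]\) is teal, and the whole interval \([0,b]\) is yellow precisely when \(b\leq 2a\), since yellowness requires \(f(c)-f(0)=2a\geq c\) for all \(c\in(a,b]\).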
More concretely, these intervals are useful due to the following proposition, which is Corollary 16 in [20]:
**Proposition 17**.: _Let \(A\subseteq\mathbb{N}\), \(x\in\mathbb{R}^{2},e\in\mathcal{S}^{1},\varepsilon\in\mathbb{Q}_{+}\), \(C\in\mathbb{N}\) and \(t<r\in\mathbb{R}_{+}\). Suppose that \(r\) is sufficiently large and \(K^{A,x}_{s}(e)\geq s-C\log r\), for all \(s\leq r-t\). Then the following hold._
1. _If_ \([t,r]\) _is yellow,_ \[K^{A}_{r,r,r,t}(x\mid p_{e}x,e,x)\leq K^{A}_{r,t}(x\mid x)-(r-t)+\varepsilon r.\]
2. _If_ \([t,r]\) _is teal,_ \[K^{A}_{r,r,r,t}(x\mid p_{e}x,e,x)\leq\varepsilon r.\]
We denote the set of teal intervals by \(T\) and the set of yellow intervals by \(Y\).
Supposing that a partition of \([1,r]\) consists of only yellow and teal intervals and has essentially a constant number of terms, we could repeatedly apply symmetry of information together with Proposition 17 and deduce a useful bound for \(K^{A}_{r}(x\mid p_{e}x,e)\). We make the notion of such an "admissible" partition more precise: a partition \(\mathcal{P}=\{[a_{i},a_{i+1}]\}_{i=0}^{k}\) of closed intervals with disjoint interiors is \((M,r,t)\)**-admissible** if \([1,r]=\cup_{i}[a_{i},a_{i+1}]\), and it satisfies the following conditions.
1. \(k\leq M\),
2. \([a_{i},a_{i+1}]\) is either yellow or teal,
3. \(a_{i+1}\leq a_{i}+t\).
We can repeatedly apply the symmetry of information to write the complexity \(K^{A}_{r}(x\mid p_{e}x,e)\) as a sum of complexities over a partition of \([1,r]\), allowing us to apply Proposition 17. This idea is encapsulated by the following result in [20].
**Lemma 18**.: _Suppose that \(x\in\mathbb{R}^{2}\), \(e\in\mathcal{S}^{1}\), \(\varepsilon\in\mathbb{Q}^{+}\), \(C\in\mathbb{N}\), \(t,r\in\mathbb{N}\) satisfy (P1)-(P3). If \(\mathcal{P}=\{[a_{i},a_{i+1}]\}_{i=0}^{k}\) is a \((3C,r,t)\)-admissible partition, and \(r\) is sufficiently large, then_
\[K^{A}_{r}(x\mid p_{e}x,e)\leq\varepsilon r+\sum_{i\in\textbf{Bad}}\left[K^{A}_{a_{i+1},a_{i}}(x\mid x)-(a_{i+1}-a_{i})\right],\]
_where_
\[\textbf{Bad}=\{i\leq k\mid[a_{i},a_{i+1}]\notin T\}.\]
Note that admissible partitions of specified intervals always exist, as per the following lemma:
**Lemma 19**.: _Let \(x\in\mathbb{R}^{2}\), \(r,C\in\mathbb{N}\) and \(\frac{r}{C}\leq t<r\). For any \(0\leq a<b\leq r\), there is a \((3C,r,t)\)-admissible partition of \([a,b]\)._
However, a partition merely being admissible isn't enough to establish the desired bounds. We can do better by considering the special intervals which are both yellow and teal.
* An interval \([a,b]\) is _green_ if it is yellow and teal and \(b-a\leq t\).
Green intervals are often advantageous because they combine the best of yellow intervals (complexity of \(x\) grows superlinearly) and the best of teal intervals (we can compute \(x\) with few bits given its projection). We denote the set of green intervals by \(G\). We now introduce two more types of intervals to formulate a few results pertaining to green intervals.
* An interval \([a,b]\) is called _red_ if \(f\) is strictly increasing on \([a,b]\).
* An interval \([a,b]\) is called _blue_ if \(f\) is constant on \([a,b]\).
In [20], a partition \(\hat{\mathcal{P}}=\hat{\mathcal{P}}(x,r,t)\) of \([1,r]\) with the following properties is constructed:
* The interiors of the elements of \(\hat{\mathcal{P}}\) are disjoint.
* Each interval is red, blue, or green.
* If \([a,b]\) is red and \([b,c]\) is blue (not necessarily in \(\hat{\mathcal{P}}\)), then \(b\) is contained in the interior of a green interval in \(\hat{\mathcal{P}}\). Moreover, any \(b\) that's contained in _any_ green interval is contained in a green interval in \(\hat{\mathcal{P}}\).
* Suppose \(I_{0},\ldots,I_{n+1}\) is a red-green-blue sequence in \(\hat{\mathcal{P}}\), i.e., a sequence \(I_{0},\ldots,I_{n+1}\) of consecutive intervals in \(\hat{\mathcal{P}}\) such that \(I_{0}\) is red, \(I_{1},\ldots,I_{n}\) are green, and \(I_{n+1}\) is blue. Then the total length of \(I_{1},\ldots,I_{n}\) is at least \(t\).
Call a maximal collection of consecutive green intervals a _green block_. The last property, that green blocks preceded by a red and succeeded by a blue interval have length at least \(t\), will be particularly important. Informally, this property holds because if a green interval had length less than \(t\) with red on the left and blue on the right, it would always be possible to lengthen the green by consuming some of the blue and red.
The final fact from [20] we need in this section is the following: if there is no red-green-blue sequence in \(\hat{\mathcal{P}}=\hat{\mathcal{P}}(x,r,t)\), then there is an essentially all yellow admissible partition of \([1,r]\). More specifically, in this case there is an admissible partition \(\mathcal{P}\) such that, for some \(c\) not depending on \(r\), if \(c\leq a_{i}<a_{i+1}\) and \([a_{i},a_{i+1}]\in\mathcal{P}\), then \([a_{i},a_{i+1}]\) is yellow. Intuitively, this is because if a blue interval appears after a red interval, there has to be a red-green-blue sequence somewhere in between. Since \(\dim(x)>1\), after a certain point there has to be a red interval. Consequently, after a certain point, there can only be red or green intervals, which we can convert into an all-yellow partition of \([c,r]\).
_Remark_.: In fact, it is easy to convert a partition of any subinterval of \([1,r]\) consisting of only red and green intervals into an all-yellow \(3C\)-admissible partition of the subinterval. Just observe that green intervals are yellow, any subinterval of a red interval is yellow, and the union of adjacent yellow intervals is yellow. Greedily combining the red and green intervals from the left to the right and beginning a new yellow interval when the length of the previous yellow is about to exceed \(t\) accomplishes this.
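The greedy recombination just described is concrete enough to write down. The following is a minimal sketch (our illustration, not code from [20]; the function name and input format are ours), which assumes the consecutive red/green intervals are given by their sorted breakpoints and that each input interval already has length at most \(t\):

```python
def greedy_yellow_partition(endpoints, t):
    """Merge consecutive red/green (hence yellow) intervals, given by sorted
    breakpoints a_0 < a_1 < ... < a_k, into yellow intervals of length <= t.
    Unions of adjacent yellow intervals are yellow, so each merged interval
    is yellow; we start a new one whenever merging would exceed length t."""
    assert all(b - a <= t for a, b in zip(endpoints, endpoints[1:]))
    merged = [endpoints[0]]   # left endpoints of the merged yellow intervals
    prev = endpoints[0]
    for right in endpoints[1:]:
        if right - merged[-1] > t:  # current yellow interval would exceed t,
            merged.append(prev)     # so close it at the previous breakpoint
        prev = right
    merged.append(endpoints[-1])
    return list(zip(merged, merged[1:]))

# Example: five consecutive red/green intervals, combined with t = 6.
print(greedy_yellow_partition([0, 3, 5, 9, 12, 14], 6))
# [(0, 5), (5, 9), (9, 14)]
```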
With these definitions and tools, we can now prove the modified projection theorem.
### Proof of the projection theorem
**Theorem 16**.: _Let \(x\in\mathbb{R}^{2}\), \(e\in\mathcal{S}^{1}\), \(\varepsilon\in\mathbb{Q}^{+}\), \(C\in\mathbb{N}\), \(A\subseteq\mathbb{N}\), and \(t,r\in\mathbb{N}\). Suppose that \(r\) is sufficiently large, and that the following hold._
1. \(1<d\leq\dim(x)\leq\operatorname{Dim}(x)\leq D\)_._
2. \(t\geq\frac{d(2-D)}{2}r\)_._
3. \(K_{s}^{A,x}(e)\geq s-C\log s\)_, for all_ \(s\leq t\)_._
_Then_
\[K_{r}^{A}(x\,|\,p_{e}x,e)\leq\max\{\frac{D-1}{D}(dr-t)+K_{r}^{A}(x)-dr,K_{r}^{A }(x)-r\}+\varepsilon r.\]
Proof.: Let \(x\) be as in the statement of the theorem, \(r\) sufficiently large, and \(t\geq\frac{d(2-D)}{2}r\). Let \(\hat{\mathcal{P}}=\hat{\mathcal{P}}(x,r,t)\) be a partition of \([1,r]\) satisfying the properties described in the last section. Let \(S\) be the number of red-green-blue sequences in \(\hat{\mathcal{P}}\). Note that \(S<\frac{2}{d(2-D)}\), since the green block in each red-green-blue sequence has length at least \(t\), and these blocks have to be separated from each other by some amount.
**The case \(S=0\):** In this case, there are no red-green-blue sequences, so let \(\mathcal{P}\) be the essentially all yellow \(3C\)-admissible partition guaranteed by the last fact of the previous section. Then
\[\sum_{I_{i}\in\mathcal{P}-Y}a_{i+1}-a_{i}\leq c,\]
for some constant \(c\), and so for sufficiently large \(r\)
\[B:=\sum_{I_{i}\in\mathcal{P}\cap Y}a_{i+1}-a_{i}\geq r-\frac{\varepsilon r}{2}. \tag{2}\]
By symmetry of information, we can write
\[K_{r}^{A}(x) \geq\sum_{I_{i}\in\mathcal{P}}K_{a_{i+1},a_{i}}^{A}(x\mid x)-O( \log r)\] \[\geq\sum_{I_{i}\in\mathcal{P}\cap Y}K_{a_{i+1},a_{i}}^{A}(x\mid x )-O(\log r)\] \[\geq-\frac{\varepsilon r}{4}+\sum_{I_{i}\in\mathcal{P}\cap Y}K_{a _{i+1},a_{i}}^{A}(x\mid x)\]
To apply Lemma 18 we note that, on green intervals, \(K_{a,b}^{A}(x\mid x)=b-a\). Therefore, by Lemma 18 we see that
\[K_{r}^{A}(x) \geq K_{r}^{A}(x\mid p_{e}x,e)+B-\frac{\varepsilon r}{2}\] \[\geq K_{r}^{A}(x\mid p_{e}x,e)+r-\varepsilon r.\]
Thus,
\[K_{r}^{A}(x\mid p_{e}x,e)\leq K_{r}^{A}(x)-r+\varepsilon r, \tag{3}\]
and the conclusion follows.
**The case \(S=1\):** Now, suppose there is exactly one red-green-blue sequence in \(\hat{\mathcal{P}}\). Then there is a precision \(1<r_{1}<r-t\) and an \(s\geq t\) such that \([r_{1},r_{1}+s]\) is green in \(\hat{\mathcal{P}}\). Let \(\mathcal{P}_{1}\) be a \(3C\)-admissible partition of \([1,r_{1}]\), and \(\mathcal{P}_{2}\) be a \(3C\)-admissible partition of \([r_{1}+s,r]\), guaranteed by Lemma 19. Since there is exactly one red-green-blue sequence in \(\hat{\mathcal{P}}\), \(\mathcal{P}_{1}\) contains no red-green-blue sequences. Therefore, using the same argument as in the previous case, \(\mathcal{P}_{1}\) is essentially covered by yellow intervals and we conclude that
\[K^{A}_{r_{1}}(x\mid p_{e}x,e) \leq K^{A}_{r_{1}}(x)-r_{1}+\frac{\varepsilon r_{1}}{4}\] \[\leq(D-1)r_{1}+\frac{\varepsilon r_{1}}{2}.\]
Let \(r_{2}\geq r_{1}+s\) be the minimal precision such that \([r_{2},r]\) is a union of yellow intervals.6 Therefore, we have
Footnote 6: We allow \(r_{2}\) to be equal to \(r\), in the case that \([r_{1}+s,r]\) is covered by teal intervals.
\[K^{A}_{r}(x\mid p_{e}x,e) \leq K^{A}_{r_{1}}(x)-r_{1}+K^{A}_{r,r_{2}}(x\mid x)-(r-r_{2})+\varepsilon r\] \[\leq(D-1)\,r_{1}+K^{A}_{r,r_{2}}(x\mid x)-(r-r_{2})+\varepsilon r\] \[\leq(D-1)\,r_{1}+(d-1)\,(r-r_{2})+K^{A}_{r}(x)-dr+\varepsilon r\] \[\leq(D-1)\,B+K^{A}_{r}(x)-dr+\varepsilon r.\]
Here \(B=r_{1}+(r-r_{2})\) denotes the total length of the non-teal intervals. If \(B\leq\frac{dr-t}{D}\), the conclusion follows. So, we assume that \(B>\frac{dr-t}{D}\). Hence,
\[K^{A}_{r}(x\mid p_{e}x,e) \leq K^{A}_{r}(x)-s-B+\varepsilon r\] \[\leq K^{A}_{r}(x)-t-B+\varepsilon r\] \[<K^{A}_{r}(x)-t-\frac{dr-t}{D}+\varepsilon r\] \[=K^{A}_{r}(x)-dr+\frac{D-1}{D}\,(dr-t)+\varepsilon r,\]
and the conclusion follows.
**The case \(S\geq 2\):** We now consider the case that there are at least two red-green-blue sequences in \(\hat{\mathcal{P}}\). Let
\[L=\sum_{I_{i}\in\mathcal{P}\cap G}a_{i+1}-a_{i} \tag{4}\]
be the total length of the green intervals in \(\mathcal{P}\). In this case we have \(L\geq 2t\). Let
\[B=\sum_{i\in\mathbf{Bad}}a_{i+1}-a_{i} \tag{5}\]
be the total length of the bad (non-teal) intervals in \(\mathcal{P}\).
We first prove that
\[K^{A}_{r}(x\mid p_{e}x,e)\leq\min\{K^{A}_{r}(x)-B-2t,B\}+\varepsilon r. \tag{6}\]
Since \(x\) is an element of \(\mathbb{R}^{2}\),
\[K^{A}_{a_{i+1},a_{i}}(x\mid x)\leq 2(a_{i+1}-a_{i})+O(\log r).\]
Therefore, by Lemma 18, with respect to \(\varepsilon/4\),
\[K^{A}_{r}(x\mid p_{e}x,e)\leq\frac{\varepsilon r}{2}+B. \tag{7}\]
By repeated applications of the symmetry of information,
\[K_{r}^{A}(x) \geq-\frac{\varepsilon r}{2}+\sum_{I_{i}\in\mathcal{P}\cap T}K_{a_{ i+1},a_{i}}^{A}(x\mid x)+\sum_{i\in\mathbf{Bad}}K_{a_{i+1},a_{i}}^{A}(x\mid x)\] \[\geq-\frac{\varepsilon r}{2}+\sum_{I_{i}\in\mathcal{P}\cap G}K_{a _{i+1},a_{i}}^{A}(x\mid x)+\sum_{i\in\mathbf{Bad}}K_{a_{i+1},a_{i}}^{A}(x\mid x)\] \[=-\frac{\varepsilon r}{2}+L+\sum_{i\in\mathbf{Bad}}K_{a_{i+1},a_{ i}}^{A}(x\mid x) \tag{8}\] \[\geq 2t+K_{r}^{A}(x\mid p_{e}x,e)+B-\varepsilon r\]
Combining (7) and (8) proves inequality (6).
By inequality (6), if
\[B\leq K_{r}^{A}(x)-dr+\frac{D-1}{D}(dr-t),\]
we are done, so we assume otherwise. Applying (6) again and using our assumption on \(t\) implies that
\[K_{r}^{A}(x\mid p_{e}x,e) \leq K_{r}^{A}(x)-2t-B+\varepsilon r\] \[<\frac{d}{D}r-\frac{D+1}{D}t+\varepsilon r\] \[\leq\frac{D-1}{D}(dr-t)+\varepsilon r\] \[\leq K_{r}^{A}(x)-dr+\frac{D-1}{D}\left(dr-t\right)+\varepsilon r,\]
and the proof is complete.
## 4. Effective dimension of distances
In the previous section, we considered partitions of the interval \([1,r]\) depending on the complexity function of our pinned point \(x\). In particular, we were able to use these partitions to get a bound on the complexity of \(x\) given \(e\) and the projection of \(x\) in the direction of \(e\). Now, we need to consider the complexity function of \(y\) and relate this to the complexity of \(|x-y|\). Similar to before, we let \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be the piece-wise linear function which agrees with \(K_{s}^{A}(y)\) on the integers, and
\[f(a)=f(\lfloor a\rfloor)+(a-\lfloor a\rfloor)(f(\lceil a\rceil)-f(\lfloor a \rfloor)),\]
for any non-integer \(a\). Since
\[K_{s}^{A}(y)\leq K_{s+1}^{A}(y)\]
for every \(s\in\mathbb{N}\), \(f\) is non-decreasing and since \(y\in\mathbb{R}^{2}\), for every \(a<b\in\mathbb{R}\),
\[f(b)-f(a)\leq 2(b-a)+O(\log\lceil b\rceil).\]
As before, we make the following definitions: an interval \([a,b]\) is called _teal_ if \(f(b)-f(c)\leq b-c\) for every \(c\in[a,b]\). It is called _yellow_ if \(f(c)-f(a)\geq c-a\), for every \(c\in[a,b]\). We denote the set of teal intervals by \(T\) and the set of yellow intervals by \(Y\).
For reference, here we list some conditions that our points \(x\) and \(y\) will be assumed to satisfy throughout the remainder of this section. Let \(x,y\in\mathbb{R}^{2}\), \(e=\frac{x-y}{|x-y|}\) and \(A,B\subseteq\mathbb{N}\). For this section, we let \(d_{x}=\dim^{A}(x)\), \(D_{x}=\operatorname{Dim}^{A}(x)\)
\(d_{y}=\dim^{A}(y)\) and \(D_{y}=\operatorname{Dim}^{A}(y)\). We will assume that \(x\) and \(y\) satisfy the following conditions.
1. \(d_{x},d_{y}>1\)
2. \(K_{r}^{x,A}(e)=r-O(\log r)\) for all \(r\).
3. \(K_{r}^{x,A,B}(y)\geq K_{r}^{A}(y)-O(\log r)\) for all sufficiently large \(r\).
4. \(K_{r}^{A}(e\mid y)=r-o(r)\) for all \(r\).
### Complexity of distances on yellow and teal intervals
As compared to the corresponding part of [20], we'll need to partition \([1,r]\) more carefully. However, similar to the proof of the projection theorem in the previous section, there are a few tools from [20] that we can reuse.
**Lemma 20**.: _Suppose that \(A\subseteq\mathbb{N}\), \(x,y\in\mathbb{R}^{2}\) and \(e=\frac{x-y}{|x-y|}\) satisfy (C1)-(C4) for every \(r\in\mathbb{N}\). Then for every \(\varepsilon\in\mathbb{Q}_{+}\) and all sufficiently large \(r\in\mathbb{N}\), the following hold._
1. _If_ \([t,r]\) _is yellow and_ \(t\leq r\leq 2t\)__ \[K_{r,r,t}^{A,x}(y\mid|x-y|,y)\leq K_{r,t}^{A}(y\mid y)-(r-t)+\varepsilon r.\]
2. _If_ \([t,r]\) _is teal, and_ \(t\leq r\leq 2t\)_,_ \[K_{r,r,t}^{A,x}(y\mid|x-y|,y)\leq\varepsilon r.\]
We say that a partition \(\mathcal{P}=\{[a_{i},a_{i+1}]\}_{i=0}^{k}\) of intervals with disjoint interiors is _good_ if \([1,r]=\cup_{i}[a_{i},a_{i+1}]\) and it satisfies the following conditions.
1. \([a_{i},a_{i+1}]\) is either yellow or teal,
2. \(a_{i+1}\leq 2a_{i}\), for every \(i\) and
3. \(a_{i+2}>2a_{i}\) for every \(i<k\).
Note that (G3) ensures that the errors do not pile up. Furthermore, observe that (G2) is somewhat different than the admissibility condition in the previous section. There, we had that an interval could not be longer than \(t\): essentially some fixed quantity, at least when partitioning. Now, the requirement is that the intervals cannot be any more than "doubling", so the best we can hope for in a partition is a logarithmic number of terms. Indeed, just as we had admissible partitions for every \([a,b]\), the following lemma guarantees the existence of good partitions.
**Lemma 21**.: _For every \(y\in\mathbb{R}^{2}\) and every \(r\in\mathbb{N}\), there is a good partition of \([1,r]\)._
The following lemma uses repeated applications of the symmetry of information to write \(K_{r}^{x}(y\mid|x-y|)\) as a sum of its complexity on the intervals of a partition. The conclusion then follows via Lemma 20.
**Lemma 22**.: _Let \(A\subseteq\mathbb{N}\). Let \(\mathcal{P}=\{[a_{i},a_{i+1}]\}_{i=0}^{k}\) be a good partition. Then_
\[K_{r}^{A,x}(y\mid|x-y|)\leq\varepsilon r+\sum_{i\in\textbf{Bad}}\left[K_{a_{i+1},a_{i}}^{A}(y\mid y)-(a_{i+1}-a_{i})\right],\]
_where_
\[\textbf{Bad}=\{i\leq k\mid[a_{i},a_{i+1}]\notin T\}.\]
So applying the previous lemma with \(\frac{\varepsilon}{2}\), absorbing the log term for sufficiently large \(r\), and recalling condition (C3), we have that
\[K_{r}^{x,A}(|x-y|)\geq K_{r}^{A}(y)-\sum_{i\in\mathbf{Bad}}\left[K_{a_{i+1},a_{i}}^{A}(y\mid y)-(a_{i+1}-a_{i})\right]-\varepsilon r, \tag{9}\]
Constructing a particular good partition to optimize this bound, either at every precision or at well-chosen precisions, will be crucial to proving a bound on the effective Hausdorff and effective packing dimension of such points (respectively) and will thus be a key goal of the remainder of this section.
### Sufficient conditions for an all-yellow partition
We will describe a more general partition strategy in the next subsection, but here we will introduce a related partition for a simpler scenario. In particular, we will discuss situations where we can guarantee the existence of a good partition consisting (essentially) of only yellow intervals which are not more than doubling. To see why this is significant, observe that if we had such a partition \(\mathcal{P}\), equation (9), together with the observation that the complexity of \(y\) grows at an average rate of exactly \(1\) on yellow intervals that are _also_ green, implies that for sufficiently large \(r\)
\[K_{r}^{x,A}(|x-y|) \geq K_{r}^{A}(y)-\sum_{i\in\mathcal{P}}\left[K_{a_{i+1},a_{i}}^{A}(y\mid y)-(a_{i+1}-a_{i})\right]-\frac{\varepsilon}{2}r\] \[\geq K_{r}^{A}(y)-K_{r}^{A}(y)+r-\varepsilon r\] \[=(1-\varepsilon)r\]
Since we can take \(\varepsilon\) to be as small as desired, this implies the dimension of the distance is \(1\). We now state the necessary conditions and make the above argument more rigorous.
**Proposition 23**.: _Suppose that, in addition to conditions \((C1)-(C4)\), we also have that \(D_{y}<2d_{y}-1\). Then \(\dim^{x,A}(|x-y|)=1\)._
Intuitively, this extra requirement expresses that, if \(y\) is semi-regular, then we have the same conclusion as if \(y\) is regular (in the sense of having equal effective Hausdorff and effective packing dimension). As long as these dimensions do not differ by too much, we can find an all-yellow partition. To see why we have this bound specifically, consider the adversary complexity function in Figure 1. We can only work with yellow intervals that are at most doubling, so we want to choose \(D_{y}\) and \(d_{y}\) to guarantee that the average growth rate of the complexity from \(\frac{r}{2}\) to \(r\) is at least \(1\). Maximizing \(K_{\frac{r}{2}}^{A}(y)\) and minimizing \(K_{r}^{A}(y)\) allows us to conclude the above when \(D_{y}<2d_{y}-1\). Now, we prove the proposition.
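Making this heuristic explicit (an informal computation of ours, suppressing the \(\varepsilon\) terms): against the adversary profile with \(K^{A}_{r/2}(y)=D_{y}\cdot\frac{r}{2}\) and \(K^{A}_{r}(y)=d_{y}r\), the average growth rate on the at-most-doubling interval \([\frac{r}{2},r]\) is

\[\frac{d_{y}r-D_{y}\frac{r}{2}}{r-\frac{r}{2}}=2d_{y}-D_{y},\]

which is at least \(1\) exactly when \(D_{y}\leq 2d_{y}-1\).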
Proof of Proposition 23.: Pick an \(\varepsilon>0\) small enough that \(d_{x}-\frac{\varepsilon}{4}>1\) and \(D_{y}+\frac{\varepsilon}{4}<2(d_{y}-\frac{\varepsilon}{4})-1\). For all \(r\) sufficiently large, we have that \((d_{y}-\frac{\varepsilon}{4})s<K_{s}^{A}(y)<(D_{y}+\frac{\varepsilon}{4})s\) whenever \(s\geq\frac{\varepsilon}{4}r\). We now construct a partition of the interval \([\frac{\varepsilon}{2}r,r]\).
Set \(r_{0}=r\). Given \(r_{i}\), first check if \(r_{i}\leq\frac{\varepsilon}{2}r\). If it is, set \(n=i\) and we are done. If not, we let \(r_{i+1}\) be the largest value of \(s\) such that \(K_{s}^{A}(y)=K_{\frac{r_{i}}{2}}^{A}(y)+(s-\frac{r_{i}}{2})\). Note that \(\frac{r_{i}}{2}>\frac{\varepsilon}{4}r\), so the inequalities at the start of the proof hold in the interval we consider on this step. First, we show that \(r_{i+1}<r_{i}\). To see this, observe that
\[K^{A}_{\frac{r_{i}}{2}}(y)+(s-\frac{r_{i}}{2})<s+\left(\frac{D_{y}}{2}+\frac{\varepsilon}{8}-\frac{1}{2}\right)r_{i}\]
On the other hand,
\[K^{A}_{s}(y)>(d_{y}-\frac{\varepsilon}{4})s,\]
so for an \(s\) satisfying the above equation, we have
\[s<\frac{\frac{D_{y}}{2}-\frac{1}{2}+\frac{\varepsilon}{8}}{d_{y}-1-\frac{ \varepsilon}{4}}r_{i}<r_{i}.\]
Here, the second inequality follows from our choice of \(\varepsilon\). Now, we define the partition \(\mathcal{P}\) to be \([1,r_{n}],[r_{n},r_{n-1}],...,[r_{1},r_{0}]\).
We now claim that each \([r_{i+1},r_{i}]\) is a yellow interval which is at most doubling. If it were not yellow, we would have some \(s^{\prime}\in[r_{i+1},r_{i}]\) such that \(K^{A}_{s^{\prime}}(y)-K^{A}_{r_{i+1}}(y)<s^{\prime}-r_{i+1}\). This implies that \(K^{A}_{s^{\prime}}(y)<K^{A}_{\frac{r_{i}}{2}}(y)+(s^{\prime}-\frac{r_{i}}{2})\). However, \(K^{A}_{r_{i}}(y)>K^{A}_{\frac{r_{i}}{2}}(y)+(r_{i}-\frac{r_{i}}{2})\), so by the intermediate value theorem, there is some \(s^{\prime\prime}\in(s^{\prime},r_{i})\) such that \(K^{A}_{s^{\prime\prime}}(y)=K^{A}_{\frac{r_{i}}{2}}(y)+(s^{\prime\prime}-\frac {r_{i}}{2})\), contradicting that \(r_{i+1}\) was the maximal such precision. Finally, \([r_{i+1},r_{i}]\) is at most doubling because \(r_{i+1}\) cannot be any less than \(\frac{r_{i}}{2}\), as \(K^{A}_{\frac{r_{i}}{2}}(y)=K^{A}_{\frac{r_{i}}{2}}(y)+(\frac{r_{i}}{2}-\frac {r_{i}}{2})\). Thus we have a partition of \([1,r]\) where all but the first interval are yellow.
Figure 1. On the left, the adversary complexity function that grows as quickly as possible (given \(D_{y}\)) and then levels off. On the right, an implementation of the procedure, generating a yellow interval \([r_{1},r]\) by sending out a line of slope 1 from \(\left(\frac{r}{2},K^{A}_{\frac{r}{2}}(y)\right)\) and then finding the last intersection of this line with \(K^{A}_{s}(y)\).

We want to apply Lemma 22, so we make \(\mathcal{P}\) into a good partition by taking a good partition \(\mathcal{P}_{[1,r_{n}]}\) of the first interval \([1,r_{n}]\) and replacing it in \(\mathcal{P}\) with \(\mathcal{P}_{[1,r_{n}]}\). As for the remaining intervals, the union of yellow intervals is still yellow, so simply greedily combine them from the left to the right, starting a new yellow interval each time adding the next \([r_{i+1},r_{i}]\) would make the current yellow interval more than doubling. For ease of notation, continue to denote this modification of \(\mathcal{P}\) by \(\mathcal{P}\). The conditions being satisfied, we apply Lemma 22 relative to \(A\) with \(\frac{\varepsilon}{2}\) via (9), obtaining:
\[K_{r}^{x,A}(|x-y|) \geq K_{r}^{A}(y)-\frac{\varepsilon}{2}r-\sum_{i\in\mathcal{P}\setminus\mathcal{P}_{[1,r_{n}]}}\left[K_{a_{i+1},a_{i}}^{A}(y\mid y)-(a_{i+1}-a_{i})\right]\] \[\geq K_{r}^{A}(y)-(K_{r}^{A}(y)-r+\frac{\varepsilon}{2}r)-\frac{\varepsilon}{2}r\] \[=(1-\varepsilon)r\]
Since we only needed \(r\) to be sufficiently large,
\[\dim^{x,A}(|x-y|)\geq 1-\varepsilon,\]
and taking a sequence of \(\varepsilon\) going to \(0\) gives the desired conclusion.
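The construction in this proof is essentially an algorithm on the complexity profile, and it can be instructive to run it on toy data. Below is a small numerical sketch (entirely our illustration: the function name and growth rates are made up, and on an integer grid the "last intersection" is only located up to grid accuracy):

```python
def next_breakpoint(f, r_i):
    """Given sampled values f[0..r_i] of the complexity profile, send a line
    of slope 1 from (r_i/2, f[r_i/2]) and return the largest grid point s in
    (r_i/2, r_i] where f meets it, i.e. where f[s] - s <= f[half] - half."""
    half = r_i // 2
    target = f[half] - half            # g(s) = f(s) - s, compared at s = half
    for s in range(r_i, half, -1):
        if f[s] - s <= target:
            return s
    return half                        # no crossing above half: take r_i/2

# Toy profile on [0, r]: growth rate 1.8 on the first half, then 0.9, then
# 1.8 again, so that g dips below g(r/2) and one crossing lands strictly
# inside (r/2, r) rather than at the halfway point.
r = 1000
rates = [1.8] * (r // 2) + [0.9] * (r // 4) + [1.8] * (r // 4)
f = [0.0]
for rate in rates:
    f.append(f[-1] + rate)

breaks = [r]
while breaks[-1] > 16:
    breaks.append(next_breakpoint(f, breaks[-1]))
print(breaks)   # decreasing yellow breakpoints r = r_0 > r_1 > ...
```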
### Constructing a general partition
In this subsection, we describe a partitioning strategy that works outside of the special case considered in Proposition 23. The key limitation of the previous subsection was that we could only use intervals that were at most doubling, which we would now like to enhance, at least for certain intervals, using the projection theorem of section 3. The new partition will involve a combination of yellow intervals and certain teal intervals, with the teal intervals chosen so we can apply Theorem 16 to understand the complexity growth of \(|x-y|\) on them. To begin, fix a precision \(r\in\mathbb{N}\). To keep the expressions involved reasonably brief, we set
\[d=\min\{d_{x},d_{y}\}\text{ and }D=\max\{D_{x},D_{y}\}.\]

Footnote 7: It would be possible to obtain a somewhat better bound in Theorem 5 that more freely involves \(d_{x},d_{y},D_{x}\), and \(D_{y}\) using the same approach of this section, at the cost of significantly worse calculations.
Our goal is to give a lower bound on the complexity \(K_{r}^{A,x}(|x-y|)\) at precision \(r\). We will first define a sequence \(r=r_{0}>r_{1}>\ldots>r_{k}=1\) of precisions. We then prove a lower bound of the complexity \(K_{r_{i+1},r_{i}}^{A}(|x-y|\mid|x-y|)\) on each interval of the resulting partition of \([1,r]\).
**Constructing the partition \(\mathcal{P}\)**: We define the sequence \(\{r_{i}\}\) inductively. To begin, we set \(r_{0}=r\). Having defined the sequence up to \(r_{i}\), we choose \(r_{i+1}\) as follows. Let \(a\leq r_{i}\) be the minimal real such that \([a,r_{i}]\) can be written as the union of yellow intervals whose lengths are at most doubling. If \(a<r_{i}\), then we set \(r_{i+1}=a\). In this case, we will refer to \([r_{i+1},r_{i}]\) as a **yellow** interval of \(\mathcal{P}\).
Otherwise, let \(t_{i}<r_{i}\) be the maximum of all reals such that
\[f(t)=f(r_{i})+\frac{D-1}{D}\left(dr_{i}-(d+1)t\right)-d(r_{i}-t). \tag{10}\]
Let \(t_{i}^{\prime}<r_{i}\) be the largest real such that \(f(r_{i})=f(t_{i}^{\prime})+r_{i}-t_{i}^{\prime}\). Note that such a \(t_{i}^{\prime}\) must exist. We then set
\[r_{i+1}=\max\{t_{i},t_{i}^{\prime}\}. \tag{11}\]
Note that, in this case, \([r_{i+1},r_{i}]\) is teal. We therefore refer to intervals of this case as **teal** intervals of \(\mathcal{P}\).
We begin by observing that our partition is well defined.
**Observation 24**.: _Suppose that \(r_{i}\leq r\) and \(r_{i}>C\), for some fixed constant \(C\) depending only on \(y\). Then there is at least one \(t\) such that_
\[f(t)=f(r_{i})+\frac{D-1}{D}\left(dr_{i}-(d+1)t\right)-d(r_{i}-t).\]
Proof.: We first note that \(f(0)=O(1)\). Evaluating the right-hand side at \(t=0\),
\[f(r_{i})+\frac{D-1}{D}\,dr_{i}-dr_{i}=f(r_{i})-\frac{d}{D}r_{i}>f(0),\]
where the final inequality holds for sufficiently large \(r_{i}\), since \(f(r_{i})\geq(d-o(1))r_{i}\) and \(d>\frac{d}{D}\).
We also see that, at \(t=r_{i}\),
\[f(r_{i})+\frac{D-1}{D}\left(dr_{i}-(d+1)t\right)-d(r_{i}-t)<f(r_{i}),\]
and so by the intermediate value theorem, the conclusion follows.
We now show that a partition \(\mathcal{P}\) does not contain too many intervals. This will allow us to control the accumulation of error terms when applying the symmetry of information.
**Lemma 25**.: _If \([r_{i+1},r_{i}]\in\mathcal{P}\) is teal, then \(r_{i+1}\leq\frac{r_{i}}{2}\)._
Proof.: Suppose that \([r_{i+1},r_{i}]\in\mathcal{P}\) is teal. Then, by the construction of \(\mathcal{P}\), \([t,r_{i}]\) is not yellow, for any \(\frac{r_{i}}{2}\leq t<r_{i}\). This immediately implies that \(t_{i}^{\prime}<\frac{r_{i}}{2}\). Moreover, for any \(t>\frac{r_{i}}{2}\), we see that
\[f(t)-f(r_{i})+\frac{d}{D}r_{i}-\frac{d+1-D}{D}t \geq\frac{d+1}{D}t-\frac{D-d}{D}r_{i}\] \[>0,\]
implying that \(t_{i}\leq\frac{r_{i}}{2}\), and the conclusion follows.
**Lemma 26**.: _Let \(\varepsilon>0\). Suppose that \(r_{i+1}\) is sufficiently large and \([r_{i+1},r_{i}]\) is a teal interval of the above partition. Then_
\[r_{i+1}\geq\frac{d(D-1)}{D^{2}+D-d-1}r_{i}-\varepsilon r_{i}.\]
_In particular,_
\[r_{i+1}\geq\frac{d(2-D)}{2+d(2-D)}r_{i}+1.\]
Proof.: Assume that \(r_{i+1}\) is large enough that, for all \(s>r_{i+1}\),
\[ds-\varepsilon^{\prime}s\leq K_{s}^{A}(y)\leq Ds+\varepsilon^{\prime}s\]
for some \(\varepsilon^{\prime}\) to be determined. By our choice of \(r_{i+1}\),
\[K_{r_{i+1}}^{A}(y)\geq K_{r_{i}}^{A}(y)-\frac{d}{D}r_{i}+\frac{1+d-D}{D}r_{i+1 }-\varepsilon^{\prime}r_{i}. \tag{12}\]
We then have
\[Dr_{i+1} \geq D_{y}r_{i+1}\] \[\geq K_{r_{i+1}}^{A}(y)-\varepsilon^{\prime}r_{i+1}\] \[\geq K_{r_{i}}^{A}(y)-\frac{d}{D}r_{i}+\frac{1+d-D-D\varepsilon^{ \prime}}{D}r_{i+1}\] \[\geq dr_{i}-\varepsilon^{\prime}r_{i}-\frac{d}{D}r_{i}+\frac{1+d- D-D\varepsilon^{\prime}}{D}r_{i+1}\] \[=\frac{d(D-1)-D\varepsilon^{\prime}}{D}r_{i}+\frac{1+d-D-D \varepsilon^{\prime}}{D}r_{i+1}.\]
Rearranging and simplifying yields
\[r_{i+1}\geq\frac{d(D-1)-D\varepsilon^{\prime}}{D^{2}+D-d-1+D\varepsilon^{\prime}}r_{i}. \tag{13}\]
Given \(\varepsilon>0\), choosing \(\varepsilon^{\prime}\) sufficiently small compared to these quantities gives the desired conclusion:
\[r_{i+1}\geq\frac{d(D-1)}{D^{2}+D-d-1}r_{i}-\varepsilon r_{i}. \tag{14}\]
Figure 2. On the left, the adversary complexity function with \(d_{y}=1.2\) and \(D_{y}=1.44\), equipped with an initial good partition of yellow intervals from \(1\) to \(r^{\prime}\) and a teal interval \([r^{\prime},r]\). On the right, generating an improved partition from the good partition. Note that the green interval \([r_{1},r_{0}]\) is more than doubling. By combining it with a good partition of \([1,r_{1}]\), which will clearly be all-yellow, we obtain an all-yellow partition of \([1,r]\). Since \(1.44>2\cdot 1.2-1\), this improves over the previous subsection.

It is straightforward to verify that this implies that
\[r_{i+1}\geq\frac{d(2-D)}{2+d(2-D)}r_{i}\]
for sufficiently small \(\varepsilon\). Indeed, using the above inequality, it suffices to show that
\[\frac{d(D-1)}{D^{2}+D-d-1}>\frac{d(2-D)}{2+d(2-D)},\]
or, equivalently, \(d(2-D)>D-D^{2}+1\).
By our assumption \(d>1\): if \(D=2\) the left side is \(0>-1\), and if \(D<2\) then \(d(2-D)>2-D\geq D-D^{2}+1\), since \(2-D-(D-D^{2}+1)=(D-1)^{2}\geq 0\). Thus, for \(r\) sufficiently large, we have
\[r_{i+1}\geq\frac{d(2-D)}{2+d(2-D)}r_{i}+1.\]
**Lemma 27**.: _Let \(\varepsilon>0\). Suppose that \([r_{i+1},r_{i}]\in\mathcal{P}\) is a yellow interval. Then,_
\[K^{A,x}_{r_{i},r_{i+1}}(|x-y|\mid|x-y|)\geq r_{i}-r_{i+1}-\varepsilon r_{i}.\]
Proof.: By assumption, \([r_{i+1},r_{i}]\) is the union of yellow intervals \([a_{j+1},a_{j}]\) such that \(a_{j}\leq 2a_{j+1}\). By a simple greedy strategy we can construct a partition \(P_{1}=\{[b_{k+1},b_{k}]\}\) of \([r_{i+1},r_{i}]\) such that, for every \(k\), \([b_{k+1},b_{k}]\) is yellow, \(b_{k}\leq 2b_{k+1}\) and \(b_{k+2}>2b_{k}\). That is, \(P_{1}\) is a good partition of \([r_{i+1},r_{i}]\). The conclusion then follows from Lemma 22.
**Lemma 28**.: _Let \(\varepsilon>0\) be given and suppose that \([r_{i+1},r_{i}]\) is teal and \(r_{i}\) is sufficiently large. Then_
\[\frac{K^{A}_{r_{i},r_{i+1}}(y\mid y)}{r_{i}-r_{i+1}}\geq\min\{1,\frac{d(2D-d-1 )}{D^{2}+D-Dd-1}-\varepsilon\} \tag{15}\]
Proof.: Recall that we chose \(r_{i+1}\) to be
\[r_{i+1}=\max\{t,t^{\prime}\},\]
where \(t^{\prime}\) is the largest real such that \([t^{\prime},r_{i}]\) is green, and \(t\) is the largest real such that
\[f(t)=f(r_{i})+\frac{D-1}{D}\left(dr_{i}-(d+1)t\right)-d(r_{i}-t).\]
If \(r_{i+1}=t^{\prime}\), then
\[K^{A}_{r_{i+1}}(y)=K^{A}_{r_{i}}(y)-(r_{i}-r_{i+1}),\]
and the conclusion holds trivially.
We now assume that \(r_{i+1}=t\). Then, by Lemma 26,
\[r_{i+1}\geq\frac{d(D-1)}{D^{2}+D-d-1}r_{i}-\varepsilon r_{i}. \tag{16}\]
We proceed via a tedious, but straightforward, calculation. Noting that our interval is not green, we have from the definition of \(r_{i+1}\) that
\[K^{A}_{r_{i}}(y)-K^{A}_{r_{i+1}}(y)=-\frac{D-1}{D}\left(dr_{i}-(d+1)r_{i+1} \right)+d(r_{i}-r_{i+1}).\]
Using this condition and the above bound on \(r_{i+1}\) allows one to bound the growth rate on such an interval. We omit the details since the algebra becomes quite unpleasant.
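For the curious reader, here is the shape of the omitted computation in the extremal case \(r_{i+1}=\alpha r_{i}\) with \(\alpha=\frac{d(D-1)}{Q}\) and \(Q=D^{2}+D-d-1\), suppressing all \(\varepsilon\) terms (our sketch of the omitted algebra, not the authors' own write-up). The displayed identity gives an average growth rate of

\[\frac{K^{A}_{r_{i}}(y)-K^{A}_{r_{i+1}}(y)}{r_{i}-r_{i+1}}=d-\frac{D-1}{D}\cdot\frac{d-(d+1)\alpha}{1-\alpha},\]

and since \(d-(d+1)\alpha=\frac{dD(D-d)}{Q}\) and \(1-\alpha=\frac{D^{2}+D-1-dD}{Q}\), this equals

\[d-\frac{d(D-1)(D-d)}{D^{2}+D-1-dD}=\frac{d(2D-d-1)}{D^{2}+D-Dd-1},\]

matching (15).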
**Observation 29**.: _When \(D\leq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\), \(\frac{K^{A}_{r_{i},r_{i+1}}(y\mid y)}{r_{i}-r_{i+1}}=1\) whenever \([r_{i+1},r_{i}]\) is teal._
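This threshold is where the minimum in Lemma 28 switches, by a routine check (ours): the rate is at least \(1\) precisely when \(d(2D-d-1)\geq D^{2}+D-Dd-1\), i.e., when

\[D^{2}+(1-3d)D+(d^{2}+d-1)\leq 0.\]

The discriminant of this quadratic in \(D\) is \(5(d-1)^{2}\), so its larger root is

\[\frac{(3d-1)+\sqrt{5}(d-1)}{2}=\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2},\]

while its smaller root lies below \(d\) whenever \(d\geq 1\), so only the upper root matters here.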
The last goal of this subsection is to prove that these particular teals are useful, namely, that we can lower bound the growth of the complexity of the distance by the growth of the complexity of \(y\) on them. We start by stating the following lemma from [20]:
**Lemma 30**.: _Let \(x,y\in\mathbb{R}^{2}\) and \(r\in\mathbb{N}\). Let \(z\in\mathbb{R}^{2}\) such that \(|x-y|=|x-z|\). Then for every \(A\subseteq\mathbb{N}\),_
\[K_{r}^{A}(z)\geq K_{t}^{A}(y)+K_{r-t,r}^{A}(x\mid y)-K_{r-t}(x\mid p_{e^{\prime }}x,e^{\prime})-O(\log r), \tag{17}\]
_where \(e^{\prime}=\frac{y-z}{|y-z|}\) and \(t=-\log|y-z|\)._
Though it may appear somewhat cumbersome, the above lemma is a relatively straightforward consequence of attempting to compute \(x\), given access to \(y\) up to a certain precision, through the use of \(z\) and \(w\), the midpoint between \(y\) and \(z\), which has the property that \(p_{e^{\prime}}x=p_{e^{\prime}}w\); this is a key connection between projections and distances. In particular, note the term \(K_{r-t}(x\mid p_{e^{\prime}}x,e^{\prime})\) above; bounding this is where the projection theorem will be useful. Now, we state the final lemma of this subsection.
**Lemma 31**.: _Suppose that \([r_{i+1},r_{i}]\in\mathcal{P}\) is a teal interval. For any \(\varepsilon>0\), provided that \(r_{i+1}\) is sufficiently large, we have_
\[K_{r_{i},r_{i},r_{i+1}}^{A,x}(y\mid|x-y|,y)\leq\varepsilon r_{i}. \tag{18}\]
_Therefore, \(K_{r_{i},r_{i+1}}^{A,x}(|x-y|\mid|x-y|)\geq K_{r_{i},r_{i+1}}^{A,x}(y\mid y)- \varepsilon r_{i}\)._
Notice that the conclusion of Lemma 15 is almost exactly the conclusion of this lemma. Thus, we need to verify that its conditions are satisfied, which is the content of this proof. Essentially, this entails proving a lower bound on the complexity of points \(z\) which are the same distance from \(x\) as \(y\) is.
Proof.: Let some small rational \(\varepsilon>0\) be given, and assume \(r_{i+1}\) is sufficiently large. Let \(\eta\) be the rational such that \(\eta r_{i}=K_{r_{i}}^{A}(y)-4\varepsilon r_{i}\). Let \(G=D(r_{i},y,\eta)\) be the oracle of Lemma 14 relative to \(A\).
Our goal is to apply Lemma 15. It is routine to verify that condition (i) of Lemma 15 holds. We must therefore verify condition (ii). That is, we need to show that, for any \(z\in B_{2^{-r_{i+1}}}(y)\) whose distance from \(x\) is \(|x-y|\), either (i) \(K_{r_{i}}^{A,G}(z)\) is greater than \(\eta r_{i}\) or (ii) \(z\) is very close to \(y\). Formally, we must show that, for any such \(z\),
\[K_{r_{i}}^{A,G}(z)\geq\eta r_{i}+\min\{\varepsilon r_{i},r_{i}-s-\varepsilon r _{i}\}, \tag{19}\]
where \(s=-\log|y-z|\).
To that end, let \(z\in B_{2^{-r_{i+1}}}(y)\) such that \(|x-y|=|x-z|\). Let \(s=-\log|y-z|\). We consider two cases. For the first, assume that \(s\geq\frac{r_{i}}{2}-\log r_{i}\). Then, as observed in [20], the projections of \(y\) and \(z\) in the direction \(e\) are almost exactly the same. Specifically, \(|p_{e}y-p_{e}z|<r_{i}^{2}2^{-r_{i}}\). Then, letting \(r_{i}^{\prime}=r_{i}-2\log r_{i}\), these projections are indistinguishable at precision \(r_{i}^{\prime}\). This enables us to apply Lemma 13 which, in conjunction with property (C4) and the properties of our oracle \(G\) imply that
\[K_{r_{i}}^{A,G}(z)\geq K_{s}^{A,G}(y)+r_{i}-s-\frac{\varepsilon}{2}r_{i}-O( \log r_{i}) \tag{20}\]
Then, using the fact that \(K_{s}^{A,G}(y)=\min\{\eta r_{i},K_{s}^{A}(y)\}+O(\log r_{i})\) and considering each of these cases establishes (19) in the case that \(s\geq\frac{r_{i}}{2}-\log r_{i}\).
This leaves the case that \(s<\frac{r_{i}}{2}-\log r_{i}\). Note that this immediately implies that \(K_{s}^{A,G}(y)=K_{s}^{A}(y)-O(\log r_{i})\). Lemma 30, relative to \((A,G)\), implies that
\[K_{r_{i}}^{A,G}(z)\geq K_{s}^{A,G}(y)+K_{r_{i}-s,r_{i}}^{A,G}(x\mid y)-K_{r_{i}- s}^{A,G}(x\mid p_{e^{\prime}}x,e^{\prime})-O(\log r). \tag{21}\]
To bound the projection term, we need to apply Theorem 16 with respect to \(x\), \(e^{\prime}\), \(\varepsilon\), a constant \(C\) (depending only on \(x\) and \(y\)), \(t=s\), and \(r=r_{i}-s\). We now check that the conditions are satisfied.
First, observe that \(r_{i+1}-1<s<\frac{r_{i}}{2}-\log r_{i}\), since \(z\) is assumed to be within \(2^{-r_{i+1}}\) of \(y\). The second inequality implies that we can take \(r_{i}-s\) to be sufficiently large, since \(r_{i}\) is taken to be sufficiently large. From the first inequality and Lemma 26, we obtain that
\[s\geq\left(\frac{d(2-D)}{2+d(2-D)}r_{i}+1\right)-1. \tag{22}\]
Hence,
\[s\geq\frac{d(2-D)}{2}(r_{i}-s) \tag{23}\]
Thus conditions (P1) and (P2) are satisfied. As for condition (P3), from an observation in [20], \(e^{\prime}\) and \(e\) are close enough to each other that, using the fact that \(e\) and its orthogonal complement are computable from each other, we have for \(s^{\prime}\leq s\)
\[K_{s^{\prime}}^{A,x}(e^{\prime})=K_{s^{\prime}}^{A,x}(e)+O(\log s^{\prime}). \tag{24}\]
So, using condition (C2), we have
\[K_{s^{\prime}}^{A,x}(e^{\prime})\geq s^{\prime}-C\log s^{\prime} \tag{25}\]
and thus we may apply Theorem 16.
Using Theorem 16, the properties of \(G\), and our choice of \(r_{i+1}\) yields
\[K_{r_{i}}^{A,G}(z) \geq K_{s}^{A,G}(y)+K_{r_{i}-s,r_{i}}^{A,G}(x\mid y)-K_{r_{i}-s}^ {A,G}(x\mid p_{e^{\prime}}x,e^{\prime})-O(\log r)\] \[\geq K_{s}^{A}(y)+K_{r_{i}-s,r_{i}}^{A}(x\mid y)-K_{r_{i}-s}^{A,G }(x\mid p_{e^{\prime}}x,e^{\prime})-O(\log r)\] \[\geq K_{s}^{A}(y)+K_{r_{i}-s}^{A}(x)-K_{r_{i}-s}^{A,G}(x\mid p_{e ^{\prime}}x,e^{\prime})-O(\log r)\] \[\geq K_{s}^{A}(y)+K_{r_{i}-s}^{A}(x)-K_{r_{i}-s}^{A}(x\mid p_{e^{ \prime}}x,e^{\prime})-O(\log r)\] \[\geq K_{s}^{A}(y)+K_{r_{i}-s}^{A}(x)-\varepsilon r_{i}-O(\log r)\] \[-\max\{K_{r_{i}-s}^{A}(x)-\frac{d}{D}r_{i}+\frac{d+1-D}{D}s,K_{r_ {i}-s}^{A}(x)-(r_{i}-s)\}\]
Hence,
\[K_{r_{i}}^{A,G}(z)\geq K_{s}^{A}(y)+\min\{\frac{d}{D}r_{i}-\frac{d+1-D}{D}s,(r _{i}-s)\}-\varepsilon r_{i}-O(\log r) \tag{26}\]
By our choice of \(r_{i+1}\), (10), we see that
\[K_{s}^{A}(y)\geq K_{r_{i}}^{A}(y)-\frac{d}{D}r_{i}+\frac{d+1-D}{D}s. \tag{27}\]
Combining (26) and (27) shows that
\[K^{A,G}_{r_{i}}(z) \geq K^{A}_{s}(y)+\min\{\frac{d}{D}r_{i}-\frac{d+1-D}{D}s,(r_{i}-s) \}-\varepsilon r_{i}-O(\log r)\] \[\geq K^{A}_{r_{i}}(y)-\frac{d}{D}r_{i}+\frac{d+1-D}{D}s\] \[\qquad+\min\{\frac{d}{D}r_{i}-\frac{d+1-D}{D}s,(r_{i}-s)\}- \varepsilon r_{i}-O(\log r)\]
If \(\frac{d}{D}r_{i}-\frac{d+1-D}{D}s\leq r_{i}-s\), then we have
\[K^{A,G}_{r_{i}}(z) \geq K^{A}_{r_{i}}(y)-2\varepsilon r_{i}\] \[\geq\eta r_{i}+\varepsilon r_{i},\]
and (19) holds. Otherwise, since \([r_{i+1},r_{i}]\) is teal,
\[K^{A,G}_{r_{i}}(z) \geq K^{A}_{s}(y)+r_{i}-s-\varepsilon r_{i}-O(\log r)\] \[\geq K^{A}_{r_{i}}(y)-\varepsilon r_{i}-O(\log r)\] \[\geq\eta r_{i}+\varepsilon r_{i}\]
and we can again establish (19).
Therefore, we are able to apply Lemma 15, which shows that
\[K^{A,x}_{r_{i},r_{i+1}}(|x-y|\mid|x-y|) \geq K^{A,x}_{r_{i},r_{i+1}}(|x-y|\mid y)\] \[\geq K^{A,G,x}_{r_{i},r_{i+1}}(|x-y|\mid y)\] \[\geq K^{A,G,x}_{r_{i},r_{i+1}}(y\mid y)-3\varepsilon r_{i}+K( \varepsilon,\eta)-O(\log r_{i})\] \[\geq K^{A,x}_{r_{i},r_{i+1}}(y\mid y)-4\varepsilon r_{i},\]
and the proof is complete.
### Main theorem for effective Hausdorff dimension
In this section, we prove the point-wise analog of the main theorem of this paper. That is, we prove the following.
**Theorem 32**.: _Suppose that \(x,y\in\mathbb{R}^{2}\), \(e=\frac{y-x}{|y-x|}\), and \(A,B\subseteq\mathbb{N}\) satisfy the following._
1. \(d_{x},d_{y}>1\)_,_
2. \(K^{x,A}_{r}(e)=r-O(\log r)\) _for all_ \(r\)_._
3. \(K^{x,A,B}_{r}(y)\geq K^{A}_{r}(y)-O(\log r)\) _for all sufficiently large_ \(r\)_._
4. \(K^{A}_{r}(e\mid y)=r-o(r)\) _for all_ \(r\)_._
_Then_
\[\dim^{x,A}(|x-y|)\geq d\left(1-\frac{(D-1)(D-d)}{2D^{2}+(2-4d)D+d^{2}+d-2}\right),\]
_where \(d=\min\{d_{x},d_{y}\}\) and \(D=\max\{D_{x},D_{y}\}\). Furthermore, if_
\[D\leq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\]
_then \(\dim^{x,A}(|x-y|)=1\)._
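As a quick numerical sanity check of the dichotomy (ours, not part of the proof), the closed-form bound evaluates to exactly \(1\) on the boundary \(D=\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\):

```python
# Sanity check (ours): on the boundary D* = ((3+sqrt(5))d - 1 - sqrt(5))/2
# the closed-form lower bound of Theorem 32 equals 1.
import math

def bound(d, D):
    return d*(1 - (D - 1)*(D - d)/(2*D**2 + (2 - 4*d)*D + d**2 + d - 2))

for d in [1.1, 1.2, 1.3]:
    D_star = ((3 + math.sqrt(5))*d - 1 - math.sqrt(5))/2
    print(d, D_star, bound(d, D_star))   # last column ~1.0
```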
In the last subsection we obtained a good bound on the complexity growth of the distance on any teal interval, and the complexity growth rate on any yellow interval is \(1\). It would then seem that the worst case scenario is that our partition is (almost) all teal. But this case is advantageous too, because if there is very little yellow, then almost all the complexity growth for \(y\) has to take place on the teal intervals, and Lemma 31 indicates we can transfer _all_ the growth of \(K_{s}^{A}(y)\) on teal intervals to \(K_{s}^{A}(|x-y|)\). So, the worst case scenario is actually when there is an intermediate amount of yellow. Now, we formalize this and prove the theorem.
Proof.: Let \(\varepsilon>0\) be given and let \(r\) be sufficiently large. Let \(\mathcal{P}=\{[r_{i+1},r_{i}]\}\) be the partition of \([1,r]\) defined in the previous section. Let \(L\) be the total length of the yellow intervals. Recall that if \([r_{i+1},r_{i}]\) is yellow, then we have that
\[K_{r_{i},r_{i+1}}^{A,x}(|x-y|\mid|x-y|)\geq r_{i}-r_{i+1}-\varepsilon r_{i}\]
By the previous lemma, and repeated applications of the symmetry of information, we have for sufficiently large \(r\) that
\[K_{r}^{A,x}(|x-y|) =\sum_{i\in Y}K_{r_{i},r_{i+1}}^{A,x}(|x-y|\mid|x-y|)+\sum_{i\in Y ^{C}}K_{r_{i},r_{i+1}}^{A,x}(|x-y|\mid|x-y|)\] \[\geq L-\frac{\varepsilon}{3}r+\sum_{i\in Y^{C}}K_{r_{i},r_{i+1}}^ {A,x}(|x-y|\mid|x-y|)\] \[\geq L-\frac{2\varepsilon}{3}r+\sum_{i\in Y^{C}}K_{r_{i},r_{i+1}} ^{A,x}(y\mid y)\] \[\geq L+\min\{1,\frac{d(2D-d-1)}{D^{2}+D-Dd-1}\}(r-L)-\varepsilon r.\]
From Observation 29, we have \(1\) as the minimum when \(D\leq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\), so in this case \(K_{r}^{A,x}(|x-y|)\geq L+(r-L)-\varepsilon r=r-\varepsilon r\). Taking \(\varepsilon\) as small as desired and letting \(r\) go to infinity completes the proof in this case. Now assume we are not in the above case and that \(r\) is sufficiently large given \(\varepsilon\), and note the following bound, which is advantageous when there is not much yellow:
\[d_{y}r \leq K_{r}^{A,x}(y)+\frac{\varepsilon}{3}r\] \[=\sum_{i\in Y}K_{r_{i},r_{i+1}}^{A,x}(y\mid y)+\sum_{i\in Y^{C} }K_{r_{i},r_{i+1}}^{A,x}(y\mid y)\] \[\leq 2L+\frac{2\varepsilon}{3}r+\sum_{i\in Y^{C}}K_{r_{i},r_{i+1} }^{A,x}(|x-y|)\] \[\leq L+K_{r}^{A,x}(|x-y|)+\varepsilon r.\]
Hence,
\[K_{r}^{A,x}(|x-y|)\geq\max\{L+\frac{d(2D-d-1)}{D^{2}+D-Dd-1}(r-L),dr-L\}- \varepsilon r. \tag{28}\]
The first term is increasing in \(L\) (since we are considering the case where \(\frac{d(2D-d-1)}{D^{2}+D-Dd-1}<1\)), and the second term is decreasing in \(L\), so we can set them equal to find the minimum over all \(L\), which yields
\[K_{r}^{x,A}(|x-y|)\geq d\left(1-\frac{\left(D-1\right)\left(D-d\right)}{2D^{2}+ \left(2-4d\right)D+d^{2}+d-2}\right)r-\varepsilon r. \tag{29}\]
Since we can take \(\varepsilon\) as small as desired and then let \(r\) go to infinity, this completes the proof.
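The balancing step can also be checked symbolically; the following sketch (ours; it assumes sympy is available) confirms that setting the two terms of (28) equal and eliminating \(L\) reproduces the closed form in (29):

```python
# Symbolic check (ours) that balancing the two terms of (28) over L
# reproduces the closed form in (29).
import sympy as sp

d, D, r, L = sp.symbols('d D r L', positive=True)
c = d*(2*D - d - 1)/(D**2 + D - D*d - 1)
L_star = sp.solve(sp.Eq(L + c*(r - L), d*r - L), L)[0]
value = d*r - L_star                       # common value of the two terms
closed = d*(1 - (D - 1)*(D - d)/(2*D**2 + (2 - 4*d)*D + d**2 + d - 2))*r
print(sp.simplify(value - closed))         # expect 0
```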
Now, we can consider a few special cases. First, note that when combined with Proposition 23, for any choice of \(d\), our lower bound on the dimension is monotone decreasing in \(D\). Thus, setting \(D=2\) gives the following corollary:
**Corollary 33**.: _When conditions (C1)-(C4) are satisfied,_
\[\dim^{x,A}(|x-y|)\geq\frac{d(d-4)}{d-5}. \tag{30}\]
Similarly, our lower bound is monotone increasing in \(d\), so setting \(d=1\) gives
**Corollary 34**.: _When conditions (C1)-(C4) are satisfied,_
\[\dim^{x,A}(|x-y|)\geq\frac{D+1}{2D}. \tag{31}\]
Note that these are the effective analogs of Corollary 2 and Corollary 3 in the introduction. The first of these corollaries is a helpful comparison point to previous work on the pinned distance problem, and the second will be useful in the next subsection.
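Both specializations can be confirmed numerically; a minimal check (ours, for orientation only):

```python
# Check (ours) that the general bound specializes to Corollaries 33 and 34
# at D = 2 and d = 1, respectively.
def bound(d, D):
    return d*(1 - (D - 1)*(D - d)/(2*D**2 + (2 - 4*d)*D + d**2 + d - 2))

for d in [1.2, 1.5, 1.8]:
    assert abs(bound(d, 2) - d*(d - 4)/(d - 5)) < 1e-12
for D in [1.3, 1.6, 1.9]:
    assert abs(bound(1, D) - (D + 1)/(2*D)) < 1e-12
print("both specializations agree")
```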
### Main theorem for effective packing dimension
We can use our work in the previous section to prove a new bound on the packing dimension of pinned distance sets. The basic idea is that for effective Hausdorff dimension, we had to prove a lower bound for \(K_{r}^{x,A}(|x-y|)\) at _every_ sufficiently large precision (since \(\dim\) is defined with a limit inferior), whereas for effective packing dimension, we are free to _choose_ a sequence of advantageous precisions (since \(\operatorname{Dim}\) is defined with a limit superior). The idea is to consider "maximal" precisions \(r_{i}\) where \(K_{r_{i}}^{A}(y)\approx Dr_{i}\). These maximal precisions have to be contained in large yellow intervals, and prior to the large yellow intervals, we can use the bound from Corollary 34, which holds at _every_ precision.
We call an interval \([a,b]\) an **all yellow** interval if there is a good partition of \([a,b]\) consisting entirely of yellow intervals whose lengths are at most doubling.
**Lemma 35**.: _Suppose that \(\varepsilon>0\), \(A,B\subseteq\mathbb{N}\) and \(x,y\in\mathbb{R}^{2}\) satisfy (C1)-(C4). In addition, assume that \(\Dim^{A}(x)\leq\Dim^{A}(y)\). Let \(r\) be a sufficiently large precision which is maximal in the sense that \(K_{r}^{A,x}(y)\geq D_{y}r-\varepsilon r\). Then_
\[K_{r}^{A,B,x}(|x-y|)\geq\frac{D^{2}-D+2}{2D}r-\varepsilon r. \tag{32}\]
_Moreover, if \([r_{1},r_{2}]\) is an all yellow interval containing \(r\), then_
\[K_{r_{2}}^{A,B,x}(|x-y|)\geq r_{2}-\frac{3D-D^{2}-2}{2D}r-\varepsilon r. \tag{33}\]
Proof.: Let \(r\) be sufficiently large such that
\[K_{r}^{A}(y)\geq Dr-\frac{\varepsilon}{2}r. \tag{34}\]
Let \(\mathcal{P}=\{[r_{i+1},r_{i}]\}\) be the partition of \([1,r]\) defined in the previous section. Let \(L\) be the total length of the yellow intervals in \(\mathcal{P}\). Using (28), we have that
\[K_{r}^{A,x}(|x-y|)\geq\max\{L+\frac{2}{D+1}(r-L),Dr-L\}-\varepsilon r. \tag{35}\]
We can therefore conclude that (32) holds.
Let \([r_{1},r_{2}]\) be an all yellow interval containing \(r\). Then, by Lemma 20,
\[r_{2}-r_{1}-\frac{\varepsilon r}{2} \leq K_{r_{2},r_{1}}^{A,x}(|x-y|\mid|x-y|)\] \[\leq K_{r_{2}}^{A,x}(|x-y|)-K_{r_{1}}^{A,x}(|x-y|)+O(\log r)\] \[\leq K_{r_{2}}^{A,x}(|x-y|)-\left(K_{r}^{A,x}(|x-y|)-(r-r_{1}) \right)+O(\log r).\]
Rearranging, and using (32), we see that for sufficiently large \(r\)
\[K_{r_{2}}^{A,x}(|x-y|) \geq K_{r}^{A,x}(|x-y|)-(r-r_{1})+r_{2}-r_{1}-\frac{\varepsilon}{2}r\] \[\geq\frac{D^{2}-D+2}{2D}r-\varepsilon r+r_{2}-r\] \[=r_{2}-\frac{3D-D^{2}-2}{2D}r-\varepsilon r,\]
and the conclusion follows.
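Both coefficients in the lemma come from short algebraic manipulations; a numerical confirmation (ours, for orientation only):

```python
# Two arithmetic checks behind Lemma 35 (ours).
import math

for D in [1.2, 1.5, math.sqrt(2), 1.9]:
    # Balancing the two terms of (35) over L (with r = 1) gives the
    # coefficient (D^2 - D + 2)/(2D) appearing in (32).
    grid = (i / 10**5 for i in range(10**5 + 1))
    best = min(max(L + 2*(1 - L)/(D + 1), D - L) for L in grid)
    assert abs(best - (D**2 - D + 2)/(2*D)) < 1e-4
    # Substituting r <= (3/4) r_2 into (33) (with r_2 = 1) gives the
    # coefficient (3D^2 - D + 6)/(8D) used right after the lemma.
    assert abs(1 - (3*D - D**2 - 2)/(2*D)*0.75
               - (3*D**2 - D + 6)/(8*D)) < 1e-12
print("Lemma 35 coefficients confirmed")
```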
**Theorem 36**.: _Suppose that \(x,y\in\mathbb{R}^{2}\), \(e=\frac{y-x}{|y-x|}\), and \(A,B\subseteq\mathbb{N}\) satisfy \(\operatorname{Dim}^{A}(x)\leq\operatorname{Dim}^{A}(y)\) and the following._
1. \(d_{x},d_{y}>1\)_,_
2. \(K_{r}^{x,A}(e)=r-O(\log r)\) _for all_ \(r\)_._
3. \(K_{r}^{x,A,B}(y)\geq K_{r}^{A}(y)-O(\log r)\) _for all sufficiently large_ \(r\)_._
4. \(K_{r}^{A}(e\mid y)=r-o(r)\) _for all_ \(r\)_._
_Then \(\operatorname{Dim}^{x,A,B}(|x-y|)\geq\frac{3D_{y}^{2}-D_{y}+6}{8D_{y}}\)._
_Remark_.: The extra requirement that \(\operatorname{Dim}^{A}(x)\leq\operatorname{Dim}^{A}(y)\) seems necessary to obtain the largest lower bound with our methods. Previously in this section, we assumed \(D\) was the larger of \(D_{x},D_{y}\), a safe assumption since a larger \(D_{x}\) gives a worse projection theorem and a larger \(D_{y}\) gives a worse adversary function when partitioning. For the packing bound, however, a higher \(D_{y}\) could actually improve the bound. Thus, if \(D_{y}\) were actually much smaller than \(D\), this could make for a worse bound, which necessitates the extra assumption.
Proof.: Let \(\varepsilon>0\). Let \(r\) be a sufficiently large precision which is maximal in the sense that \(K_{r}^{A}(y)\geq D_{y}r-\varepsilon r\). We also assume that the function \(f\) associated to \(K_{s}^{A}(y)\) is increasing on \([r-1,r]\). Note that there are infinitely many such \(r\). We first assume that there exists an all yellow interval \([r_{1},r_{2}]\) containing \(r\) such that \(r_{2}\geq\frac{4}{3}r\). Then,
\[K_{r_{2}}^{A,x}(|x-y|)\geq\frac{3D^{2}-D+6}{8D}r_{2}-\varepsilon r_{2},\]
and the proof is complete.
We now assume that no such all yellow interval exists. Let \([r_{1},r_{2}]\) be an all yellow interval containing \(r\) which is of maximal length. This implies that there is an interval \([a,r_{2}]\) which is the union of green intervals, whose lengths are at most doubling, such that \(a\leq\frac{r_{2}}{2}\). Hence \(r\geq\frac{3}{2}a\). For convenience, we set \(r^{\prime}=\frac{r_{2}}{2}\). Let \(\mathcal{P}=\{[r_{i+1},r_{i}]\}\) be the partition of \([1,r^{\prime}]\). Let \(L\) be the total length of the yellow intervals in \(\mathcal{P}\). We first see that
\[K^{A,x}_{r^{\prime}}(|x-y|)\geq\max\{L+\frac{2}{D+1}(r^{\prime}-L),K^{A}_{r^{ \prime}}(y)-L\}. \tag{36}\]
Since \([a,r_{2}]\) is green, and \(r\geq\frac{3}{2}r^{\prime}\),
\[K^{A}_{r^{\prime}}(y) \geq K^{A}_{2r^{\prime}}(y)-r^{\prime}\] \[\geq K^{A}_{r}(y)-r^{\prime}\] \[\geq Dr-\varepsilon r-r^{\prime}\] \[>\frac{3D-2}{2}r^{\prime}-\varepsilon r.\]
We now show that
\[K^{A,x}_{r^{\prime}}(|x-y|)\geq\frac{3D^{2}-5D+6}{4D}r^{\prime}-\varepsilon r. \tag{37}\]
If
\[K^{A}_{r^{\prime}}(y)-L\geq\frac{3D^{2}-5D+6}{4D}r^{\prime}-\varepsilon r, \tag{38}\]
then (37) holds immediately. Otherwise, using our lower bound on \(K^{A}_{r^{\prime}}(y)\), we see that
\[L\geq\frac{3D^{2}+D-6}{4D}r^{\prime}. \tag{39}\]
Therefore,
\[K^{A,x}_{r^{\prime}}(|x-y|) \geq L+\frac{2}{D+1}(r^{\prime}-L)-\varepsilon r\] \[\geq\frac{3D^{2}-5D+6}{4D}r^{\prime}-\varepsilon r,\]
and so (37) holds.
Since \([r_{1},r_{2}]\) is all yellow, by Lemma 20 we see that
\[K^{x,A,B}_{2r^{\prime}}(|x-y|) \geq\frac{3D^{2}-5D+6}{4D}r^{\prime}+r^{\prime}-\varepsilon r\] \[=\frac{3D^{2}-D+6}{4D}r^{\prime}-\varepsilon r\] \[\geq\frac{3D^{2}-D+6}{8D}2r^{\prime}-2\varepsilon r^{\prime},\]
Noting that we can take \(\varepsilon\) as small as desired and then letting \(r\) go to infinity completes the proof.
## 5. Dimensions of pinned distance sets
In this section, we reduce the effective Hausdorff and packing theorems to their classical analogues. We prove Theorem 5 and then Theorem 6 in the first subsection, then consider packing dimension. Throughout, we have \(x,y\in\mathbb{R}^{2}\), \(e=\frac{y-x}{|y-x|}\) and \(A,B\subseteq\mathbb{N}\). For ease of reference, we list conditions (C1)-(C4) here:
* \(\dim^{A}(x)>d_{x},d_{y}>1\)
* \(K_{r}^{x,A}(e)=r-O(\log r)\) for all \(r\).
* \(K_{r}^{x,A,B}(y)\geq K_{r}^{A}(y)-O(\log r)\) for all sufficiently large \(r\).
* \(K_{r}^{A}(e\mid y)=r-o(r)\) for all \(r\).
Note the modification of condition (C1); here we drop the convention that \(d_{x}=\dim^{A}(x)\). For the reduction, we will want \(d_{x}\) to be strictly less than \(\dim_{H}(X)\), because we will remove an exceptional set of dimension \(d_{x}\). Thus, we want \(\dim^{A}(x)>d_{x}\) so we can apply our effective theorem uniformly.
### Hausdorff dimension of pinned distance sets
A main tool we need to reduce the classical statements to their effective counterparts is the following radial projection theorem of Orponen [16]:
**Theorem 37**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be a Borel set with \(s=\dim_{H}(Y)>1\) such that there is a measure \(\mu\in\mathcal{M}(Y)\) satisfying \(I_{d}(\mu)<\infty\) for all \(1<d<s\). Then there is a Borel \(G\subseteq\mathbb{R}^{2}\) with \(\dim_{H}(G)\leq 2-\dim_{H}(Y)\) such that, for every \(x\in\mathbb{R}^{2}\setminus(\text{spt}(\mu)\cup G)\), \(\mathcal{H}^{1}(\pi_{x}(Y))>0\). Moreover, the pushforward of \(\mu\) under \(\pi_{x}\) is absolutely continuous with respect to \(\mathcal{H}^{1}|_{S^{1}}\) for \(x\notin G\)._
With this theorem, we are able to prove the following.
**Lemma 38**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be compact with \(0<\mathcal{H}^{s}(Y)<\infty\) for some \(s>1\), and let \(d_{x}>1\) be given. Let \(\mu=\mathcal{H}^{s}|_{Y}\). Let \(A\subseteq\mathbb{N}\) be an oracle such that \(A\) is a packing oracle for \(Y\), \(Y\) is effectively compact relative to \(A\) and \(\mu\) is computable relative to \(A\). Then, there is a set \(G\) of Hausdorff dimension at most \(d_{x}\) such that for all \(x\in\mathbb{R}^{2}-(\text{spt}(\mu)\cup G)\) with \(\dim^{A}(x)>d_{x}\), and all \(B\subseteq\mathbb{N}\), there exists some \(y\in Y\) such that the pair \(x,y\) satisfy conditions (C1)-(C4)._
Proof.: Let \(G\) be the set guaranteed by Orponen's radial projection theorem with respect to \(Y\) and \(\mu=\mathcal{H}^{s}|_{Y}\). Let \(x\in\mathbb{R}^{2}-(\text{spt}(\mu)\cup G)\) have effective Hausdorff dimension relative to \(A\) greater than \(d_{x}\), and define \(N=\{e\in S^{1}:(\exists^{\infty}r)K_{r}^{x,A}(e)<r-4\log(r)\}\). As observed in the second author's previous paper, \(\mathcal{H}^{1}|_{S^{1}}(N)=0\). Orponen's theorem guarantees the absolute continuity of \(\pi_{x\#}(\mu)\) with respect to \(\mathcal{H}^{1}|_{S^{1}}\) for \(x\) outside the exceptional set \(G\), where \(\pi_{x}(y)=\frac{y-x}{||y-x||}\in S^{1}\). Thus, \(\mathcal{H}^{s}|_{Y}(\pi_{x}^{-1}(N))=0\). Now, let
\[M=\{y\in Y:\dim^{A}(y)\geq s\text{ and }K_{r}^{x,A,B}(y)>K_{r}^{A}(y)-8\log r \text{ for large enough }r\}\]
Again from [20], we have that \(\mathcal{H}^{s}(M)=\mathcal{H}^{s}(Y)>0\) by assumption. Thus, \(M-\pi_{x}^{-1}(N)\) has dimension \(s\) and is in particular nonempty. Picking a \(y\) in \(M-\pi_{x}^{-1}(N)\), we may check the conditions: \(x\) satisfies (C1) by assumption, we group together all \(e\) not satisfying (C2) in the negligible set \(N\), and \(y\) satisfies its part of (C1) and
(C3) by the definition of \(M\). As for (C4), Lemma 31 in [20] shows that (C1)-(C3) imply (C4), so the proof of this lemma is complete.
Now, we would like to drop the \(\operatorname{spt}(\mu)\) from the excluded set, which we will do by proving the following lemma:
**Lemma 39**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be a compact set such that \(0<\mathcal{H}^{s}(Y)<\infty\) for some \(s>1\), let \(A\) be any oracle relative to which \(Y\) is effectively compact, and let \(d_{x}>1\) be given. Then there is a set \(G\subseteq\mathbb{R}^{2}\) of Hausdorff dimension at most \(d_{x}\) such that for every \(x\in\mathbb{R}^{2}-G\), and every \(B\subseteq\mathbb{N}\), there is a \(y\in Y\) such that the pair \(x,y\) satisfies (C1)-(C4)._
Proof.: Let \(Y\) be as in the statement of the lemma, and write \(Y=Y_{1}\cup Y_{2}\) for disjoint compact \(Y_{1}\), \(Y_{2}\) such that \(0<\mathcal{H}^{s}(Y_{1})<\infty\) and \(0<\mathcal{H}^{s}(Y_{2})<\infty\). Let \(\mu_{1},\mu_{2}\) be \(\mathcal{H}^{s}|_{Y_{1}},\mathcal{H}^{s}|_{Y_{2}}\) respectively. Suppose \(A_{1}\) and \(A_{2}\) are effective compactness oracles for \(Y_{1}\) and \(Y_{2}\) respectively, \(A_{3}\) is an effective compactness oracle for \(Y\), and let \(\hat{\mu}_{i}\) encode \(\mu_{i}(Q)\) for each ball \(Q\) with rational center and radius. Let \(A\) be the join of \(A_{1},A_{2},A_{3},\hat{\mu}_{1}\), and \(\hat{\mu}_{2}\). Then \(A\) and \(Y_{i}\) satisfy the hypotheses of the previous lemma for each \(i\).
Let \(G_{1},G_{2}\) be the exceptional sets guaranteed by the lemma. Let \(G=G_{1}\cup G_{2}\cup\{x\in\mathbb{R}^{2}:\dim^{A}(x)\leq d_{x}\}\). If \(x\in\mathbb{R}^{2}\setminus G\), then by the previous lemma, there is some \(y\) in either \(Y_{1}\) or \(Y_{2}\) satisfying conditions (C1)-(C4) (since \(x\) can only be in the support of at most _one_ of \(\mu_{1},\mu_{2}\)).
Now we are in a position to prove our main theorem.
**Theorem 5**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be analytic such that \(1<d_{y}=\dim_{H}(Y)\) and \(D_{y}=\dim_{P}(Y)\). Let \(X\subseteq\mathbb{R}^{2}\) be such that \(1<d_{x}<\dim_{H}(X)\) and \(D_{x}=\dim_{P}(X)\). Then there is some \(F\subseteq X\) of full dimension such that_
\[\dim_{H}(\Delta_{x}Y)\geq d\left(1-\frac{\left(D-1\right)\left(D-d\right)}{2D ^{2}+\left(2-4d\right)D+d^{2}+d-2}\right),\]
_for all \(x\in F\), where \(d=\min\{d_{x},d_{y}\}\) and \(D=\max\{D_{x},D_{y}\}\). In particular, \(\dim_{H}(X\setminus F)\leq d_{x}<\dim_{H}(X)\). Furthermore, if_
\[D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\]
_then \(\dim_{H}(\Delta_{x}Y)=1\)._
Proof.: Let \(Y\) and \(X\) be as above, and let \(B\) be the trivial oracle.9 \(Y\) is analytic, thus for any \(s<d_{y}=\dim_{H}(Y)\), there is a compact \(Y_{s}\) such that \(0<\mathcal{H}^{s}(Y_{s})<\infty\). For each \(i\in\mathbb{N}\), let \(s_{i}=d_{y}-\frac{1}{i}\), and let \(Y_{s_{i}}\) be a sequence of such compact sets. The \(Y_{s_{i}}\) are compact, so let \(A_{1},A_{2},...\) be oracles relative to which they are effectively compact.
Footnote 9: We need the oracle \(B\) to make the _packing_ reduction go through; in the Hausdorff case, it can be removed.
Now, we use the general point-to-set principle. Let \(A_{Y}\) and \(A_{X}\) be packing oracles for \(Y\) and \(X\) respectively, that is, oracles such that \(\dim_{P}(Y)=\sup_{y\in Y}\operatorname{Dim}^{A_{Y}}(y)\) and likewise for \(X\). Now, take the join of \(A_{Y},A_{X},A_{1},A_{2},...\), and call this new oracle \(A\).
Note that this oracle retains all the desired properties of its constituents. In particular, every \(Y_{s_{i}}\) is effectively compact relative to \(A\), for all \(y\in Y\) we have that \(\operatorname{Dim}^{A}(y)\leq\dim_{P}(Y)=D_{y}\), and for all \(x\in X\) we have that \(\operatorname{Dim}^{A}(x)\leq\dim_{P}(X)=D_{x}\). Using \(A\), we may apply the previous lemma to each \(Y_{s_{i}}\), giving a corresponding exceptional set \(G_{i}\). Observe that \(\dim_{H}(X)>d_{x}\geq\dim_{H}(G_{i})\), so the set of \(x\in X\) for which there is some \(y\in Y_{s_{i}}\) satisfying conditions (C1)-(C4) is nonempty for each sufficiently large \(i\) (large enough that \(s_{i}>1\)). In fact, by the countable stability of Hausdorff dimension, \(\dim_{H}(X)>d_{x}\geq\dim_{H}(\bigcup_{i\in\mathbb{N}}G_{i})\). If we denote this union by \(G\), then for all \(x\in X-G\), there is a \(y\in Y\) such that (C1)-(C4) hold with \(A\) as the oracle.
We choose \(A\) such that the desired packing dimension conditions hold relative to \(A\) for _any_\(x\in X\), \(y\in Y_{s_{i}}\subseteq Y\), so along with conditions (C1)-(C4), all the hypotheses of our effective theorem are satisfied. Applying this theorem when \(D\geq\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\), then, we obtain that
\[\dim^{A,x}(|x-y|)\geq(d-\frac{1}{i})\left(1-\frac{(D-1)\left(D-(d-\frac{1}{i} )\right)}{2D^{2}+\left(2-4(d-\frac{1}{i})\right)D+(d-\frac{1}{i})^{2}+(d- \frac{1}{i})-2}\right).\]
By Observation 12, since \(Y_{s_{i}}\) is effectively compact relative to \(A\), \(\Delta_{x}Y_{s_{i}}\) is effectively compact relative to \((A,x)\). Finishing the proof, by Theorem 11, we have that
\[\dim_{H}(\Delta_{x}Y) \geq\dim_{H}(\Delta_{x}Y_{s_{i}})\] \[=\sup_{y\in Y_{s_{i}}}\dim^{A,x}(|x-y|)\] \[\geq(d-\frac{1}{i})\left(1-\frac{(D-1)\left(D-(d-\frac{1}{i}) \right)}{2D^{2}+\left(2-4(d-\frac{1}{i})\right)D+(d-\frac{1}{i})^{2}+(d-\frac{ 1}{i})-2}\right).\]
Letting \(i\) go to infinity completes the proof in this case. If we are in the case that \(D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\), then for some \(i\) large enough, we have
\[D<\frac{(3+\sqrt{5})(d-\frac{1}{i})-1-\sqrt{5}}{2}.\]
Then, repeating the above argument with \(i\) at least this large gives the bound of \(1\) immediately and completes the proof.
Assuming \(E=X=Y\), we immediately have as a corollary that
**Theorem 1**.: _Let \(E\subseteq\mathbb{R}^{2}\) be analytic such that \(1<d<\dim_{H}(E)\). Then there is a subset \(F\subseteq E\) of full dimension such that_
\[\dim_{H}(\Delta_{x}E)\geq d\left(1-\frac{(D-1)\left(D-d\right)}{2D^{2}+(2-4d) \,D+d^{2}+d-2}\right),\]
_for all \(x\in F\), where \(D=\dim_{P}(E)\). In particular, \(\dim_{H}(E\setminus F)\leq d<\dim_{H}(E)\). Furthermore, if_
\[D<\frac{(3+\sqrt{5})d-1-\sqrt{5}}{2}\]
_then \(\dim_{H}(\Delta_{x}E)=1\)._
In a similar manner, we prove the following theorem.
**Theorem 6**.: _Let \(Y\subseteq\mathbb{R}^{2}\) be analytic with \(\dim_{H}(Y)>1\) and \(\dim_{P}(Y)<2\dim_{H}(Y)-1\). Let \(X\subseteq\mathbb{R}^{2}\) be any set such that \(\dim_{H}(X)>1\). Then for all \(x\in X\) outside a set of (Hausdorff) dimension one,_
\[\dim_{H}(\Delta_{x}Y)=1.\]
We can follow essentially the same argument as above, except now we only need the dimension of \(x\) to be greater than \(1\), at the cost of also requiring that \(\dim^{A}(y)\) and \(\operatorname{Dim}^{A}(y)\) are close enough, as in the hypothesis of Proposition 23.
Proof.: Let \(Y\) and \(X\) be as above, and again let \(B\) be the trivial oracle. \(Y\) is analytic, thus for any \(s<\dim_{H}(Y)\), there is a compact \(Y_{s}\) such that \(0<\mathcal{H}^{s}(Y_{s})<\infty\). Let \(s\) be such that \(\dim_{P}(Y)<2s-1<2\dim_{H}(Y)-1\), and let \(A_{s}\) be an oracle relative to which \(Y_{s}\) is effectively compact. Let \(A_{Y}\) be a packing oracle for \(Y\). Now, take the join of \(A_{Y},A_{s}\), and call this new oracle \(A\).
Now, apply Lemma 39 to \(Y_{s}\) relative to \(A\) with \(d_{x}=1\). As above, conditions (C1)-(C4) are satisfied with an exceptional set of dimension at most \(1\). Since \(Y_{s}\subseteq Y\), for any \(y\in Y_{s}\) we have \(\operatorname{Dim}^{A}(y)\leq\dim_{P}(Y)\). Hence, \(\operatorname{Dim}^{A}(y)<2\dim^{A}(y)-1\), all the conditions for Proposition 23 are satisfied, and we have as above that for \(x\) not in our exceptional set,
\[\dim_{H}(\Delta_{x}Y) \geq\dim_{H}(\Delta_{x}Y_{s})\] \[=\sup_{y\in Y_{s}}\dim^{A,x}(||x-y||)\] \[=1\]
### Packing dimension of pinned distance sets
**Theorem 4**.: _Let \(E\subseteq\mathbb{R}^{2}\) be analytic such that \(\dim_{H}(E)>1\). Then, for some \(x\in E\),_
\[\dim_{P}(\Delta_{x}E)\geq\frac{12-\sqrt{2}}{8\sqrt{2}}\approx 0.9356.\]
Proof.: Let \(E\subseteq\mathbb{R}^{2}\) be analytic such that \(d=\dim_{H}(E)>1\). Let \(D=\dim_{P}(E)\). Since \(E\) is analytic, there is a compact subset \(F\subseteq E\) such that \(0<\mathcal{H}^{s}(F)<\infty\), for \(s=\frac{d+1}{2}\). Let \(\mu=\mathcal{H}^{s}|_{F}\). Let \(A_{1}\) be an oracle relative to which \(F\) is effectively compact.
Let \(A_{2}\) be a packing oracle for \(F\), that is, \(\dim_{P}(F)=\sup_{x\in F}\operatorname{Dim}^{A_{2}}(x)\). Let \(A\) be the join of \(A_{1}\) and \(A_{2}\).
Using Orponen's radial projection theorem, we see that there is an exceptional set \(G_{1}\) such that the pushforward of \(\mu\) under \(\pi_{x}\) is absolutely continuous with respect to \(\mathcal{H}^{1}|_{S^{1}}\) for all \(x\notin G_{1}\). Let \(G_{2}=\{x\in F\mid\dim^{A}(x)<\frac{s+1}{2}\}\) and \(G=G_{1}\cup G_{2}\). Since \(\dim_{H}(G_{1})\leq 1\) and \(\dim_{H}(G_{2})\leq\frac{s+1}{2}<s\), we see that \(\dim_{H}(G)<s\). We now choose \(x\in F-G\) such that the set
\[F^{\prime}=\{y\in F-G\mid\operatorname{Dim}^{A}(x)\leq\operatorname{Dim}^{A}( y)\}\]
has positive \(\mu\) measure, which is possible since we can choose some \(x\) that has minimal or nearly minimal effective packing dimension. Let \(B\) be a packing oracle for \(\Delta_{x}E\). The proof of Lemma 38 shows that there is a \(y\in F^{\prime}\) such that the pair \(x,y\) satisfies (C1)-(C4). Moreover, \(\operatorname{Dim}^{A}(x)\leq\operatorname{Dim}^{A}(y)\).
We may therefore apply Theorem 36, which shows that
\[\operatorname{Dim}^{A,x,B}(|x-y|)\geq\frac{3D^{2}-D+6}{8D},\]
where \(D=\operatorname{Dim}^{A}(y)\). Since this is minimized when \(D=\sqrt{2}\), we conclude that
\[\operatorname{Dim}^{A,x,B}(|x-y|)\geq\frac{12-\sqrt{2}}{8\sqrt{2}}.\]
Our choice of \(B\) and the point-to-set principle concludes the proof, since
\[\dim_{P}(\Delta_{x}E) \geq\sup_{y\in E}\operatorname{Dim}^{B}(|x-y|)\] \[\geq\sup_{y\in E}\operatorname{Dim}^{x,A,B}(|x-y|)\] \[\geq\frac{12-\sqrt{2}}{8\sqrt{2}}.\]
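The minimization used above is elementary calculus; a quick numerical confirmation (ours):

```python
# Check (ours) that (3D^2 - D + 6)/(8D) attains its minimum at D = sqrt(2),
# with minimum value (12 - sqrt(2))/(8 sqrt(2)) ~ 0.9356.
import math

f = lambda D: (3*D**2 - D + 6)/(8*D)
Ds = [1 + i/10**5 for i in range(10**5 + 1)]       # scan D in [1, 2]
D_best = min(Ds, key=f)
assert abs(D_best - math.sqrt(2)) < 1e-3
print(f(math.sqrt(2)), (12 - math.sqrt(2))/(8*math.sqrt(2)))  # both ~0.9356
```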
## 6. Regular pin sets give full dimension pinned distance sets
Now, we consider a case that is not covered by our previous theorems. Our work thus far implies that if \(Y\) is sufficiently regular and \(\dim_{H}(X)>1\), then \(\dim_{H}(\Delta_{x}Y)=1\) for all \(x\) in a full dimension subset of \(X\).
More surprisingly, however, the above holds even if just the _pin_ set \(X\) is regular. In this case, we are able to deduce an essentially optimal effective projection theorem for \(x\) which allows us to utilize arbitrarily long green intervals when partitioning \(K_{r}^{A}(y)\). If we have access to these green intervals, we will never be forced to use a strictly teal interval10, implying that our partition is all-yellow and that \(\dim^{x,A}(|x-y|)=1\).
Footnote 10: A non-green teal interval, so one with a growth rate of strictly less than \(1\)
We would like to perform the reduction to the classical result by finding some point \(x\) that is regular, but there is a problem. Suppose \(X\) is regular of dimension greater than \(1\) and \(\dim_{H}(Y)>1\). In general, we cannot assume \(X\) contains regular points relative to a given oracle. For instance, consider the set
\[X=\{x\in\mathbb{R}^{2}:\operatorname{Dim}(x)=1.2\text{ and }\dim(x)<1.2\}.\]
The packing and Hausdorff dimensions of \(X\) will be \(1.2\), but, relative to any oracle \(A\), any point will have effective Hausdorff dimension strictly less than \(1.2\), though it may have packing dimension _exactly_ \(1.2\). However, we can overcome this obstacle, due to the fact that \(\dim_{H}(Y)>1\). This fact implies that there is a bound on the length of any green interval.
We start by stating and proving an effective projection theorem that holds when \(x\) is sufficiently regular.
**Proposition 40**.: _Let \(A\subseteq\mathbb{N}\), \(x\in\mathbb{R}^{2}\), \(e\in\mathcal{S}^{1}\), \(\varepsilon\in\mathbb{Q}^{+}\), \(C,C^{\prime}\in\mathbb{N}\), and \(t,r\in\mathbb{N}\). Suppose that \(r\) is sufficiently large, and that the following hold._
1. \(1<d_{x}<\dim^{A}(x)\)_._
2. \(t\geq\frac{r}{C}\)_._
3. \(K_{s}^{x,A}(e)\geq s-C^{\prime}\log s\)_, for all_ \(s\leq t\)
_Then there exists some \(\varepsilon_{x}\) depending only on \(d_{x}\) and \(C\) such that \(\operatorname{Dim}^{A}(x)-d_{x}<\varepsilon_{x}\) implies that_
\[K^{A}_{r}(x\,|\,p_{e}x,e)\leq K^{A}_{r}(x)-r+\varepsilon r.\]
The key point of this result is that the \(\varepsilon_{x}\) we need in the almost-regularity condition does not depend on \(\varepsilon\). In the way we will apply this theorem, \(C\) is going to depend on \(d_{y}\). So, given a lower bound for the effective Hausdorff dimension of points \(x\) and \(y\), \(\varepsilon_{x}>0\) expresses how close to regular \(x\) needs to be in order to apply this theorem, and that required closeness is _fixed_ given sets \(X,Y\) when we perform the reduction.
The idea of the proof is quite simple. When we partition in \(x\), we were allowed to use intervals up to length \(t\). By picking \(r\) large enough that the complexity function lies very close to the line \(d_{x}s\),11 we can show that it is impossible for there to be any red-green-blue sequences past some precision \(\varepsilon r\), since this would imply the existence of a green block of length at least \(t\). Thus, \([1,r]\) is almost entirely covered by red and green intervals, and we obtain an essentially all-yellow partition of it. Formally,
Footnote 11: How close it needs to be to this line for the argument to go through depends on \(C\); how close we are allowed to assume it is depends on \(\varepsilon_{x}\), since the point is not exactly regular. Thus, we choose \(\varepsilon_{x}\) based in part on \(C\).
Proof.: Let \(x\) satisfying the conditions of the proposition for some \(\varepsilon_{x}\) be given. Let \(C\) and \(\varepsilon\) be given, and choose \(\varepsilon^{\prime}\) depending on \(\varepsilon_{x}\), \(d_{x}\), and \(C\) in a manner which we will detail shortly. Now let \(r\) be large enough that \(d_{x}s-\varepsilon^{\prime}s\leq K^{A}_{s}(x)\leq d_{x}s+\varepsilon^{\prime}s\) for all \(s\geq\sqrt{r}\). Note that we must have \(\varepsilon^{\prime}>\varepsilon_{x}\). Let \(t\geq\frac{r}{C}\) be given, and let \(\hat{\mathcal{P}}=\hat{\mathcal{P}}(x,r,t)\) be the partition of \([1,r]\) by red, blue and green intervals considered in section 3.1. We will show that if \(\varepsilon_{x}\) is sufficiently small depending only on \(C\) and \(d_{x}\), then the desired conclusion holds.
On the one hand, any red-green-blue sequence has a green block of length at least \(t\); this was one of the key properties of this partition in section 3. On the other hand, since the slope is \(1\) on green intervals, if we have \(K^{A}_{r-t}(x)=(d_{x}+\varepsilon^{\prime})(r-t)\) (its maximum possible value) and that \([r-t,r]\) is green, then
\[K^{A}_{r}(x) \leq(d_{x}+\varepsilon^{\prime})(r-t)+t\] \[\leq(d_{x}+\varepsilon^{\prime})r-(d_{x}-1-\varepsilon^{\prime} )\frac{r}{C}\]
So, if \(\varepsilon^{\prime}\) is small enough that
\[2\varepsilon^{\prime}-\frac{d_{x}-1-\varepsilon^{\prime}}{C}<0, \tag{40}\]
we have a contradiction, since by assumption \(K^{A}_{r}(x)\geq d_{x}r-\varepsilon^{\prime}r\). If we attempted to place the left endpoint of a green block of a red-green-blue sequence at any \(\sqrt{r}\leq s\leq r\), we would clearly have the same contradiction, in fact, the green interval would be forced to be even shorter. Hence, there are no red-green-blue sequences such that the green block is contained in \([\sqrt{r},r]\). Provided that \(\varepsilon_{x}\) satisfies equation (40), we can choose such an \(\varepsilon^{\prime}>\varepsilon_{x}\) also satisfying it.
Now, we want to determine when, given \(\varepsilon\), the interval \([\sqrt{r},\frac{\varepsilon}{2}r]\) is forced to intersect at least one red interval. For convenience, let \(\varepsilon^{\prime\prime}r=\sqrt{r}\). Then, we will choose \(r\) large enough to satisfy this condition and large enough that \(\varepsilon^{\prime}\) satisfies (40). Notice that \([\varepsilon^{\prime\prime}r,\frac{\varepsilon}{2}r]\) intersecting some red interval will imply there are no blue
intervals in \([\frac{\varepsilon}{2}r,r]\), or else we would have the green block of a red-green-blue sequence contained in \([\varepsilon^{\prime\prime}r,r]\). \([\varepsilon^{\prime\prime}r,\frac{\varepsilon}{2}r]\) must intersect a red interval when the average growth rate of \(K_{s}^{A}(x)\) on this interval is strictly greater than \(1\). We can easily bound that growth rate as follows
\[\frac{K_{\frac{\varepsilon}{2}r}^{A}(x)-K_{\varepsilon^{\prime\prime}r}^{A}(x)}{(\frac{ \varepsilon}{2}-\varepsilon^{\prime\prime})r}\geq\frac{(d_{x}-\varepsilon^{ \prime\prime})\frac{\varepsilon}{2}r-(d_{x}+\varepsilon^{\prime\prime}) \varepsilon^{\prime\prime}r}{(\frac{\varepsilon}{2}-\varepsilon^{\prime\prime })r}. \tag{41}\]
Since \(d_{x}>1\), for any \(\varepsilon>0\), there exists some \(\varepsilon^{\prime\prime}\) small enough that the right hand side is greater than \(1\). Hence, for \(r\) sufficiently large, \([\frac{\varepsilon}{2}r,r]\) is covered entirely by red and green intervals.
Just as in the \(S=0\) case of section 3.2, we cite the result that an interval covered only by red and green intervals in \(\hat{\mathcal{P}}\) can be covered by an all-yellow \(3C\)-admissible partition. Summing over this partition using Lemma 18 relative to \(A\) with respect to \(\frac{\varepsilon}{2}\) then yields
\[K_{r}^{A}(x\mid p_{e}x,e)\leq K_{r}^{A}(x)-r+\varepsilon r, \tag{42}\]
which completes the proof.
Equipped with this proposition, we define a new partition of \([1,r]\) for \(K_{s}^{A}(y)\). The key is that this partition will, after a certain small precision, only use yellow intervals. Some of these yellow intervals will be very long green intervals, and we will use the above proposition to prove an analogue of Lemma 31 on them. Once we are able to sum over the intervals in our partition, we are essentially done. We now define this partition inductively.
Set \(r_{0}=r\) and assume we have defined the sequence up to \(r_{i}\). Take a good partition of \([1,r_{i}]\). Let \(a\) denote the minimal real such that \([a,r_{i}]\) is the union of yellow intervals whose lengths are at most doubling. If \(a<r_{i}\), add all these yellow intervals to the partition, and let \(r_{i+1}=a\). Otherwise, let \(r_{i+1}\) be the smallest precision such that \([r_{i+1},r_{i}]\) is green, i.e., such that \(f(r_{i})=f(r_{i+1})+r_{i}-r_{i+1}\). Note that such an \(r_{i+1}\) exists in this case, since \(r_{i}\) is the right endpoint of a teal interval. Greedily combine intervals that have a less than doubling union, re-index the intervals, and denote this partition by \(\mathcal{P}\). It is easy to see that \(2r_{i+2}<r_{i}\). As a result, once we have established that we can use these long green intervals in our sum, the conclusion of Lemma 22 follows immediately.
Now, we prove a lower bound for \(r_{i+1}\) on these green intervals (depending on \(d_{y}\)) when \(r_{i}\) is sufficiently large.
**Lemma 41**.: _Let \(0<\varepsilon<\frac{d_{y}-1}{2}\) be given, and suppose \(r_{i}\) is large enough that for all \(s>\frac{d_{y}-1}{8}r_{i}\), we have \(d_{y}s-\varepsilon s\leq K_{s}^{A}(y)\leq 2s+\varepsilon s\). Then_
\[r_{i+1}\geq\frac{d_{y}-1-\varepsilon}{3+\varepsilon}r_{i}\geq\frac{d_{y}-1}{8 }r_{i} \tag{43}\]
Proof.: The desired result is an immediate consequence of the fact that the line \(K_{r_{i}}^{A}(y)-(r_{i}-s)\) cannot intersect \(K_{s}^{A}(y)\) for any \(s<\frac{d_{y}-1-\varepsilon}{3+\varepsilon}r_{i}\), hence, any green interval with right endpoint \(r_{i}\) has to have left endpoint larger than \(\frac{d_{y}-1-\varepsilon}{3+\varepsilon}r_{i}\). Now, assume \(s\) is such that \(K_{r_{i}}^{A}(y)-(r_{i}-s)=K_{s}^{A}(y)\). The aforementioned fact is the consequence of a simple calculation.
\[K_{r_{i}}^{A}(y)-(r_{i}-s)\geq d_{y}r_{i}-\varepsilon r_{i}-(r_{i}-s) \tag{44}\]
whereas \(K_{s}^{A}(y)\leq 2s+\varepsilon s\). Setting the bounds equal yields \(r_{i+1}\geq s\geq\frac{d_{y}-1-\varepsilon}{3+\varepsilon}r_{i}\) and thus completes the proof.
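The intersection computation can be illustrated numerically; the following check (ours, for orientation only) confirms that the intersection point dominates the bound claimed in (43):

```python
# Numerical illustration of the proof of Lemma 41 (ours): the line
# K_{r_i} - (r_i - s) first meets the envelope (2 + eps)s at
# s* = (d_y - 1 - eps)/(1 + eps) r_i, which dominates the bound in (43).
for d_y in [1.2, 1.5, 1.8]:
    for eps in [0.01, (d_y - 1)/4]:            # eps < (d_y - 1)/2
        s_star = (d_y - 1 - eps)/(1 + eps)     # with r_i = 1
        assert s_star >= (d_y - 1 - eps)/(3 + eps) >= (d_y - 1)/8
print("intersection point dominates the bound in (43)")
```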
Now, we are able to state and prove the analogue of Lemma 31.
**Lemma 42**.: _Suppose that \([r_{i+1},r_{i}]\) is a green interval in our constructed partition. For any \(\varepsilon>0\), provided that \(r_{i+1}\) is sufficiently large, we have_
\[K_{r_{i},r_{i+1}}^{A,x}(y\mid|x-y|,y)\leq\varepsilon r_{i}. \tag{45}\]
_Therefore, \(K_{r_{i},r_{i+1}}^{A,x}(|x-y|\mid|x-y|)\geq K_{r_{i},r_{i+1}}^{A,x}(y\mid y)- \varepsilon r_{i}\)._
As before, the strategy is to use our projection theorem to verify that the hypotheses of Lemma 15 hold, which immediately implies the desired bound.
Proof.: Let some small rational \(\varepsilon>0\) be given, and assume \(r_{i+1}\) is sufficiently large. Let \(\eta\) be the rational such that \(\eta r_{i}=K_{r_{i}}^{A}(y)-4\varepsilon r_{i}\). Let \(G=D(r_{i},y,\eta)\) be the oracle of Lemma 14 relative to \(A\). Let \(z\in B_{2^{-r_{i+1}}}(y)\) satisfying \(|x-y|=|x-z|\) be given.
Letting \(s=-\log|y-z|\), we again have two cases. The \(s\geq\frac{r_{i}}{2}-\log r_{i}\) case is identical to the previous proof, so we assume \(s<\frac{r_{i}}{2}-\log r_{i}\). Lemma 30 applied relative to \((A,G)\) implies that
\[K_{r_{i}}^{A,G}(z)\geq K_{s}^{A,G}(y)+K_{r_{i}-s,r_{i}}^{A,G}(x\mid y)-K_{r_{i}-s} ^{A,G}(x\mid p_{e^{\prime}}x,e^{\prime})-O(\log r). \tag{46}\]
To bound the projection term, we need to apply Proposition 40 with respect to \(x\), \(e^{\prime}\), \(\frac{\varepsilon}{2}\), the constant \(C\) such that \(\frac{1}{C}=\frac{d_{y}-1}{16}\), \(t=s\), and \(r=r_{i}-s\).
Again, we have that \(r_{i+1}-1<s<\frac{r_{i}}{2}-\log r_{i}\), since \(z\) is assumed to be within \(2^{-r_{i+1}}\) of \(y\). Clearly, then, we can take \(r_{i}-s\) to be sufficiently large. By assumption, \(1<d_{x}=D_{x}\), and by Lemma 41, we have that
\[s \geq\frac{d_{y}-1}{8}r_{i}-1\] \[\geq\frac{d_{y}-1}{8}(r_{i}-s)-1\] \[\geq\frac{d_{y}-1}{16}(r_{i}-s)\] \[=\frac{r_{i}-s}{C}\]
Thus, we can examine sufficiently large precisions and conditions (P1) and (P2) are satisfied. As before, (P3) is satisfied using (C2). After using some of the properties of the oracle \(G\) and taking \(r_{i}-s\) to be sufficiently large, we can apply the projection theorem relative to \(A\), as below:
\[K^{A,G}_{r_{i}}(z) \geq K^{A,G}_{s}(y)+K^{A,G}_{r_{i}-s,r_{i}}(x\mid y)-K^{A,G}_{r_{i}- s}(x\mid p_{e^{\prime}}x,e^{\prime})-O(\log r)\] \[\geq K^{A,G}_{s}(y)+K^{A}_{r_{i}-s,r_{i}}(x\mid y)-K^{A,G}_{r_{i}- s}(x\mid p_{e^{\prime}}x,e^{\prime})-O(\log r)\] \[\geq K^{A,G}_{s}(y)+K^{A}_{r_{i}-s}(x)-K^{A,G}_{r_{i}-s}(x\mid p_{ e^{\prime}}x,e^{\prime})-O(\log r)\] \[\geq K^{A,G}_{s}(y)+d_{x}(r_{i}-s)-\frac{\varepsilon}{2}r_{i}-K^ {A}_{r_{i}-s}(x\mid p_{e^{\prime}}x,e^{\prime})\] \[\geq K^{A,G}_{s}(y)+d_{x}(r_{i}-s)-d_{x}(r_{i}-s)+(r_{i}-s)- \varepsilon r_{i}\] \[=K^{A,G}_{s}(y)+(r_{i}-s)-\varepsilon r_{i}\]
Because of the choice of \(\eta\), for small enough \(\varepsilon\) we can guarantee that \(K^{A,G}_{s}(y)=K^{A}_{s}(y)-O(\log r_{i})\), since \(s<\frac{r_{i}}{2}\). Hence,
\[K^{A,G}_{r_{i}}(z)\geq K^{A}_{s}(y)+(r_{i}-s)-2\varepsilon r_{i}.\]
Finally, using the fact that these intervals are green, hence teal, we have
\[K^{A,G}_{r_{i}}(z) \geq K^{A}_{s}(y)+(r_{i}-s)-2\varepsilon r_{i}\] \[\geq K^{A}_{r_{i}}(y)-(r_{i}-s)+(r_{i}-s)-2\varepsilon r_{i}\] \[\geq(\eta+\varepsilon)r_{i}\]
Thus, the conditions for Lemma 15 hold and we apply it, completing the proof.
Now, we are able to prove the main effective theorem of this section. As indicated, since we can now sum over our green intervals, we can begin with the conclusion of Lemma 22.
**Theorem 43**.: _There is some \(\varepsilon_{x}\) sufficiently small, depending only on \(d_{x}\) and \(d_{y}\), such that the following holds. Suppose that \(x,y\in\mathbb{R}^{2}\), \(e=\frac{y-x}{|y-x|}\), and \(A,B\subseteq\mathbb{N}\) satisfy the following._
1. \(1<d_{y}<\dim^{A}(y),d_{x}<\dim^{A}(x)\leq\operatorname{Dim}^{A}(x)<D_{x}\).12__
Footnote 12: Our condition (C1) is slightly different than before; this is just to make the reduction easier. Specifically, it will be helpful if we now think of \(d_{y}\) as a strict lower bound on the dimension of points \(y\). Note that this change will not complicate the application of Proposition 39, since we will use it on a compact set with dimension greater than \(d_{y}\).
2. \(K^{x,A}_{r}(e)=r-O(\log r)\) _for all_ \(r\)_._
3. \(K^{x,A,B}_{r}(y)\geq K^{A}_{r}(y)-O(\log r)\) _for all sufficiently large_ \(r\)_._
4. \(K^{A}_{r}(e\mid y)=r-o(r)\) _for all_ \(r\)_._
_Then, \(\operatorname{Dim}^{A}(x)-\dim^{A}(x)<\varepsilon_{x}\) implies_
\[\dim^{x,A}(|x-y|)=1\]
Proof.: Let \(\varepsilon>0\) be given. Let \(r\) be large enough that, applying Lemma 22 in the form of equation (9), we have
\[K^{A,x}_{r}(|x-y|)\geq K^{A}_{r}(y)-\sum_{i\in\mathbf{Bad}}\left(K^{A}_{a_{i+1},a_{i }}(y\mid y)-(a_{i+1}-a_{i})\right)-\frac{\varepsilon}{2}r. \tag{47}\]
Recalling that the complexity grows at an average rate of exactly \(1\) on green intervals, this implies
\[K_{r}^{A,x}(|x-y|) \geq K_{r}^{A}(y)-\sum_{i\in\mathcal{P}}\left(K_{a_{i+1},a_{i}}^{A}(y\mid y )-(a_{i+1}-a_{i})\right)-\frac{\varepsilon}{2}r \tag{48}\] \[\geq K_{r}^{A}(y)-K_{r}^{A}(y)+r-\varepsilon r\] (49) \[=(1-\varepsilon)r \tag{50}\]
Taking \(\varepsilon\) as small as desired then letting \(r\) go to infinity completes the proof.
Finally, we state and prove the classical result.
**Theorem 44**.: _Let \(X\subseteq\mathbb{R}^{2}\) be such that \(1<d_{x}<\dim_{H}(X)=\dim_{P}(X)\) and let \(Y\subseteq\mathbb{R}^{2}\) be analytic and satisfy \(1<d_{y}<\dim_{H}(Y)\). Then there is some \(F\subseteq X\) such that,_
\[\dim_{H}(\Delta_{x}Y)=1\]
_for all \(x\in F\). Moreover, \(\dim_{H}(X\setminus F)<\dim_{H}(X)\)._
Proof.: Let \(X\) and \(Y\) be as above, and let \(B\) be the trivial oracle. \(Y\) is analytic, thus for any \(d_{y}<s<\dim_{H}(Y)\), there is a compact \(Y_{s}\) such that \(0<\mathcal{H}^{s}(Y_{s})<\infty\). Fix some such \(s\) and a corresponding \(Y_{s}\), and let \(A_{s}\) be its effective compactness oracle.
Now, we use the general point-to-set principle. Let \(A_{Y}\) and \(A_{X}\) be packing oracles for \(Y\) and \(X\) respectively. Now, take the join of \(A_{X},A_{Y}\), and \(A_{s}\) and call this new oracle \(A\).
Relative to \(A\), we may apply Lemma 39 to \(Y_{s}\), giving a corresponding exceptional set \(G\). Note that we choose \(A\) such that the desired packing dimension conditions hold relative to \(A\) for _any_\(x\in X\). In light of this fact, and observing that \(\dim_{H}(X)>d_{x}\geq\dim_{H}(G)\), the set of \(x\in X\) for which there is some \(y\in Y_{s}\) satisfying conditions (C1)-(C4) relative to \(A\) is nonempty. Then for all \(x\in X-G\), there is a \(y\in Y_{s}\) such that (C1)-(C4) hold with \(A\) as the oracle.
Now, further refine \(X-G\) by removing all the points such that \(\operatorname{Dim}_{P}(X)-\dim^{A}(x)\geq\varepsilon_{x}\), for the \(\varepsilon_{x}\) in Theorem 43. Since \(A\) is a packing oracle for \(X\), this implies that the remaining points satisfy \(\operatorname{Dim}^{A}(x)-\dim^{A}(x)<\varepsilon_{x}\). Letting \(F\) denote \(X-\left(G\cup\{x:\operatorname{Dim}_{P}(X)-\dim^{A}(x)\geq\varepsilon_{x}\}\right)\) immediately implies that all \(x\in F\) satisfy the requirements of Theorem 43, and furthermore that \(\dim_{H}(X\setminus F)<\dim_{H}(X)\).
Finally, applying the effective theorem yields that \(\dim^{A,x}(|x-y|)=1\) for such a pair \(x,y\). By Observation 12, since \(Y_{s}\) is effectively compact relative to \(A\), \(\Delta_{x}Y_{s}\) is effectively compact relative to \((A,x)\). Finishing the proof, by Theorem 11, we have that
\[\dim_{H}(\Delta_{x}Y) \geq\dim_{H}(\Delta_{x}Y_{s})\] \[=\sup_{y\in Y_{s}}\dim^{A,x}(|x-y|)\] \[=1\]
Nucleon axial and pseudoscalar form factors using twisted-mass fermion ensembles at the physical point

Constantia Alexandrou, Simone Bacchio, Martha Constantinou, Jacob Finkenrath, Roberto Frezzotti, Bartosz Kostrzewa, Giannis Koutsou, Gregoris Spanoudes, Carsten Urbach

http://arxiv.org/abs/2309.05774v2
###### Abstract
We compute the nucleon axial and pseudoscalar form factors using three \(N_{f}=2+1+1\) twisted mass fermion ensembles with all quark masses tuned to approximately their physical values. The values of the lattice spacings of these three physical point ensembles are 0.080 fm, 0.068 fm and 0.057 fm, and spatial sizes 5.1 fm, 5.44 fm, and 5.47 fm, respectively, yielding \(m_{\pi}L>3.6\). Convergence to the ground state matrix elements is assessed using multi-state fits. We study the momentum dependence of the three form factors and check the partially conserved axial-vector current (PCAC) hypothesis and the pion pole dominance (PPD). We show that in the continuum limit, the PCAC and PPD relations are satisfied. We also show that the Goldberger-Treiman relation is approximately fulfilled and determine the Goldberger-Treiman discrepancy. We find for the nucleon axial charge \(g_{A}=1.245(28)(14)\), for the axial radius \(\langle r_{A}^{2}\rangle=0.339(48)(06)\) fm\({}^{2}\), for the pion-nucleon coupling constant \(g_{\pi NN}\equiv\lim_{Q^{2}\to-m_{\pi}^{2}}G_{\pi NN}(Q^{2})=13.25(67)(69)\) and for \(g_{P}^{*}\equiv\frac{m_{\mu}}{2m_{N}}G_{P}(0.88m_{\mu}^{2})=8.99(39)(49)\).
## I Introduction
The nucleon axial form factors are important quantities for weak interactions, neutrino scattering, and parity violation experiments. There are currently a number of neutrino scattering experiments that require knowledge of the axial form factors. At Fermilab, the two neutrino experiments, NO\(\nu\)A and MINER\(\nu\)A [1], share the same neutrino beam. The former is designed to study neutrino oscillations and the latter to perform high-precision measurements of neutrino interactions on a wide variety of materials, including helium, carbon, iron, and lead. The MicroBooNE experiment, also at Fermilab, aims at measuring low-energy neutrino cross sections, investigating the low-energy excess events observed by the MiniBooNE experiment, and studying neutrinos produced in supernovae. The T2K experiment at KEK in Japan and the CNGS experiment in Europe investigate neutrino flavor changes. The upcoming DUNE experiment will be the next-generation flagship experiment in neutrino physics.
These experimental efforts need to be matched by theoretical investigations. Reliably computing the nucleon axial form factors provides crucial input for these experiments. However, the theoretical extraction of these form factors is difficult due to their non-perturbative nature. Phenomenological approaches include chiral perturbation theory, which provides a non-perturbative framework suitable for low values of \(Q^{2}\) up to about 0.4 GeV\({}^{2}\)[2; 3; 4]. Other models used include the perturbative chiral quark model [5], the chiral constituent quark model [6] and light-cone sum rules [7]. Lattice QCD provides the _ab initio_ non-perturbative framework for computing such quantities directly from the QCD Lagrangian. Early studies of nucleon axial form factors were carried out within the quenched approximation [8; 9], as well as using dynamical fermion simulations at heavier than physical pion masses [10]. Only recently have several groups begun computing the axial form factors including simulations generated directly at the physical value of the pion mass [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. This work is the first to use solely simulations performed at physical values of the pion mass to take the continuum limit, avoiding a chiral extrapolation.
The nucleon matrix element of the isovector axial-vector current \(A_{\mu}\) is written in terms of two form factors, the axial, \(G_{A}(Q^{2})\), and the induced pseudoscalar, \(G_{P}(Q^{2})\). The axial form factor, \(G_{A}(Q^{2})\), is experimentally determined from elastic scattering of neutrinos with protons, \(\bar{\nu}_{\mu}+p\rightarrow\mu^{+}+n\)[24; 25; 26], while \(G_{P}(Q^{2})\) from the longitudinal cross-section in pion electro-production [4; 27; 28]. At zero momentum transfer the axial form factor gives the axial charge \(g_{A}\equiv G_{A}(0)\), which is measured with high precision in \(\beta\)-decay experiments [29; 30; 31; 32]. The induced pseudoscalar coupling \(g_{P}^{*}\) can be determined via the ordinary muon capture process \(\mu^{-}+p\to n+\nu_{\mu}\) from the singlet state of the muonic hydrogen atom at the muon capture point, which corresponds to a momentum transfer squared of \(Q^{2}=0.88m_{\mu}^{2}\)[33; 34; 35; 36; 37], where \(m_{\mu}\) is the muon mass. We also study the nucleon matrix element of the isovector pseudoscalar current that determines the pseudoscalar form factor \(G_{5}(Q^{2})\) and from it the pion-nucleon coupling constant \(g_{\pi NN}\).
In this work, we use three ensembles generated at physical quark masses of the light, strange, and charm quarks and at three values of the lattice spacing, namely \(a=0.080\) fm, \(a=0.068\) fm, and \(a=0.057\) fm. This same setup has been used in the calculation of the electromagnetic form factors [38] and transversity form factors [39]. This allows us to directly take the continuum limit of the axial and pseudoscalar form factors using, for the first time, only simulations performed at the physical pion mass. This is a major achievement since it avoids chiral extrapolation which, for the baryon sector, may introduce an uncontrolled systematic error. Such simulations at the physical pion mass can be used to check important relations, such as the partially conserved axial-vector current (PCAC) relation that at form factor level connects \(G_{A}(Q^{2})\) and \(G_{P}(Q^{2})\) with \(G_{5}(Q^{2})\). At low \(Q^{2}\) and assuming pion pole dominance (PPD) one can further relate \(G_{A}(Q^{2})\) to \(G_{P}(Q^{2})\) and derive the Goldberger-Treiman relation. These relations have been studied within lattice QCD and will be discussed in detail in this paper.
The remainder of this paper is organized as follows: In section II we discuss the decomposition of the nucleon matrix elements of the axial-vector and pseudoscalar operators in terms of form factors, the PCAC and Goldberger-Treiman relations, and pion pole dominance. In section III we give the details on the parameters of the twisted mass fermion ensembles analyzed and in section IV we discuss the extraction of the form factors from the two- and three-point correlators including the renormalization procedure. In section V we present the methods we employ for the identification of excited states and the extraction of the ground state matrix element, as well as the various fits we perform and the model averaging procedure. In section VI, we discuss our procedure of fitting the \(q^{2}\)-dependence of the form factors and taking the continuum limit, and in section VII, we give the results on the axial form factor, \(G_{A}(Q^{2})\), in the continuum limit. In section VIII we present the analogous analysis for the induced pseudoscalar, \(G_{P}(Q^{2})\), and pseudoscalar, \(G_{5}(Q^{2})\), form factors. We also investigate the PCAC and Goldberger-Treiman (GT) relations and evaluate the GT discrepancy. In section IX we compare with other recent lattice QCD results and in section X we summarize and provide our conclusions. In Appendix A, we provide values and parameterizations of the form factors in the continuum limit.
## II Decomposition of the Nucleon Axial-Vector and Pseudoscalar Matrix Elements
In this work, we consider only isovector quantities and neglect isospin-breaking effects due to QED interactions and \(u\)-\(d\) quark mass difference. Any corrections arising from such isospin-breaking effects are in fact immaterial as compared to our present accuracy and are expected to become relevant only at better than one percent precision. We summarize here for completeness the various relations using the same notation as that used in our previous work [19]. The isovector axial-vector operator is given by
\[A_{\mu}=\bar{u}\gamma_{\mu}\gamma_{5}u-\bar{d}\gamma_{\mu}\gamma_{5}d \tag{1}\]
where \(u\) and \(d\) are the up and down quark fields respectively. In the chiral limit, where the pion mass \(m_{\pi}=0\), the axial-vector current is conserved, namely \(\partial^{\mu}A_{\mu}=0\). For a non-zero pion mass, the spontaneous breaking of chiral symmetry relates the axial-vector current to the pion field \(\psi_{\pi}\), through the relation
\[\partial^{\mu}A_{\mu}=F_{\pi}m_{\pi}^{2}\psi_{\pi}. \tag{2}\]
We use the convention \(F_{\pi}=92.9\) MeV for the pion decay constant. In QCD, the axial Ward-Takahashi identity leads to the partial conservation of the axial-vector current (PCAC)
\[\partial^{\mu}A_{\mu}=2m_{q}P, \tag{3}\]
where \(P\) is the pseudoscalar operator and \(m_{q}=m_{u}=m_{d}\) is the light quark mass for degenerate up and down quarks. Using the PCAC relation, it then follows that the pion field can be expressed as
\[\psi_{\pi}=\frac{2m_{q}P}{F_{\pi}m_{\pi}^{2}}. \tag{4}\]
The nucleon matrix element of the axial-vector current of Eq. (1) can be written in terms of the axial, \(G_{A}(Q^{2})\), and induced pseudoscalar, \(G_{P}(Q^{2})\), form factors as
\[\langle N(p^{\prime},s^{\prime})|A_{\mu}|N(p,s)\rangle=\bar{u}_{N }(p^{\prime},s^{\prime})\] \[\left[\gamma_{\mu}G_{A}(Q^{2})-\frac{Q_{\mu}}{2m_{N}}G_{P}(Q^{2}) \right]\gamma_{5}u_{N}(p,s), \tag{5}\]
where \(u_{N}\) is the nucleon spinor with initial (final) 4-momentum \(p\) (\(p^{\prime}\)) and spin \(s\) (\(s^{\prime}\)), \(q=p^{\prime}-p\) the momentum transfer, \(q^{2}=-Q^{2}\) and \(m_{N}\) the nucleon mass.
The axial form factor is commonly parameterized as
\[G_{A}(Q^{2})=g_{A}\left(1-\frac{\langle r_{A}^{2}\rangle}{6}Q^{2}\right)+{\cal O }(Q^{4}), \tag{6}\]
where
\[g_{A} \equiv G_{A}(0) \tag{7}\] \[\langle r_{A}^{2}\rangle \equiv-\frac{6}{g_{A}}\frac{\partial G_{A}(Q^{2})}{\partial Q^{2} }\bigg{|}_{Q^{2}\to 0} \tag{8}\]
are the axial charge and radius, respectively. A quantity of interest for the induced pseudoscalar form factor is the induced pseudoscalar coupling determined at the muon capture point [40], namely
\[g_{P}^{*}\equiv\frac{m_{\mu}}{2m_{N}}G_{P}(0.88\,m_{\mu}^{2}) \tag{9}\]
with \(m_{\mu}=105.6\) MeV the muon mass.
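For orientation, the capture point and the unit conversion used for radii work out as follows (our numbers, using standard values of \(m_{\mu}\) and \(\hbar c\); not taken from the paper's tables):

```python
# Illustrative numbers (ours): the muon capture point in physical units,
# and the fm^2 <-> GeV^-2 conversion used for radii such as
# <r_A^2> = 0.339 fm^2.
m_mu = 0.1056584      # muon mass in GeV
hbar_c = 0.1973270    # GeV fm
print(0.88 * m_mu**2)          # Q^2 at the capture point, ~0.0098 GeV^2
print(0.339 / hbar_c**2)       # 0.339 fm^2 expressed in GeV^-2, ~8.7
```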
The nucleon pseudoscalar matrix element is given by
\[\langle N(p^{\prime},s^{\prime})|P|N(p,s)\rangle=G_{5}(Q^{2})\bar{u}_{N}(p^{ \prime},s^{\prime})\gamma_{5}u_{N}(p,s), \tag{10}\]
where \(P=\bar{u}\gamma_{5}u-\bar{d}\gamma_{5}d\) is the isovector pseudoscalar current. The PCAC relation at the form factors level relates the axial and induced pseudoscalar form factors to the pseudoscalar form factor via the relation
\[G_{A}(Q^{2})-\frac{Q^{2}}{4m_{N}^{2}}G_{P}(Q^{2})=\frac{m_{q}}{m_{N}}G_{5}(Q^{ 2}). \tag{11}\]
Making use of Eq. (4), one can connect the pseudoscalar form factor to the pion-nucleon form factor \(G_{\pi NN}(Q^{2})\) as follows
\[m_{q}G_{5}(Q^{2})=\frac{F_{\pi}m_{\pi}^{2}}{m_{\pi}^{2}+Q^{2}}G_{\pi NN}(Q^{2}). \tag{12}\]
Eq. (12) is written so that it illustrates the pole structure of \(G_{5}(Q^{2})\) and the preferred usage of \(m_{q}G_{5}(Q^{2})\), which is a scale-independent quantity unlike \(G_{5}(Q^{2})\). Substituting \(m_{q}G_{5}(Q^{2})\) in Eq. (11), one obtains the Goldberger-Treiman relation [41; 10]
\[G_{A}(Q^{2})-\frac{Q^{2}}{4m_{N}^{2}}G_{P}(Q^{2})=\frac{F_{\pi}m_{\pi}^{2}}{m _{N}(m_{\pi}^{2}+Q^{2})}G_{\pi NN}(Q^{2}). \tag{13}\]
The pion-nucleon form factor \(G_{\pi NN}(Q^{2})\) at the pion pole gives the pion-nucleon coupling
\[g_{\pi NN}\equiv\lim_{Q^{2}\rightarrow-m_{\pi}^{2}}G_{\pi NN}(Q^{2})\,, \tag{14}\]
which can be computed using Eq. (12) to obtain
\[\lim_{Q^{2}\rightarrow-m_{\pi}^{2}}(Q^{2}+m_{\pi}^{2})m_{q}G_{5}(Q^{2})=F_{ \pi}m_{\pi}^{2}g_{\pi NN}. \tag{15}\]
Equivalently, \(g_{\pi NN}\) can be computed using Eq. (13), where the pole on the right-hand side of Eq. (13) must be compensated by a similar pole in \(G_{P}(Q^{2})\), since \(G_{A}(-m_{\pi}^{2})\) is finite, thus obtaining
\[\lim_{Q^{2}\rightarrow-m_{\pi}^{2}}(Q^{2}+m_{\pi}^{2})G_{P}(Q^{2})=4m_{N}F_{ \pi}g_{\pi NN}. \tag{16}\]
Additionally, close to the pole, the following relation holds
\[G_{P}(Q^{2})=\frac{4m_{N}F_{\pi}}{m_{\pi}^{2}+Q^{2}}G_{\pi NN}(Q^{2})\bigg{|} _{Q^{2}\rightarrow-m_{\pi}^{2}} \tag{17}\]
due to pion pole dominance (PPD). Inserting it in Eq. (12) we obtain the relation
\[G_{P}(Q^{2})=\frac{4m_{N}}{m_{\pi}^{2}}m_{q}G_{5}(Q^{2})\bigg{|}_{Q^{2} \rightarrow-m_{\pi}^{2}}, \tag{18}\]
which relates \(G_{P}(Q^{2})\) to \(G_{5}(Q^{2})\). Substituting \(G_{P}(Q^{2})\) in Eq. (13) we obtain the well-known relation [42]
\[m_{N}G_{A}(Q^{2})=F_{\pi}G_{\pi NN}(Q^{2})\bigg{|}_{Q^{2}\rightarrow-m_{\pi}^{ 2}}, \tag{19}\]
which means that \(G_{P}(Q^{2})\) can be expressed as [43]
\[G_{P}(Q^{2})=\frac{4m_{N}^{2}}{Q^{2}+m_{\pi}^{2}}G_{A}(Q^{2})\bigg{|}_{Q^{2} \rightarrow-m_{\pi}^{2}}, \tag{20}\]
close to the pion pole.
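As a rough numerical illustration of Eqs. (9) and (20), one can estimate \(g_{P}^{*}\) from pion pole dominance alone; the following minimal sketch uses illustrative PDG-like input values, not results of this work.

```python
# Rough estimate of g_P* from pion-pole dominance: combine Eq. (20),
# G_P(Q^2) ~ 4 m_N^2 G_A / (Q^2 + m_pi^2), with the definition in Eq. (9).
# Input values are illustrative, not results of this work.
m_N, m_mu, m_pi, g_A = 0.938, 0.1056, 0.135, 1.27   # GeV
Q2_star = 0.88 * m_mu**2                             # muon capture point
g_P_star = (m_mu / (2.0 * m_N)) * 4.0 * m_N**2 * g_A / (Q2_star + m_pi**2)
print(f"g_P* ~ {g_P_star:.1f}")                      # ~ 9.0, the expected ballpark
```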
From Eq. (19), the pion-nucleon coupling can be expressed as
\[g_{\pi NN}=\frac{m_{N}}{F_{\pi}}G_{A}(-m_{\pi}^{2})=\frac{m_{N}}{F_{\pi}}g_{A} \bigg{|}_{m_{\pi}\to 0}, \tag{21}\]
where the latter holds in the chiral limit, \(m_{\pi}=0\). The deviation from Eq. (21) due to the finite pion mass is known as the Goldberger-Treiman discrepancy, namely
\[\Delta_{\rm GT}=1-\frac{g_{A}m_{N}}{g_{\pi NN}F_{\pi}} \tag{22}\]
and it is estimated to be at the 2% level [44] in chiral perturbation theory. The Goldberger-Treiman discrepancy is related to the low-energy constant \(\bar{d}_{18}\)[45] via
\[\Delta_{\rm GT}=-\frac{2\bar{d}_{18}m_{\pi}^{2}}{g_{A}}\,. \tag{23}\]
Given the above relations, we define the following ratios to test whether our lattice results satisfy them:
\[r_{\rm PCAC}(Q^{2}) =\frac{\frac{m_{q}}{m_{N}}G_{5}(Q^{2})+\frac{Q^{2}}{4m_{N}^{2}} G_{P}(Q^{2})}{G_{A}(Q^{2})}\,, \tag{24}\] \[r_{\rm PPD,1}(Q^{2}) =\frac{m_{\pi}^{2}+Q^{2}}{4m_{N}^{2}}\frac{G_{P}(Q^{2})}{G_{A}(Q^{2 })}\,,\] (25) \[r_{\rm PPD,2}(Q^{2}) =\frac{4m_{N}}{m_{\pi}^{2}}\frac{m_{q}G_{5}(Q^{2})}{G_{P}(Q^{2})}\,. \tag{26}\]
The first is based on the PCAC relation in Eq. (11). Since PCAC is an exact operator relation, it provides a stringent test of our analysis on the form factor level. The second and third relations assume pion pole dominance and use Eqs. (20) and (18), respectively, and they are only expected to be unity near the pion pole. We note that we can use the PCAC relation in Eq. (11) to write
\[r_{\rm PPD,2}(Q^{2})=\frac{4m_{N}^{2}}{m_{\pi}^{2}}\frac{G_{A}(Q^{2})}{G_{P}(Q^ {2})}-\frac{Q^{2}}{m_{\pi}^{2}}\,. \tag{27}\]
Using the parameterization of \(G_{A}(Q^{2})\) in Eq. (6) to evaluate \(G_{A}(-m_{\pi}^{2})\) we obtain that near the pion pole the ratio
\[\frac{4m_{N}^{2}}{m_{\pi}^{2}}\frac{G_{A}(Q^{2})}{G_{P}(Q^{2})} = \frac{g_{A}m_{N}}{g_{\pi NN}F_{\pi}}\left(1+\frac{\langle r_{A}^{ 2}\rangle m_{\pi}^{2}}{6}\right)\left(1+\frac{Q^{2}}{m_{\pi}^{2}}\right)\] \[= \left(1-\Delta_{\rm GT}+\frac{\langle r_{A}^{2}\rangle m_{\pi}^{ 2}}{6}\right)\left(1+\frac{Q^{2}}{m_{\pi}^{2}}\right),\]
at leading order in \(m_{\pi}^{2}\), \(\Delta_{\rm GT}\) and \(Q^{2}\). Using the latter in Eq. (27) we obtain [46]
\[r_{\rm PPD,2}(Q^{2})=1+\left(\frac{\langle r_{A}^{2}\rangle m_{\pi}^{2}}{6}- \Delta_{\rm GT}\right)\left(1+\frac{Q^{2}}{m_{\pi}^{2}}\right) \tag{28}\]
and therefore a deviation from unity in \(r_{\rm PPD,2}(Q^{2})\) can be related to the Goldberger-Treiman discrepancy.
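For concreteness, the three ratios can be evaluated directly from arrays of renormalized form factors; a minimal NumPy sketch (all function and variable names are ours, purely illustrative):

```python
import numpy as np

def pcac_ppd_ratios(Q2, GA, GP, mqG5, m_N, m_pi):
    """Ratios of Eqs. (24)-(26) from form factors sampled at the same Q^2 values.

    Q2, GA, GP, mqG5: NumPy arrays; mqG5 is the scale-independent product
    m_q * G_5(Q^2) used throughout the text. m_N, m_pi: masses in the same
    units as sqrt(Q2).
    """
    r_pcac = (mqG5 / m_N + Q2 / (4.0 * m_N**2) * GP) / GA        # Eq. (24)
    r_ppd1 = (m_pi**2 + Q2) / (4.0 * m_N**2) * GP / GA           # Eq. (25)
    r_ppd2 = 4.0 * m_N / m_pi**2 * mqG5 / GP                     # Eq. (26)
    return r_pcac, r_ppd1, r_ppd2
```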
## III Gauge Ensembles and Statistics
We employ the twisted-mass fermion discretization scheme [47; 48], which provides automatic \({\cal O}(a)\)-improvement [49]. The bare light quark parameter \(\mu_{l}\) is tuned to reproduce the isosymmetric pion mass \(m_{\pi}=135\) MeV [50; 51], while the heavy quark parameters \(\mu_{s}\) and \(\mu_{c}\) are tuned using the kaon mass, the D-meson mass, and an appropriately defined ratio of the kaon and D-meson masses, following the procedure of Refs. [50; 51]. The action also includes a clover term that reduces isospin-breaking effects. The values of the parameters of the ensembles analyzed in this work can be found in Table 1. The lattice spacings and pion masses are taken from Ref. [52]. The values of the lattice spacing are determined both in the meson and nucleon sectors. We quote the ones from the meson sector, which are compatible with the values determined from the nucleon mass in Ref. [53].
The nucleon matrix elements of the axial-vector and pseudoscalar operators are obtained via appropriate combinations of three- and two-point nucleon correlation functions, as will be explained in more detail in the following section. In Table 2, we give the statistics used for computing the two- and three-point functions in terms of the number of configurations analyzed and the number of point sources employed per configuration. The statistics of the three-point functions are increased at increasing source-sink separation such that the errors are kept approximately constant among all the time separations. For the twisted mass formulation employed here, the disconnected quark loop contributions are of order \(a^{2}\) and, thus, vanish in the continuum limit [47]. For this reason, we can safely neglect them in the present work.
## IV Extraction of Nucleon Matrix Elements
To evaluate the nucleon matrix elements of the operators given in Eqs. (5) and (10), we compute three- and two-point correlation functions. The two-point function is given by
\[C(\Gamma_{0},\vec{p};t_{s},t_{0}) =\!\!\sum_{\vec{x}_{s}}\!\!e^{-i(\vec{x}_{s}-\vec{x}_{0})\cdot \vec{p}}\,\times \tag{29}\] \[\mbox{Tr}\left[\Gamma_{0}\langle{\cal J}_{N}(t_{s},\vec{x}_{s}) \vec{\cal J}_{N}(t_{0},\vec{x}_{0})\rangle\right],\]
where \(x_{0}\) and \(x_{s}\) are the source and sink positions on the lattice, and \(\Gamma_{0}\) is the unpolarized positive parity projector \(\Gamma_{0}=\frac{1}{2}(1+\gamma_{0})\). States with the quantum numbers of the
\begin{table}
\begin{tabular}{|r|r|r|r|r|r|r|r|r|} \hline \multicolumn{3}{|c|}{cB211.072.64} & \multicolumn{3}{|c|}{cC211.060.80} & \multicolumn{3}{|c|}{cD211.054.96} \\ \hline \multicolumn{3}{|c|}{750 configurations} & \multicolumn{3}{|c|}{400 configurations} & \multicolumn{3}{|c|}{500 configurations} \\ \hline \(t_{s}/a\) & \(t_{s}[\mbox{fm}]\) & \(n_{src}\) & \(t_{s}/a\) & \(t_{s}[\mbox{fm}]\) & \(n_{src}\) & \(t_{s}/a\) & \(t_{s}[\mbox{fm}]\) & \(n_{src}\) \\ \hline
8 & 0.64 & 1 & 6 & 0.41 & 1 & 8 & 0.46 & 1 \\
10 & 0.80 & 2 & 8 & 0.55 & 2 & 10 & 0.57 & 2 \\
12 & 0.96 & 5 & 10 & 0.69 & 4 & 12 & 0.68 & 4 \\
14 & 1.12 & 10 & 12 & 0.82 & 10 & 14 & 0.80 & 8 \\
16 & 1.28 & 32 & 14 & 0.96 & 22 & 16 & 0.91 & 16 \\
18 & 1.44 & 112 & 16 & 1.10 & 48 & 18 & 1.03 & 32 \\
20 & 1.60 & 128 & 18 & 1.24 & 45 & 20 & 1.14 & 64 \\ \hline Nucleon 2pt & 477 & 20 & 1.37 & 116 & 22 & 1.25 & 16 \\ & & 22 & 1.51 & 246 & 24 & 1.37 & 32 \\ \cline{2-6} & & \multicolumn{2}{|c|}{Nucleon 2pt} & \multicolumn{2}{|c|}{26} & \multicolumn{2}{|c|}{1.48} & \multicolumn{2}{|c|}{64} \\ \cline{2-6} & & \multicolumn{2}{|c|}{Nucleon 2pt} & \multicolumn{2}{|c|}{480} \\ \hline \end{tabular}
\end{table}
Table 2: Statistics used in the computation of the isovector matrix elements for the cB211.072.64 (left table) the cC211.060.80 (middle table) and the cD211.054.96 (right table) ensemble. In each table, we provide the sink-source separations used in lattice units (first column) and physical units (second column) and the number of source positions per configuration (third column). For each ensemble, the bottom row indicates the number of source positions used for the two-point functions.
\begin{table}
\begin{tabular}{|l|r|r|r|r|r|} \hline \hline Ensemble & \(V/a^{4}\) & \(\beta\) & \(a\) [fm] & \(m_{\pi}\) [MeV] & \(m_{\pi}L\) \\ \hline cB211.072.64 & \(64^{3}\times 128\) & 1.778 & 0.07957(13) & 140.2(2) & 3.62 \\ cC211.060.80 & \(80^{3}\times 160\) & 1.836 & 0.06821(13) & 136.7(2) & 3.78 \\ cD211.054.96 & \(96^{3}\times 192\) & 1.900 & 0.05692(12) & 140.8(2) & 3.90 \\ \hline \end{tabular}
\end{table}
Table 1: Parameters for the \(N_{f}=2+1+1\) ensembles analyzed in this work. In the first column, we give the name of the ensemble, in the second the lattice volume, in the third \(\beta=6/g^{2}\) with \(g\) the bare coupling constant, in the fourth the lattice spacing, in the fifth the pion mass, and in the sixth the value of \(m_{\pi}L\). Lattice spacings and pion masses are taken from Ref. [52].
nucleon are created and destroyed by the interpolating field
\[\mathcal{J}_{N}(t,\vec{x})=\epsilon^{abc}u^{a}(x)\left[u^{bT}(x)\mathcal{C}\gamma_{ 5}d^{c}(x)\right], \tag{30}\]
where \(\mathcal{C}\) is the charge conjugation matrix. By inserting the unit operator in Eq. (29), in the form of a sum over the states of the QCD Hamiltonian, only states with the quantum numbers of the nucleon survive. The overlaps between the interpolating field and the nucleon state \(|N\rangle\), such as \(\langle\Omega|\mathcal{J}_{N}|N\rangle\), need to be canceled to access the matrix element. It is desirable to increase the overlap with the nucleon state and reduce the overlap with excited states, so that the ground state dominates at the smallest possible Euclidean time separations. This is because the signal-to-noise ratio decays exponentially with the Euclidean time evolution. To accomplish ground state dominance, we apply Gaussian smearing [54; 55] to the quark fields entering the interpolating field
\[\tilde{q}(\vec{x},t)=\sum_{\vec{y}}[\mathbf{1}+a_{G}H(\vec{x},\vec{y};U(t))]^{ N_{G}}q(\vec{y},t), \tag{31}\]
where the hopping matrix is given by
\[H(\vec{x},\vec{y};U(t))=\sum_{i=1}^{3}\left[U_{i}(x)\delta_{x,y-\hat{i}}+U_{i} ^{\dagger}(x-\hat{i})\delta_{x,y+\hat{i}}\right]. \tag{32}\]
The parameters \(a_{G}\) and \(N_{G}\) are tuned [56; 57] in order to approximately give a smearing radius for the nucleon of 0.5 fm. For the links entering the hopping matrix, we apply APE smearing [58] to reduce statistical errors due to ultraviolet fluctuations. In Table 3, we give the APE and Gaussian smearing parameters used for each ensemble.
For the construction of the three-point correlation function, the current is inserted at time slice \(t_{\text{ins}}\) between the time of the creation and annihilation of the states with the nucleon quantum numbers, \(t_{0}\) and \(t_{s}\), respectively. The expression for the three-point function is given by
\[C_{\mu}(\Gamma_{k},\vec{q},\vec{p}^{\,\prime};t_{s},t_{\text{ins}},t_{0}) =\!\!\!\sum_{\vec{x}_{\text{ins}},\vec{x}_{s}}\!\!\!e^{i(\vec{x}_{\text{ins}}-\vec{x}_{0})\cdot\vec{q}}e^{-i(\vec{x}_{s}-\vec{x}_{0})\cdot\vec{p}^{\,\prime}}\times\] \[\text{Tr}\left[\Gamma_{k}\langle\mathcal{J}_{N}(t_{s},\vec{x}_{s})j_{\mu}(t_{\text{ins}},\vec{x}_{\text{ins}})\bar{\mathcal{J}}_{N}(t_{0},\vec{x}_{0})\rangle\right], \tag{33}\]
where \(\Gamma_{k}=i\Gamma_{0}\gamma_{5}\gamma_{k}\) and \(j_{\mu}\) is either the axial-vector current \(A_{\mu}\) needed for computing the matrix elements in Eq. (5) or \(P\) for computing the pseudoscalar form factor in Eq. (10). The Euclidean momentum transfer squared is given by \(Q^{2}=-q^{2}=-(p^{\prime}-p)^{2}\). The connected three-point functions are computed using sequential propagators inverted through the sink, i.e. using the so-called _fixed-sink_ method. This requires new sequential inversions for each sink momentum. Therefore, we restrict to \(\vec{p}^{\,\prime}=0\), meaning the source momentum \(\vec{p}\) is determined via momentum conservation by the momentum transfer as \(\vec{p}=-\vec{q}\) and in the following we drop the usage of \(\vec{p}^{\,\prime}\). Without loss of generality, we also take, in the following, \(t_{s}\) and \(t_{\text{ins}}\) relative to the source time \(t_{0}\), or equivalently \(t_{0}\) is set to zero.
### Excited states contamination and large time limit
The interpolating field in Eq. (30) creates a tower of states with the quantum numbers of the nucleon. The spectral decomposition of the two- and three-point functions are given respectively by
\[C(\Gamma_{0},\vec{p},t_{s}) =\sum_{i=0}^{N_{st}-1}c_{i}(\vec{p})e^{-E_{i}(\vec{p})t_{s}}\quad \text{and} \tag{34}\] \[C_{\mu}(\Gamma_{k},\vec{q},t_{s},t_{\text{ins}}) =\sum_{i,j=0}^{N_{st}-1}\mathcal{A}_{\mu}^{i,j}(\Gamma_{k},\vec{q} )e^{-E_{i}(\vec{0})(t_{s}-t_{\text{ins}})-E_{j}(\vec{q})t_{\text{ins}}}. \tag{35}\]
The coefficients of the exponential terms in the two-point function of Eq. (34) are overlap terms given by
\[c_{i}(\vec{p})=\text{Tr}[\Gamma_{0}\langle\Omega|\mathcal{J}_{N}|N_{i}(\vec{p} )\rangle\langle N_{i}(\vec{p})|\bar{\mathcal{J}}_{N}|\Omega\rangle], \tag{36}\]
where spin indices are suppressed. The \(i\)-index denotes the \(i^{\text{th}}\) state with the quantum numbers of the nucleon that may also include multi-particle states. The coefficients \(\mathcal{A}^{i,j}\) appearing in the three-point function of Eq. (35) are given by
\[\mathcal{A}_{\mu}^{i,j}(\Gamma_{k},\vec{q}) =\text{Tr}[\Gamma_{k}\langle\Omega|\mathcal{J}_{N}|N_{i}(\vec{0} )\rangle\langle N_{i}(\vec{0})|A_{\mu}|N_{j}(\vec{p})\rangle\] \[\langle N_{j}(\vec{p})|\bar{\mathcal{J}}_{N}|\Omega\rangle], \tag{37}\]
where \(\langle N_{i}(\vec{0})|A_{\mu}|N_{j}(\vec{p})\rangle\) is the matrix element between \(i^{\text{th}}\) and \(j^{\text{th}}\) states. In practice, one truncates the sums in Eqs. (34) and (35) up to some state \(N_{st}\). Finally, \(E_{i}(\vec{p})\) is the energy of the \(i^{\text{th}}\) state carrying momentum \(\vec{p}\). For the ground state, we use the dispersion relation to obtain \(E_{0}(\vec{p})\), namely
\[E_{0}(\vec{p})=E_{N}(\vec{p})=\sqrt{m_{N}^{2}+\vec{p}^{\,2}}, \tag{38}\]
where the nucleon mass \(m_{N}\) is determined from the zero momentum projected two-point function.
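For illustration, the ground-state energies entering Eq. (38) follow from the quantized lattice momenta; a minimal sketch in lattice units (the numeric value of \(am_{N}\) below is illustrative only):

```python
import math

def nucleon_energy(am_N, n_vec, L):
    """Continuum dispersion relation of Eq. (38) in lattice units:
    a*E_0 = sqrt((a*m_N)^2 + (a*p)^2), with a*p_k = 2*pi*n_k/L for integers n_k."""
    ap2 = sum((2.0 * math.pi * n / L) ** 2 for n in n_vec)
    return math.sqrt(am_N**2 + ap2)

# Lowest non-zero momentum on a spatial lattice with L = 64:
# nucleon_energy(0.38, (1, 0, 0), 64)   # a*m_N ~ 0.38 is only illustrative
```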
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Ensemble & \(n_{G}\) & \(\alpha_{G}\) & \(n_{\text{APE}}\) & \(\alpha_{\text{APE}}\) & \(\sqrt{\langle r^{2}\rangle_{\psi}}\) [fm] \\ \hline cB211.072.64 & 125 & 0.2 & 50 & 0.5 & 0.461(2) \\ cC211.060.80 & 140 & 1.0 & 60 & 0.5 & 0.516(2) \\ cD211.054.96 & 200 & 1.0 & 60 & 0.5 & 0.502(3) \\ \hline \hline \end{tabular}
\end{table}
Table 3: The number of Gaussian smearing iterations \(n_{G}\) and the Gaussian smearing coefficient \(\alpha_{G}\) used for each ensemble. We also provide the number of APE-smearing iterations \(n_{\text{APE}}\) and parameter \(\alpha_{\text{APE}}\) applied to the links that enter the Gaussian smearing hopping matrix. The resulting source r.m.s. obtained is given in the last column, where the error is due to the uncertainty in the lattice spacing.
To extract the nucleon matrix element that we are interested in, any contribution from nucleon excited states and/or multi-particle states has to be sufficiently suppressed. How fast ground state dominance is achieved depends on the smearing procedure applied to the interpolating fields and on the type of current entering the three-point function. Since the noise increases exponentially with increasing \(t_{s}\), establishing convergence to the asymptotic ground-state matrix element directly from the data is very difficult. For this reason, we employ a multi-state analysis by fitting the explicit contribution of the first \(N_{st}-1\) excited states. Our fitting strategy is described in Sec. V and aims at determining reliably the values of \(c_{0}\) and \(\mathcal{A}_{\mu}^{0,0}\).
In order to cancel unknown overlaps of the interpolating field in Eq. (30) with the nucleon state, one commonly constructs an appropriate ratio of three- to a combination of two-point functions [59; 60; 61; 62],
\[R_{\mu}(\Gamma_{k},\vec{q};t_{s},t_{\rm ins})=\frac{C_{\mu}( \Gamma_{k},\vec{q};t_{s},t_{\rm ins})}{C(\Gamma_{0},\vec{0};t_{s})}\times\] \[\sqrt{\frac{C(\Gamma_{0},\vec{q};t_{s}-t_{\rm ins})C(\Gamma_{0}, \vec{0};t_{\rm ins})C(\Gamma_{0},\vec{0};t_{s})}{C(\Gamma_{0},\vec{0};t_{s}-t_{\rm ins})C(\Gamma_{0},\vec{q};t_{\rm ins})C(\Gamma_{0},\vec{q};t_{s})}}. \tag{39}\]
The ratio in Eq. (39) is constructed such that in the limit of large time separations \((t_{s}-t_{\rm ins})\gg a\) and \(t_{\rm ins}\gg a\), it converges to the nucleon ground state matrix element, namely
\[R_{\mu}(\Gamma_{k};\vec{q};t_{s},t_{\rm ins})\xrightarrow[t_{\rm ins}\gg a]{ t_{s}-t_{\rm ins}\gg a}\Pi_{\mu}(\Gamma_{k};\vec{q})\,. \tag{40}\]
By substituting Eqs. (34) and (35) into Eq. (39) we obtain
\[\Pi_{\mu}(\Gamma_{k};\vec{q})=\frac{\mathcal{A}_{\mu}^{0,0}(\Gamma_{k},\vec{ q})}{\sqrt{c_{0}(\vec{0})c_{0}(\vec{q})}}\,. \tag{41}\]
In this work, we also consider the ratio
\[R^{\prime}_{\mu}(\Gamma_{k};\vec{q};t_{s},t_{\rm ins})=\frac{C_{\mu}(\Gamma_{ k},\vec{q};t_{s},t_{\rm ins}\;)}{\sqrt{C(\Gamma_{0},\vec{0};t_{s})C(\Gamma_{0}, \vec{q};t_{s})}}, \tag{42}\]
which has the same large time-separation limit as the ratio of Eq. (39) when \(t_{\rm ins}=t_{s}/2\) while avoiding potential excited state contaminations in the two-point functions for small values of \(t_{\rm ins}\).
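A minimal sketch of the two ratios, assuming the correlators are available as (already averaged) NumPy arrays indexed by Euclidean time (all names are ours, not from any analysis code used in this work):

```python
import numpy as np

def ratio_R(C3, C2_q, C2_0, ts, tins):
    """Ratio of Eq. (39). C3[ts][tins]: three-point function; C2_q, C2_0:
    two-point functions at momentum q and at zero momentum, indexed by time."""
    fac = np.sqrt((C2_q[ts - tins] * C2_0[tins] * C2_0[ts]) /
                  (C2_0[ts - tins] * C2_q[tins] * C2_q[ts]))
    return C3[ts][tins] / C2_0[ts] * fac

def ratio_Rprime(C3, C2_q, C2_0, ts, tins):
    """Ratio of Eq. (42), with the same large-time limit at tins = ts/2."""
    return C3[ts][tins] / np.sqrt(C2_0[ts] * C2_q[ts])
```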
### Analysis of nucleon correlators
The ground state matrix elements, \(\Pi_{\mu}\), are decomposed into form factors. In the following, we provide their decomposition in Euclidean space and for \(\vec{p}\,^{\prime}=0\). In the case of the matrix element of the axial-vector current we have
\[\Pi_{i}(\Gamma_{k},\vec{q}) = \frac{i\mathcal{K}}{4m_{N}}\left[\frac{q_{k}q_{i}}{2m_{N}}G_{P}(Q ^{2})-\delta_{i,k}(m_{N}+E_{N})G_{A}(Q^{2})\right] \tag{43}\]
for the case that \(\mu=i\). For the temporal direction, the corresponding expression is
\[\Pi_{0}(\Gamma_{k},\vec{q}) = -\frac{q_{k}\mathcal{K}}{2m_{N}}\left[G_{A}(Q^{2})+\frac{(m_{N}-E _{N})}{2m_{N}}G_{P}(Q^{2})\right]. \tag{44}\]
One can then form a \(2\times 2\) matrix of kinematical coefficients multiplying \(G_{A}(Q^{2})\) and \(G_{P}(Q^{2})\), given by
\[\mathcal{G}_{\mu}(\Gamma_{k};\vec{q})=\begin{pmatrix}-q_{k}\frac{\mathcal{K}} {2m_{N}}&-q_{k}\frac{\mathcal{K}(m_{N}-E_{N})}{4m_{N}}\\ -i\delta_{i,k}\frac{\mathcal{K}(m_{N}+E_{N})}{4m_{N}}&iq_{k}q_{i}\frac{ \mathcal{K}}{8m_{N}^{2}}\end{pmatrix}, \tag{45}\]
where the first row of the matrix is for \(\mu=0\) and the second row for \(\mu=i\), while the first column gives the kinematic coefficients multiplying \(G_{A}(Q^{2})\) and the second column those multiplying \(G_{P}(Q^{2})\). For the case of the matrix element of the pseudoscalar current we have
\[\Pi_{5}(\Gamma_{k},\vec{q})=-\frac{iq_{k}\mathcal{K}}{2m_{N}}G_{5}. \tag{46}\]
In the above expressions, \(E_{N}\) is the energy of the nucleon and \(\mathcal{K}\) is a kinematic factor given by
\[\mathcal{K}=\sqrt{\frac{2m_{N}^{2}}{E_{N}(E_{N}+m_{N})}}. \tag{47}\]
Given the above momentum-dependence of the decomposition, we can average over all momentum components for a given \(Q^{2}\) value, namely
\[\overline{\Pi}_{0}(Q^{2}) =-\overline{\sum_{k\neq 0}^{k}}\,\frac{1}{q_{k}}\,\Pi_{0}(\Gamma_{k},\vec{q})\] \[=\frac{\mathcal{K}}{2m_{N}}\left(G_{A}(Q^{2})+\frac{m_{N}-E_{N}}{ 2m_{N}}G_{P}(Q^{2})\right) \tag{48}\] \[\overline{\Pi}_{AP}(Q^{2},p^{2}) =i\,\overline{\sum_{k\neq 0}^{k}}\,\Pi_{k}(\Gamma_{k},\vec{q})\] \[=\frac{\mathcal{K}}{4m_{N}}\left((E_{N}+m_{N})G_{A}(Q^{2})-\frac{ p^{2}}{2m_{N}}G_{P}(Q^{2})\right)\] (49) \[\overline{\Pi}_{P}(Q^{2}) =-i\,\overline{\sum_{\begin{subarray}{c}i\neq k\\ k,i\neq 0\end{subarray}}^{i,k}}\,\frac{1}{q_{k}q_{i}}\Pi_{i}(\Gamma_{k},\vec{q})= \frac{\mathcal{K}}{8m_{N}^{2}}G_{P}(Q^{2})\] (50) \[\overline{\Pi}_{5}(Q^{2}) =i\,\overline{\sum_{k\neq 0}^{k}}\,\frac{1}{q_{k}}\Pi_{5}(\Gamma_{k},\vec{q})= \frac{\mathcal{K}}{2m_{N}}G_{5}(Q^{2})\,, \tag{51}\]
where \(p^{2}=\sum_{k}q_{k}^{2}\), the symbol \(\overline{\Sigma}\) stands for the average, and we indicate above the symbol the indices of the sum, which are always spatial, and below the symbol the conditions on the indices for the terms included
in the sum. We note that, while \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\) can be extracted directly from \(\overline{\Pi}_{P}\) and \(\overline{\Pi}_{5}\), respectively, \(G_{A}(Q^{2})\) is always coupled to \(G_{P}(Q^{2})\) in \(\overline{\Pi}_{0}\) and \(\overline{\Pi}_{AP}\) for \(Q^{2}>0\). On the other hand, \(G_{A}(Q^{2})\) is the only form factor accessible at zero momentum transfer, while all others need to be extrapolated to \(Q^{2}=0\). Our strategy for extracting the three form factors is to perform a combined fit of the \(\overline{\Pi}\)s at fixed \(Q^{2}\) and express the ground state matrix elements in Eq. (41) in terms of the above linear combinations of form factors.
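Although the extraction proceeds through a combined fit of all \(\overline{\Pi}\)s, the kinematic content of Eqs. (48) and (49) can be illustrated by inverting the corresponding \(2\times 2\) system at a single non-zero \(Q^{2}\); a minimal sketch (not the fitting code actually used in this work):

```python
import numpy as np

def solve_GA_GP(Pi0, PiAP, p2, m_N, E_N):
    """Invert Eqs. (48)-(49) for G_A and G_P at one Q^2 > 0.
    K is the kinematic factor of Eq. (47); at Q^2 = 0 the system is
    singular and only G_A survives."""
    K = np.sqrt(2.0 * m_N**2 / (E_N * (E_N + m_N)))
    A = np.array([[K / (2.0 * m_N),              K * (m_N - E_N) / (4.0 * m_N**2)],
                  [K * (E_N + m_N) / (4.0 * m_N), -K * p2 / (8.0 * m_N**2)]])
    GA, GP = np.linalg.solve(A, np.array([Pi0, PiAP]))
    return GA, GP
```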
### Renormalization
Matrix elements computed in lattice QCD need to be renormalized in order to be related to physical observables. In the twisted mass fermion formulation, we need the renormalization function \(Z_{S}\) for the renormalization of the pseudoscalar form factor \(G_{5}(Q^{2})\), \(Z_{P}\) for the renormalization of the bare quark mass, and \(Z_{A}\) for the renormalization of the axial-vector current. We note that we do not use \(Z_{S}\) directly, since \(G_{5}(Q^{2})\) is evaluated in the scale-independent and ultra-violet finite combination \(m_{q}G_{5}(Q^{2})\). In Figs. 7 and 10, where \(G_{5}(Q^{2})\) is shown without the factor of \(m_{q}\), this is done only for visualization purposes. In those cases, we use \(Z_{S}\) computed as \(Z_{P}/(Z_{P}/Z_{S})\), with \(Z_{P}\) computed in the RI\({}^{\prime}\) scheme. This is because a direct evaluation of \(Z_{S}\) in RI\({}^{\prime}\) is more difficult than for \(Z_{P}\), due to the increased hadronic contamination effects observed in the case of \(Z_{S}\).
We use methods based on Ward identities or on the universality of renormalized hadronic matrix elements, which are often referred to as hadronic methods, in order to compute ultra-violet finite renormalization factors, such as \(Z_{A}\) and \(Z_{P}/Z_{S}\). Hadronic methods are fully nonperturbative and require no gauge fixing, unlike the RI\({}^{\prime}\) scheme. For more details, we refer to Appendix B of Ref. [52], where this approach is used to extract the renormalization constants for the ensembles employed here. This approach is preferred to the usual RI\({}^{\prime}\) scheme because it provides much more accurate results on \(Z_{A}\) and \(Z_{P}/Z_{S}\). The RI\({}^{\prime}\) scheme is employed for the determination of \(Z_{P}\), as discussed in Ref. [53]. For completeness, the values of the renormalization constants used in this work are collected in Table 4.
In what follows we will denote by \(G_{A}(Q^{2})\) and \(G_{P}(Q^{2})\) the renormalized form factors obtained by multiplying the lattice three-point functions of the axial-vector current by \(Z_{A}\). For \(G_{5}(Q^{2})\) we consider the combination \(m_{q}G_{5}(Q^{2})\) that renormalizes with \(\mu Z_{S}/Z_{P}\), involving only the ratio \(Z_{S}/Z_{P}\) that is determined accurately from hadronic matrix elements. The light bare quark mass \(\mu\) takes values \(\mu=0.00072\), \(0.00060\), and \(0.00054\) for the cB211.72.64, cC211.60.80, and cD211.54.96 ensembles, respectively.
## V Extraction of Form Factors
As described in Sec. IV.2, bare form factors at each value of \(Q^{2}\) are extracted from combined fits of two- and three-point functions, after we construct the averages given in Eqs. (48)-(51). Two-point functions are available for all source-sink separations, \(t_{s}\), while three-point functions are measured at selected values of \(t_{s}\), listed in Table 2, and available for all \(t_{\rm ins}\in[0,t_{s}]\). Since the optimal fit range in \(t_{s}\) and \(t_{\rm ins}\) may vary for each case, as well as the number of states needed to describe the correlation functions, we explore a wide parameter space in the fitting ranges and number of excited states included. Results are then combined using model averaging as described below. Specifically, at each value of \(Q^{2}\), we use the following fitting approach:
* \(N_{\rm st}\): We perform either two- or three-state fits of all quantities, truncating the sums in Eqs. (34) and (35) at \(i_{\rm max}=N_{st}-1\) with \(N_{st}\in\{2,3\}\).
* \(t_{\rm 2pt,\,min}\): We vary \(t_{\rm 2pt,\,min}\), the lower bound in the fit of the two-point functions. The upper bound is taken to be the source-sink separation where the correlator becomes compatible with zero within \(5\sigma\). This upper bound varies from approximately \(2.5\) fm at \(Q^{2}=0\) to \(1.5\) fm at \(Q^{2}=1\) GeV\({}^{2}\).
* \(t_{\rm 3pt,\,min}\): We vary \(t_{\rm 3pt,\,min}\), the smallest value of \(t_{s}\) used for fitting the three-point functions. We fit to all \(t_{s}\geq t_{\rm 3pt,\,min}\) available.
* \(t_{\rm ins,\,0}\) and \(t_{\rm ins,\,S}\): We vary the number of insertion time slices from the source and the sink kept in the fit, using \(t_{\rm ins}\in[t_{\rm ins,\,0},t_{s}-t_{\rm ins,\,S}]\). We only allow for \(t_{\rm ins,\,0}\geq t_{\rm ins,\,S}\), since the energy gap at the source, where we have momentum, is expected to be smaller than the energy gap at the sink, where there is no momentum. At \(Q^{2}=0\) we fix \(t_{\rm ins,\,0}=t_{\rm ins,\,S}\).
* \(N_{O}\): We vary the number of exponential terms when we perform three-state fits to the three-point functions, since certain overlaps may be sufficiently suppressed. The suppression rate is ordered according to the energy gaps of the first and second excited state energies. Beyond the ground state \(\mathcal{A}^{0,0}_{\mu}\), the suppression increases for the terms containing the overlaps \(\mathcal{A}^{1,0}_{\mu}\), \(\mathcal{A}^{0,1}_{\mu}\), \(\mathcal{A}^{1,1}_{\mu}\), \(\mathcal{A}^{2,0}_{\mu}\), \(\mathcal{A}^{0,2}_{\mu}\), \(\mathcal{A}^{2,1}_{\mu}\), \(\mathcal{A}^{1,2}_{\mu}\), \(\mathcal{A}^{2,2}_{\mu}\). We use either the first four, six, or all nine of these terms, namely, we take \(N_{O}\in\{4,6,9\}\). \(N_{O}=4\) corresponds to a full two-state fit.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Ensemble & \(Z_{A}\) & \(Z_{P}/Z_{S}\) & \(Z_{P}\) [\(\overline{\rm MS}\) 2 GeV] \\ \hline cB211.072.64 & 0.74294(24) & 0.79018(35) & 0.4746(49) \\ cC211.060.80 & 0.75830(16) & 0.82308(23) & 0.4771(49) \\ cD211.054.96 & 0.77395(12) & 0.85095(18) & 0.4871(49) \\ \hline \end{tabular}
\end{table}
Table 4: Values of the scheme-independent renormalization constants \(Z_{A}\) and \(Z_{P}/Z_{S}\) taken from Ref. [52] and of the scheme-dependent \(Z_{P}\) given in \(\overline{\rm MS}\) at \(\mu_{\rm ref}=2\) GeV computed in Ref. [53].
In summary, \(N_{st}\) and \(N_{O}\) affect the number of parameters in the fit, while \(t_{\text{2pt, min}}\), \(t_{\text{3pt, min}}\), \(t_{\text{ins, 0}}\), and \(t_{\text{ins, S}}\) affect the number of data used in the fit. We fit together the data for \(Q^{2}=0\) and for the lowest non-zero value of \(Q^{2}\), obtained when the momentum transfer in one spatial direction is \(2\pi/L\). After performing the model averaging for the zero and the lowest non-zero value of \(Q^{2}\), we extract \(m_{N}\) and use it as a prior to fit each larger \(Q^{2}\) value independently.
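The variations above define a grid of fit models over which the model average of the next subsection is taken; a minimal sketch of how such a grid could be enumerated (the ranges shown are illustrative, not the exact grids used in this work):

```python
import itertools

N_st_choices  = [2, 3]
t2pt_min_list = [2, 3, 4, 5]        # lower bound of the two-point fit (lattice units)
t3pt_min_list = [8, 10, 12]         # smallest source-sink separation kept
tins_pairs    = [(1, 1), (2, 1), (2, 2), (3, 2)]  # (t_ins,0, t_ins,S), t_ins,0 >= t_ins,S
N_O_choices   = [4, 6, 9]           # number of overlap terms in three-state fits

fits = [dict(N_st=n, t2pt_min=t2, t3pt_min=t3, tins0=t0, tinsS=tS, N_O=no)
        for n, t2, t3, (t0, tS), no in itertools.product(
            N_st_choices, t2pt_min_list, t3pt_min_list, tins_pairs, N_O_choices)
        if not (n == 2 and no != 4)]  # N_O only varies in three-state fits
```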
### Model average
Results obtained using the different fit approaches are averaged using the Akaike Information Criterion (AIC); we refer to Refs. [63; 64] for a detailed introduction to the method. In the following, we summarize the practical aspects of our implementation. To each fit \(i\), we assign a weight \(w_{i}\), defined as
\[\log(w_{i})=-\frac{\chi_{i}^{2}}{2}+N_{\text{dof},i}, \tag{52}\]
where \(N_{\text{dof}}=N_{\text{data}}-N_{\text{params}}\) is the number of degrees of freedom, given as the difference between the number of data, \(N_{\text{data}}\), and the number of parameters, \(N_{\text{params}}\), used in the corresponding fit. We use correlated fits and, therefore, the \(\chi^{2}\) is defined as
\[\chi_{i}^{2}=\vec{r}_{i}^{\,T}C_{i}^{-1}\vec{r}_{i}\quad\text{with}\quad\vec{r }_{i}=\vec{y}_{i}-f_{i}(\vec{x}_{i}), \tag{53}\]
where, for each fit \(i\), \(C_{i}\) is the covariance matrix of the selected data \(\vec{y}_{i}\), and \(\vec{r}_{i}\) is the residual computed from the selected fit function \(f_{i}\) evaluated on the selected data range \(\vec{x}_{i}\). From the weights in Eq. (52), we define the probability
\[p_{i}=\frac{w_{i}}{Z}\quad\text{with}\quad Z=\sum_{i}w_{i}. \tag{54}\]
The model-averaged value of an observable \(\mathcal{O}\) is given as
\[\begin{split}\langle\mathcal{O}\rangle=\text{mean}\,(\text{error })\quad\text{with}\quad\text{mean}=\sum_{i}\bar{\mathcal{O}}_{i}p_{i}\\ \text{and}\quad\text{error}^{2}=\sum_{i}(\sigma_{i}^{2}+\bar{ \mathcal{O}}_{i}^{2})p_{i}-\text{mean}^{2}\end{split} \tag{55}\]
where \(\bar{\mathcal{O}}_{i}\) and \(\sigma_{i}\) are, respectively, the central value and the error of the observable \(\mathcal{O}\) measured using the parameters of the \(i^{\text{th}}\) fit.
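A minimal NumPy implementation of Eqs. (52)-(55), assuming the central value, error, correlated \(\chi^{2}\), and degrees of freedom of each fit have been collected into arrays (names are ours, purely illustrative):

```python
import numpy as np

def model_average(values, errors, chi2s, ndofs):
    """AIC model average following Eqs. (52)-(55)."""
    values, errors = np.asarray(values), np.asarray(errors)
    logw = -0.5 * np.asarray(chi2s) + np.asarray(ndofs)  # Eq. (52)
    p = np.exp(logw - logw.max())   # subtract the maximum for numerical stability
    p /= p.sum()                    # Eq. (54)
    mean = np.sum(values * p)       # Eq. (55), mean
    err = np.sqrt(np.sum((errors**2 + values**2) * p) - mean**2)  # Eq. (55), error
    return mean, err
```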
### Selection of data and fits
We first illustrate our fitting procedure by considering the zero-momentum nucleon two-point function. In Fig. 1, we show the nucleon effective mass for each ensemble. We observe impressive agreement among the data from the three ensembles, indicating very mild cutoff effects and compatible excited-state contamination. This confirms that keeping the radius of the Gaussian smearing constant, as shown in Table 3, is a good strategy.
In Fig. 2, we show the nucleon effective mass separately for each of the three ensembles, as well as the values of the nucleon mass obtained via fits to two- and three-point functions keeping only those fits with model probability
Figure 1: Nucleon effective mass using the three physical point ensembles. The dashed line is the value of the nucleon mass \(m_{N}=0.938\) GeV.
Figure 2: Nucleon effective mass versus the time separation (left column) and results for the nucleon mass obtained via fits that have a model probability larger than 1% (right column). The horizontal bands spanning both the left and right panels are the results of the model average among all fits using two states (red points and red band) and three states (blue points and blue band). The most probable fit is depicted with open symbols. In the left panel, we show for all ensembles, the result of the most probable fit using two states (red curve) and three states (blue curve) over the range used in the fit. Panels from top to bottom are for the cB211.72.64, cC211.60.80, and cD211.54.96 ensembles, respectively.
\(\geq 1\%\). Note that, since we perform a combined fit of the nucleon two- and three-point functions for \(Q^{2}=0\) and the lowest non-zero value of \(Q^{2}\), as described at the beginning of this section, the values depicted in the figure are not obtained by fitting only the nucleon effective mass data shown in these figures. We observe a good distribution of the probabilities of the fits, with the most probable fit having a probability between 10% and 50%. We will discuss taking the continuum limit in the next section.
Similarly, in Figs. 3 and 4, we show, respectively, results for \(Q^{2}=0\) and the lowest momentum transfer for the axial form factor. The data shown in these figures are those obtained from the ratio \(\overline{\Pi}_{AP}(Q^{2},0)\) defined in Eq. (49). We note that the results for the lowest non-zero value of the momentum transfer also have information from \(\overline{\Pi}_{0}(Q^{2})\) and \(\overline{\Pi}_{AP}(Q^{2},(2\pi/L)^{2})\) given in Eqs. (48) and (49). Results on the latter two are shown in Figs. 5 and 6, respectively, from which \(G_{P}(Q^{2})\) is extracted for the lowest non-zero value of \(Q^{2}\). Finally, in Fig. 7 we show the corresponding results for \(G_{5}(Q^{2})\) for the lowest non-zero \(Q^{2}\) value.
### Determination of axial charge and radius
Before presenting the analysis of the \(Q^{2}\)-dependence of the form factors, we perform fits to the zero and the lowest non-zero momentum transfers. For \(Q^{2}=0\) only \(G_{A}(Q^{2})\) can be extracted, yielding the isovector axial charge \(g_{A}\). Computing the slope from the values of \(G_{A}(Q^{2})\) at these two momentum transfers yields the radius \(\langle r_{A}^{2}\rangle\), namely
\[\langle r_{A}^{2}\rangle=-\frac{6}{Q_{1}^{2}}\left(\frac{G_{A}(Q_{1}^{2})}{G_ {A}(0)}-1\right)\,, \tag{56}\]
where \(Q_{1}^{2}\) is the lowest non-zero momentum transfer squared. Results obtained using two- or three-state fits are analyzed separately. The results for \(g_{A}\) and \(\langle r_{A}^{2}\rangle\) extracted after model averaging for each ensemble are collected in Table 5 and depicted in Fig. 8. We also include the results obtained for \(m_{N}\). We perform a linear extrapolation in \(a^{2}\) to the continuum limit. We observe very good agreement in the continuum limit between the results from two- and three-state fits for all three quantities. On the other hand, there are slight deviations at finite lattice spacing between two- and three-state fits. For this reason, we analyze all quantities separately using two- and three-state fits. We take as our central value the one extracted using the two-state fit and give as a systematic error the difference between the central values from two- and three-state fits. We find
\[g_{A} =1.244(45)(20)\] \[r_{A}^{2} =0.354(96)(61)\ \text{fm}^{2}. \tag{57}\]
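The direct estimate of Eq. (56) amounts to a simple finite difference; a minimal sketch, including the conversion from GeV\({}^{-2}\) to fm\({}^{2}\) (function names are ours):

```python
HBARC = 0.1973269804  # GeV*fm, converts GeV^-2 to fm^2

def radius_from_slope(GA0, GA1, Q2_1):
    """<r_A^2> from Eq. (56): G_A at Q^2 = 0 (GA0) and at the lowest
    non-zero momentum transfer Q2_1 in GeV^2 (GA1); result in fm^2."""
    return -6.0 / Q2_1 * (GA1 / GA0 - 1.0) * HBARC**2
```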
Figure 3: The ratio \(R^{\prime}\) of Eq. (42) that yields \(g_{A}\) versus \(t_{\text{ins}}-t_{s}/2\) (left column) and versus \(t_{s}\) for \(t_{\text{ins}}=t_{s}/2\) (middle column). In the header of the figure, we give the symbols used to denote the various values of \(t_{s}/a\). In the right column, we show the value of the nucleon isovector axial charge obtained via fits, as in Eq. (41), versus the fit probability, using the notation of Fig. 2. In the left and middle panels, the curves correspond to the fit results, which have the largest probability among all two- (blue) or three- (red) state fits.
Figure 4: The same as in Fig. 3, but for \(G_{A}(Q^{2})\) obtained via \(\overline{\Pi}_{AP}(Q^{2},0)\) for the lowest non-zero value of \(Q^{2}\).
### Energy spectrum and dispersion relation
As customarily done in similar studies [12; 18; 19; 23], we analyze the first excited state at the source and sink, \(E_{1,\vec{p}}\) and \(E_{1,\vec{0}}\), respectively. These are obtained using two-state fits to the three-point functions. Our results are shown in Fig. 9 for the three ensembles, where we also depict the dispersion relation for the nucleon energy \(E_{N}\). We observe the following:
* Since the dispersion relation is included via priors when fitting each \(Q^{2}\) value larger than the lowest non-zero one, it is not surprising that we see excellent agreement between the extracted energy and the dispersion relation;
Figure 8: Continuum limit of the nucleon isovector axial charge \(g_{A}\) (top), radius \(\langle r_{A}^{2}\rangle\) (middle), and nucleon mass \(m_{N}\) (bottom) using a linear extrapolation in \(a^{2}\). The dashed line in the top panel is the experimental value \(g_{A}=1.27641(56)\) [65] and in the bottom panel the nucleon mass \(m_{N}=938\) MeV.
Figure 5: \(\overline{\Pi}_{0}(Q^{2})\) for the lowest non-zero value of \(Q^{2}\) using the notation of Fig. 3.
Figure 7: \(G_{5}(Q^{2})\) for the lowest non-zero value of \(Q^{2}\) using the notation of Fig. 3.
* The first excited state at zero momentum transfer is compatible with the Roper;
* \(E_{1,\vec{0}}\) for low values of the momentum is compatible with the lowest energy of the \(\pi N\) system in the rest frame, namely \(\pi\) and \(N\) moving back to back with momentum \(2\pi/L\). As the momentum grows, the lowest energy of the \(\pi N\) system becomes larger than the mass of the Roper, and \(E_{1,\vec{0}}\) then becomes approximately constant somewhat above the mass of the Roper; this is expected, since in a two-state fit the first excited energy is contaminated by higher states;
* \(E_{1,\vec{p}}\) is compatible with \(N(0)+\pi(\vec{p})\) for all non-zero values of \(\vec{p}^{\,2}\leq 0.6\) GeV\({}^{2}\). Beyond that, the energy of the Roper, denoted by \(R(\vec{p})\), becomes smaller, and the results tend to lie in between the energy of the Roper and the energy of the \(N(0)+\pi(\vec{p})\) system. These are, indeed, the two lowest one-particle and two-particle excited-state energies;
* Excited-state contamination is similar for all three ensembles, in line with the observation of mild cutoff effects for the nucleon mass.
### Comparison of results extracted with two- and three-state fits
The renormalized form factors obtained using two- and three-state fits are compared in Fig. 10, where we also depict the difference \(\Delta\) between the results extracted using two- and three-state fits, normalized such that errors are unity, namely
\[\delta\equiv\langle G_{2\rm st}-G_{3\rm st}\rangle\quad\text{and}\quad\Delta( Q^{2})\equiv\frac{\delta(Q^{2})}{\sigma_{\delta}(Q^{2})}, \tag{58}\]
with \(\sigma_{\delta}\) the jackknife error on the difference \(\delta\). We observe very good agreement between the results extracted using two- and three-state fits, with all differences \(\Delta\) lying within three standard deviations. We note that we only perform three-state fits to extract the form factors up to \(Q^{2}\simeq 0.5\) GeV\({}^{2}\), even though the stability of the three-state fits improves as the lattice spacing decreases, because the available time separations become denser. Since we observe consistency between the results from two- and three-state fits, and the two-state fits do not suffer from instabilities at large values of \(Q^{2}\), we opt to take the results from two-state fits up to \(Q^{2}\simeq 1\) GeV\({}^{2}\). The three-state fits are only considered up to \(Q^{2}\simeq 0.5\) GeV\({}^{2}\), which is the largest \(Q^{2}\) where the three-state fits for the cB211.72.64 ensemble are stable.
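A minimal sketch of Eq. (58), assuming jackknife samples of the two determinations at a given \(Q^{2}\) are available as arrays (names are ours):

```python
import numpy as np

def normalized_difference(G2st_jk, G3st_jk):
    """Delta of Eq. (58) from jackknife samples of the two determinations."""
    d_jk = np.asarray(G2st_jk) - np.asarray(G3st_jk)
    d = d_jk.mean()
    n = len(d_jk)
    sigma = np.sqrt((n - 1) * np.mean((d_jk - d) ** 2))  # jackknife error
    return d / sigma
```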
## VI \(Q^{2}\)-dependence and continuum limit
We first discuss the parameterization of the \(Q^{2}\)-dependence of form factors that are free from the pion pole, such as the axial form factor. Typically, two functional forms are employed: the dipole Ansatz and the model-independent \(z\)-expansion [66; 67]. The dipole Ansatz, given by
\[G(Q^{2})=\frac{g}{\left(1+\frac{Q^{2}}{m^{2}}\right)^{2}}, \tag{59}\]
\begin{table}
\begin{tabular}{|c|l|c|c|c|} \hline \hline & Ensemble & \(g_{A}\) & \(\langle r_{A}^{2}\rangle\) [fm\({}^{2}\)] & \(m_{N}\) [GeV] \\ \hline \multirow{4}{*}{2-state} & cB211.72.64 & 1.253(21) & 0.240(52) & 0.9464(30) \\ & cC211.60.80 & 1.228(14) & 0.220(37) & 0.9436(25) \\ & cD211.54.96 & 1.255(20) & 0.300(39) & 0.9414(29) \\ \cline{2-5} & \(a=0\) & 1.244(45) & 0.354(96) & 0.9362(65) \\ \hline \multirow{4}{*}{3-state} & cB211.72.64 & 1.322(20) & 0.408(67) & 0.9290(50) \\ & cC211.60.80 & 1.241(19) & 0.300(37) & 0.9346(27) \\ & cD211.54.96 & 1.277(17) & 0.395(34) & 0.9261(35) \\ \cline{2-5} & \(a=0\) & 1.264(52) & 0.415(97) & 0.9237(89) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Values for the nucleon isovector axial charge \(g_{A}\), radius \(\langle r_{A}^{2}\rangle\), and nucleon mass \(m_{N}\) for each ensemble and extrapolated to the continuum limit using a linear function in \(a^{2}\). These results are referred to as obtained via the “direct approach” in the text. Results are given separately for values extracted from a two- and a three-state fit analysis.
has two parameters, the charge \(g\) and the dipole mass \(m\). In this case, the radius defined in Eq. (8) is given by
\[r^{2}=\frac{12}{m^{2}}. \tag{60}\]
We parameterize cut-off effects of the charge and of the radius using a linear function in \(a^{2}\), namely
\[g(a^{2})=g_{0}+a^{2}g_{2}\quad\text{and}\quad r^{2}(a^{2})=r_{0}^{2}+a^{2}r_{2 }^{2} \tag{61}\]
and obtain for the dipole Ansatz the following combined \((Q^{2},a^{2})\)-dependence
\[G(Q^{2},a^{2})=\frac{g(a^{2})}{(1+\frac{Q^{2}}{12}r^{2}(a^{2}))^{2}}, \tag{62}\]
which we will use to fit all form factors, after factoring out any pion-pole dependence, at a given lattice spacing.
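A minimal sketch of such a one-step fit with SciPy, stacking the data of all three ensembles (the arrays and starting values are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def dipole_Q2_a2(x, g0, g2, r0sq, r2sq):
    """Combined (Q^2, a^2) dipole of Eqs. (61)-(62); x = (Q2, a2) arrays,
    with Q2 in GeV^2 and r^2 in GeV^-2 so that Q2*r2/12 is dimensionless."""
    Q2, a2 = x
    g = g0 + a2 * g2
    r2 = r0sq + a2 * r2sq
    return g / (1.0 + Q2 * r2 / 12.0) ** 2

# One-step fit over stacked ensemble data (hypothetical arrays):
# popt, pcov = curve_fit(dipole_Q2_a2, (Q2_all, a2_all), GA_all,
#                        sigma=err_all, p0=[1.25, 0.0, 6.0, 0.0])
```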
In the case of the \(z\)-expansion, the form factor is parameterized as,
\[G(Q^{2})=\sum_{k=0}^{k_{\text{max}}}a_{k}\ z^{k}(Q^{2}), \tag{63}\]
where
\[z(Q^{2})=\frac{\sqrt{t_{\text{cut}}+Q^{2}}-\sqrt{t_{\text{cut}}+t_{0}}}{\sqrt {t_{\text{cut}}+Q^{2}}+\sqrt{t_{\text{cut}}+t_{0}}} \tag{64}\]
with \(t_{0}\) an arbitrary number satisfying \(-t_{\text{cut}}<t_{0}<\infty\) and \(t_{\text{cut}}\) the particle production threshold. For \(t_{\text{cut}}\), we
Figure 10: Results for the three form factors obtained using a two- (red squares) or a three-state (blue crosses) fit analysis. From left to right we show results for the cB211.72.64, cC211.60.80, and cD211.54.96 ensembles. From top to bottom, we show results for the axial, induced pseudoscalar, and pseudoscalar form factors. In the lower panel of each plot, we include the difference \(\Delta\) defined in Eq. (58) between the results extracted using two- and three-state fits. The difference \(\Delta\) is normalized such that errors are unity and the dashed line represents a three standard deviation difference. Three-state fits are stable for \(Q^{2}<0.465\) GeV\({}^{2}\) for all three ensembles. For larger values the fits become unstable. We display these points by using lighter blue color and we thus do not use them in the extraction of form factors.
use the three-pion production threshold, namely \(t_{\rm cut}=\left(3m_{\pi}\right)^{2}\)[67] with \(m_{\pi}=0.135\) GeV. For \(t_{0}\) we use a vanishing value such that the charge is given by \(a_{0}\) and the radius is proportional to the ratio \(a_{1}/a_{0}\), namely
\[g=a_{0}\quad\mbox{and}\quad r^{2}=-\frac{3a_{1}}{2a_{0}t_{\rm cut}}\quad\mbox{ with}\quad t_{0}=0. \tag{65}\]
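A minimal implementation of the conformal map and the truncated expansion (function names are ours, purely illustrative):

```python
import numpy as np

def z_of_Q2(Q2, t_cut, t0=0.0):
    """Conformal variable of Eq. (64)."""
    return ((np.sqrt(t_cut + Q2) - np.sqrt(t_cut + t0)) /
            (np.sqrt(t_cut + Q2) + np.sqrt(t_cut + t0)))

def G_zexp(Q2, a_coeffs, t_cut, t0=0.0):
    """z-expansion of Eq. (63) with coefficients a_coeffs = [a_0, a_1, ...]."""
    z = z_of_Q2(Q2, t_cut, t0)
    return sum(a * z**k for k, a in enumerate(a_coeffs))

# e.g. t_cut = (3 * 0.135)**2 in GeV^2, the three-pion threshold used in the text
```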
We introduce the dependence on the lattice spacing by writing
\[G(Q^{2},a^{2})=g(a^{2})\sum_{k=0}^{k_{\rm max}}c_{k}(a^{2})\;z^{k}(Q^{2}), \tag{66}\]
where \(c_{k}=a_{k}/a_{0}\) and
\[\begin{split}& c_{0}(a^{2})=1\,,\quad c_{1}(a^{2})=-\frac{2t_{ \rm cut}}{3}r^{2}(a^{2})\quad\mbox{and}\\ & c_{k}(a^{2})=c_{k,0}+a^{2}c_{k,2}\quad\mbox{for}\quad k\geq 2.\end{split} \tag{67}\]
The coefficients \(c_{k}\) can be further constrained by requiring that the \(z\)-expansion converges smoothly to zero at infinite momentum, namely [68]
\[\sum_{k=0}^{k_{\rm max}}c_{k}\left.\frac{d^{n}z^{k}}{dz^{n}}\right|_{z=1}=0 \quad\mbox{with}\quad n=0,1,\ldots \tag{68}\]
This suggests that priors centered around zero, with a width that falls like \(1/k\), should be used to help enforce this condition at various orders [25]. Additionally, an examination of the explicit spectral functions and scattering data [67] motivates the bound \(|c_{k}|\leq 5\). We therefore use the following Gaussian priors
\[c_{k,0}\sim 0(w/k),\quad c_{k,2}\sim 0(20w/k)\quad\mbox{for}\quad k\geq 2, \tag{69}\]
where \(w\leq 5\) is a fitting parameter that we vary together with the order of the expansion \(k_{\rm max}\in[1,4]\).
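In practice, Gaussian priors of this kind can be implemented as an augmented \(\chi^{2}\); a minimal sketch of the penalty term corresponding to Eq. (69) (names are ours):

```python
def chi2_with_priors(chi2_data, c_pairs, w):
    """Augment the data chi^2 with the Gaussian priors of Eq. (69).

    c_pairs: list of (c_{k,0}, c_{k,2}) coefficient pairs, starting at k = 0;
    only k >= 2 is constrained, with widths w/k and 20*w/k, respectively."""
    chi2 = chi2_data
    for k, (c0, c2) in enumerate(c_pairs):
        if k >= 2:
            chi2 += (c0 / (w / k)) ** 2 + (c2 / (20.0 * w / k)) ** 2
    return chi2
```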
In both the dipole and \(z\)-expansion fits that follow, we will refer to one- and two-step fits. In two-step fits we first fit the \(Q^{2}\) dependence for each lattice spacing separately and then take the continuum limit of the parameters, while in the one-step fits the three ensembles are fitted together. The one-step approach provides for a global \(\chi^{2}\).
## VII The axial form factor \(G_{A}(Q^{2})\)
We first present the analysis of the axial form factor, which at \(Q^{2}=0\) yields the axial charge already discussed in Sec. II.
### Dipole Ansatz
An example fit using the dipole Ansatz is shown in Fig. 11, where we depict the results of using Eq. (62) to fit the form factors for each of the three ensembles and then taking the continuum limit of the parameters, i.e. following a two-step approach. The values for \(G_{A}(Q^{2})\) shown in this figure are obtained using two-state fits in the range \(0\leq Q^{2}<1\) GeV\({}^{2}\).
As already mentioned, an alternative to the two-step approach is to perform a simultaneous fit to the \(Q^{2}\) dependence for all three ensembles. To demonstrate that taking a global fit in a one-step approach is equivalent to performing a two-step approach, we show in Fig. 12 the continuum limit of the axial charge and the radius extracted from the one- and two-step approaches. In Table 6, we give the corresponding values extracted when using the one- and two-step approaches, including their reduced \(\chi^{2}\). As can be seen, the continuum values are in perfect agreement both in terms of the central value and in terms of the error. The reduced \(\chi^{2}\) for the two-step procedure only refers to the linear extrapolation to the continuum limit. The one-step approach provides for a single value of \(\chi^{2}\) that reflects the quality of the fit to the combined \(Q^{2}\)- and \(a^{2}\)-dependence and it is thus more practical to compute the relative weights between the various fits when carrying out our model averaging. Therefore, from now on we will proceed with the one-step approach.
Using the one-step approach, we perform dipole fits to results obtained using two- and three-state fits to the correlators. We vary the largest \(Q^{2}\) included in the fits, and for the two-state fit results, which are more precise, we also repeat the fits omitting the result at \(Q^{2}=0\). The reasoning is that at \(Q^{2}=0\) only \(G_{A}(0)\) survives, which can affect the determination of the energy extracted for the first excited state, as already shown in Fig. 9. A comparison of the results obtained using these variations is shown in Fig. 13. We perform a model average of the
Figure 11: The axial form factor obtained on each of the three ensembles using two-state fits (blue circles, orange downwards pointing triangles, and green triangles). The continuum limit form factor (red curve and band) and value for the axial charge (red cross) are obtained via dipole fits within the one-step approach. Also shown are the form factor curves obtained at the three values of the lattice spacing (blue, orange, and green curves, respectively).
results separately for the case of using two- and three-state fits on the correlators. We find
\[\begin{split} g_{A}&=1.196(24)\qquad\qquad\text{(2-state)} \\ &=1.228(34)\qquad\qquad\text{(3-state)}\\ \langle r_{A}^{2}\rangle&=0.210(17)\ \text{fm}^{2} \qquad\text{(2-state)}\\ &=0.300(59)\ \text{fm}^{2}\,.\quad\text{(3-state)}\end{split} \tag{70}\]
Since the values are compatible, we opt to quote the model-averaged values obtained from data that were extracted using two-state fits to the correlators. We then give as a systematic error the difference between the central values of the model-averaged results obtained from data extracted using two- and three-state fits to the correlators. We find
\[\begin{split} g_{A}&=1.196(24)(32)\\ \langle r_{A}^{2}\rangle&=0.210(17)(90)\ \text{fm}^{2}\,. \end{split} \tag{71}\]
### \(z\)-expansion
#### First-order \(z\)-expansion
We repeat the same procedure using a first-order \(z\)-expansion that has the same number of parameters as the dipole Ansatz, and where no priors are employed. In Fig. 14 we demonstrate the one-step approach as an example, with the same notation as Fig. 12; in Fig. 15 and Table 7 we similarly demonstrate that our one-step approach is equivalent to the two-step approach; and in Fig. 16 we depict the results as a function of \(Q_{\text{max}}^{2}\) for the case of data extracted using two- and three-state fits to the correlators. After model averaging we find
\[\begin{split} g_{A}&=1.283(31)\qquad\qquad\text{(2- state)}\\ &=1.249(37)\qquad\qquad\text{(3-state)}\\ \langle r_{A}^{2}\rangle&=0.421(18)\ \text{fm}^{2}\, \end{split} \tag{72}\]
Again, we observe that the values are compatible whether we use results from the two- or three-state fits to the correlators and thus we quote the model-averaged value for the case of using two-state fits and take as systematic error the difference between the central values obtained when using two- and three-state fits to the correlators. We find
\[\begin{split} g_{A}&=1.283(31)(34)\\ \langle r_{A}^{2}\rangle&=0.421(18)(14)\ \text{fm}^{2}\,. \end{split}\quad\text{($z^{1}$-expansion)} \tag{73}\]
Figure 12: The axial charge (top) and radius (bottom) obtained via a dipole fit to each ensemble (blue circles, orange downwards pointing triangles, and green triangles). The filled red asterisks and corresponding bands show the continuum limit using the two-step approach, i.e. via fits linear in \(a^{2}\) to the axial charge and radius obtained at each value of \(a^{2}\). At \(a^{2}=0\) we compare with results obtained from a one-step approach (open asterisks), as described in the text.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Ensemble & \(g_{A}\) & \(\langle r_{A}^{2}\rangle\) [fm\({}^{2}\)] & \(\chi^{2}/N_{\text{dof}}\) \\ \hline cB211.72.64 & 1.216(11) & 0.2632(67) & 3.72 \\ cC211.60.80 & 1.2120(85) & 0.2519(47) & 1.61 \\ cD211.54.96 & 1.2174(95) & 0.2476(59) & 1.16 \\ \hline \(a=0\), 1-step & 1.218(22) & 0.231(14) & 2.04 \\ \(a=0\), 2-step & 1.217(22) & 0.230(14) & 0.19 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Values for the nucleon isovector axial charge \(g_{A}\), radius \(r_{A}\), and reduced \(\chi^{2}\) obtained using a dipole Ansatz to fit each ensemble (first three rows). We also give the continuum limit values using the one-step (second to last row) and two-step (last row) approaches.
Figure 13: Results for the axial charge and radius extracted using the dipole Ansatz, versus \(Q_{\text{max}}^{2}\), i.e. the maximum value of \(Q^{2}\) included in the fit. We show separately dipole fits to results obtained from two- (red squares) and three- (blue diamonds) state fits to the three- and two-point correlation functions. For the two-state fit case, we include a variation in which the values of the form factors at \(Q^{2}=0\) are not included in the fit (open squares).
#### Convergence of the \(z\)-expansion
In order to check the stability of the \(z\)-expansion fits, we study the convergence of the \(z\)-expansion as a function of the order and the amplitude of the priors used in Eq. (69). We observe convergence for \(k_{\rm max}\geq 3\) for all cases. We vary the amplitude of the prior using \(w\in[1,5]\). In Fig. 17 we depict the extracted axial charge and radius as a function of the prior width and \(Q_{\rm max}^{2}\), the largest \(Q^{2}\) used in the fit. We observe that the results obtained by changing the width of the priors are all consistent. The result of the model average is
\[\begin{split} g_{A}&=1.245(28)\qquad\qquad\text{(2- state)}\\ &=1.231(34)\qquad\qquad\text{(3-state)}\\ \langle r_{A}^{2}\rangle&=0.339(48)\ {\rm fm}^{2} \qquad\text{(2-state)}\\ &=0.333(72)\ {\rm fm}^{2}\,.\quad\text{(3-state)}\end{split} \tag{74}\]
quoting again the value from the data extracted using two-state fits, and assigning as a systematic error the difference between the central values of the model-averaged results when using data from two- and three-state fits to the correlators,
\[\begin{split} g_{A}&=1.245(28)(14)\\ \langle r_{A}^{2}\rangle&=0.339(48)(06)\ {\rm fm}^{2}\,. \end{split}\quad\text{($z^{3}$-expansion)} \tag{75}\]
### Final results
Having presented the variations used to extract the axial charge and radius in the continuum limit, we now discuss the consistency among them and how we choose our final values. To summarize, we have used: i) in Sec. V.3, a direct determination using the matrix element at \(Q^{2}=0\) and, for the radius, also the matrix element at the lowest non-zero \(Q^{2}\) value, yielding the results of Eq. (57); ii) in Sec. VII.1, the dipole Ansatz to describe the \(Q^{2}\)-dependence, resulting in the values given in Eq. (71); iii) in Sec. VII.2.1, the first-order \(z\)-expansion to describe the \(Q^{2}\)-dependence, resulting in the values given in Eq. (73); and iv) in Sec. VII.2.2, the higher-order \(z\)-expansion, with the resulting values given in Eq. (75). In Table 8, we collect these values and
Figure 16: Results for the axial charge and radius extracted using the first-order \(z\)-expansion, versus \(Q_{\rm max}^{2}\). The notation is the same as in Fig. 13.
Figure 14: The same as in Fig. 12, but using the first-order \(z\)-expansion to fit the \(Q^{2}\) dependence.
Figure 15: The axial charge (top) and radius (bottom) obtained via first-order \(z\)-expansion fits to each ensemble. The notation is the same as in Fig. 12.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline Ensemble & \(g_{A}\) & \(\langle r_{A}^{2}\rangle\) [fm\({}^{2}\)] & \(\chi^{2}/N_{\rm dof}\) \\ \hline cB211.72.64 & 1.277(12) & 0.4349(54) & 1.43 \\ cC211.60.80 & 1.2739(90) & 0.4248(41) & 1.34 \\ cD211.54.96 & 1.281(10) & 0.4214(52) & 0.79 \\ \hline \(a=0\), 1-step & 1.292(24) & 0.431(12) & 1.32 \\ \(a=0\), 2-step & 1.291(24) & 0.431(12) & 0.29 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Values for the nucleon isovector axial charge \(g_{A}\), radius \(r_{A}\), and reduced \(\chi^{2}\) obtained using a first-order \(z\)-expansion fit on each ensemble and extrapolated to the continuum limit using our one- or two-step approach.
depict them in Fig. 18. In all cases, the central value and first error are obtained via model averaging of the two-state fit data, while the second error is a systematic error obtained as the difference between the central values when model averaging the two- or three-state fit data. We note the following
* All results agree with the value extracted using the \(z^{k}\)-expansion with \(k=3\) within error bars. Results using the dipole Ansatz and the \(z^{1}\)-expansion yield respectively smaller or larger values as compared to those using the \(z^{3}\)-expansion. This observation is compatible with what has been found in another study [23].
* Furthermore, the results using the \(z^{3}\)-expansion are completely consistent with those from the direct approach. Since the direct approach uses the matrix element at \(Q^{2}=0\) for \(g_{A}\), and for \(\langle r_{A}^{2}\rangle\) the slope computed from the lowest non-zero value of \(Q^{2}\), it does not depend on any Ansatz used to fit the \(Q^{2}\)-dependence of the form factor. The fact that the \(z^{3}\)-expansion yields the same results shows that it indeed provides a model-independent approach to extract the same information on these two quantities. The error when using the \(z^{3}\)-expansion is smaller compared to the error of the direct approach, since the \(z\)-expansion makes use of more information.
Given the above observations, we quote as our final values the results from the \(z^{k}\)-expansion, which has shown convergence for \(k=3\) and is model-independent. Thus, we take as our final values for the axial charge and radius
\[g_{A} =1.245(28)(14)[31] \tag{76}\] \[\langle r_{A}^{2}\rangle =0.339(48)(06)[48]\ \text{fm}^{2}\,,\]
where in the square brackets we have combined quadratically the two errors. We also collect all our final values in the conclusions in Eq. (97).
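The quadrature combination quoted in the square brackets of Eq. (76) can be verified directly; a minimal check in Python:

```python
import math

# Statistical and systematic errors of Eq. (76), combined in quadrature.
print(f"total err(g_A)     = {math.hypot(0.028, 0.014):.4f}")  # 0.0313, quoted as [31]
print(f"total err(<r_A^2>) = {math.hypot(0.048, 0.006):.4f}")  # 0.0484, quoted as [48]
```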
In Figs. 19 and 20 we show the model-averaged results as a function of \(Q^{2}\) when using the \(z^{3}\)-expansion for either two- or three-state fits to the correlators, respectively. As can be seen in these figures, the data extracted using two- and three-state fits are compatible. However, small statistical fluctuations and the lack of data in the case of the three-state fit analysis for \(Q^{2}>0.5\) GeV\({}^{2}\) can affect the fits of the \(Q^{2}\)-dependence and thus the continuum limit, given that we only have three lattice spacings. We remind the reader that we perform simultaneous fits to the \(Q^{2}\)-dependence for each ensemble while at the same time taking the continuum limit. We compare the resulting \(G_{A}(Q^{2})\) in the continuum limit for these two cases in Fig. 21, where we give the continuum fits only. The fit parameters of the curves, corresponding to the standard form of the \(z\)-expansion in Eq. (63) with \(k_{\text{max}}=3\), \(t_{\text{cut}}=(3m_{\pi})^{2}\), \(m_{\pi}=0.135\) GeV and \(t_{0}=0\) GeV\({}^{2}\), are
\[\vec{a}_{\text{2-state}} =\left[1.245(28),-1.19(18),-0.54(55),-0.13(59)\right] \tag{77}\] \[\vec{a}_{\text{3-state}} =\left[1.231(34),-1.16(27),-0.80(47),-1.23(58)\right].\]
As can be seen, the resulting curve for the three-state fit case is in agreement with that for the two-state fit for \(Q^{2}\leq 0.5\) GeV\({}^{2}\). Also, the parameters of the two fits are in good agreement. However, since the three-state fits become unstable for \(Q^{2}>0.5\) GeV\({}^{2}\), we take \(\vec{a}_{\text{2-state}}\) as
Figure 17: Results for the axial charge and radius as a function of \(Q^{2}_{\text{max}}\) obtained from using the \(z^{3}\)-expansion to parameterize the \(Q^{2}\)-dependence. For each \(Q^{2}_{\text{max}}\) we depict five points having prior width \(w=1,2,3,4,5\). The points are shifted to the right as \(w\) increases with an increasing symbol size.
Figure 18: Results for the isovector axial charge and radius extracted using four different approaches as described in the text. Numerical values are given in Table 8.
\begin{table}
\begin{tabular}{l|c|c} \hline \hline Method & \(g_{A}\) & \(\langle r_{A}^{2}\rangle\) [fm\({}^{2}\)] \\ \hline Direct approach & 1.244(45)(20) & 0.354(96)(61) \\ Dipole Ansatz & 1.196(24)(32) & 0.210(17)(90) \\ \(z^{1}\)-expansion & 1.283(31)(34) & 0.421(18)(14) \\ \(z^{3}\)-expansion & 1.245(28)(14) & 0.339(48)(06) \\ \hline \hline \end{tabular}
\end{table}
Table 8: Results for the isovector axial charge and radius extracted using four different approaches as described in the text. The values are depicted in Fig. 18.
our central values and the difference between the central values of \(\vec{a}_{\text{2-state}}\) and \(\vec{a}_{\text{3-state}}\) as the systematic error to account for systematics due to excited states. Our final parameterization for the form factor is then
\[\vec{a}_{A}= \big{[}1.245(28)(14)[31],-1.19(18)(03)[18], \tag{78}\] \[-0.54(55)(26)[61],-0.13(59)(1.1)[1.3]\big{]}\] \[\text{corr}_{\vec{a},A}= \begin{pmatrix}1.0&-0.421&0.247&-0.246\\ -0.421&1.0&-0.918&0.799\\ 0.247&-0.918&1.0&-0.952\\ -0.246&0.799&-0.952&1.0\end{pmatrix}\,,\]
where we have used the correlation matrix of the parameters from two-state fit data. More information on the form factors at the continuum limit is provided in Appendix A.
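To propagate the quoted uncertainties of Eq. (78) to arbitrary \(Q^{2}\), one can sample the coefficients from a multivariate Gaussian; a minimal sketch in Python, assuming that the bracketed total errors together with the quoted correlation matrix of Eq. (78) define that Gaussian (the conventions \(t_{0}=0\), \(t_{\rm cut}=(3m_{\pi})^{2}\), \(m_{\pi}=0.135\) GeV are those stated in the text):

```python
import numpy as np

M_PI = 0.135             # GeV
T_CUT = (3 * M_PI) ** 2  # GeV^2

def z_of_Q2(Q2, t0=0.0):
    s, s0 = np.sqrt(T_CUT + Q2), np.sqrt(T_CUT + t0)
    return (s - s0) / (s + s0)

# Central values, total errors and correlation matrix from Eq. (78).
a_mean = np.array([1.245, -1.19, -0.54, -0.13])
a_err = np.array([0.031, 0.18, 0.61, 1.3])
corr = np.array([[ 1.0,   -0.421,  0.247, -0.246],
                 [-0.421,  1.0,   -0.918,  0.799],
                 [ 0.247, -0.918,  1.0,   -0.952],
                 [-0.246,  0.799, -0.952,  1.0  ]])
cov = corr * np.outer(a_err, a_err)

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(a_mean, cov, size=10000)

Q2 = np.linspace(0.0, 1.0, 21)
Z = np.vstack([z_of_Q2(Q2) ** k for k in range(4)])  # shape (4, 21)
band = samples @ Z                                   # G_A(Q2) for each sample
print(f"G_A(0)   = {(a_mean @ Z)[0]:.3f} +/- {band.std(axis=0)[0]:.3f}")
print(f"G_A(0.5) = {(a_mean @ Z)[10]:.3f} +/- {band.std(axis=0)[10]:.3f}")
```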
In Fig. 21 we also include the resulting \(G_{A}(Q^{2})\) when this systematic error is assigned to the parameters of the continuum fit. We consider the values of \(G_{A}(Q^{2})\) including the systematic uncertainty as our final results. Our final results for the form factor \(G_{A}(Q^{2})\) are given in Table 11. We will adopt the same strategy for the analysis of the other two form factors and for checking the PCAC and PPD relations.
## VIII Induced pseudoscalar \(G_{P}(Q^{2})\) and pseudoscalar \(G_{5}(Q^{2})\) form factors
To determine \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\), we perform an analysis similar to the one discussed in detail above for \(G_{A}(Q^{2})\). The additional complication in the case of \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\) is that both form factors have a pole at \(Q^{2}=-m_{\pi}^{2}\), which needs to be removed before fit functions similar to those applied for \(G_{A}(Q^{2})\) can be used. Therefore, before proceeding with the \(Q^{2}\)-dependence analysis of these form factors, we present a detailed study of pion pole dominance.
### Pion pole dominance (PPD)
The pion pole dominance (PPD) hypothesis introduced in Sec. II can be tested by forming two ratios of form factors, one of which is
\[\frac{G_{A}(Q^{2})}{G_{P}(Q^{2})}=\frac{Q^{2}+m_{\pi}^{2}}{4m_{N}^{2}}\bigg{|} _{Q^{2}\to-m_{\pi}^{2}}, \tag{79}\]
the first arising from Eq. (20), and the second, \(r_{\text{PPD},2}\), derived in Eq. (26) assuming a non-zero Goldberger-Treiman discrepancy. Using the results for the form factors from the two-state fits to the correlators we find the ratios depicted in Fig. 22. We indeed observe for both ratios a linear dependence on \(Q^{2}\), as expected from Eq. (79) and Eq. (27), respectively. We also observe clear cut-off effects for the first ratio, whereas for the second the results from the three ensembles are consistent among themselves. To capture the \(a\)-dependence we fit the ratios using the functional
Figure 21: Results on \(G_{A}(Q^{2})\) at the continuum limit when fitting data extracted from the two- (red band) and three- (blue band) state fit analysis of the correlators. The darker blue curve indicates up to which \(Q^{2}\) we had data for the three-state fit analysis. The yellow band is when we added systematic errors to the parameters that define the red curve as discussed in the text. The parameters of the fit are given in Eq. (78).
Figure 19: Continuum limit of \(G_{A}(Q^{2})\) using the \(z^{3}\)-expansion and data from the two-state fit analysis of the correlators up to \(Q^{2}=1\) GeV\({}^{2}\) for the three ensembles with the symbols as indicated in the header of the figure.
Figure 20: Continuum limit of \(G_{A}(Q^{2})\) using the \(z^{3}\)-expansion and data from the three-state fit analysis of the correlators up to \(Q^{2}=0.47\) GeV\({}^{2}\) for the three ensembles with the symbols as indicated in the header of the figure.
form
\[f(Q^{2},a^{2})=b_{0}+b_{2}a^{2}+(c_{0}+c_{2}a^{2})Q^{2}, \tag{80}\]
where we include the leading-order \(a^{2}\) dependence in both the intercept and the \(Q^{2}\)-slope. We also perform fits where we set \(b_{2}\) and \(c_{2}\) to zero, to account for the fact that the second ratio \(r_{\rm PPD,2}\) shows no detectable cut-off effects to the accuracy of our data. We then perform a model average over all fits where we both include and exclude cut-off effects, as well as vary the largest \(Q^{2}\) value, \(Q^{2}_{\rm max}\), used in the fit. The resulting continuum fits are shown in Fig. 22.
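A minimal sketch of this continuum fit in Python; the arrays below are illustrative placeholders (generated from known coefficients purely to show that the solver recovers them), not the actual lattice data:

```python
import numpy as np

# Linear model of Eq. (80): f(Q2, a2) = b0 + b2*a2 + (c0 + c2*a2)*Q2.
def design(Q2, asq):
    return np.column_stack([np.ones_like(Q2), asq, Q2, asq * Q2])

b_true = np.array([0.005, 0.5, 0.28, -0.1])    # hypothetical [b0, b2, c0, c2]
Q2 = np.tile(np.linspace(0.05, 0.5, 10), 3)    # 10 Q^2 points per ensemble
asq = np.repeat([0.0063, 0.0047, 0.0032], 10)  # a^2 [fm^2], illustrative
y = design(Q2, asq) @ b_true

b_fit, *_ = np.linalg.lstsq(design(Q2, asq), y, rcond=None)
print("b0, b2, c0, c2 =", np.round(b_fit, 4))
# Setting b2 = c2 = 0 amounts to dropping the 2nd and 4th columns, as done
# for r_PPD,2, where no cut-off effects are detected.
```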
The conclusions drawn from Fig. 22 are
* The pion pole dominance hypothesis is satisfied for both ratios at the pole since we obtain the expected value at \(Q^{2}=-m_{\pi}^{2}\), namely \[\begin{split}&\frac{G_{A}(-m_{\pi}^{2})}{G_{P}(-m_{\pi}^{2})}=0.0004(15)\approx 0\\ & r_{\rm PPD,2}(-m_{\pi}^{2})=1.015(12)\approx 1\,.\end{split}\] (81)
* For the first ratio, \(G_{A}(Q^{2})/G_{P}(Q^{2})\), we find a slope and intercept in the continuum limit consistent with the PPD hypothesis. Namely, from the intercept at \(Q^{2}=-m_{\pi}^{2}\), we find a value of the pion pole mass at the continuum limit of \[m_{\pi}^{\rm pole}=0.141(20)~{}{\rm GeV}\approx 0.135~{}{\rm GeV}\] (82) compatible with the physical pion mass; and from the \(Q^{2}\)-slope, whose value is expected to be \(1/4m_{N}^{2}\), we find a nucleon mass of \[m_{N}=0.9401(39)~{}{\rm GeV}\approx 0.938~{}{\rm GeV}\,,\] (83) when the fit is done with \(Q^{2}_{\rm max}=0.3~{}{\rm GeV}^{2}\). Thus, we conclude that the ratio \(G_{A}(Q^{2})/G_{P}(Q^{2})\) satisfies the PPD relation close to the pole; a numerical check of these values is sketched after this list.
* Examining the pion pole mass determined from the fits to a given ensemble at finite lattice spacing, we find a pion pole mass that differs significantly from the physical one. It is well-known that the twisted mass fermion formulation has significant cut-off effects in the pion mass [69] but much milder ones in other quantities such as other hadron masses [70] or hadronic operator matrix elements [69]. In Table 9, we give the pion masses that we find per ensemble as well as the values of the unitary pion mass, denoted by \(m_{\pi}^{\rm TM}\), and the Osterwalder-Seiler (OS) pion mass [71; 72], denoted by \(m_{\pi}^{\rm OS}\). We would like to clarify that the difference between the charged pion mass \(m_{\pi}^{\rm TM,+/-}\) and its neutral counterpart \(m_{\pi}^{\rm TM,0}\) is an \({\cal O}(a^{2})\) artifact due to the breaking of isospin symmetry in the clover twisted-mass fermion lattice action formulation. We find that the mass difference between charged and neutral unitary pions \(m_{\pi}^{\rm TM,-/+}-m_{\pi}^{\rm TM,0}\) is of order 20-40 MeV at the lattice spacings employed here. The uncertainty in the determination of this mass difference arises due to the statistical error of the disconnected quark contribution entering in the computation of \(m_{\pi}^{\rm TM,0}\). While the larger difference between \(m_{\pi}^{\rm TM,+/-}\) and \(m_{\pi}^{\rm OS}\) is also an \({\cal O}(a^{2})\) cutoff effect of the OS mixed action, the coefficient is different. Technically, the difference between the neutral \(m_{\pi}^{\rm TM,0}\) and \(m_{\pi}^{\rm OS}\) can be traced back to the presence in the two-point correlator of the neutral TM pion of quark-disconnected contributions of \({\cal O}(a^{2})\) that are absent in the two-point correlator of the OS pion. We observe that the pion pole mass that we find is very close to \(m_{\pi}^{\rm OS}\). This is because, in our evaluation of \(G_{P}(Q^{2})\), we use the flavor diagonal
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Ensemble & \(m_{\pi}^{\rm pole}\) [MeV] & \(m_{\pi}^{\rm TM}\) [MeV] & \(m_{\pi}^{\rm OS}\) [MeV] \\ \hline cB211.72.64 & 299.3(4.5) & 140.2(2) & 297.5(7) \\ cC211.60.80 & 266.7(3.2) & 136.6(2) & 248.9(5) \\ cD211.54.96 & 235.8(4.8) & 140.8(3) & 210.0(4) \\ \hline \end{tabular}
\end{table}
Table 9: Pion pole mass extracted from the ratio \(G_{A}(Q^{2})/G_{P}(Q^{2})\) for each ensemble, compared to the simulated unitary pion mass and the Osterwalder-Seiler (OS) pion mass.
Figure 22: \(G_{A}(Q^{2})/G_{P}(Q^{2})\) (top) and \(r_{\rm PPD,2}\) (bottom) for the three ensembles. The blue, orange, and green curves show the results of combined linear fit in \(Q^{2}\) for the cB211.72.64, the cC211.60.80 and cD211.54.96, respectively, using the form of Eq. (80) to take into account cut-off effects. The red band is the continuum extrapolation after performing the model average as described in the text. The dashed line shows the expected value of the ratios if PPD close to the pion pole is satisfied, namely a slope of \(1/4m_{N}^{2}\) from Eq. (79) for the first ratio and unity for the second.
isovector current and neglect the noisy \({\cal O}(a^{2})\) quark-disconnected contributions, which in turn corresponds to computing the three-point correlators in the OS mixed action formulation. We thus obtain a neutral pion pole with mass given by the OS pion mass, \(m_{\pi}^{\rm OS}\). This explains the large cut-off effects observed for form factors that have a pion pole behavior within the twisted mass formulation with OS-type valence quarks. In contrast, other fermion discretization schemes observe similar cut-off effects in both \(G_{A}(Q^{2})\) and \(G_{P}(Q^{2})\), e.g. when using Clover Wilson fermions [23], although in their formulation the form factors have \({\cal O}(a)\) discretization errors.
* We demonstrate that cut-off effects are due to the pion pole in Fig. 23, where we show results for the ratio \(r_{\rm PPD,1}\) defined in Eq. (25), removing the pole using either the unitary or the OS pion mass. Data obtained using the OS pion mass show indeed very mild cut-off effects and yield a value of \(r_{\rm PPD,1}\) close to unity, as expected by PPD and as observed by other groups using clover fermions.
* The deviation from unity observed for the ratio \(r_{\rm PPD,2}\) is connected to the Goldberger-Treiman discrepancy as given in Eq. (28), where we followed a similar analysis to that of Ref. [46]. Using the slope of our fit at the continuum limit to extract \(\Delta_{\rm GT}\) from Eq. (28) and \(\bar{d}_{18}\) from Eq. (23), we find \[\begin{split}\Delta_{\rm GT}&=2.13(38)\%\approx 2 \%\\ \bar{d}_{18}&=-0.73(13)\ {\rm GeV}^{-2}.\end{split}\] (84) We note that in extracting these values we use our final values for \(g_{A}\) and \(\langle r_{A}^{2}\rangle\) given in Eq. (76). For both \(\Delta_{\rm GT}\) and the low energy constant \(\bar{d}_{18}\) we find values that are compatible with chiral perturbation theory, which predicts \(\Delta_{\rm GT}\sim 2\%\) and \(-1.40(24)\,{\rm GeV}^{-2}<\bar{d}_{18}<-0.78(27)\,{\rm GeV}^{-2}\) depending on the type of fit used in the determination [45]. Our determination gives a more precise value and provides valuable input for chiral perturbation theory.
* The mild cut-off effects observed for the ratio \(r_{\rm PPD,2}\) in Fig. 22 are understood by the fact that this ratio involves \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\), both of which have the same pion pole mass dependence, thus canceling the cut-off effects.
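As referenced in the list above, the pion pole and nucleon masses of Eqs. (82) and (83) follow from the continuum intercept \(b_{0}\) and slope \(c_{0}\) of the ratio \(G_{A}(Q^{2})/G_{P}(Q^{2})\) via \(m_{\pi}^{\rm pole}=\sqrt{b_{0}/c_{0}}\) and \(m_{N}=1/(2\sqrt{c_{0}})\). A minimal sketch in Python, with \(b_{0}\) and \(c_{0}\) back-computed from the quoted central values for illustration (these are not the actual fit outputs):

```python
import math

# G_A/G_P = (Q^2 + m_pole^2) / (4 m_N^2) (Eq. (79)):
# intercept b0 = m_pole^2 / (4 m_N^2), slope c0 = 1 / (4 m_N^2).
c0 = 1.0 / (4 * 0.9401 ** 2)  # back-computed from Eq. (83)
b0 = c0 * 0.141 ** 2          # back-computed from Eq. (82)

print(f"m_pi^pole = {math.sqrt(b0 / c0):.3f} GeV")       # 0.141 GeV, cf. Eq. (82)
print(f"m_N       = {1 / (2 * math.sqrt(c0)):.4f} GeV")  # 0.9401 GeV, cf. Eq. (83)
```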
### Parametrizations for the fits at the continuum of \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\)
In the previous section, we made three important observations, namely: i) \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\) have the same pion pole mass dependence at each lattice spacing; ii) the pion pole mass obtained using valence OS quarks in the mixed action twisted mass fermion formulation shows significant cut-off effects, much larger than the mass splitting between the unitary charged and neutral pion; and iii) in the continuum limit we obtained a pion pole consistent with the physical pion mass of \(m_{\pi}=0.135\) GeV. Based on these observations, we use the following functional form
\[G_{\rm w\,pole}(Q^{2},a^{2})=\frac{1}{Q^{2}+m_{\pi}^{2}+ba^{2}}G_{\rm res}(Q^{ 2},a^{2}) \tag{85}\]
to fit \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\), where for \(G_{\rm res}\) we use the \(z^{k}\)-expansion and repeat the same analysis presented for \(G_{A}(Q^{2})\) in Sec. VI. Instead of fitting \(G_{5}(Q^{2})\) directly, we fit the scaled form factor \(\tilde{G}_{5}(Q^{2})\) given by
\[\tilde{G}_{5}(Q^{2})=\frac{4m_{N}}{m_{\pi}^{2}}m_{q}G_{5}(Q^{2}). \tag{86}\]
The combination \(m_{q}G_{5}(Q^{2})\) is scale-independent and renormalizes with \(Z_{S}/Z_{P}\), which is accurately determined. Furthermore, scaling by \(1/m_{\pi}^{2}\) takes into account the slight difference in the unitary pion mass for each ensemble (see Table 1), and multiplying by \(m_{N}\) makes the whole combination dimensionless. Since the pion pole mass is the same for both \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\), we, in addition, perform a combined fit of both form factors, taking the parameter "\(b\)" of the pole to be a common fit parameter. Since, as demonstrated in the previous section, the PPD relation is satisfied at the continuum limit, we perform fits where we enforce \(g_{\pi NN}\) extracted from both form factors to take the same value at the continuum limit. This is implemented by using a \(z^{3}\)-expansion with \(t_{0}=-m_{\pi}^{2}\) and, therefore,
\[G_{\rm res}(-m_{\pi}^{2},0)=a_{0}\,, \tag{87}\]
since, according to Eq. (64), \(z(-m_{\pi}^{2})=0\) and \(a_{0}\) is a fit parameter as given in Eq. (63). The coupling constant \(g_{\pi NN}\) is then the same for both form factors if
\[a_{0}^{(P)}=\tilde{a}_{0}^{(5)}=a_{0}\,, \tag{88}\]
Figure 23: The ratio \(r_{\rm PPD,1}\) for the cB211.72.64 (blue circles), cC211.60.80 (orange down-pointing triangles) and cD211.54.96 (green upwards-pointing triangles) ensembles when using the unitary pion mass \(m_{\pi}^{\rm TM}\) (open symbols) and the OS pion mass \(m_{\pi}^{\rm OS}\) (filled symbols) to remove the pion pole. The pion mass values are given in Table 9. The dashed line shows the expected value \(r_{\rm PPD,1}=1\) based on PPD.
namely by making \(a_{0}\) a common fit parameter for both \(G_{P}(Q^{2})\) and \(\tilde{G}_{5}(Q^{2})\).
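To make the combined fit Ansatz concrete, the following Python sketch implements Eqs. (85)-(88), with \(a_{0}\) and the pole-shift parameter \(b\) shared between \(G_{P}\) and \(\tilde{G}_{5}\); all numerical values are hypothetical placeholders, not fit results:

```python
import numpy as np

M_PI = 0.135             # GeV
T_CUT = (3 * M_PI) ** 2  # GeV^2
T0 = -M_PI ** 2          # so z(-m_pi^2) = 0 and G_res(-m_pi^2, 0) = a0, Eq. (87)

def z(Q2):
    s, s0 = np.sqrt(T_CUT + Q2), np.sqrt(T_CUT + T0)
    return (s - s0) / (s + s0)

def G_res(Q2, a):
    return sum(ak * z(Q2) ** k for k, ak in enumerate(a))

def G_w_pole(Q2, asq, a, b):
    # Eq. (85): the shifted pole m_pi^2 + b*a^2 absorbs the O(a^2)
    # artifact in the pole position (units of b set by those of a^2).
    return G_res(Q2, a) / (Q2 + M_PI ** 2 + b * asq)

# Shared parameters in the combined fit: a0 (common residue, Eq. (88))
# and b (common pole shift); the remaining coefficients are independent.
a0, b = 4.6, 3.0                   # hypothetical placeholders
a_P = [a0, -3.0, -4.7, -0.1]       # coefficients for G_P
a_5 = [a0, -2.2, -2.9, -1.2]       # coefficients for G_5-tilde
print(G_w_pole(0.1, 0.0, a_P, b))  # continuum value (a = 0) at Q^2 = 0.1 GeV^2
print(G_w_pole(0.1, 0.0, a_5, b))
```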
### Convergence of the \(z\)-expansion
As for the analysis performed for \(G_{A}(Q^{2})\), we study the convergence of the \(z^{k}\)-expansion as a function of the order \(k\), the width of the priors used, and the largest \(Q^{2}\) employed in the fit. We first discuss results when we fit \(G_{P}(Q^{2})\) and \(\tilde{G}_{5}(Q^{2})\) separately, without enforcing them to have the same pole or the same value of \(g_{\pi NN}\), and we monitor convergence by looking at the values \(g_{\pi NN}\) takes when fitting \(G_{P}(Q^{2})\) and \(\tilde{G}_{5}(Q^{2})\). We also monitor the value of \(g_{P}^{*}\), which is extracted from \(G_{P}(Q^{2})\) using Eq. (9). In Fig. 24, we show results on these quantities when we use data determined from the two- and three-state fit analysis of the correlators. We observe convergence for \(k_{\rm max}\geq 3\), and stability in the values we extract as a function of \(Q_{\rm max}^{2}\) and the width of the priors, for data from both the two- and three-state fit analyses. After model averaging we find
\[\begin{split} g_{P}^{*}&=8.87(66)\quad\text{(2-state)}\\ &=8.9(1.1)\quad\text{(3-state)}\\ g_{\pi NN}&=13.0(1.2)\quad\text{(2-state, from $G_{P}$)}\\ &=13.5(1.3)\quad\text{(2-state, from $\tilde{G}_{5}$)}\\ &=13.3(2.0)\quad\text{(3-state, from $G_{P}$)}\\ &=11.9(1.7)\quad\text{(3-state, from $\tilde{G}_{5}$)}\,.\end{split} \tag{89}\]
All values determined using data from the two- and three-state fit analyses are in good agreement with each other. The values of \(g_{\pi NN}\) extracted from \(G_{P}(Q^{2})\) and \(\tilde{G}_{5}(Q^{2})\) are also in agreement within error bars.
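The model average used here follows the prescription defined earlier in the paper; purely as an illustration of the idea, the following sketch assumes Akaike-information-criterion weights (the actual weighting used may differ), with hypothetical inputs:

```python
import numpy as np

def model_average(values, errors, aic):
    # AIC-based weights w_i ~ exp(-AIC_i / 2), normalized.
    w = np.exp(-0.5 * (np.asarray(aic) - np.min(aic)))
    w /= w.sum()
    mean = np.sum(w * values)
    # statistical part plus the spread between model variations
    var = np.sum(w * np.asarray(errors) ** 2) + np.sum(w * (values - mean) ** 2)
    return mean, np.sqrt(var)

# hypothetical variations, e.g. different Q2_max and prior widths
vals = np.array([8.80, 9.10, 9.00, 8.95])
errs = np.array([0.50, 0.45, 0.40, 0.42])
aics = np.array([12.0, 11.5, 11.8, 12.2])
print(model_average(vals, errs, aics))
```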
If we instead enforce the pion pole and \(g_{\pi NN}\) determined from \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\) to take common values and perform the same analysis, we obtain
\[\begin{split} g_{P}^{*}&=8.99(39)\quad\text{(2-state)}\\ &=8.50(51)\quad\text{(3-state)}\\ g_{\pi NN}&=13.25(67)\quad\text{(2-state)}\\ &=12.56(87)\quad\text{(3-state)}\,.\end{split} \tag{90}\]
These results are in agreement with those where we did not enforce the value of \(g_{\pi NN}\), and have smaller uncertainties thanks to the combined fit approach. Since we have demonstrated that the PPD relation is satisfied at the continuum limit, we opt to quote these as our final results for these quantities. We follow the same strategy as for \(G_{A}(Q^{2})\) and quote the model-averaged value determined using the data from the two-state fit analysis of the correlators, taking as a systematic error the difference between the model-averaged central values of the data from the two- and three-state fits. We find the following values
\[\begin{split} g_{P}^{*}&=8.99(39)(49)[63]\\ g_{\pi NN}&=13.25(67)(69)[96]\end{split} \quad\text{(final value)}\,, \tag{91}\]
where in the square brackets we have summed in quadrature the two errors in parentheses. In Figs. 25 and 26 we depict results on \(G_{P}(Q^{2})\) obtained after taking the model average using the data from the two- or three-state fit analysis, respectively. In Figs. 27 and 28 we show the corresponding results for \(G_{5}(Q^{2})\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \hline \multicolumn{2}{|c|}{Ensemble} & \(m_{\pi}^{G_{P}}\) [MeV] & \(m_{\pi}^{G_{5}}\) [MeV] & \(m_{\pi}^{G_{P},G_{5}}\) [MeV] \\ \hline \hline \multirow{3}{*}{2-state} & cB211.72.64 & 279(27) & 295(27) & 284(21) \\ & cC211.60.80 & 249(22) & 262(22) & 254(17) \\ & cD211.54.96 & 221(17) & 231(17) & 224(13) \\ \hline \multirow{3}{*}{3-state} & cB211.72.64 & 306(45) & 317(44) & 292(35) \\ & cC211.60.80 & 271(37) & 281(37) & 260(29) \\ \cline{1-1} & cD211.54.96 & 238(29) & 246(29) & 229(23) \\ \hline \end{tabular}
\end{table}
Table 10: Pion pole masses per ensemble extracted from the individual or combined fit of \(G_{P}\) and \(\tilde{G}_{5}\) from two- or three-state fit data.
Figure 24: Induced pseudoscalar coupling, \(g_{P}^{*}\), and pion-nucleon coupling, \(g_{\pi NN}\) from a \(z^{3}\)-expansion fit as a function of the largest \(Q^{2}\) used in the fit, \(Q_{\rm max}^{2}\), and the width of the priors. For each \(Q_{\rm max}^{2}\) we depict five points having prior width of \(w=1,2,3,4,5\). The points are shifted to the right as \(w\) increases with an increasing symbol size.
In Table 10, we quote the values of the pion pole masses per ensemble as extracted from the individual or combined fit of \(G_{P}\) and \(\tilde{G}_{5}\) from two- or three-state fit data. We observe an overall good agreement within errors, and the values confirm the agreement already discussed in Sec. VIII.1 with the OS pion mass reported in Table 9.
One can drastically reduce cut-off effects by considering for \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\) the following modified expression
\[G_{\rm improved}(Q^{2},a^{2})=\frac{Q^{2}+m_{\pi,{\rm OS}}^{2}}{Q^{2}+m_{\pi,{ \rm TM}}^{2}}G_{\rm w\,pole}(Q^{2},a^{2})\,. \tag{92}\]
In Fig. 29, we show the improved form factors per ensemble. We observe that upon using the improved expression defined in Eq. (92), the results per ensemble are compatible with each other and with those obtained in the continuum limit by extrapolating \(G_{\rm w\,pole}(Q^{2})\). These findings further confirm the interpretation that the sizable cut-off artifacts in \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\) stem from the cut-off effects in using the OS pion mass for the pole. Since at finite \(a\) we neglect disconnected
Figure 29: Results per gauge ensemble for \(G_{P}(Q^{2})\) (left) and \(\tilde{G}_{5}(Q^{2})\) (right) when using the data for \(G_{\rm w\,pole}(Q^{2})\) (open symbols) compared to those when using \(G_{\rm improved}(Q^{2})\) (filled symbols) of Eq. (92) by correcting for the pole OS pion mass. The continuum limit form factors (red band) are those determined in Fig. 25 and Fig. 27 using the data for \(G_{\rm w\,pole}\).
Figure 27: Results on \(\tilde{G}_{5}(Q^{2})\) as defined in Eq. (86) for each ensemble (blue band for the cB211.72.64, orange band for the cC211.60.80 and green band for the cD211.54.96 ensemble) and at the continuum limit (red band) using the \(z^{3}\)-expansion to fit the \(Q^{2}\)-dependence of the data determined from the two-state fit analysis up to \(Q^{2}=1\) GeV\({}^{2}\).
Figure 26: Results on \(G_{P}(Q^{2})\) using the \(z^{3}\)-expansion to fit the \(Q^{2}\)-dependence of the data determined from the three-state fit analysis of the correlators up to \(Q^{2}=0.47\) GeV\({}^{2}\). The notation is the same as that for Fig. 25.
Figure 28: Results on \(\tilde{G}_{5}(Q^{2})\) as defined in Eq. (86) using the \(z^{3}\)-expansion to fit the \(Q^{2}\)-dependence of the data determined from the three-state fit analysis of the correlators up to \(Q^{2}=0.47\) GeV\({}^{2}\). The notation is the same as that of Fig. 27.
Figure 25: Results on \(G_{P}(Q^{2})\) for each ensemble (blue band for the cB211.72.64, orange band for the cC211.60.80 and green band for the cD211.54.96 ensemble) and at the continuum limit (red band) using the \(z^{3}\)-expansion to fit the \(Q^{2}\)-dependence of the data determined from the two-state fit analysis up to \(Q^{2}=1\) GeV\({}^{2}\).
\(O(a^{2})\) terms in our form factor computation, this is indeed the expected behavior and fully justifies our fit Ansatz in Eq. (85) for the continuum extrapolation of the data when using \(G_{\rm w\,pole}\).
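The size of the pole artifact removed by Eq. (92) can be gauged directly from the pion masses of Table 9; a minimal evaluation of the correction factor in Python:

```python
# Correction factor of Eq. (92): (Q^2 + m_OS^2) / (Q^2 + m_TM^2),
# with the pion masses of Table 9 converted to GeV.
masses = {  # ensemble: (m_pi^TM, m_pi^OS) in GeV
    "cB211.72.64": (0.1402, 0.2975),
    "cC211.60.80": (0.1366, 0.2489),
    "cD211.54.96": (0.1408, 0.2100),
}
Q2 = 0.1  # GeV^2
for ens, (m_tm, m_os) in masses.items():
    print(f"{ens}: {(Q2 + m_os**2) / (Q2 + m_tm**2):.3f}")
# The factor shrinks towards 1 with decreasing lattice spacing, as it must
# in the continuum limit where m_OS -> m_TM -> m_pi.
```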
### Continuum results for \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\)
We follow the same procedure as the one for \(G_{A}\) in Sec. VII.3 to arrive at the \(Q^{2}\) parameterization of \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\). In particular, in Fig. 30 we show for \(G_{P}(Q^{2})\) the analog of Fig. 21 for \(G_{A}(Q^{2})\), comparing results obtained when using data from the two- and three-state fit analyses after removing the pole, namely results for \((Q^{2}+m_{\pi}^{2})G_{P}(Q^{2})\). As in the case of \(G_{A}(Q^{2})\), the data from the three-state analysis of the correlators are in agreement with those from the two-state fit analysis. However, after the continuum extrapolation, the results using the data from the three-state fit analysis yield systematically smaller values for \(G_{P}(Q^{2})\) at higher \(Q^{2}\) values. As we already pointed out, the three-state fit analysis becomes unstable for \(Q^{2}>0.5\) GeV\({}^{2}\), affecting the fits to the \(Q^{2}\)-dependence. Since cut-off effects are larger for \(G_{P}(Q^{2})\), the slope in the linear \(a^{2}\) extrapolation is larger and thus more affected by small fluctuations in the data, given that we only have three lattice spacings. This explains why the continuum results in the two cases differ by up to a standard deviation at large \(Q^{2}\), while the lattice data for the three ensembles are compatible. Having higher statistics will enable us to extract more reliable results using the three-state fit procedure, and having more lattice spacings will better control the continuum extrapolation, something that we plan to do in the future when more computational resources are available.
In the following, we provide parameters for the standard form of the \(z\)-expansion in Eq. (63) with \(k_{\text{max}}=3\), \(t_{\text{cut}}=(3m_{\pi})^{2}\), \(m_{\pi}=0.135\) GeV and \(t_{0}=-m_{\pi}^{2}\). The fit parameters of the two- and three-state fit data curves are given by
\[\vec{a}_{\text{ 2-state}} = \left[4.62(23),-3.0(1.2),-4.7(2.5),-0.1(2.4)\right]\] \[\vec{a}_{\text{ 3-state}} = \left[4.38(30),-3.1(1.5),-5.9(2.6),-2.9(2.0)\right]. \tag{93}\]
As can be seen, the parameters are in agreement albeit some carry large statistical errors and thus, we follow the same strategy as for \(G_{A}(Q^{2})\) for determining the best parametrization of the continuum results and for estimating the errors. Our final parameterization that takes into account systematic errors is
\[\vec{a}_{P}= \big{[}4.62(23)(24)[33],-3.0(1.2)(0.1)[1.2],\] \[-4.7(2.5)(1.2)[2.8],-0.1(2.4)(2.8)[3.7]\big{]}\] \[\text{corr}_{\vec{a},P}= \left(\begin{matrix}1.0&-0.812&0.414&0.151\\ -0.812&1.0&-0.819&0.23\\ 0.414&-0.819&1.0&-0.713\\ 0.151&0.23&-0.713&1.0\end{matrix}\right)\,. \tag{94}\]
The values of \(G_{P}(Q^{2})\) that result from this parametrization are given in Table 12 of the Appendix.
Repeating the same analysis for \(\tilde{G}_{5}(Q^{2})\) after removing the pole, we find the results shown in Fig. 31, namely results for \((Q^{2}+m_{\pi}^{2})\tilde{G}_{5}(Q^{2})\). The behavior of the continuum limit results is the same as that observed for \(G_{P}(Q^{2})\), since both have similar cut-off effects due to the pion pole. The fit parameters of the two- and three-state fit data curves are given by
\[\vec{a}_{\text{ 2-state}} = \left[4.62(23),-2.2(1.2),-2.9(2.4),-1.2(2.4)\right]\] \[\vec{a}_{\text{ 3-state}} = \left[4.38(30),-4.3(1.6),-0.1(2.7),-0.7(2.0)\right]. \tag{95}\]
Our final parameterization, which takes into account systematic errors, is
Figure 31: Results on \((Q^{2}+m_{\pi}^{2})\tilde{G}_{5}(Q^{2})\) at the continuum limit when fitting data extracted from the two- (red band) and three- (blue band) state fit analysis of the correlators. The darker blue curve indicates up to which \(Q^{2}\) we had data for the three-state fit analysis. The yellow band is when we added systematic errors to the parameters that define the red curve as discussed in the text. The parameters of the fit are given in Eq. (96).
Figure 30: Results on \((Q^{2}+m_{\pi}^{2})\,G_{P}(Q^{2})\) at the continuum limit when fitting data extracted from the two- (red band) and three- (blue band) state fit analysis of the correlators. The darker blue curve indicates up to which \(Q^{2}\) we had data for the three-state fit analysis. The yellow band is when we added systematic errors to the parameters that define the red curve as discussed in the text. The parameters of the fit are given in Eq. (94).
\[\begin{split}\vec{a}_{5}=&\big{[}4.62(23)(24)[33],-2.2(1. 2)(2.1)[2.5],\\ &-2.9(2.4)(2.8)[3.7],-1.2(2.4)(0.5)[2.4]\big{]}\\ \mathrm{corr}_{\vec{a},5}=&\left(\begin{array}{ cccc}1.0&-0.804&0.435&0.14\\ -0.804&1.0&-0.825&0.217\\ 0.435&-0.825&1.0&-0.694\\ 0.14&0.217&-0.694&1.0\end{array}\right)\,.\end{split} \tag{96}\]
The values of \(\tilde{G}_{5}(Q^{2})\) that result from this parametrization are given in Table 13 of the Appendix A, where we also provide more information on the form factors at the continuum limit.
### Continuum limit of the PCAC and PPD relations
Having determined the three form factors \(G_{A}(Q^{2})\), \(G_{P}(Q^{2})\), and \(G_{5}(Q^{2})\), we can check the PCAC relation at the continuum limit. We use the values of the fit parameters of the \(z^{3}\)-expansion to the \(Q^{2}\)-dependence after taking the model average for each ensemble. We use the form factors extracted from the two-state fits to the correlators, and we also repeat the analysis using the three-state fits to the correlators. In both cases, we also take the continuum limit of the parameters determined at each lattice spacing, as previously discussed. In Fig. 32, we depict the resulting \(r_{\mathrm{PCAC}}\) as a function of \(Q^{2}\) using data from the two- and three-state fit analysis, upper and lower panels, respectively. As can be seen, in both cases the PCAC relation is recovered in the continuum limit. In addition, we obtain the PCAC ratio in the continuum limit by using the final parameterizations of the form factors that take into account the systematic uncertainty due to how we treat excited states, i.e. the difference of central values when we use two- or three-state fits, namely the results shown by the yellow band of Figs. 21, 30 and 31 for \(G_{A}(Q^{2})\), \((m_{\pi}^{2}+Q^{2})G_{P}(Q^{2})\), and \((m_{\pi}^{2}+Q^{2})\tilde{G}_{5}(Q^{2})\), respectively. As expected, the PCAC relation is recovered, but the systematic error due to the treatment of excited states increases the error band. For comparison, we plot in Fig. 33, in the same format, the results for the ratio \(r_{\mathrm{PPD},1}\). It is no surprise that it also fulfills pion pole dominance in the continuum limit, as already discussed in relation to Fig. 22. As in the case of \(r_{\mathrm{PCAC}}\), we show both the continuum limit curve extracted using the data from the two-state fit analysis and the one where we include as a systematic uncertainty the difference between the central values of the fit parameters determined using data from the two- and three-state fit analyses.
The two sets of results are in agreement, showing that cut-off effects are mild for these quantities. The error on \(g_{A}\) increases after taking the continuum limit, while the error on the axial radius is approximately the same. The fact that the errors on \(g_{P}^{*}\) and \(g_{\pi NN}\) are much smaller is a combination of two things: i) taking the continuum limit and ii) in our previous work we used the PCAC and PPD relations and lattice QCD data on \(G_{A}(Q^{2})\), which is more precisely determined. The reason was that with one lattice spacing, we could not account for the large cut-off effects on \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\), leading to a violation of the PCAC relation. In this work, \(g_{P}^{*}\) and \(g_{\pi NN}\) are determined directly from our data on \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\), although, as shown in this work, in the continuum limit the PCAC relation holds and could be used to determine them. We note that the trend we observe of errors becoming larger in a number of quantities highlights the importance of having results using ensembles with smaller lattice spacings. However, as is well known, simulations for lattice spacing \(a<0.05\) fm become difficult due to the increase in the autocorrelation time. There are ongoing efforts to address the critical slowing down of Hybrid Monte Carlo simulations [75; 76; 77].
The nucleon axial charge and radius as well as the coupling constants \(g_{P}^{*}\) and \(g_{\pi NN}\) are compared to other recent lattice QCD results in Fig. 35. We selected studies that provide results at the continuum limit and at the physical pion mass, either computed directly like ours or via a combined chiral and continuum extrapolation. There is a nice agreement among all lattice QCD results on these quantities which are defined either at \(Q^{2}=0\) or at the limit \(Q^{2}\to 0\).
Figure 37: Results on \(G_{P}(Q^{2})\) determined directly from our lattice data using the parameters \(a\)\({}_{2\text{-state}}\) (red solid line and band) and when including the systematic uncertainty as the difference in the central values of \(a\)\({}_{2\text{-state}}\) and \(a\)\({}_{3\text{-state}}\) (yellow band) are compared to those obtained by using PPD given in Eq. (20) and to our data on \(G_{A}(Q^{2})\) (dark blue and light blue when including the systematic error).
Figure 36: Top: \(G_{A}(Q^{2})\) determined within this work using the parameters \(a\)\({}_{2\text{-state}}\) (red solid line and band) and when including the systematic uncertainty as the difference in the central values of \(a\)\({}_{2\text{-state}}\) and \(a\)\({}_{3\text{-state}}\) (yellow band). We compare to the fit to the deuterium bubble-chamber data [25] shown by the green dashed line with error band and with the fit to the recent MINER\(\nu\)A antineutrino-hydrogen data [1] shown by the blue dot-dashed line with error band. Bottom: \(G_{A}(Q^{2})\) determined within this work compared to two recent lattice QCD calculations: i) by the Mainz group [22] shown with the gray dashed line with its error band, and ii) by PNDME [23] shown with the blue dashed line with its error band.
Figure 35: From left to right we show recent lattice QCD results on \(g_{A}\), \(\langle r_{A}^{2}\rangle\), \(g_{P}^{*}\) and \(g_{\pi NN}\). Our results are shown with the red star and red error band. The blue triangles show the recent results by PNDME [23], the green triangles by RQCD [18; 73], the yellow squares by NME [46], the gray diamonds by the Mainz group [22] and the magenta square by CaILat [14]. The cyan circle shows the FLAG21 average of lattice results published at the time of the report [74].
Figure 34: From left to right we show our lattice QCD results from this work on \(g_{A}\), \(\langle r_{A}^{2}\rangle\), \(g_{P}^{*}\) and \(g_{\pi NN}\) (red stars). The open circles show results extracted using only the cB211.72.64 [19] and the PCAC and PPD relations to extract \(g_{P}^{*}\) and \(g_{\pi NN}\) from \(G_{A}(Q^{2})\).
In Fig. 36 we compare our final parameterization of \(G_{A}(Q^{2})\) given in Eq. (78) with fits to experimental data on \(G_{A}(Q^{2})\) and with fits to data computed by other lattice QCD groups. Our results fall off more slowly than the fits to experimental data. While our results are within two standard deviations of the recent results from the MINER\(\nu\)A experiment [1], they show more tension with the fit to the deuterium bubble-chamber data [25]. In addition, our results are in good agreement with the results by the Mainz group [22] and close to the results by PNDME [23] and NME [46, 78].
We comment below on some aspects of the lattice QCD calculations:
* The results of this work are the only ones that are extrapolated to the continuum limit using only ensembles simulated directly with physical pion mass.
* The other collaborations combined results extracted using ensembles simulated with larger-than-physical pion masses to extrapolate to the physical pion mass and to the continuum limit. Specifically, NME [46] uses no physical point ensembles, the Mainz group [22] uses one (with their physical-point results having large errors), and RQCD [18] and PNDME [23] use two physical pion mass ensembles.
* In the case of the RQCD [18] and PNDME [23], results using the physical pion mass ensembles have larger statistical errors as compared to those of ensembles with heavier than physical mass. This means that results at the physical point weigh less in the extrapolation. Additionally, in both studies, the form factors are computed using the physical pion mass ensembles only for \(Q^{2}\lesssim 0.3\) GeV\({}^{2}\) and information at high \(Q^{2}\) is provided by a subset of the ensembles. We instead compute the form factors up to \(Q^{2}=1\) GeV\({}^{2}\).
* The PNDME collaboration [23] uses a \(N_{f}=2+1+1\) mixed action of clover fermions on a staggered sea. They have employed thirteen ensembles simulated at four values of the lattice spacing, three values of the pion mass (135, 220, and 310 MeV), and volumes with \(3.7\leq m_{\pi}L\leq 5.5\). Their axial-vector current is unimproved which means they have \(\mathcal{O}(a)\) cut-off effects. Nevertheless, they observe mild dependence on the lattice spacing. They also do not observe any significant lattice volume dependence, while they do see a stronger dependence on the pion mass. In this work, we use ensembles with approximately the same volume, namely with \(3.6\leq m_{\pi}L\leq 3.9\). Given the volume study by PNDME [23], we expect finite size effects on our results to be small.
PNDME also carried out an elaborate study of excited states highlighting the effects of \(\pi N\) states and concluded that an approach compatible with the one employed in this work is the most suitable. Namely, they propose performing a combined fit of all matrix elements at the same \(Q^{2}\) using common fit parameters for the excited states. However, they do not include a systematic error due to excited states and this explains why their results have a smaller error band.
* The Mainz group [22] has used fourteen CLS \(N_{f}=2+1\) ensembles simulated with clover-improved Wilson fermions at four values of the lattice spacing, pion masses in the range \(130\,\mathrm{MeV}\leq m_{\pi}\leq 350\,\mathrm{MeV}\), and volumes with \(3.9\leq m_{\pi}L\leq 5.9\). Their current is \(\mathcal{O}(a)\)-improved and they observe a mild dependence on the lattice spacing, while a stronger dependence on the pion mass, which requires the inclusion of higher order corrections that are not considered by PNDME. The Mainz group also includes a systematic error in a similar way to what we do.
* The RQCD collaboration [18] uses the same CLS ensembles as the Mainz group, but thirty-seven of them, having five values of the lattice spacing, pion masses in the range \(130\,\mathrm{MeV}\leq m_{\pi}\leq 420\,\mathrm{MeV}\) and volumes with \(3.5\leq m_{\pi}L\leq 6.4\). Their physical point limit also involves a limit to the physical strange quark mass that is not required in the set of ensembles used by the Mainz group. They perform a thorough study of excited states including the effect of \(\pi N\) states. They also observe a strong dependence on the pion mass. They have provided results using dipole fits (labeled \(12P\)) or using \(z\)-expansion (labeled \(1z^{4+3}\)), without selecting one of the two as the final value. For this reason, we report two sets of points in Fig. 35 for RQCD19. In their recent work [73], they have provided a new value for the axial charge extracted from the analysis of matrix elements at zero momentum transfer and from an analysis of ten additional ensembles having four at the physical pion mass.
* NME [46] has used seven \(N_{f}=2+1\) Wilson-clover fermions ensembles simulated at five values of the lattice spacing, with pion masses in the range \(166\,\mathrm{MeV}\leq m_{\pi}\leq 285\,\mathrm{MeV}\) and volumes with \(3.9\leq m_{\pi}L\leq 6.2\). Their axial-vector current is not \(\mathcal{O}(a)\)-improved and they observe strong lattice spacing effects and pion mass dependence. The analysis of the excited states is compatible with the one carried out by PNDME and they include an elaborated study of excited states using priors around the \(\pi N\) excited states. For this case, we only show the values of the coupling constants and axial radius in Fig. 35.
* The PACS collaboration computed \(G_{A}(Q^{2})\), \(G_{P}(Q^{2})\) and \(G_{5}(Q^{2})\) using \(N_{f}=2+1\) clover-improved fermions and a large spatial volume of
length \(L=8.1\) fm [17], a pion mass of 146 MeV, and a lattice spacing of \(a=0.085\) fm. However, they only have time separations up to \(t_{s}\sim 1.3\) fm and only perform plateau fits to individual form factors. They also have results for these form factors only for lower \(Q^{2}\) values, up to \(\sim 0.25\) GeV\({}^{2}\). Since their results are given only at one lattice spacing, they are not included in the comparisons.
While \(G_{A}(Q^{2})\) is determined by a number of lattice QCD collaborations, there are scarce results on \(G_{P}(Q^{2})\) and, to our knowledge, this work is the first to compute \(G_{5}(Q^{2})\) at the continuum limit. Experimental studies also probe \(G_{A}(Q^{2})\), and one could use PCAC and PPD to estimate \(G_{P}(Q^{2})\). In Fig. 37, we show our results from a _direct_ evaluation of \(G_{P}(Q^{2})\) in comparison with the ones obtained using our data on \(G_{A}(Q^{2})\) and Eq. (20) to extract \(G_{P}(Q^{2})\). As can be seen, the results are in perfect agreement within the uncertainties. Therefore, one would be justified to use Eq. (20) and the experimental data on \(G_{A}(Q^{2})\) to estimate \(G_{P}(Q^{2})\).
## X Conclusions
In this work, we present results on the axial, induced pseudoscalar, and pseudoscalar form factors in the continuum limit. This study is performed using three \(N_{f}=2+1+1\) twisted-mass ensembles with all quark masses tuned to their physical value and simulated at three values of the lattice spacing. Our analysis is also done up to \(Q^{2}=1\) GeV\({}^{2}\) as compared to some other lattice QCD studies where, for physical point ensembles, only smaller values of \(Q^{2}\) were accessible. Our final values for the nucleon axial charge and radius as well as the coupling constants \(g_{P}^{*}\) and \(g_{\pi NN}\) are
\[\begin{split} g_{A}&=1.245(28)(14)[31]\\ \langle r_{A}^{2}\rangle&=0.339(48)(06)[48]\ {\rm fm}^{2} \\ g_{P}^{*}&=8.99(39)(49)[63]\\ g_{\pi NN}&=13.25(67)(69)[96]\,,\end{split} \tag{97}\]
where the central values and the first error in the parenthesis are obtained from an analysis of data extracted from the two-state fits to the correlators, the second error is the systematic error due to the excited states computed as the difference between the central values from using the data extracted from the two- and three-state fit analysis of the correlators and the third error in the square brackets is the total error obtained by summing in quadrature the first two. In Appendix A, we provide the final parameterization and values of the form factors at the continuum limit with and without systematic uncertainties due to the excited states.
From our analysis of the ratio \(r_{\rm PPD,2}\) defined in Eq. (27) we also determine the values of the Goldberger-Treiman discrepancy and the low-energy constant \(\bar{d}_{18}\)
\[\begin{split}\Delta_{\rm GT}&=2.13(38)\%\\ \bar{d}_{18}&=-0.73(13)\ {\rm GeV}^{-2}\,.\end{split} \tag{98}\]
Our results on \(G_{A}(Q^{2})\) are in good agreement with other recent lattice QCD studies. Having taken the continuum limit using only ensembles at the physical pion mass, we avoid a chiral extrapolation that, in the nucleon sector, can lead to an uncontrolled systematic error. An advantage of this setup is that it allows us to directly access cut-off effects. We find that for \(G_{A}(Q^{2})\), cut-off effects for the range of lattice spacings used are mild, ranging from not detectable within our errors at low \(Q^{2}\) to slightly positive at high \(Q^{2}\). On the other hand, the induced pseudoscalar and the pseudoscalar form factors exhibit similarly large cut-off effects that can be traced back to the known \(O(a^{2})\) artifacts in the pion pole mass. Such cut-off effects are expected in the twisted mass fermion formulation used in this work for the computation of these form factors, and as such they can be conveniently parameterized in our continuum extrapolation fits. Alternatively, they can also be substantially reduced by considering a modified expression for the pole of the form factors. As shown in this work, the important conclusion is that in the continuum limit all cut-off effects are safely eliminated, as expected. In particular, both the pion pole dominance close to \(Q^{2}=-m_{\pi}^{2}\), with \(m_{\pi}=135\) MeV, and the fundamental PCAC relation that follows from QCD chiral Ward identities, are fully recovered.
###### Acknowledgements.
We thank all members of ETMC for the most enjoyable collaboration. C.A. acknowledges partial support by the project 3D-nucleon, id number EXCELENCE/0421/0043, co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation. S.B. is funded by the project QC4LGT, id number EXCELENCE/0421/0019, co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation. J.F. acknowledges support by the German Research Foundation (DFG) research unit FOR5269 "Future methods for studying confined gluons in QCD". S.B. and J.F. also acknowledge funding from the EuroCC project (grant agreement No. 951740). G.K. acknowledges partial support by the project NiceQuarks, id number EXCELENCE/0421/0195, co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation. M.C. acknowledges financial support from the U.S. Department of Energy, Office of Nuclear Physics, Early Career Award under Grant No. DE-SC0020405. G.S. acknowledges financial support from the European Regional Development Fund and the Republic of Cyprus through
the Research and Innovation Foundation under contract number EXCELLENCE/0421/0025. This work was supported by grants from the Swiss National Supercomputing Centre (CSCS) under projects with ids s702 and s1174. We also acknowledge PRACE for awarding us access to Piz Daint, hosted at CSCS, Switzerland, and Marconi100, hosted at CINECA, Italy. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS-Booster [79] at Jülich Supercomputing Centre (JSC). Part of the results were created within the EA program of JUWELS Booster also with the help of the JUWELS Booster Project Team (JSC, Atos, ParTec, NVIDIA). We thank the developers of the QUDA [80, 81, 82] library for their continued support, without which the calculations for this project would not have been possible. Ensemble production for this analysis made use of tmLQCD [83, 84], DD-\(\alpha\)AMG [85, 86, 87].
## Appendix A Results on the axial and pseudoscalar form factors
In this appendix, we collect our results on the two axial form factors \(G_{A}(Q^{2})\) and \(G_{P}(Q^{2})\) and the pseudoscalar form factor \(G_{5}(Q^{2})\) computed at the continuum limit. The \(Q^{2}\)-dependence of the form factors is parameterized using a \(z^{3}\)-expansion of the form
\[G(Q^{2})=\sum_{k=0}^{3}a_{k}\ z^{k}(Q^{2}), \tag{10}\]
where
\[z(Q^{2})=\frac{\sqrt{t_{\rm cut}+Q^{2}}-\sqrt{t_{\rm cut}+t_{0}}}{\sqrt{t_{ \rm cut}+Q^{2}}+\sqrt{t_{\rm cut}+t_{0}}} \tag{11}\]
with \(t_{\rm cut}=\left(3m_{\pi}\right)^{2}\), \(m_{\pi}=0.135\) GeV and \(t_{0}\) chosen at convenience as discussed below. As discussed, taking \(k_{\rm max}=3\) we obtained results that are stable as compared to taking higher orders in the \(z\)-expansion.
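For convenience, a minimal Python implementation of this parameterization (using the final central values for \(G_{A}\) quoted later in this appendix):

```python
import numpy as np

M_PI = 0.135             # GeV
T_CUT = (3 * M_PI) ** 2  # GeV^2

def z(Q2, t0):
    s, s0 = np.sqrt(T_CUT + Q2), np.sqrt(T_CUT + t0)
    return (s - s0) / (s + s0)

def G(Q2, a, t0=0.0):
    # z^3-expansion: G(Q2) = sum_k a_k z^k(Q2)
    return sum(ak * z(Q2, t0) ** k for k, ak in enumerate(a))

a_final = [1.245, -1.19, -0.54, -0.1]  # final central values for G_A (t0 = 0)
print(G(0.0, a_final))  # = a0 = g_A, since z(0) = 0 for t0 = 0
print(G(1.0, a_final))  # G_A at Q^2 = 1 GeV^2
```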
### Axial form factor \(G_{A}(Q^{2})\)
In Table 11, we provide values for \(G_{A}(Q^{2})\) up to 1 GeV\({}^{2}\). For this form factor, we use \(t_{0}=0\) GeV\({}^{2}\). The values of the fit parameters of two-state fit data are given by
\[\vec{a}_{\ \rm 2\text{-state}}=[1.245(28),-1.19(18),-0.54(55),-0.13( 59)]\] \[\text{corr }_{\ \rm 2\text{-state}}=\begin{pmatrix}1.0&-0.421&0.247& -0.246\\ -0.421&1.0&-0.918&0.799\\ 0.247&-0.918&1.0&-0.952\\ -0.246&0.799&-0.952&1.0\end{pmatrix}\,. \tag{12}\]
\begin{table}
\begin{tabular}{c|c} \hline \hline \(Q^{2}\) [GeV\({}^{2}\)] & \(G_{A}\) \\ \hline
[MISSING_PAGE_POST]
\hline \hline \end{tabular}
\end{table}
Table 11: Values for \(G_{A}(Q^{2})\) in the continuum limit as a function of \(Q^{2}\). We provide values for 21 points uniformly distributed in the range \(Q^{2}\in[0,1]\) GeV\({}^{2}\). The central values and first errors are obtained from the \(z^{3}\)-expansion fitted to the two-state fit data. The second error is the systematic error due to excited states, computed as explained in the text, namely by the difference between the central values of the \(z^{3}\)-expansion parameters when fitting the two- or three-state fit data. The third error is the total error obtained by summing in quadrature the first two.
The fit parameters of three-state fit data are given by
\[\vec{a}_{\text{ 3-state}}=[1.231(34),-1.16(27),-0.80(47),-1.23(58)] \tag{4}\] \[\text{corr 3-state}=\begin{pmatrix}1.0&-0.575&0.116&-0.051\\ -0.575&1.0&-0.5&0.046\\ 0.116&-0.5&1.0&-0.52\\ -0.051&0.046&-0.52&1.0\end{pmatrix}\,.\]
The fit parameters of the final curve, when we include the systematic error taken as the difference \(|a_{\text{2-state}}-a_{\text{3-state}}|\) that quantifies systematic uncertainties in the analysis of the excited states, are given by
\[\vec{a}_{\text{ final}}=[1.245(31),-1.19(18),-0.54(61),-0.1(1.3)] \tag{5}\] \[\text{corr }_{\text{ final}}=\begin{pmatrix}1.0&-0.421&0.247&-0.246\\ -0.421&1.0&-0.918&0.799\\ 0.247&-0.918&1.0&-0.952\\ -0.246&0.799&-0.952&1.0\end{pmatrix}\,.\]
The final form factor reproduces the quoted values of \(g_{A}\) and \(\langle r_{A}\rangle\), namely
\[g_{A} =1.245(31) \tag{6}\] \[\langle r_{A}^{2}\rangle =0.339(49)\text{ fm}^{2}\,.\]
\(G_{A}(Q^{2})\) and its derivative \(G_{A}^{\prime}(Q^{2})\) also have the correct limit as \(Q^{2}\to\infty\) having the values
\[G_{A}(\infty) =\sum_{k}a_{k}=-0.62(82) \tag{7}\] \[G_{A}^{\prime}(\infty) =\sum_{k}ka_{k}=-2.7(2.8),\]
both compatible with zero.
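The quoted asymptotic values follow from \(z\to 1\) as \(Q^{2}\to\infty\) (for \(t_{0}=0\)) together with the two-state central values; a one-line check in Python:

```python
# Q^2 -> infinity limits: G_A(inf) = sum_k a_k and G_A'(inf) ~ sum_k k*a_k,
# evaluated with the two-state central values quoted above.
a = [1.245, -1.19, -0.54, -0.13]
print(sum(a))                                 # -0.615, cf. -0.62(82)
print(sum(k * ak for k, ak in enumerate(a)))  # -2.66,  cf. -2.7(2.8)
```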
### Induced pseudoscalar axial form factor \(G_{p}(q^{2})\)
In Table 12, we provide values for \((Q^{2}+m_{\pi}^{2})\,G_{P}(Q^{2})\) up to 1 GeV\({}^{2}\). For this form factor, we use \(t_{0}=-m_{\pi}^{2}\). The values of the fit parameters of the two-state fit data are given by
\[\vec{a}_{\text{ 2-state}}=[4.62(23),-3.0(1.2),-4.7(2.5),-0.1(2.4)] \tag{8}\] \[\text{corr 2-state}=\begin{pmatrix}1.0&-0.812&0.414&0.151\\ -0.812&1.0&-0.819&0.23\\ 0.414&-0.819&1.0&-0.713\\ 0.151&0.23&-0.713&1.0\end{pmatrix}\,.\]
The fit parameters of three-state fit data are given by
\[\vec{a}_{\text{ 3-state}}=[4.38(30),-3.1(1.5),-5.9(2.6),-2.9(2.0)] \tag{9}\] \[\text{corr 3-state}=\begin{pmatrix}1.0&-0.795&0.342&0.214\\ -0.795&1.0&-0.712&-0.292\\ 0.342&-0.712&1.0&0.129\\ 0.214&-0.292&0.129&1.0\end{pmatrix}\,.\]
The fit parameters of the final curve, when we include the systematic error taken as the difference \(|a_{\text{2-state}}-a_{\text{3-state}}|\) that quantifies systematic uncertainties in the analysis of the excited states, are given by
\[\vec{a}_{\text{ final}}=[4.62(33),-3.0(1.2),-4.7(2.8),-0.1(3.7)] \tag{10}\] \[\text{corr }_{\text{ final}}=\begin{pmatrix}1.0&-0.812&0.414&0.151\\ -0.812&1.0&-0.819&0.23\\ 0.414&-0.819&1.0&-0.713\\ 0.151&0.23&-0.713&1.0\end{pmatrix}\,.\]
The final form factor reproduces the quoted values of \(g_{\pi NN}\) and \(g_{P}^{*}\), namely
\[g_{\pi NN}=13.25(96)\quad\text{and}\quad g_{P}^{*}=8.99(63)\,. \tag{11}\]
### Pseudoscalar form factor \(G_{5}(q^{2})\)
In Table 13, we provide values for \((Q^{2}+m_{\pi}^{2})\,\tilde{G}_{5}(Q^{2})\) up to 1 GeV\({}^{2}\). \(\tilde{G}_{5}(Q^{2})\) is defined as
\[\tilde{G}_{5}(Q^{2})=\frac{4m_{N}}{m_{\pi}^{2}}m_{q}G_{5}(Q^{2}), \tag{12}\]
where \(m_{N}=0.938\) GeV and \(m_{q}=3.636(89)\) MeV [53] in the \(\overline{\text{MS}}\)(2 GeV) scheme at the continuum limit. For this form factor, we use \(t_{0}=-m_{\pi}^{2}\). The values of the fit parameters of the two-state fit data are given by
\[\vec{a}_{\text{ 2-state}}=[4.62(23),-2.2(1.2),-2.9(2.4),-1.2(2.4)] \tag{13}\] \[\text{corr 2-state}=\begin{pmatrix}1.0&-0.804&0.435&0.14\\ -0.804&1.0&-0.825&0.217\\ 0.435&-0.825&1.0&-0.694\\ 0.14&0.217&-0.694&1.0\end{pmatrix}\,.\]
The fit parameters of three-state fit data are given by
\[\vec{a}_{\text{ 3-state}}=[4.38(30),-4.3(1.6),-0.1(2.7),-0.7(2.0)]\] \[\text{corr 3-state}=\begin{pmatrix}1.0&-0.782&0.422&0.265\\ -0.782&1.0&-0.802&-0.338\\ 0.422&-0.802&1.0&0.117\\ 0.265&-0.338&0.117&1.0\end{pmatrix}\,. \tag{100}\]
The fit parameters of the final curve, when we include the systematic error taken as the difference \(|a_{\text{2-state}}-a_{\text{3-state}}|\) that quantifies systematic uncertainties in the analysis of the excited states, are given by
\[\vec{a}_{\text{ final}}=[4.62(33),-2.2(2.5),-2.9(3.7),-1.2(2.4)]\] \[\text{corr }_{\text{ final}}=\begin{pmatrix}1.0&-0.804&0.435&0.14\\ -0.804&1.0&-0.825&0.217\\ 0.435&-0.825&1.0&-0.694\\ 0.14&0.217&-0.694&1.0\end{pmatrix}\,. \tag{101}\]
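For reference, the dimensionless prefactor relating \(G_{5}(Q^{2})\) to \(\tilde{G}_{5}(Q^{2})\) in the definition above can be evaluated directly from the quoted inputs:

```python
# Prefactor 4 m_N m_q / m_pi^2 in the definition of G_5-tilde, with
# m_N = 0.938 GeV, m_pi = 0.135 GeV and m_q = 3.636 MeV (MS-bar, 2 GeV).
m_N, m_pi, m_q = 0.938, 0.135, 3.636e-3  # GeV
print(f"{4 * m_N * m_q / m_pi**2:.4f}")  # ~ 0.749, so G_5-tilde ~ 0.75 * G_5
```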
|
2309.13116 | Jordan-Wigner composite-fermion liquids in 2D quantum spin-ice | The Jordan-Wigner map in 2D is an exact lattice regularization of the 2
pi-flux attachment to a hard-core boson (or spin-1/2) leading to a
composite-fermion particle. When the spin-1/2 model obeys ice rules this map
preserves locality, namely, local Rokhsar-Kivelson models of spins are mapped
onto local models of Jordan-Wigner/composite-fermions. Using this
composite-fermion dual representation of RK models, we construct spin-liquid
states by projecting Slater determinants onto the subspaces of the ice rules.
Interestingly, we find that these composite-fermions behave as ``dipolar"
partons for which the projective implementations of symmetries are very
different from standard ``point-like" partons. We construct interesting
examples of composite-fermion liquid states that respect all microscopic
symmetries of the RK model. In the six-vertex subspace, we constructed a
time-reversal and particle-hole-invariant state featuring two massless Dirac
nodes, which is a composite-fermion counterpart to the classic pi-flux state of
Abrikosov-Schwinger fermions in the square lattice. This state is a good ground
state candidate for a modified RK-like Hamiltonian of quantum spin-ice. In the
dimer subspace, we construct a state featuring a composite Fermi surface but
with nesting instabilities towards ordered phases such as the columnar state.
We have also analyzed the low energy emergent gauge structure. If one ignores
confinement, the system would feature a U(1) x U(1) low energy gauge structure
with two associated gapless photon modes, but with the composite fermion
carrying gauge charge only for one photon and behaving as a gauge neutral
dipole under the other. These states are examples of pseudo-scalar U(1) spin
liquids where mirror and time-reversal symmetries act as particle-hole
conjugations, and the emergent magnetic fields are even under such
time-reversal or lattice mirror symmetries. | Leonardo Goller, Inti Sodemann Villadiego | 2023-09-22T18:00:11Z | http://arxiv.org/abs/2309.13116v1 | # Jordan-Wigner Composite-fermion liquids in 2D quantum spin-ice
###### Abstract
The Jordan-Wigner map in 2D is an exact lattice regularization of the \(2\pi\)-flux attachment to a hard-core boson (or spin-\(1/2\)) leading to a composite-fermion particle. When the spin-\(1/2\) model obeys ice rules this map preserves locality, namely, local Rokhsar-Kivelson models of spins are mapped onto local models of Jordan-Wigner/composite-fermions. Using this composite-fermion dual representation of RK models, we construct spin-liquid states by projecting Slater determinants onto the subspaces of the ice rules. Interestingly, we find that these composite-fermions behave as "dipolar" partons for which the projective implementations of symmetries are very different from standard "point-like" partons. We construct interesting examples of these composite-fermion liquid states that respect all microscopic symmetries of the RK model. In the six-vertex subspace, we construct a time-reversal and particle-hole-invariant state featuring two massless Dirac nodes, which is a composite-fermion counterpart to the classic \(\pi\)-flux state of Abrikosov-Schwinger fermions in the square lattice. This state is a good ground state candidate for a modified RK-like Hamiltonian of quantum spin-ice. In the dimer subspace, we construct a state featuring a composite Fermi surface but with nesting instabilities towards ordered phases such as the columnar state. We have also analyzed the low energy emergent gauge structure. If one ignores confinement, the system would feature a \(U(1)\times U(1)\) low energy gauge structure with two associated gapless photon modes, but with the composite fermion carrying gauge charge only for one photon and behaving as a gauge neutral dipole under the other. These states are examples of pseudo-scalar \(U(1)\) spin liquids where mirror and time-reversal symmetries act as composite-fermion particle-hole conjugations, and the emergent magnetic fields are even under such time-reversal or lattice mirror symmetries.
###### Contents

* I. Equivalence of Jordan-Wigner transformation and flux attachment in 2D
* II. Jordan Wigner/composite fermions as extended partons in 2D quantum spin-ice
  * II.A. 2+1D quantum spin-ice and its Jordan-Wigner composite fermion representation
  * II.B. Review of Abrikosov-Schwinger parton states
  * II.C. Extended parton states for quantum spin-ice
    * II.C.1. Symmetry implementation on JW composite fermions: general considerations
    * II.C.2. A specific implementation of symmetries of 2D quantum spin-ice on JW composite fermions
    * II.C.3. Connection to pseudo-scalar spin liquids
* III. Gauge field fluctuations and effective low energy continuum field theory
  * III.A. Review of gauge field fluctuations for \(U(1)\) spin liquids from standard parton constructions
  * III.B. Gauge field fluctuations for \(U(1)\) spin liquids from extended parton constructions in 2D quantum spin-ice
  * III.C. Gauge field and matter couplings, low energy effective field theory and dipolar nature of composite fermions
    * III.C.1. \(\mathbf{p}=(0,0)\) scattering terms
    * III.C.2. \(\mathbf{p}=(\pi,\pi)\) scattering terms
    * III.C.3. \(U(1)\times U(1)\) gauge structure
* IV. Summary and discussion
* Appendix A. \(2\pi\) flux attachment equivalence
* Appendix B. Projective symmetry implementations
  * B.1. \(\frac{\pi}{2}\) rotation enforcement
  * B.2. Time-reversal enforcement
  * B.3. Reflection symmetries enforcement
  * B.4. Particle-hole enforcement
* Appendix C. Low energy effective theory derivation
  * C.1. \(\mathbf{q}=(0,0)\) scattering processes analysis
  * C.2. \(\mathbf{q}=(\pi,\pi)\) momentum scattering processes

## Introduction
The appearance of fermionic particles in systems whose microscopic building blocks are spins or bosons is a remarkable example of the emergence of non-local excitations in quantum states of matter. An exact and powerful map that allows one to understand this phenomenon in one dimension is the Jordan-Wigner transformation, which provides a re-writing of one-dimensional spin-\(\frac{1}{2}\) models in terms of fermions. In two dimensions the Jordan-Wigner transformation is closely related to another celebrated statistical transmutation procedure known as flux attachment [1; 2; 3; 4; 5; 6]. More specifically, as we will review in detail in Sec. I, a standard Jordan-Wigner fermion constructed by ordering spin-\(\frac{1}{2}\) operators in a 2D lattice is exactly equivalent to a hard-core boson carrying a fictitious solenoid of \(2\pi\)-flux. In this sense, the 2D Jordan-Wigner fermion is an exact lattice regularized version of the composite fermion particle that is commonly used to understand certain quantum Hall states emerging from microscopic bosons, such as those forming the bosonic composite Fermi liquid state at filling \(\nu=1\)[7; 8; 9].
This 2D Jordan-Wigner/flux-attachment equivalence has been exploited in several studies of non-trivial quantum disordered states ("spin liquids") and their competition with traditional ordered phases [10; 11; 12; 13; 14; 15; 16; 17; 18]. One of the central challenges with the models investigated in these previous studies is that the Jordan-Wigner/flux-attachment map in 2D does not preserve space locality, in the sense that not all local spin-\(\frac{1}{2}\) operators appearing in the Hamiltonian are mapped onto local fermionic operators. This sharply contrasts with the situation in 1D, where simply imposing a global parity symmetry guarantees that local spin Hamiltonians map onto local fermionic Hamiltonians. In most of the 2D studies this difficulty is dealt with in a non-systematic manner by adding background magnetic fields that account for the relation between particle density and flux in an average fashion, similarly to how it is done in mean-field treatments of composite fermions in quantum Hall states [19; 20; 21; 22].
Recently, however, it has been emphasized that other kinds of exact Jordan-Wigner-like maps in 2D are possible which in some sense preserve space locality [23; 24; 25; 26]. This is achieved by imposing local conservation of certain \(\mathbb{Z}_{2}\)-valued operators and thus endowing the Hilbert space of spin-\(\frac{1}{2}\) degrees of freedom with a \(\mathbb{Z}_{2}\) lattice gauge theory structure. The gauge invariant spin operators (namely those commuting with the \(\mathbb{Z}_{2}\) conservation laws) can then be mapped exactly onto bilinears of fermion operators. The single fermion creation operator remains non-local and can be explicitly constructed as a Jordan-Wigner-like string operator in 2D [23; 24; 25; 26]. This construction realizes an exact lattice regularization of a different kind of flux attachment, namely the one associated with a mutual Chern-Simons theory comprised of two U(1) gauge fields and the following \(K\)-matrix:
\[K=\begin{pmatrix}0&2\\ 2&0\end{pmatrix},\]
which is the Chern-Simons description of the topological order associated with Kitaev's Toric code model [23; 24; 25; 26], and the Jordan-Wigner fermions are the \(\epsilon\) particles, while the operator associated with the local \(\mathbb{Z}_{2}\) conservation law measures the parity of one of the other self-bosonic anyons, e.g. the \(e\) or \(m\) particle. For related constructions see also [27; 28; 29; 30; 31; 32; 33].
Motivated by these precedents, our current work investigates systems with a different kind of local conservation law that allows one to preserve the locality of the usual Jordan-Wigner map in 2D associated with attaching \(2\pi\)-flux to a hard-core boson. Our local conservation laws will be associated with a two dimensional spin "ice rule" with a correspondingly conserved local operator that generates a U(1) gauge group, and the models of interest will be the classic 2D Rokhsar-Kivelson-like (RK) Hamiltonians [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46]1. Similar to the situation in 1D, these models will remain local under Jordan-Wigner maps; however, the price we will have to pay for this is that the resulting fermionic model will be necessarily interacting and endowed with local conservation laws. Despite this, the advantage of re-writing the RK models in terms of fermionic variables is that they are much more flexible degrees of freedom to construct non-trivial quantum disordered states than the original spin degrees of freedom. For example, simple fermionic Slater determinant states would already serve as a mean-field approximation to describe quantum spin liquid states. There is however a crucial caveat to this mean field approach, which is that generically free fermion Slater determinant states will not obey the local spin-ice rules. In other words, the naive free fermion states would break the local gauge invariance and violate Elitzur's theorem [48]. To remedy this, we will project the free-fermion states onto the subspace of the Hilbert space satisfying the ice rules, in an analogous fashion to how the Gutzwiller projection is employed in parton constructions of spin liquid states [49]. Despite the similarity of spirit, we find that these states differ in crucial aspects from the standard parton constructions, such as the Abrikosov-Schwinger fermions [50; 51; 52].
One of the key distinctions is that the Abrikosov-Schwinger fermion of standard parton constructions behaves as a point-like object under the parton gauge transformations, whereas the Jordan-Wigner fermion behaves as a dipole-like object under the local \(U(1)\) gauge transformations of the RK models. This is because the Abrikosov-Schwinger fermion operator at a given lattice site only transforms non-trivially under the parton gauge transformations acting on its site, but it transforms trivially under gauge transformations of different sites. In
other words, the creation of a single Abrikosov-Schwinger fermion would violate the constraint defining the physical Hilbert space only at a single lattice site, and in this sense it is point-like. The Jordan-Wigner fermion violates the spin-ice rule of two neighboring vertices, so as to create a dipole under the \(U(1)\) Gauss' law of the RK type Hamiltonians.
Because of the above, we will refer to our construction of spin-ice projected Slater determinants of the Jordan-Wigner fermions as an "extended parton construction". These differences between extended vs point-like partons lead to crucial physical differences for the states constructed from them. Some of these differences will manifest as unconventional implementations of lattice symmetries. For example, we will show that \(\pi/2\) rotation symmetries2 do not admit an ordinary fermion implementation for the Jordan-Wigner fermions, but need to be dressed by a unitary transformation that is not part of the \(U(1)\) lattice gauge group.
But the most remarkable difference we have found between the extended partons and the point-like partons is the nature of the gauge fluctuations around their mean field Slater determinant states. According to the principles of the projective symmetry group constructions for ordinary point-like partons, like Abrikosov-Schwinger fermions, a Slater determinant state which conserves the global fermion number will describe a \(U(1)\) spin liquid state, whenever it is stable against gauge confinement. The deconfined state has therefore an emergent \(U(1)\) photon gauge field, and the fermionic parton will carry charge under this field. As we will see, however, a Slater determinant of the Jordan-Wigner extended partons will feature a \(U(1)\times U(1)\) gauge structure, namely two distinct gapless photons. The Jordan-Wigner fermion will carry a net gauge charge under one of these two photons, but it will be gauge neutral under the other photon, for which it will only carry a gauge dipole.
Moreover, despite the fact that the Jordan-Wigner fermion is a composite fermion that can be viewed as a boson attached to \(2\pi\) flux, we will see that the expected action of the two \(U(1)\times U(1)\) gauge fields is an ordinary Maxwell-like action with no Chern-Simons terms, as a result of the enforcement of time reversal and microscopic mirror symmetries of the models in question. This is interesting because it demonstrates an explicit instance of the existence of composite Fermi liquid-like states arising from flux attachment, for which the emergent gauge structure does not feature an explicit Chern-Simons term. This feature is somewhat reminiscent of the Dirac composite fermion theories of the half-filled Landau level [53; 54; 55], and of some of the more sophisticated composite Fermi liquid theories of bosons at \(\nu=1\)[7; 8; 9], which contrast with the more traditional explicit forms of flux attachment in the HLR description of composite fermions [56].
Footnote 2: These are \(\pi/2\) rotations that will be denoted by \(R_{\frac{\pi}{2}}\).
We will also construct interesting explicit examples of mean-field spin-liquid states for RK-like 2D quantum spin-ice Hamiltonians. As we will see, the sectors defined by different values of the spin-ice rules will correspond to different fillings of the Jordan-Wigner/composite-fermion bands. For example, the sector with zero spin, which maps to the quantum six-vertex model [38; 39], will correspond to half-filling of a two-band model. We will construct an explicit mean-field state that satisfies all the space symmetries of the lattice, time-reversal and particle-hole transformations, and that features two gapless linearly dispersing Dirac fermion modes at low energy, which can be pictured as a composite fermion counterpart to the classic \(\pi\)-flux state of Abrikosov-Schwinger fermions [51; 57]. Ignoring compactification-driven instabilities (see below however), this would therefore be a specific kind of Dirac composite Fermi liquid state, with an emergent low energy \(U(1)\times U(1)\) gauge structure with two massless photons, with the fermions carrying charge under only one \(U(1)\) photon and neutral under the other \(U(1)\) photon.
Field theories of massless Dirac fermions coupled to a single \(U(1)\) compact gauge field are known to remain deconfined at low energies in the limit of a large number \(N\) of Dirac fermion flavors [58; 59] and to also avoid spontaneous chiral symmetry breaking [60; 61]. However, understanding the ultimate infrared fate of these field theories at finite \(N\) has remained challenging [62; 63; 64]. In our case we have \(N=2\) Dirac fermions and two photons (with the fermions carrying charge under only one of these photons). We will not address systematically the impact of gauge compactification, but we expect that at least the photon under which the fermions are neutral will undergo Polyakov-like confinement [65; 66], which will remove it from low energies, leaving possibly only two massless Dirac fermions coupled to a single \(U(1)\) photon at low energies, analogously to QED\({}_{3}\) with \(N=2\) (to the extent that this theory avoids confinement and other instabilities at low energies).
On the other hand, we will see that for the subspace of spin-ice that maps onto the quantum dimer model [38], the band structure of the Jordan-Wigner/composite-fermions will be at quarter-filling leading to the appearance of a composite Fermi-surface state. Moreover, the state arising when the composite fermions only hop between nearest neighbor sites will display a perfectly nested Fermi surface. This nesting is accidental in the sense that it can be removed by adding symmetry-allowed further-neighbor hopping terms. Nevertheless, such strong tendency for perfect nesting can be viewed as related to the tendency of the quantum dimer model systems to have ordinary gauge confined ground states (such as the resonant plaquette or the columnar phases [67; 68; 69; 70; 71; 72]), which would appear if the Fermi surface is fully gapped via a composite fermion particle-hole pair condensation. This nested state could be therefore useful as a mother state to understand the descending competing broken symmetry states of the quantum dimer
model and perhaps help understand the strong tendency towards the columnar phase of the classic RK model, which has been advocated in recent studies to take over the complete phase diagram on the side of the RK point where a unique ground state exists (\(v/t<1\)) [71; 72].
Our manuscript is organized as follows: Chapter I reviews the Jordan-Wigner transformation and its interpretation as flux-attachment on the 2D square lattice. Chapter II applies this construction to 2D quantum spin-ice models, and introduces the general extended parton construction of mean-field states. Then we apply this to the specific cases of the Quantum Six Vertex and Quantum Dimer Models, and construct the mean-field states with two Dirac cones and a Fermi surface for each of these models, respectively. Chapter III develops a description of the gauge field fluctuations, and discusses the derivation of the effective low energy theories for these states, demonstrating the appearance of two U(1) gauge fields with two associated gapless photons, with the fermions being charged under only one of the U(1) fields. We then close with a summary and discussion where we also comment on future research directions.
## I Equivalence of Jordan-Wigner transformation and flux attachment in 2D
Let us consider a two-dimensional square lattice with a spin-\(\frac{1}{2}\) degree of freedom residing at each site, denoted by \(\mathbf{r}\), as depicted in Fig.1. These spin-\(\frac{1}{2}\) degrees of freedom can also be viewed as hard-core bosons, according to the convention of Table 1. By choosing a convention for the ordering of sites, we can write the standard Jordan-Wigner fermion creation operators as follows (see Fig.1):
\[f^{\dagger}(\mathbf{r})=b^{\dagger}(\mathbf{r})\prod_{1\leq\mathbf{r^{\prime} }<\mathbf{r}}\sigma^{z}(\mathbf{r^{\prime}}). \tag{1}\]
We will order the sites using the "western typing" convention, as depicted in Fig.1. Since \(\sigma^{z}(\mathbf{r})=\exp(i\pi n(\mathbf{r}))\), it follows that for any pair of sites \(\mathbf{r},\mathbf{r^{\prime}}\) the boson hopping operators can be written as:
\[b^{\dagger}(\mathbf{r})b(\mathbf{r^{\prime}})=f^{\dagger}(\mathbf{r})e^{i\pi \sum_{\mathbf{r^{\prime}}\leq\mathbf{r^{\prime\prime}}<\mathbf{r}}n(\mathbf{r^ {\prime\prime}})}f(\mathbf{r^{\prime}}). \tag{2}\]
When \(\mathbf{r},\mathbf{r^{\prime}}\) are nearby, the above operator is clearly local in its physical bosonic representation; however, it is generally non-local in its dual fermion representation, as illustrated in Fig.2.
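Before turning to the continuum picture, the identity in Eq.(2) is easy to verify by brute force. The sketch below builds all operators as explicit \(2^{N}\times 2^{N}\) matrices on a small lattice with the "western typing" ordering \(r=x+L_{x}y\); the lattice size and the site pair are arbitrary choices made for illustration.

```python
import numpy as np
from functools import reduce

Lx, Ly = 3, 2
N = Lx * Ly                     # sites ordered "western typing": r = x + Lx*y

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])       # sigma^z = exp(i*pi*n)
b = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # hard-core boson annihilation

def site_op(op, site):
    """Embed a 2x2 operator at `site` into the N-site Hilbert space."""
    return reduce(np.kron, [op if s == site else I2 for s in range(N)])

def f(site):
    """JW annihilation, Eq.(1): string of sigma^z over all earlier sites."""
    return reduce(np.kron, [sz if s < site else (b if s == site else I2)
                            for s in range(N)])

rp, r = 1, 4                    # any ordered pair r' < r
lhs = site_op(b.T, r) @ site_op(b, rp)                  # b^dag(r) b(r')
string = reduce(np.matmul, [site_op(sz, s) for s in range(rp, r)])
rhs = f(r).T @ string @ f(rp)                           # Eq.(2)
assert np.allclose(lhs, rhs)
```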
Let us now demonstrate the equivalence of Eq.(2) to \(2\pi\)-flux attachment. Consider spin-less fermions, \(f^{\dagger}(\mathbf{r})\), located at the sites \(\mathbf{r}\) of the square lattice. We attach a thin solenoid to each of these fermions, which we view as located in the center of the plaquette that is north-east of the site \(\mathbf{r}\) (see Fig.3). The solenoid carries a \(2\pi\)-flux and we choose a gauge that concentrates its vector potential, \(\mathbf{A}(\mathbf{x})\), into two strings, depicted as dotted lines in
Figure 1: Physical spin-\(\frac{1}{2}\) degrees of freedom reside at the blue sites of a 2D square lattice labeled by \(\mathbf{r}\). The light blue shaded region denotes the membrane operator made from the product of all the \(\sigma^{z}(\mathbf{r^{\prime}})\) spin operators in such region, associated with the Jordan-Wigner fermion creation operator at site \(\mathbf{r}\) (see Eq.(1)). The directed arrows illustrate our “western typing” convention for ordering the 2D lattice sites.
Figure 2: Solid directed arrows represent the local boson hopping operator between sites from site \(\mathbf{r^{\prime}}\) towards site \(\mathbf{r}\) from Eq.(2). Blue lines represent the Jordan Wigner strings associated with the fermion representation of these same operators. We see that for our convention (see Fig.1), the horizontal boson hoppings remain local in the fermion representation, whereas the vertical hoppings have a non-local fermion representation.
Fig.3. This gauge is chosen so that the flux attachment exactly matches our specific choice of "western typing" ordering convention of the Jordan-Wigner transformation, and different ordering conventions lead to different gauge choices for the flux-attachment (see e.g. [1; 2; 3; 4; 5; 6]). Here \(\mathbf{x}\) can be viewed as a coordinate on the ambient 2D space in which the lattice is embedded. Each one of these strings is chosen so that the line integral of the vector potential across a path that intersects the strings is exactly \(\pi\). Therefore, when another fermion hops across a bond that intersects one of these strings, its hopping amplitude will have an extra minus sign, relative to the hopping it has when the string is not present (namely each string acts as a "branch-cut" that dresses the fermion hopping phase by \(\pi\)).
Therefore, the above convention fixes the vector potential \(\mathbf{A}(\mathbf{x})\) to be a unique operator which is a function of all the fermion occupation operators, \(n(\mathbf{r})=f^{\dagger}(\mathbf{r})f(\mathbf{r})\). Establishing the equivalence of the above flux attachment procedure to the Jordan-Wigner transformation thus reduces to demonstrating that the following operator identity holds:
\[\exp\left(i\pi\sum_{\mathbf{r}^{\prime}\leq\mathbf{r}^{\prime\prime}<\mathbf{r }}n(\mathbf{r}^{\prime\prime})\right)=\exp\left(i\int_{\mathbf{r}^{\prime}}^{ \mathbf{r}}\mathbf{A}(\mathbf{x})\cdot d\mathbf{x}\right). \tag{3}\]
To demonstrate the above relation, let us first consider the line integral of \(\mathbf{A}(\mathbf{x})\) when \(\mathbf{r},\mathbf{r}^{\prime}\) are nearest neighbor sites. From Fig.4, we can see that the following holds for the horizontal and vertical nearest neighbor hoppings:
\[\frac{1}{\pi}\int_{\mathbf{r}}^{\mathbf{r}+\mathbf{e}_{x}}\mathbf{ A}(\mathbf{x})\cdot d\mathbf{x} =n(\mathbf{r}), \tag{4}\] \[\frac{1}{\pi}\int_{\mathbf{r}}^{\mathbf{r}-\mathbf{e}_{y}}\mathbf{ A}(\mathbf{x})\cdot d\mathbf{x} =\sum_{x\leq x^{\prime}}n(x^{\prime},y)+\sum_{x^{\prime}<x}n(x^{ \prime},y-1),\]
where \(\mathbf{r}=(x,y)\) are the coordinates of the lattice sites measured in units of the lattice constant, and the integration path is chosen respectively to be the bonds \(\{\mathbf{r},\mathbf{r}+\mathbf{e}_{x}\}\) and \(\{\mathbf{r},\mathbf{r}-\mathbf{e}_{y}\}\) (see Fig.4). The relations in Eq. (4) are the same as those expected from Eq. (3).
Let us now show that the line integral of \(\mathbf{A}(\mathbf{x})\) in Eq.(3) is independent of the specific path that connects the points \(\mathbf{r},\mathbf{r}^{\prime}\), modulo \(2\pi\). Let us consider two paths \(\gamma_{1}\) and \(\gamma_{2}\) connecting \(\mathbf{r},\mathbf{r}^{\prime}\). These two paths define a closed path \(\gamma\) which is the boundary of a region \(\Omega\) (see Fig.5). From Stokes' theorem it follows that:
\[\oint_{\gamma}\mathbf{A}(\mathbf{x})\cdot d\mathbf{x}=\iint_{\Omega}(\nabla \times\mathbf{A})(\mathbf{x})\cdot d\sigma=2\pi\sum_{\mathbf{r}\in\Omega}n( \mathbf{r}). \tag{5}\]
The sum over \(\mathbf{r}\in\Omega\) in the above expression is performed over those sites \(\mathbf{r}\) for which the solenoid is strictly in the interior of \(\Omega\) (see Fig.5). Therefore, since the fermion number operators \(n(\mathbf{r})\) are integer valued, it follows from Eq. (5) that:
\[\int_{\gamma_{1}}\mathbf{A}(\mathbf{x})\cdot d\mathbf{x}=\int_{\gamma_{2}} \mathbf{A}(\mathbf{x})\cdot d\mathbf{x}\ \ \mathrm{mod}(2\pi). \tag{6}\]
The above equation demonstrates that there is no ambiguity in the line integrals in the right hand side of Eq.(3). A detailed derivation that Eq. (3) holds for any pair \(\mathbf{r},\mathbf{r}^{\prime}\) is presented in Appendix A.
Therefore, we see that there is a precise equivalence between the statistical transmutation of a hard-core boson into a "composite fermion" carrying a solenoid of \(2\pi\) flux, and the statistical transmutation of spin-\(\frac{1}{2}\) degrees of freedom into Jordan-Wigner fermions
Figure 3: Flux attachment is performed by binding to each boson (located in the sites marked by blue dots) a thin solenoid depicted by the star which is located in the plaquette northeast from the boson site. This thin solenoid carries \(2\pi\) flux, whose vector potential is chosen to be concentrated in the two dotted lines connected to the star. The hopping operators (depicted by solid black lines) that intersect such dotted lines are multiplied by \(-1\) when the solenoid is present, namely there is an extra \(\pi\) phase for hopping across the dotted lines.
Figure 4: A local boson hopping operator (depicted by the directed black solid line), can be equivalently represented as a fermion hopping operator with its hoppings dressed by the vector potentials that capture the \(2\pi\)-flux attachment, according to the rules depicted in Fig.3 (see Eq.(4)).
in 2D lattices. The non-locality of the Jordan-Wigner transformation in 2D should not be viewed as a "bug" but rather as a "feature" that secretly encodes the natural non-locality associated with flux attachment. This equivalence could also be useful to understand the precise lattice versions of transformations discussed within the web of dualities [73].
## II Jordan Wigner/composite fermions as extended partons in 2D quantum spin-ice
Quantum spin-ice in the 2D square lattice is a classic example of a lattice gauge theory, namely, a model with a set of local conservation laws [20; 21; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. For different values of these local conservation laws the Hilbert space can be reduced to that of the Quantum Six Vertex Model (Q6VM) or the celebrated Quantum Dimer Model (QDM) introduced by Rokhsar and Kivelson [34]. In this chapter, we will develop a dual representation of these models in terms of Jordan-Wigner/composite fermions, and exploit the fact that these models of spins remain local in terms of their dual Jordan-Wigner/composite fermions. We will show that these Jordan-Wigner/composite fermions behave in a certain sense like partons, such as Abrikosov-Schwinger fermions [49; 74], but with crucial qualitative differences arising from the fact that they carry not only lattice gauge charge, but also a lattice gauge dipole moment.
### 2+1D Quantum spin-ice and its Jordan-Wigner Composite Fermion representation
To describe quantum spin-ice models it is convenient to introduce a different lattice convention relative to that of the previous section. We first divide the plaquettes of the 2D square lattice into two sublattices, that we will now call "vertices" and "plaquettes", so that the spin-\(\frac{1}{2}\) degrees of freedom are viewed as residing in the "links" connecting such vertices (see Fig.6). These links therefore form another square lattice which is rotated \(45^{\circ}\) relative to the original square lattice. The Bravais lattice of the spin-ice models is spanned by two vectors \(\mathbf{R}=n_{1}\mathbf{R}_{1}+n_{2}\mathbf{R}_{2}\), \(n_{1,2}\in\mathbb{Z}\), which can be viewed as the position of vertices (see Fig.6). Therefore, the unit cell has a basis with two spin-\(\frac{1}{2}\) degrees of freedom, which we will distinguish by subscripts \(a,b\). For example, \(\sigma_{a}^{i}(\mathbf{R})\) will denote the \(i\)-th Pauli matrix associated with the site \((\mathbf{R},a)\) (see Fig.6) 3. From here on, we will assume that the original square lattice has an even number of spin sites both in the x- and y-directions, because this is needed in order to make the spin-ice lattice periodic on a torus (see Fig.6).
Footnote 3: We will also continue to label the spin sites with lower-case letter \(\mathbf{r}\) when there is no need to specify its detailed Bravais lattice label, namely \(\mathbf{r}\) is also understood to be the physical coordinate of the spin site with Bravais lattice label \((\mathbf{R},i)\) with \(i=a,b\).
For every vertex, we define an "ice charge operator" as the sum of the \(z\)-components of the spins in its four links:
\[Q_{\text{ice}}(\mathbf{R})\doteq\sigma_{a}^{z}(\mathbf{R})+\sigma_{b}^{z}( \mathbf{R})+\sigma_{a}^{z}(\mathbf{R}-\mathbf{R}_{1})+\sigma_{b}^{z}(\mathbf{ R}-\mathbf{R}_{2}). \tag{7}\]
The ice charge operators are the locally conserved quantities, and they are the generators of the following "UV lattice gauge group" of unitary transformations:
Figure 5: Illustration of the paths used to derive Eqs.(5) and (6).
Figure 6: The original square lattice of spins is spanned by vectors \(\hat{\mathbf{e}}_{x}\) and \(\hat{\mathbf{e}}_{y}\), whose sites (blue and red dots) are denoted by \(\mathbf{r}\). The “spin-ice” lattice is the Bravais lattice spanned by vectors \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\), and with a basis of two spin sites: the “a” sites (blue dots) and “b” sites (red dots). The plaquettes of the original square lattice are now separated into “vertices” located at \(\mathbf{R}\) and “plaquettes” (e.g. the square shaded in gray) of the “spin-ice lattice”. The plaquette resonance operator of the RK model, \(L_{\mathbf{R}}\) from Eq.(10), is also illustrated.
\[G[\{\theta(\mathbf{R})\}]=\exp{\left(i\sum_{\mathbf{R}}\theta(\mathbf{R})Q_{\rm ice} (\mathbf{R})\right)}. \tag{8}\]
where \(\theta(\mathbf{R})\) are arbitrary real numbers. The lattice gauge theory structure is imposed by demanding that the Hamiltonian, \(H\), is invariant under the UV lattice gauge group, or equivalently, that it commutes with all the ice charge operators:
\[[Q_{\rm ice}(\mathbf{R}),H]=0,\ \ \forall\mathbf{R}. \tag{9}\]
The subspace with \(Q_{\rm ice}(\mathbf{R})=2\) at every vertex is equivalent to that of the quantum dimer model (QDM), whereas the subspace with \(Q_{\rm ice}(\mathbf{R})=0\) is equivalent to the quantum six-vertex model (Q6VM) (see Figs. 7 and 8 for illustration of the allowed configurations). Gauge invariant operators include spin diagonal operators such as \(\sigma_{a}^{z}(\mathbf{R})\) (boson number), and products of spin raising/lowering operators (boson creation/annihilation) over a sequence of links forming a closed loop, the smallest of which is the "plaquette flipping" operator:
\[L_{\mathbf{R}}=\sigma_{a}^{+}(\mathbf{R})\sigma_{b}^{-}(\mathbf{R}-\mathbf{R} _{2})\sigma_{a}^{+}(\mathbf{R}-\mathbf{R}_{2})\sigma_{b}^{-}(\mathbf{R}+ \mathbf{R}_{1}-\mathbf{R}_{2}). \tag{10}\]
The above operator can be viewed as centered around the plaquette neighboring to the right of the vertex located at \(\mathbf{R}\), as shown in Fig.6. A classic gauge invariant Hamiltonian is the Rokhsar-Kivelson model:
\[H=-t\sum_{\mathbf{R}}\left(L_{\mathbf{R}}+L_{\mathbf{R}}^{\dagger}\right)+v\sum_{\mathbf{R}}\left(L_{\mathbf{R}}L_{\mathbf{R}}^{\dagger}+L_{\mathbf{R}}^{\dagger}L_{\mathbf{R}}\right). \tag{11}\]
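To make this model concrete, the following sketch builds the RK Hamiltonian directly in the six-vertex sector of a small torus, in the \(\sigma^{z}\) (occupation) basis where \(L_{\mathbf{R}}\) simply flips a flippable plaquette with unit amplitude. The indexing \(\mathbf{R}_{1}\to(1,0)\), \(\mathbf{R}_{2}\to(0,1)\) and the plaquette labeling (two \(a\)-links and two \(b\)-links, following the sites appearing in the fermionized plaquette operator of Eq.(13) below) are our own conventions for illustration.

```python
import itertools
import numpy as np

L = 2                                        # L x L torus of vertices
sites = [(x, y, s) for x in range(L) for y in range(L) for s in "ab"]
idx = {site: i for i, site in enumerate(sites)}

def vertex_charge(c, x, y):
    """Q_ice of Eq.(7) at vertex (x, y), with sigma^z = 1 - 2n."""
    links = [(x, y, "a"), (x, y, "b"),
             ((x - 1) % L, y, "a"), (x, (y - 1) % L, "b")]
    return sum(1 - 2 * c[idx[l]] for l in links)

basis = [c for c in itertools.product((0, 1), repeat=len(sites))
         if all(vertex_charge(c, x, y) == 0
                for x in range(L) for y in range(L))]
pos = {c: i for i, c in enumerate(basis)}

def plaquette(x, y):
    """Indices of the two a-links and two b-links bounding a plaquette."""
    a_l = [idx[(x, y, "a")], idx[(x, (y + 1) % L, "a")]]
    b_l = [idx[(x, y, "b")], idx[((x + 1) % L, y, "b")]]
    return a_l, b_l

t, v = 1.0, 0.5
H = np.zeros((len(basis), len(basis)))
for i, c in enumerate(basis):
    for x, y in itertools.product(range(L), repeat=2):
        a_l, b_l = plaquette(x, y)
        na, nb = {c[k] for k in a_l}, {c[k] for k in b_l}
        if (na == {1} and nb == {0}) or (na == {0} and nb == {1}):
            flipped = list(c)
            for k in a_l + b_l:
                flipped[k] ^= 1                # flippable plaquettes only
            H[pos[tuple(flipped)], i] += -t    # -t (L_R + L_R^dagger)
            H[i, i] += v                       # v (L L^dag + L^dag L)

print("six-vertex dimension:", len(basis),
      " ground energy:", round(float(np.linalg.eigvalsh(H)[0]), 4))
```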
Additionally, when placed on a 2D torus each gauge invariant subspace splits into "winding" sectors, due to the existence of two conserved t'Hooft loop operators, one for each direction of the torus, defined as:
\[\ell_{x}\doteq\sum_{\mathbf{r}\in L_{x}}\sigma^{z}(\mathbf{r}),\ \ell_{y}\doteq \sum_{\mathbf{r}\in L_{y}}\sigma^{z}(\mathbf{r}). \tag{12}\]
where \(\mathbf{r}\in L_{x,y}\) denotes a sum over the sites in the non-contractible loops of the torus depicted in Fig.9.
One of the remarkable properties of the lattice gauge structure of quantum spin-ice models is that any local gauge invariant operator remains local in its dual Jordan-Wigner/composite-fermion representation. For example, the elementary plaquette flipping operator from Eq.(10), after using the Jordan-Wigner transformation described in Sec. I, can be written as:
\[L_{\mathbf{R}}=f_{a}(\mathbf{R})f_{b}^{\dagger}(\mathbf{R}+\mathbf{R}_{1})f_{ a}(\mathbf{R}+\mathbf{R}_{2})f_{b}^{\dagger}(\mathbf{R}). \tag{13}\]
Therefore we see that the RK model can be equivalently represented as a local model of interacting Jordan-Wigner/composite fermions. For larger gauge invariant loop operators (e.g. those enclosing two adjacent plaquettes), the dual fermion operators would also include the products of the fermion parities for the links inside the loop, but in general any local gauge invariant operator of spins maps onto a local fermion operator without any left-over trace of the long-range part of the Jordan-Wigner strings 4.
Footnote 4: This follows from the fact that the plaquette operators, \(L_{\mathbf{R}},L_{\mathbf{R}}^{\dagger}\), together with the on-site operators \(\sigma^{z}(\mathbf{r})\), form a complete algebraic basis for all local gauge invariant operators.
The ice-charge operators are represented in terms of Jordan-Wigner/composite fermions as follows:
\[Q_{\text{ice}}(\mathbf{R})=4-2\,n_{\text{ice}}(\mathbf{R})=4-2\sum_{\mathbf{r} \in\mathbf{R}}f^{\dagger}(\mathbf{r})f(\mathbf{r}), \tag{14}\]
where \(\mathbf{r}\in\mathbf{R}\) denotes the spin sites, \(\mathbf{r}\), in the four links connected to vertex \(\mathbf{R}\), and \(n_{\text{ice}}(\mathbf{R})\) is the total number of fermions in such links. From the above we see that the subspaces with different values of the ice charge correspond to different lattice fillings of the Jordan-Wigner/composite-fermions. The QDM and Q6VM spaces have \(\frac{1}{4}\) and \(\frac{1}{2}\) filling of the fermion sites, respectively. Some representative configurations illustrating these fillings are shown in Fig.11.
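This counting follows from summing Eq.(14) over the \(L^{2}\) vertices of an \(L\times L\) torus (with \(2L^{2}\) spin sites and total fermion number \(N_{f}\)); since every spin site is shared by exactly two vertices,

\[\sum_{\mathbf{R}}Q_{\text{ice}}(\mathbf{R})=4L^{2}-2\sum_{\mathbf{R}}n_{\text{ice}}(\mathbf{R})=4L^{2}-4N_{f},\]

so \(Q_{\text{ice}}(\mathbf{R})=0\) everywhere fixes \(N_{f}=L^{2}\) (half filling of the \(2L^{2}\) sites), while \(Q_{\text{ice}}(\mathbf{R})=2\) everywhere fixes \(N_{f}=L^{2}/2\) (quarter filling).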
In this work we will be interested in constructing spin-liquid states that are relevant not only for the RK model, but for the universality class that the RK Hamiltonian defines. This universality class is defined as the set of local spin Hamiltonians5 with the same spin-ice local conservation laws and the same global symmetries as the RK model. Some of these global symmetries of the RK model are listed in Table 2, and the notation for some of its space symmetries is depicted in Fig.10. The particle-hole symmetry can only be enforced for the filling associated with the subspace of the Q6VM (see Table 2).
Footnote 5: The locality is defined with respect to the tensor product structure of the Hilbert space of underlying microscopic spin degrees of freedom.
### Review of Abrikosov-Schwinger parton states
Before introducing the extended parton construction of states for Jordan-Wigner/composite-fermions in quantum spin-ice, we would like to review some of the key ideas of the more traditional construction of states for Abrikosov-Schwinger fermions, which we will sometimes refer to as "point-like" partons (for more detailed discussions see e.g. Refs.[49, 74, 75, 76]). The same previously discussed physical spin-\(\frac{1}{2}\) degrees of freedom at the lattice site \(\mathbf{r}\) can be alternatively represented in terms of spinful Abrikosov-Schwinger fermions \(f_{s}^{\dagger}(\mathbf{r})\) (\(s=\uparrow,\downarrow\)):

\[\sigma^{i}(\mathbf{r})=\sigma^{i}_{ss^{\prime}}f_{s}^{\dagger}(\mathbf{r})f_{s^{\prime}}(\mathbf{r}). \tag{15}\]
where \(\sigma^{i}_{ss^{\prime}}\) is the \(ss^{\prime}\) element of the \(i\)-th Pauli matrix. The above representation enlarges the physical Hilbert space from the two-dimensional \(\{\ket{\uparrow},\ket{\downarrow}\}\) to the four-dimensional \(\{\ket{0},\ket{\uparrow},\ket{\downarrow},\ket{\uparrow\downarrow}\}\). In this case, the "UV lattice gauge group" is generated by the fermion number at each site:
\[n(\mathbf{r})=\sum_{s}f_{s}^{\dagger}(\mathbf{r})f_{s}(\mathbf{r}). \tag{16}\]
The above operator is the counterpart of the spin-ice charge for this lattice gauge structure. Gauge invariant operators are defined as those commuting with \(n(\mathbf{r})\), and in this case they are the spin operators themselves, \(\sigma^{i}(\mathbf{r})\). The physical subspace is a gauge invariant subspace satisfying:
Figure 10: Illustration of the point group symmetry operations of the quantum spin-ice model centered on a spin-ice plaquette (corresponding to Dihedral Group \(D_{8}\)). The model also has a similar set of symmetry operations centered around the vertices, which we also consider for constructing states.
Figure 9: Non-contractible loops \(L_{x,y}\) used in the definition of the t'Hooft operators in Eq.(12) for a lattice with periodic boundary conditions. Notice that in order to place the spin-ice lattice on a torus there needs to be an even number of spins along the \(x\) and \(y\) directions.
\[n(\mathbf{r})\left|\psi\right\rangle=\left|\psi\right\rangle. \tag{17}\]
Therefore, in this parton construction physical states are restricted to have \(\frac{1}{2}\) fermion filling of the lattice, which is already a crucial difference with respect to the Jordan-Wigner/composite-fermions.
When restricted to the physical subspace, \(n(\mathbf{r})=\mathbb{1}\), there is actually an \(SU(2)\) group that leaves all gauge invariant operators invariant, which is larger than the \(U(1)\) UV lattice gauge group generated by the number operators of Eq.(16). This larger group of operations that leave the gauge invariant operators invariant is called the _parton gauge group_ (PGG). To construct spin-liquid states it is convenient to introduce an auxiliary mean-field Hamiltonian that parametrizes a Slater determinant of fermions:
\[H_{\text{MF}}=\sum_{ss^{\prime}}\sum_{\mathbf{r},\mathbf{r}^{\prime}}t_{ss^{ \prime}}(\mathbf{r},\mathbf{r}^{\prime})f_{s}^{\dagger}(\mathbf{r})f_{s^{ \prime}}(\mathbf{r}^{\prime}). \tag{18}\]
The hopping elements \(t_{ss^{\prime}}(\mathbf{r},\mathbf{r}^{\prime})\) in the mean-field Hamiltonian can be viewed as variational parameters of its Slater determinant ground state, which we will denote by \(|\Omega_{0}[t_{ss^{\prime}}(\mathbf{r},\mathbf{r}^{\prime})]\rangle\). The above mean field Hamiltonian conserves the total fermion number and, therefore, it is invariant under a global \(U(1)\) subgroup of the PGG. More generally, the group that leaves the mean-field Hamiltonian invariant is called the invariant gauge group (IGG). The importance of the IGG is that it determines the expected true low energy emergent gauge group of the spin liquid state [49; 74] (assuming it does not suffer from instabilities such as gauge confinement). For the above mean-field Hamiltonian with a \(U(1)\) IGG we then expect a \(U(1)\) spin liquid [49; 74]6. For concreteness, in this work we will focus on spin liquids with low energy emergent \(U(1)\) gauge groups.
Footnote 6: But had we chosen a BCS-like mean field state with a \(\mathbb{Z}_{2}\) IGG, we would expect a \(\mathbb{Z}_{2}\) spin liquid.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c} \hline Symmetry & Symbol & Q6VM & QDM & Linear & Antilinear & Action on \(b(\mathbf{r})\) & Action on \(Q_{\text{ice}}(\mathbf{R})\) \\ \hline Time reversal & \(\Theta\) & \(\checkmark\) & \(\checkmark\) & & \(\checkmark\) & \(b(\mathbf{r})\) & \(Q_{\text{ice}}(\mathbf{R})\) \\ \hline Spatial transformations \(U_{d}\) & \(R_{\frac{\pi}{2}},\;S_{x},\;S_{y},\;S_{1},\;S_{2}\) & \(\checkmark\) & \(\checkmark\) & \(\checkmark\) & & \(b(U_{d}(\mathbf{r}))\) & \(Q_{\text{ice}}(U_{d}(\mathbf{R}))\) \\ \hline Particle-hole & \(X\) & \(\checkmark\) & & \(\checkmark\) & & \(b^{\dagger}(\mathbf{r})\) & \(-Q_{\text{ice}}(\mathbf{R})\) \\ \hline \end{tabular}
\end{table}
Table 2: Table of symmetries of the Rokhsar-Kivelson Hamiltonian of quantum spin-ice. Checkmarks indicate the subspaces (Q6VM, QDM) in which each symmetry can be enforced and whether it acts linearly or antilinearly. \(U_{d}(\mathbf{r})\) and \(U_{d}(\mathbf{R})\) denote the image of the site \(\mathbf{r}\) and the vertex \(\mathbf{R}\) under the corresponding spatial transformation. See Fig.10 for a definition of the spatial transformations and Fig.13 for a depiction of the action of \(R_{\frac{\pi}{2}}\).
Figure 11: Left: depiction of a configuration with half of the spins reversed (half-filling of JW/composite-fermions) relative to the fully polarized state (denoted by solid bars), belonging to the six-vertex subspace. Right: depiction of a configuration with one quarter of the spins reversed (quarter-filling of JW/composite-fermions) relative to the fully polarized state (denoted by solid bars), belonging to the quantum dimer subspace.
The ground state, \(|\Omega_{0}[t_{ss^{\prime}}({\bf r},{\bf r^{\prime}})]\rangle\), of the above mean field Hamiltonian generically is not invariant under the UV gauge group and violates the constraint of Eq.(17). The correct physical mean-field state is obtained by projecting this state onto the physical gauge invariant subspace (Gutzwiller projection), as follows:
\[|\Omega[t_{ss^{\prime}}({\bf r},{\bf r^{\prime}})]\rangle=\prod_{\bf r}\left( \frac{1-(-1)^{n({\bf r})}}{2}\right)|\Omega_{0}[t_{ss^{\prime}}({\bf r},{\bf r ^{\prime}})]\rangle\,, \tag{19}\]
The Gutzwiller projection is a nontrivial operation that generally makes the calculation of expectation values of gauge invariant operators difficult. It is possible, however, to develop a precise understanding of the symmetry properties of the Gutzwiller projected physical state. To illustrate this, let us imagine that there is some global physical symmetry operation acting on the spins, denoted by \(S\) (e.g. a lattice translation or a mirror symmetry). We say that two operations \(S_{1}\) and \(S_{2}\) defined by their action on the parton fermions represent the same physical symmetry if they have the same action on all gauge invariant operators. However, if \(S_{1}\) and \(S_{2}\) differ by an element of the parton gauge group, their enforcement on \(|\Omega_{0}\rangle\) can lead to two distinct physical states \(|\Omega\rangle\). In this case, \(S_{1}\) and \(S_{2}\) are said to be two distinct projective symmetry group (PSG) implementations on the partons of the same underlying physical symmetry (for a recent discussion illustrating this, see e.g.[77]).
### Extended parton states for quantum spin-ice
We are now ready to present our extended parton construction of mean field states for composite fermions obtained from the JW transformation applied to the quantum spin-ice Hamiltonians. The idea is to parallel the construction for Abrikosov-Schwinger fermions, but for the UV gauge structure defined by the ice charge operators from Eq.(7). We begin by introducing an auxiliary mean-field Hamiltonian of composite fermions:
\[H_{\rm MF}=\sum_{{\bf r},{\bf r^{\prime}}}t({\bf r},{\bf r^{\prime}})f^{ \dagger}({\bf r})f({\bf r^{\prime}}). \tag{20}\]
Here \(f^{\dagger}({\bf r})\) is the creation operator of the spinless Jordan-Wigner fermion at the spin site \({\bf r}\). The hopping amplitudes, \(t({\bf r},{\bf r^{\prime}})\), are again viewed as parametrizing the Slater determinant ground state of the mean field Hamiltonian, denoted by \(|\Phi_{0}[t({\bf r},{\bf r^{\prime}})]\rangle\). The physical spin orientation is encoded in the composite fermion occupation at each site, and therefore there is no enlargement of the full spin Hilbert space. Nevertheless, the composite fermion hopping bilinears in the above mean-field Hamiltonian generically do not commute with the generators of the UV lattice gauge transformations, and therefore its ground state, \(|\Phi_{0}\rangle\), violates the ice rules. Such a violation is forbidden in the exact theory by Elitzur's theorem: local gauge symmetries cannot be spontaneously broken. As a consequence, the naive ground state of the above mean-field Hamiltonian is not a satisfactory approximation to the true gauge invariant ground states of quantum spin-ice models. However, this deficiency can be cured in a manner analogous to the case of Abrikosov-Schwinger partons, by projecting \(|\Phi_{0}\rangle\) onto the gauge invariant subspaces. Therefore, in analogy to the Gutzwiller projection, we introduce a projector onto a gauge invariant subspace, specified by the local ice charges \(Q_{\rm ice}({\bf R})\) and the t'Hooft operators \((\ell_{x},\ell_{y})\) (for the case of a torus), given by:
\[\begin{split}& P(\{Q_{\rm ice}({\bf R}),\ell_{x},\ell_{y}\}) \doteq P(\ell_{x})P(\ell_{y})\prod_{\bf R}P(Q_{\rm ice}({\bf R})).\\ &|\Phi[t({\bf r},{\bf r^{\prime}})]\rangle=P(\{Q_{\rm ice}({\bf R }),\ell_{x},\ell_{y}\})\left|\Phi_{0}[t({\bf r},{\bf r^{\prime}})]\right\rangle.\end{split} \tag{21}\]
The above projected state is also parametrized by the hoppings, \(t({\bf r},{\bf r^{\prime}})\), which could in principle be optimized as variational parameters to minimize the energy of RK-like Hamiltonians.
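A brute-force illustration of this projection on a small torus is sketched below; the t'Hooft sectors are omitted for brevity, a random vector merely stands in for the mean-field Slater determinant \(|\Phi_{0}\rangle\), and the site indexing \(\mathbf{R}_{1}\to(1,0)\), \(\mathbf{R}_{2}\to(0,1)\) is our own convention.

```python
import itertools
import numpy as np

L = 2
sites = [(x, y, s) for x in range(L) for y in range(L) for s in "ab"]
idx = {site: i for i, site in enumerate(sites)}

def Q_ice(bits, x, y):
    """Eq.(7) at vertex (x, y), with sigma^z = 1 - 2n."""
    links = [(x, y, "a"), (x, y, "b"),
             ((x - 1) % L, y, "a"), (x, (y - 1) % L, "b")]
    return sum(1 - 2 * bits[idx[l]] for l in links)

def projector_diag(q):
    """0/1 diagonal of the projector onto {Q_ice(R) = q for all R}."""
    d = np.zeros(2 ** len(sites))
    for i, bits in enumerate(itertools.product((0, 1), repeat=len(sites))):
        if all(Q_ice(bits, x, y) == q for x in range(L) for y in range(L)):
            d[i] = 1.0
    return d

# |Phi> = P |Phi_0>, Eq.(21), projected here onto the Q6VM (q = 0) sector.
phi0 = np.random.randn(2 ** len(sites))
phi = projector_diag(0) * phi0
phi /= np.linalg.norm(phi)
```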
#### Symmetry implementation on JW composite fermions: general considerations
Let us now consider the implementation of symmetries on these mean field states of Jordan-Wigner/composite-fermions. As in the case of Abrikosov-Schwinger fermions, the key idea is that the task of enforcing symmetries in the physical projected states is traded for the easier task of enforcing symmetries in the un-projected mean-field Hamiltonians. However, one needs to develop a set of consistency criteria for these implementations because there are multiple ways in which one given symmetry can be implemented in the un-projected state, leading to the rich structure of projective symmetry group implementations [49; 74].
At first glance it might appear as if there were no freedom in how to implement symmetries on the Jordan-Wigner fermions, because any prescription on how physical symmetries act on the underlying spin-\(\frac{1}{2}\) degrees of freedom would fix a unique symmetry action on the Jordan-Wigner/composite-fermion operators. We will refer to this underlying symmetry implementation as the "bare" symmetry action. However, this bare symmetry implementation cannot be suitably enforced in the mean field Hamiltonians from Eq.(20). This is because the specific choice for implementing the Jordan-Wigner ordering of the 2D lattice (e.g. the western typing convention of Sec. I) does not manifestly preserve the symmetries of the lattice, and thus, for example, the bare implementation of a \(\pi/2\) lattice rotation would map the fermion bilinear mean field Hamiltonian from Eq.(20) onto a complex operator which is no longer a fermion bilinear Hamiltonian and does not appear local in its dual fermion representation. Our goal in this subsection will be therefore to develop a precise but
more flexible notion of symmetry implementations on the Jordan-Wigner/composite-fermions that is amenable to enforcement on mean-field Hamiltonians.
Some of these difficulties of bare symmetry actions are not peculiar to the 2D Jordan-Wigner transformation but are also reminiscent of those appearing in the 1D Jordan-Wigner transformation, e.g. in the anomalous implementation of lattice translations, which we will now discuss in order to motivate the 2D construction. For example, consider a standard 1D finite lattice with periodic boundary conditions and a standard translational symmetry implemented on the microscopic spin operators located at site \(r\) as follows:
\[T\sigma^{i}(r)T^{\dagger}=\sigma^{i}(r+1) \tag{22}\]
However, when this "bare" symmetry is implemented on the JW fermions it does not act like a standard fermionic lattice translation, which we denote by \(\tau\), defined as:
\[\tau f^{\dagger}(r)\tau^{\dagger}=f^{\dagger}(r+1)\neq Tf^{\dagger}(r)T^{\dagger} \tag{23}\]
The above arises because the JW string becomes translated by \(T\) and therefore it does not follow the initial JW convention (it does not start at spin "1" any more)7. However, while \(T\) and \(\tau\) are different operations when acting on a single fermion operator, they act identically on fermion bilinear operators supported in the interior of the 1D chain:
Footnote 7: For a recent discussion of the connection between lattice translational symmetries, anomalies and dualities associated with the 1D JW transformation see Ref. [78].
\[\tau f^{\dagger}(r)f(r)\tau^{\dagger}=Tf^{\dagger}(r)f(r)T^{\dagger}=f^{ \dagger}(r+1)f(r+1) \tag{24}\]
Therefore we can say that when the symmetry operations \(T\) and \(\tau\) are restricted to act on parity even operators, they are essentially the same symmetry8. The parity restriction in 1D plays a role analogous to the spin-ice gauge structure in 2D, in the sense that local spin operators that are invariant under the UV lattice gauge symmetry remain local in their dual fermion representation after the JW map. In other words, after a quantum spin-ice model is mapped onto fermions via the JW map, it appears to be a bona fide local fermionic model, similar to how a parity even spin Hamiltonian looks like an ordinary fermionic model after the 1D JW map.
Footnote 8: Up to corrections associated with boundary terms, but in this work we will focus on implementations of symmetry in the bulk.
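The distinction between \(T\) and \(\tau\) is easy to check numerically. The sketch below (chain length and site choice are arbitrary) verifies that \(T\) acts as a simple translation on an interior fermion bilinear, as in Eq.(24), but not on the single fermion operator.

```python
import numpy as np
from functools import reduce

N = 4
I2, sz = np.eye(2), np.diag([1.0, -1.0])
b = np.array([[0.0, 1.0], [0.0, 0.0]])       # hard-core boson annihilation

def site_op(op, site):
    return reduce(np.kron, [op if s == site else I2 for s in range(N)])

def f(site):
    """1D JW annihilation with its string of sigma^z on earlier sites."""
    return reduce(np.kron, [sz if s < site else (b if s == site else I2)
                            for s in range(N)])

# T: cyclic translation of the N spin sites acting on the 2^N basis states.
dim = 2 ** N
T = np.zeros((dim, dim))
for i in range(dim):
    bits = [(i >> (N - 1 - s)) & 1 for s in range(N)]
    shifted = bits[-1:] + bits[:-1]          # configuration at s+1 <- s
    T[int("".join(map(str, shifted)), 2), i] = 1.0

r = 1                                        # interior site, strings don't wrap
lhs = T @ (f(r + 1).T @ f(r)) @ T.T          # T f^dag(r+1) f(r) T^dag
assert np.allclose(lhs, f(r + 2).T @ f(r + 1))        # acts as a translation...
assert not np.allclose(T @ f(r).T @ T.T, f(r + 1).T)  # ...but not on f^dag alone
```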
Therefore we define a generalized notion of equivalence among symmetries of the 2D quantum spin-ice model when these are implemented on the JW fermions, as follows:
_For a quantum spin-ice model, we say that two operators \(S_{1}\) and \(S_{2}\) that implement a symmetry action are equivalent, when they have the same action on all local operators that are invariant under the UV lattice gauge transformations defined in Eq.(8)_ (see Fig.12).
The usefulness of this notion of equivalent symmetries is that instead of enforcing the non-trivial "bare" action of a symmetry, \(S_{1}\), we can enforce instead a simpler but equivalent symmetry implementation, \(S_{2}\), which maps fermion bilinear Hamiltonians onto fermion bilinear Hamiltonians. If we enforce \(S_{2}\) on the fermion bilinear mean-field Hamiltonian from Eq.(20), then the expectation value of any gauge invariant operator computed from its corresponding Gutzwiller projected state from Eq.(21) will obey the same symmetry constraints as if we had enforced the bare symmetry action \(S_{1}\). In particular, if \(G\) is a lattice UV gauge transformation from Eq.(8), then \(S\) and \(GS\) are equivalent implementations of a symmetry. Interestingly, as we will see, enforcing symmetries that differ by such a pure UV lattice gauge transformation, \(G\), on the mean-field Hamiltonian of Eq.(20), can lead to physically distinct states after the generalized Gutzwiller projection of Eq.(21). This situation is analogous to that of PSG implementations of symmetry on the Abrikosov-Schwinger partons (see discussion following Eq.(19)). However several interesting qualitative differences will appear between these two cases, and this is partly why we call our construction an extended projective symmetry group implementation (see Table 3).
\begin{table}
\begin{tabular}{c|c|c} \hline \hline & Abrikosov-Schwinger Fermions & Jordan-Wigner Composite-Fermions \\ \hline Local Hilbert Space Enlargement & YES & NO \\ \hline Internal degrees of freedom & Spin \(\frac{1}{2}\) & spinless \\ \hline UV Gauge Transformations generators & \(n(\mathbf{r})=\sum_{s}f_{s}^{\dagger}(\mathbf{r})f_{s}(\mathbf{r})\) & \(Q_{\text{ice}}(\mathbf{R})=4-2\sum_{\mathbf{r}\in\mathbf{R}}f^{\dagger}( \mathbf{r})f(\mathbf{r})\) \\ \hline Physical lattice fillings & \(n(\mathbf{r})=1\) & Any (e.g. \(\langle f^{\dagger}(\mathbf{r})f(\mathbf{r})\rangle=\frac{1}{4}\) for QDM) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between traditional point-like Abrikosov-Schwinger fermion partons and the extended Jordan-Wigner/composite-fermion parton constructions for 2D quantum spin-ice. Here \(s\in\{\uparrow,\downarrow\}\), \(\mathbf{r}\) denotes spin sites, \(\mathbf{r}\in\mathbf{R}\) denotes the four spins adjacent to a quantum spin-ice vertex located at \(\mathbf{R}\). See Eq.(14) and Fig.6 for definitions and depictions.
#### A specific implementation of symmetries of 2D quantum spin-ice on JW composite fermions
We will now construct a concrete example of an extended projective symmetry implementation for the symmetries of the quantum spin-ice model (see Table 2). Our objective is to illustrate the general ideas by constructing interesting and perhaps even energetically competitive spin liquid states (although we will not compute explicitly their energy). It is clear, in analogy to ordinary parton constructions [49; 74], that there is a large landscape of possible extended projective symmetry implementations beyond the ones we will illustrate concretely. We leave to future work the development of a more global understanding and classification of the large and colorful landscape of extended projective symmetry group implementations.
Let us begin by considering a \(\pi/2\) spatial rotation centered on a plaquette (see Fig.13), denoted by \(R_{\frac{\pi}{2}}\). We define its action on the microscopic degrees of freedom via an implementation that is natural when the spins are viewed as hard-core bosons, namely as follows:
\[R_{\frac{\pi}{2}}b^{\dagger}(\mathbf{r})R^{\dagger}_{\frac{\pi}{2}}=b^{ \dagger}(R_{\frac{\pi}{2}}\mathbf{r}). \tag{25}\]
Here \(b^{\dagger}(\mathbf{r})\) is the hard-core boson equivalent of the spin lowering operator (see Table 1), and \(R_{\frac{\pi}{2}}\mathbf{r}\) is the image of site \(\mathbf{r}\) under the rotation. The action of \(R_{\frac{\pi}{2}}\) on the gauge invariant plaquette operator from Eq.(10) is thus simply:
\[R_{\frac{\pi}{2}}(b^{\dagger}_{2}b_{1}b^{\dagger}_{4}b_{3})R^{ \dagger}_{\frac{\pi}{2}}=b^{\dagger}_{4^{\prime}}b_{1^{\prime}}b^{\dagger}_{2 ^{\prime}}b_{3^{\prime}}, \tag{26}\]
where \(1,2,3,4\) denote the sites in the plaquette from Fig.13 and \(1^{\prime},2^{\prime},3^{\prime},4^{\prime}\) their images after the \(\frac{\pi}{2}\) rotation. As discussed in Eq.(13), this same plaquette operator can be alternatively written as a product of JW/composite-fermion operators. However, while the action of \(R_{\frac{\pi}{2}}\) is simple on this four fermion operator, it is complex and cumbersome on JW/composite-fermion operators themselves, as it involves a \(\pi/2\) rotation of the JW-strings. More importantly, \(R_{\frac{\pi}{2}}\) does not map fermion bilinear operators onto fermion bilinears, because, for example, it maps a fermion horizontal hopping into a fermion vertical hopping dressed by JW-strings (see Fig.2). Therefore, we would like to find an alternative but gauge equivalent implementation of \(R_{\frac{\pi}{2}}\) to overcome this difficulty.
To do so, we define a collection of auxiliary operators associated with each of the microscopic symmetries listed in Tables 2 and 4, whose action is defined by replacing the boson operator, \(b^{\dagger}(\mathbf{r})\), with the fermion operator \(f^{\dagger}(\mathbf{r})\) in Table 2. For example, for the microscopic symmetry \(R_{\frac{\pi}{2}}\), we associate the auxiliary fermion operator \(P_{\frac{\pi}{2}}\), whose action is obtained from Eq.(25) by replacing \(b^{\dagger}(\mathbf{r})\to f^{\dagger}(\mathbf{r})\), leading to:
\[P_{\frac{\pi}{2}}f^{\dagger}(\mathbf{r})P^{\dagger}_{\frac{\pi}{2}}=f^{ \dagger}(R_{\frac{\pi}{2}}\mathbf{r}). \tag{27}\]
Thus the idea is that these auxiliary fermion operators are intuitive and natural symmetry implementations on fermions, but they are not necessarily equivalent implementations of the microscopic symmetries on gauge invariant operators, as we now explain. This auxiliary fermion rotation acts on the same plaquette operator from Eq.(26), which can be equivalently represented with fermions using Eq.(13), as follows:
\[P_{\frac{\pi}{2}}(f^{\dagger}_{2}f_{1}f^{\dagger}_{4}f_{3})P^{ \dagger}_{\frac{\pi}{2}}=-f^{\dagger}_{4^{\prime}}f_{1^{\prime}}f^{\dagger}_{ 2^{\prime}}f_{3^{\prime}} \tag{28}\]
Therefore the fermion rotation, \(P_{\frac{\pi}{2}}\), is not an equivalent implementation of the underlying physical symmetry,
Figure 12: Illustration of the notion of equivalence of symmetry actions of operators. Two operators \(X\) and \(\Xi\) are equivalent implementations of a symmetry if their action is identical on all the operators that are invariant under the spin-ice UV lattice gauge transformations (defined in Eq.(8)). This notion allows us to trade the possibly complicated “bare” action of the microscopic symmetry, \(X\), on the JW/composite-fermions, for a simpler but equivalent symmetry implementation, \(\Xi\), which maps JW fermion bilinears onto JW fermion bilinears. This is a natural extension of the notion of symmetry equivalence in standard parton constructions with Abrikosov-Schwinger fermions (see e.g. Refs.[49; 74]).
Figure 13: Action of a \(90^{\circ}\) rotation (denoted by \(R_{\frac{\pi}{2}}\)) centered on the plaquette where the dash-dotted lines intersect, acting on the plaquette resonance operators marked by thick squares (according to the convention in Fig. 6).
\(R_{\frac{\pi}{2}}\), because it additionally multiplies the gauge invariant plaquette operator by a global minus sign. The extra minus sign can be removed by dressing \(P_{\frac{\pi}{2}}\) with a staggered \(U(1)\) transformation that we call \(U_{\frac{1}{4}}\), which rotates the phase of bosons with opposite signs in the \(a\) and \(b\) sublattices (see Fig.14\((a)\)) as follows:
\[U_{\frac{1}{4}}b_{a}^{\dagger}(\mathbf{R})U_{\frac{1}{4}}^{\dagger} =e^{i\frac{\pi}{4}}b_{a}^{\dagger}(\mathbf{R}),\] \[U_{\frac{1}{4}}b_{b}^{\dagger}(\mathbf{R})U_{\frac{1}{4}}^{\dagger} =e^{-i\frac{\pi}{4}}b_{b}^{\dagger}(\mathbf{R}).\]
where we are using the Bravais labels of the sites of the model (see Sec. II.1 for the convention). Notice that the action of \(U_{\frac{1}{4}}\) on boson and JW fermion operators is the same. Therefore, its action on the plaquette operator is:
\[U_{\frac{1}{4}}(f_{2}^{\dagger}f_{1}f_{4}^{\dagger}f_{3})U_{\frac{1}{4}}^{ \dagger}=-f_{2}^{\dagger}f_{1}f_{4}^{\dagger}f_{3}. \tag{29}\]
Therefore the action of \(R_{\frac{\pi}{2}}\) and \(U_{\frac{1}{4}}P_{\frac{\pi}{2}}\) on plaquette operators is identical. Moreover, their action is also identical on \(\sigma^{z}(\mathbf{r})\) (or equivalently the on-site fermion number operator). Since these operators together with the plaquette operators form a complete algebraic basis for all local gauge invariant operators, it follows that \(U_{\frac{1}{4}}P_{\frac{\pi}{2}}\) is an equivalent implementation of the underlying physical symmetry \(R_{\frac{\pi}{2}}\) on gauge invariant operators:
\[R_{\frac{\pi}{2}}\equiv U_{\frac{1}{4}}P_{\frac{\pi}{2}}. \tag{30}\]
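As a quick consistency check of the sign bookkeeping behind Eq.(30), the following minimal Python sketch tallies the \(U_{\frac{1}{4}}\) phases acquired by the plaquette operator \(f_{2}^{\dagger}f_{1}f_{4}^{\dagger}f_{3}\). It assumes only that the two created sites lie on one sublattice and the two annihilated sites on the other (the sublattice labels are illustrative, not taken from Fig.13); either assignment reproduces the overall \(-1\) of Eq.(29), which is exactly the sign needed to cancel the minus in Eq.(28).

```python
import numpy as np

# U_{1/4} phases: creation operators on sublattice a gain e^{+i pi/4},
# on sublattice b gain e^{-i pi/4}; annihilation operators gain the
# conjugate phase. The sublattice assignment below is an assumption
# (created sites on one sublattice, annihilated sites on the other).
phase = {'a': np.exp(1j*np.pi/4), 'b': np.exp(-1j*np.pi/4)}

def plaquette_phase(sub_created, sub_annihilated):
    """Phase of f2^dag f1 f4^dag f3 under U_{1/4}."""
    p = 1.0 + 0j
    for s in sub_created:       # each f^dag picks up phase(s)
        p *= phase[s]
    for s in sub_annihilated:   # each f picks up conj(phase(s))
        p *= np.conj(phase[s])
    return p

# Either assignment gives an overall factor of -1, as in Eq.(29):
print(plaquette_phase(['a', 'a'], ['b', 'b']))  # ~ -1
print(plaquette_phase(['b', 'b'], ['a', 'a']))  # ~ -1
```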
Table 4 presents a list of the microscopic symmetries of the quantum spin-ice model and a corresponding equivalent symmetry operation acting on the JW-fermions. We see that, in addition to the rotations, the natural fermionic implementation of the diagonal mirrors \(S_{1}\) and \(S_{2}\) (see Fig.10) also needs to be dressed by \(U_{\frac{1}{4}}\) in order to make them equivalent to the underlying microscopic symmetries. We will also enforce Bravais lattice translational symmetries, which are understood to act identically on bosons and fermions (up to boundary terms) and thus are not listed explicitly in Table 4. Details of the derivations for these additional symmetries can be found in Appendix B.
This set of equivalent symmetries listed under the Jordan-Wigner fermion column of Table 4 maps fermion bilinears onto fermion bilinears. Therefore, any such equivalent symmetry implementation, denoted by \(S\), can be used to enforce the symmetry on the fermion mean-field Hamiltonian \(H_{\text{MF}}\) of Eq.(20), by determining the hoppings that satisfy the following relation:
\[SH_{\text{MF}}S^{-1}=H_{\text{MF}}. \tag{31}\]
Interestingly, one can show that after enforcing all the equivalent symmetries from Table 4 and the Bravais lattice translations, there are no allowed nearest neighbor fermion hoppings in the lattice. For typical RK models one expects that short distance correlations determine a sizable portion of the energy density of the state, so the projective symmetry implementation of Table 4 is not a very energetically favorable choice for reasonably simple microscopic Hamiltonians. However, as mentioned before, the fermionic symmetries listed in Table 4 are only one choice among a large set of possibilities.
It is therefore interesting to consider the following question: can we construct alternative equivalent symmetry implementations that impose all the symmetries of the RK spin-ice model but allow the nearest neighbor hoppings to be non-zero? We have found two modified symmetry implementations with this property, which we denote the \(\Theta_{x}\) and \(\Theta_{y}\) implementations, and on which we focus for the remainder of the paper. These projective symmetry implementations are obtained by dressing the implementations of Table 4 with the operations listed in Table 5, which are obtained after composition with the following UV lattice gauge group elements \(G_{x}\) and \(G_{y}\):
\[G_{x}b^{\dagger}(\mathbf{r})G_{x}^{\dagger} =(-1)^{x}b^{\dagger}(\mathbf{r}). \tag{32}\] \[G_{y}b^{\dagger}(\mathbf{r})G_{y}^{\dagger} =(-1)^{y}b^{\dagger}(\mathbf{r}).\]
Here we write the sites as \(\mathbf{r}=(x,y)\), where \(x,y\) are understood to be integers. These two transformations can be viewed as generated by the UV lattice gauge transformations from Eq. (8), by choosing \(\theta(\mathbf{r})\) so that it takes the values depicted respectively in Fig.16\((a)\) and 16\((b)\).
#### iii.2.3 Connection to pseudo-scalar spin liquids.
So far we have used an implementation of microscopic symmetries which is more natural when we view the microscopic degrees of freedom as hard-core bosons, but which is not necessarily natural when we view them as spin-\(\frac{1}{2}\). However, thanks to the large set of microscopic symmetries of the RK model of spin-ice, we are implicitly also enforcing symmetries whose action is the natural one when we view the microscopic degrees of freedom as spins.
For example the time-reversal operator \(\Theta\), defined in Table 2, acts as complex conjugation in the standard choice of Pauli matrices where only \(\sigma^{y}\) is imaginary and \(\sigma^{x,z}\) are real. Therefore it does not square to \(-1\). The more standard time-reversal operator of spin-\(\frac{1}{2}\) would act on the spin at site \(\mathbf{r}\) as \(\mathcal{T}=i\sigma^{y}(\mathbf{r})\Theta\). However, the operator \(i\sigma^{y}(\mathbf{r})\) is equivalent to the composition \(U(\pi)\sigma^{x}(\mathbf{r})\), where \(U(\pi)\) is a \(\pi\) spin rotation around the z-axis, which acts on the fermions as \(U(\pi)f^{\dagger}(\mathbf{r})U^{\dagger}(\pi)=-f^{\dagger}(\mathbf{r})\). Therefore \(i\sigma^{y}(\mathbf{r})\) is equivalent to a composition of the particle-hole conjugation \(X\), implemented by \(\sigma^{x}(\mathbf{r})\), and a global boson U(1) symmetry operation, implemented by \(U(\pi)\), which we are already enforcing; namely, we have:
\[\mathcal{T}=U(\pi)X\Theta. \tag{33}\]
Therefore, we are also implicitly enforcing such standard time-reversal action, \(\mathcal{T}\), on spin-\(\frac{1}{2}\), and one can similarly understand other natural spin symmetries of the RK model, as products of the natural boson symmetries that we are already enforcing.
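As a small sanity check of Eq.(33) on a single site, the sketch below verifies that \(U(\pi)\sigma^{x}\) equals \(i\sigma^{y}\) up to a global phase. The basis ordering \((|\!\uparrow\rangle,|\!\downarrow\rangle)\) and the identification of the occupied boson state with spin-up, so that \(n=(1+\sigma^{z})/2\), are assumed conventions.

```python
import numpy as np

# Basis (|up>, |down>), with the hard-core boson occupied state
# identified with spin-up, so n = (1 + sigma^z)/2 = diag(1, 0).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

U_pi = np.diag(np.exp(1j * np.pi * np.array([1, 0])))  # e^{i pi n} = diag(-1, 1)

# U(pi) sigma^x equals i sigma^y up to a global phase (-1 here),
# consistent with T = U(pi) X Theta in Eq.(33).
print(np.allclose(U_pi @ sx, -1j * sy))  # True
```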
To determine the explicit action of \(\mathcal{T}\) on JW/composite-fermions, let us first describe the action of \(\Theta\). On spin raising/lowering operators this acts as a trivial antiunitary operator (complex conjugation):
\[\Theta\sigma^{+}(\mathbf{r})\Theta^{-1}=\sigma^{+}(\mathbf{r}).\]
Therefore this operator acts similarly on JW/composite-fermions:
\[\Theta f^{\dagger}(\mathbf{r})\Theta^{-1}=f^{\dagger}(\mathbf{r}). \tag{34}\]
Let us now describe the implementation of the particle-hole conjugation of hard-core bosons, denoted by
Figure 14: Phases gained by creation operators \(f^{\dagger}(\mathbf{r})\) or \(b^{\dagger}(\mathbf{r})\) under the action of the site dependent \(U(1)\) transformations: (a) \(U_{\frac{1}{4}}\) (defined above Eq.(29)), (b) \(V_{\frac{1}{2}}\) (see Eq.(40)).
\begin{table}
\begin{tabular}{c|c|c|c} \hline \hline Symmetry & Bare microscopic boson & Auxiliary fermion transformation & Equivalent JW fermion symmetry \\ \hline Time Reversal & \(\Theta\) & \(\Theta\) & \(\Theta\) \\ \hline \multirow{5}{*}{Spatial} & \(R_{\frac{\pi}{2}}\) & \(P_{\frac{\pi}{2}}\) & \(U_{\frac{1}{4}}P_{\frac{\pi}{2}}\) \\ & \(S_{x}\) & \(\Sigma_{x}\) & \(\Sigma_{x}\) \\ \cline{1-1} & \(S_{y}\) & \(\Sigma_{y}\) & \(\Sigma_{y}\) \\ \cline{1-1} & \(S_{1}\) & \(\Sigma_{1}\) & \(U_{\frac{1}{4}}\Sigma_{1}\) \\ \cline{1-1} & \(S_{2}\) & \(\Sigma_{2}\) & \(U_{\frac{1}{4}}\Sigma_{2}\) \\ \hline Particle-Hole & \(X\) & \(\Xi\) & \(\Xi\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Summary of RK Hamiltonian symmetries and their implementation on bosons (spins) and JW fermions. The operations listed under the column “bare microscopic boson” are the underlying bare microscopic symmetries implemented on the boson creation operators; for example the \(90^{\circ}\) rotation \(R_{\frac{\pi}{2}}\) acts as defined in Eq.(25). For each of these we introduce an “auxiliary fermion transformation” which acts in the simple way expected for spinless fermions, such as the fermion rotation \(P_{\frac{\pi}{2}}\) defined in Eq.(27). However, this auxiliary fermion transformation is not always equivalent to the “bare microscopic boson” operation (see Fig.12 for the notion of equivalence), and might need to be dressed by an extra site dependent \(U(1)\) gauge transformation to make it equivalent, as listed under “equivalent JW fermion symmetry” (see Eq.(30) as an example for the \(R_{\frac{\pi}{2}}\) rotation and Fig.14 for a definition of \(U_{\frac{1}{4}}\)). The above “equivalent JW fermion symmetries” define only one possible extended projective symmetry group implementation on the JW/composite-fermions. Two other examples, which are the focus of this work, are described in Table 5. In all the examples we implement the translations by Bravais vectors \(\mathbf{R}_{1},\mathbf{R}_{2}\) in the standard trivial non-projective way for bosons and fermions, without dressing the auxiliary fermion transformations by gauge transformations.
\(X=\prod_{\mathbf{r}}\sigma^{x}(\mathbf{r})\). From the action of this operator on spin operators, \(X\sigma^{+}(\mathbf{r})X^{\dagger}=\sigma^{-}(\mathbf{r})\), one obtains the action on the JW/composite-fermions:
\[Xf^{\dagger}(\mathbf{r})X^{\dagger}=(-1)^{L_{s}(\mathbf{r})}f(\mathbf{r}). \tag{35}\]
where \(L_{s}(\mathbf{r})\) is the length of the JW string. The factor \((-1)^{L_{s}(\mathbf{r})}\) can be viewed as a pure UV gauge transformation, and therefore \(X\) is gauge equivalent to the natural JW/composite-fermion particle-hole conjugation, denoted by \(\Xi\) (see Table 4), and defined as:
\[\Xi f^{\dagger}(\mathbf{r})\Xi^{\dagger}=f(\mathbf{r}). \tag{36}\]
The spin time-reversal symmetry, \(\mathcal{T}\), reverses the direction of all the spin components, and in particular the z-direction: \(\mathcal{T}\sigma^{z}(\mathbf{r})\mathcal{T}^{-1}=-\sigma^{z}(\mathbf{r})\). Since \(\sigma^{z}(\mathbf{r})\) encodes the occupation of the JW/composite-fermion, it is clear that \(\mathcal{T}\) maps a fermion particle into a hole and vice versa. We see that \(\mathcal{T}\) is therefore a type of anti-unitary particle-hole conjugation on the JW/composite-fermion operator, which explicitly reads as:
\[\mathcal{T}f^{\dagger}(\mathbf{r})\mathcal{T}^{-1}=(-1)^{L_{s}(\mathbf{r})+1 }f(\mathbf{r}). \tag{37}\]
where \(L_{s}(\mathbf{r})\) is the length of the JW string, and the factor \((-1)^{L_{s}(\mathbf{r})+1}\) is a pure UV gauge transformation identical to \(G_{x}\) defined in Eq.(32)9. Because of the above, we see that the JW/composite-fermion behaves under \(\mathcal{T}\) as a pseudo-scalar spinon, in the sense defined in Ref. [77].
We have introduced other space symmetries in their natural boson representation in Tables 2 and 4 that would also act as particle-hole conjugations on the JW/composite-fermions when implemented as standard spin-\(\frac{1}{2}\) symmetries. For example, under the space mirror operations \(S_{x},S_{y},S_{1},S_{2}\), \(\sigma^{z}(\mathbf{r})\) transforms as a scalar, e.g. \(S_{y}\sigma^{z}(\mathbf{r})S_{y}^{-1}=\sigma^{z}(S_{y}\mathbf{r})\). However, the spin version of such a mirror would include an additional boson particle-hole conjugation, leading to the standard action on spins, which are pseudo-vectors under mirrors, and which would reverse \(\sigma^{z}(\mathbf{r})\) because it is parallel to these mirror planes. Therefore, these mirrors act as unitary particle-hole conjugations on the JW/composite-fermions, and the spin liquid states that we will be discussing in this paper can be viewed as pseudo-scalar spin liquids with respect to the spin implementations of time-reversal and space mirror symmetries, in the sense defined in Ref.[77].
Footnote 9: Assuming the lattice has an even number of sites in the x-direction, which is natural for quantum spin-ice in a torus (see Fig.9).
#### iii.2.4 Dirac and Fermi surface mean-field states for the six-vertex model and quantum dimer models
The procedure described in the previous section allows us to fix the nearest neighbour hopping amplitudes in Eq.(20). The resulting pattern of hoppings is illustrated in Fig. 15, and the corresponding mean-field Hamiltonian reads as:
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Symmetry & \(\Theta_{x}\) (extended PSG) & \(\Theta_{y}\) (extended PSG) \\ \hline Time Reversal & \(G_{x}\Theta\) & \(G_{y}\Theta\) \\ \hline \multirow{5}{*}{Spatial} & \(U_{\frac{1}{4}}P_{\frac{\pi}{2}}\) & \(U_{\frac{1}{4}}P_{\frac{\pi}{2}}\) \\ & \(G_{x}\Sigma_{x}\) & \(G_{y}\Sigma_{x}\) \\ & \(G_{x}\Sigma_{y}\) & \(G_{y}\Sigma_{y}\) \\ & \(G_{y}U_{\frac{1}{4}}\Sigma_{1}\) & \(G_{x}U_{\frac{1}{4}}\Sigma_{1}\) \\ & \(G_{y}U_{\frac{1}{4}}\Sigma_{2}\) & \(G_{x}U_{\frac{1}{4}}\Sigma_{2}\) \\ \hline Particle-Hole & \(G_{y}\Xi\) & \(G_{x}\Xi\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: The two distinct extended projective symmetry group implementations on the JW/composite-fermions that are the focus of this work. These symmetries are all equivalent to the microscopic symmetries listed in Tables 2 and 4 (see Fig.12 for a summary of the notion of equivalence).
Figure 15: Right: Depiction of the allowed nearest neighbour JW/composite-fermion hoppings associated with the \(\Theta_{x}\) and \(\Theta_{y}\) extended projective symmetry groups from Table 5. Here the amplitude \(t\) is real for \(\Theta_{x}\) and purely imaginary for \(\Theta_{y}\). Left: The phase of the fermion hopping around a vertex or plaquette of the original square lattice (dotted line) is \(\pi\). This \(\pi\) phase is behind the Dirac spectrum of the JW/composite-fermions for these states (see Fig.17).
\[H_{\text{MF}}= \sum_{\mathbf{R}}it^{*}\big{(}f_{a}^{\dagger}(\mathbf{R}-\mathbf{R}_{ 1}+\mathbf{R}_{2})f_{b}(\mathbf{R})+f_{a}^{\dagger}(\mathbf{R})f_{b}(\mathbf{R}) \big{)}+t\big{(}f_{a}^{\dagger}(\mathbf{R}-\mathbf{R}_{1})f_{b}(\mathbf{R})+f_{ a}^{\dagger}(\mathbf{R}+\mathbf{R}_{2})f_{b}(\mathbf{R})\big{)}+h.c. \tag{38}\]
In the crystal momentum basis this can be re-expressed as:
\[H_{\text{MF}}=\sum_{\mathbf{q}\in\text{BZ}}\Big{(}f_{a}^{\dagger}(\mathbf{q}) \;\;f_{b}^{\dagger}(\mathbf{q})\Big{)}\begin{pmatrix}0&h_{ab}(\mathbf{q})\\ h_{ab}^{*}(\mathbf{q})&0\end{pmatrix}\begin{pmatrix}f_{a}(\mathbf{q})\\ f_{b}(\mathbf{q})\end{pmatrix},\]
where we are using the crystal momentum basis \(f_{a}^{\dagger}(\mathbf{R})=N_{\Lambda}^{-1/2}\sum_{\mathbf{q}\in\text{BZ}}e ^{-i\mathbf{q}\cdot\mathbf{R}}f_{a}^{\dagger}(\mathbf{q})\), and the matrix entry is:
\[h_{ab}(\mathbf{q})=2e^{\frac{i}{2}(q_{1}-q_{2})}\bigg{[}it^{*}\,\cos\big{(} \frac{q_{1}-q_{2}}{2}\big{)}+t\,\cos\big{(}\frac{q_{1}+q_{2}}{2}\big{)}\bigg{]}.\]
where \(q_{i}=\mathbf{q}\cdot\mathbf{R}_{i}\), \(i=1,2\). The associated band energy dispersion is:
\[\epsilon_{\pm}(\mathbf{q})=\pm 2|t|\sqrt{\cos\big{(}\frac{q_{1}-q_{2}}{2} \big{)}^{2}+\cos\big{(}\frac{q_{1}+q_{2}}{2}\big{)}^{2}}. \tag{39}\]
These bands are illustrated in Fig.17. The two extended projective symmetry implementations \(\Theta_{x}\) and \(\Theta_{y}\) (see Table 5) impose different constraints on the hopping amplitude \(t\), forcing it to be either purely real or purely imaginary:
\[\begin{cases}t=t^{*}&\text{for the }\Theta_{x}\\ t=-t^{*}&\text{for the }\Theta_{y}\end{cases}\]
Details on how the above follows from implementing the symmetries from Table 5 are shown in Appendix B.
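The dispersion in Eq.(39) can be checked numerically. The sketch below (a minimal check, not taken from the paper's numerics) builds the Bloch matrix element directly from the four hoppings of Eq.(38) as \(h_{ab}(\mathbf{q})=it^{*}(1+e^{i(q_{1}-q_{2})})+t(e^{iq_{1}}+e^{-iq_{2}})\), compares it with the closed form quoted above, and confirms the Dirac zeros at \((\pi,0)\) and \((0,\pi)\) for both the \(\Theta_{x}\) (real \(t\)) and \(\Theta_{y}\) (imaginary \(t\)) cases.

```python
import numpy as np

def h_ab(q1, q2, t):
    """Bloch matrix element assembled from the four hoppings of Eq.(38)."""
    return 1j*np.conj(t)*(1 + np.exp(1j*(q1-q2))) + t*(np.exp(1j*q1) + np.exp(-1j*q2))

def h_ab_closed(q1, q2, t):
    """Closed form quoted in the text."""
    return 2*np.exp(0.5j*(q1-q2))*(1j*np.conj(t)*np.cos((q1-q2)/2) + t*np.cos((q1+q2)/2))

rng = np.random.default_rng(0)
for t in (1.0, 1j):  # Theta_x (real t) and Theta_y (imaginary t)
    q1, q2 = rng.uniform(-np.pi, np.pi, 2)
    assert np.isclose(h_ab(q1, q2, t), h_ab_closed(q1, q2, t))
    # Dispersion of Eq.(39):
    eps = lambda a, b: 2*abs(t)*np.sqrt(np.cos((a-b)/2)**2 + np.cos((a+b)/2)**2)
    assert np.isclose(abs(h_ab(q1, q2, t)), eps(q1, q2))
    # Dirac nodes at (pi,0) and (0,pi) for either choice of t:
    assert np.isclose(abs(h_ab(np.pi, 0, t)), 0) and np.isclose(abs(h_ab(0, np.pi, t)), 0)
print("Eq.(38) Bloch matrix matches Eq.(39); Dirac nodes at (pi,0) and (0,pi).")
```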
Despite their similarity, the extended projective symmetry group implementations \(\Theta_{x}\) and \(\Theta_{y}\) (see Table 5) are inequivalent. This can be seen by considering the action of a particular unitary transformation, denoted by \(V_{\frac{1}{2}}\), which acts on the fermion operator \(f^{\dagger}(\mathbf{r})\) as a local site dependent U(1) transformation, multiplying it by the specific phases shown in Fig.14(b). It turns out that \(V_{\frac{1}{2}}\) maps the \(\Theta_{x}\) mean-field Hamiltonian onto the \(\Theta_{y}\) mean-field Hamiltonian, as can be seen from its action on the following fermion bilinears (the \(1,2,3,4\) sub-indices below are the sites shown in Fig.15):
\[\begin{split} tf_{3}^{\dagger}f_{1}&\to itf_{3}^{ \dagger}f_{1}\\ -itf_{1}^{\dagger}f_{2}&\to tf_{1}^{\dagger}f_{2}\\ tf_{2}^{\dagger}f_{4}&\to itf_{2}^{\dagger}f_{4}\\ -itf_{4}^{\dagger}f_{3}&\to tf_{4}^{\dagger}f_{3}\end{split} \tag{40}\]
On the other hand, the operators \(L_{\mathbf{R}}\) and \(L_{\mathbf{R}}^{\dagger}\) that enter in the microscopic RK Hamiltonian (see Eq.(10)) can be shown to be odd under the action of \(V_{\frac{1}{2}}\). Since \(L_{\mathbf{R}}\) is invariant under the UV lattice gauge group, it follows that \(V_{\frac{1}{2}}\) is not a pure gauge transformation but a transformation with non-trivial action within the gauge
Figure 16: Phases gained by creation operators \(f^{\dagger}(\mathbf{r})\) or \(b^{\dagger}(\mathbf{r})\) under the action of the UV gauge transformations: (a) \(G_{x}\), (b) \(G_{y}\). The transformations are obtained by choosing \(\theta(\mathbf{r})=\frac{\pi}{2}\) in Eq.(8) over the vertices contained in the gray regions and zero in the remainder.
invariant subspaces, and therefore the \(\Theta_{x}\) and \(\Theta_{y}\) mean-field Hamiltonians are not gauge equivalent, but rather realize two physically distinct generalized projective symmetry group implementations. This implies that only one of them will have lower energy as a trial ground state for a specific microscopic RK Hamiltonian. Since the plaquette resonance term \(L_{\mathbf{R}}\) is odd under \(V_{\frac{1}{2}}\), the one that is more energetically favorable will be determined by the sign of the plaquette resonance term in the microscopic Hamiltonian10.
Footnote 10: Notice that \((V_{\frac{1}{2}})^{2}\) would map both the \(\Theta_{x}\) and \(\Theta_{y}\) mean-field Hamiltonians into minus themselves. However, \((V_{\frac{1}{2}})^{2}\) leaves all the gauge invariant operators unchanged, and it is therefore an element of the UV gauge group. Therefore we see that changing the global sign of \(t\) in either the \(\Theta_{x}\) or \(\Theta_{y}\) mean-field Hamiltonians leads to the same physical state.
As described in Sec.II.1, for the cases of \(\mathcal{H}_{\text{QWM}}\) and \(\mathcal{H}_{\text{QDM}}\) the system is respectively at half filling and quarter filling; therefore, as depicted in Fig.17, these systems have a mean-field dispersion featuring two massless Dirac cones and a Fermi surface, respectively. The Dirac points are located at \(\mathbf{q}_{0}=(q_{1},q_{2})=(\pi,0)\) and \(\mathbf{q}_{0}=(q_{1},q_{2})=(0,\pi)\). By writing \(\mathbf{q}=\mathbf{q}_{0}+\mathbf{p}\) and expanding the mean-field Hamiltonian to linear order in the momentum \(\mathbf{p}\), we obtain the following effective Dirac Hamiltonian for the \(\Theta_{x}\) extended PSG (\(t\in\mathbb{R}\)):
\[h(\mathbf{q}_{0}+\mathbf{p})\simeq v\begin{cases}p_{x}\tau^{x}+p_{y}\tau^{y}& \text{for }\mathbf{q}_{0}=(\pi,0)\\ p_{x}\tau^{x}-p_{y}\tau^{y}&\text{for }\mathbf{q}_{0}=(0,\pi)\end{cases} \tag{41}\]
where \(\tau^{x,y}\) are Pauli matrices in the \(a/b\) sublattice space, \(v=\sqrt{2}t|\mathbf{R}_{1}|\), and \(p_{x}=(\mathbf{p}\cdot\hat{\mathbf{R}}_{1}-\mathbf{p}\cdot\hat{\mathbf{R}}_{ 2})/\sqrt{2}|\mathbf{R}_{1}|,p_{y}=(\mathbf{p}\cdot\hat{\mathbf{R}}_{1}+ \mathbf{p}\cdot\hat{\mathbf{R}}_{2})/\sqrt{2}|\mathbf{R}_{1}|\). On the other hand, for the \(\Theta_{y}\) extended PSG (\(t\in i\mathbb{R}\)) the linearized Hamiltonian is:
\[h(\mathbf{q}_{0}+\mathbf{p})\simeq v\begin{cases}p_{x}\tau^{y}+p_{y}\tau^{x}& \text{for }\mathbf{q}_{0}=(\pi,0)\\ p_{x}\tau^{y}-p_{y}\tau^{x}&\text{for }\mathbf{q}_{0}=(0,\pi)\end{cases} \tag{42}\]
where \(v=-i\sqrt{2}t|\mathbf{R}_{1}|\).
Now for the case of the subspace of the QDM model, which corresponds to quarter filling of the bands by the JW/composite fermions, there is a Fermi surface that consists of straight lines that are perfectly nested by the \((\pi,0)\) and \((0,\pi)\) vectors (see Figs. 17,19). This indicates that such a putative composite Fermi liquid state would be highly unstable towards forming a state which spontaneously breaks the lattice translational symmetry and gaps the Fermi surface. This perfect nesting occurs only for the strict nearest neighbor mean-field Hamiltonian, and therefore can be removed by adding longer range hoppings which are allowed by the extended projective symmetry implementations under consideration (\(\Theta_{x},\Theta_{y}\) from Table 5). To illustrate this, we consider the further neighbor hoppings depicted in Fig.18. One can show that the second neighbor hopping, denoted by \(t^{\prime}\) and depicted by blue arrows in Fig.18, vanishes for the \(\Theta_{x},\Theta_{y}\) symmetry implementations. The further neighbor hopping denoted by \(t_{2}\) and depicted by green arrows in Fig.18 is allowed by the \(\Theta_{x},\Theta_{y}\) symmetry implementations, and it leads to the following sublattice-diagonal entries in the mean-field Hamiltonian:
\[\Delta h_{2}(\mathbf{q})_{ab}\sim t_{2}[\cos(q_{1}+q_{2})+\cos(q_{1}-q_{2})] \delta_{ab}. \tag{43}\]
However, since \(\cos(q+\pi/2)+\cos(q-\pi/2)=0\), the above correction vanishes exactly along the lines that define the nested Fermi surface (see Figs.17 and 19), and therefore does not remove the perfect nesting. Nevertheless, there are symmetry-allowed hoppings that lift the nesting. One of them is denoted by \(t_{4}\) and depicted in Fig.18 by the red arrows. This hopping adds the following sublattice-diagonal entries to the mean-field Hamiltonian:
\[\Delta h_{4}(\mathbf{q})_{ab}\sim t_{4}[\cos(2q_{1}+2q_{2})+\cos(2q_{1}-2q_{2})] \delta_{ab}.\]
Figure 19 illustrates how the perfect nesting is destroyed as \(t_{4}\) increases relative to \(t\), leading to a composite Fermi liquid state with two Fermi surfaces centered around \((\pi,0)\) and \((0,\pi)\). The above illustrates that a composite Fermi liquid state at \(\frac{1}{4}\)-filling could in principle be a true stable spin liquid ground state. However, the strong resilience of the nesting to near neighbor hopping corrections indicates that the Fermi surface has a strong tendency to be gapped out and destroyed via instabilities of composite-fermion particle-hole pair condensation at finite crystal momentum, leading to ordinary confined states with spontaneously broken lattice translational symmetries, such as the columnar, staggered and resonant plaquette phases that are believed to be typically realized for RK Hamiltonians of quantum dimer models.
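The nesting argument can be made concrete with a short numerical sketch. Using the identity \(\cos^{2}\big(\frac{q_{1}-q_{2}}{2}\big)+\cos^{2}\big(\frac{q_{1}+q_{2}}{2}\big)=1+\cos q_{1}\cos q_{2}\), Eq.(39) shows that \(\epsilon_{-}(\mathbf{q})\) is exactly flat along the lines \(q_{1}=\pm\pi/2\), \(q_{2}=\pm\pi/2\) (the nested Fermi surface at quarter filling), where the \(t_{2}\) correction of Eq.(43) vanishes identically while the \(t_{4}\) correction does not. The sketch below checks this with \(|t|=1\), an assumed normalization.

```python
import numpy as np

q2 = np.linspace(-np.pi, np.pi, 7)
q1 = np.pi/2 * np.ones_like(q2)   # one of the nested Fermi-surface lines

# Eq.(39) rewritten: eps_-(q) = -2|t| sqrt(1 + cos q1 cos q2),
# constant (= -2|t|) along q1 = pi/2, i.e. a perfectly flat, nested line.
eps_minus = -2*np.sqrt(1 + np.cos(q1)*np.cos(q2))
print(np.allclose(eps_minus, -2.0))   # True

# t2 correction of Eq.(43): vanishes identically on the nested line.
dh2 = np.cos(q1+q2) + np.cos(q1-q2)   # = 2 cos q1 cos q2
print(np.allclose(dh2, 0.0))          # True

# t4 correction: nonzero along the same line, so it lifts the nesting.
dh4 = np.cos(2*(q1+q2)) + np.cos(2*(q1-q2))  # = 2 cos 2q1 cos 2q2
print(np.max(np.abs(dh4)))            # 2.0, nonzero
```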
We would like to close this subsection by noting that our mean-field states associated with the \(\Theta_{x},\Theta_{y}\) extended projective symmetry groups have a resemblance to the classic \(\pi\)-flux state of standard Abrikosov-Schwinger fermions introduced in Refs. [57; 51]. In fact, from Fig.15 we see that the fermions are hopping around every plaquette of the original square lattice (which are now subdivided into vertices and plaquettes of the "spin-ice" lattice), accumulating a phase \(\pi\) over the closed loop. There are, however, several crucial physical differences with the classic \(\pi\)-flux state of Abrikosov-Schwinger fermions. First, the classic \(\pi\)-flux state is a spin singlet in which each spin species of Abrikosov-Schwinger fermions has the same hoppings in the square lattice, whereas in our construction the JW/composite-fermions are spin-less, with only one fermion species hopping around the plaquette, in a state that is not a spin singlet11 but is instead anisotropic in spin space and far from having SU(2) symmetry. More fundamentally, symmetries are much more constraining for the JW/composite-fermions relative to Abrikosov-Schwinger fermions, as they fix the phases of the hoppings, and different phases might lead to physically distinct states.
Nevertheless, the fact that our mean-field Hamiltonians of JW/composite-fermions can be viewed as states with \(\pi\)-flux in each plaquette of the original square lattice is useful for understanding the properties of the mean-field states. For example, for a \(\pi\)-flux mean-field state there exists an intra-unit cell magnetic translation that is not part of the spin-ice Bravais lattice, which can be taken to be a translation by \((\mathbf{R}_{1}+\mathbf{R}_{2})/2\) (see Fig.15). This magnetic translation anti-commutes with the ordinary elementary translations along either of the two basis vectors of the Bravais lattice \(\mathbf{R}_{1},\mathbf{R}_{2}\), because the parallelogram spanned by \(\mathbf{R}_{1}\) and \((\mathbf{R}_{1}+\mathbf{R}_{2})/2\) encloses \(\pi\)-flux, and similarly for the parallelogram spanned by \(\mathbf{R}_{2}\) and \((\mathbf{R}_{1}+\mathbf{R}_{2})/2\). As a consequence this magnetic translation boosts the standard crystal momentum by \((q_{1},q_{2})\rightarrow(q_{1}+\pi,q_{2}+\pi)\), which explains why the mean-field fermion dispersions that we have found display this translational symmetry in momentum space (see Figs.17,18). However, while this magnetic translation by \((\mathbf{R}_{1}+\mathbf{R}_{2})/2\) is a symmetry of the unprojected mean-field Hamiltonian, it cannot be a symmetry of the microscopic RK-model of quantum spin-ice, because a translation by \((\mathbf{R}_{1}+\mathbf{R}_{2})/2\) would map spin-ice vertices onto spin-ice plaquettes, which are clearly distinct in the RK model and in any typical model with the same spin-ice rules, since the ice rules themselves are incompatible with a symmetry that would exchange vertices and plaquettes (except for trivial models without quantum fluctuations). This symmetry of the bare unprojected mean-field state is also not present for the full physical trial state obtained after the spin-ice Gutzwiller projection, because the Gutzwiller projection operator from Eq.(21) is not invariant under such a translation by \((\mathbf{R}_{1}+\mathbf{R}_{2})/2\), since it is defined by projecting onto the spin-ice rules associated with the vertices. As we will see, the effective Hamiltonian capturing the gauge field fluctuations that we will discuss in the next section in fact does not have any associated translational symmetry by \((\mathbf{R}_{1}+\mathbf{R}_{2})/2\), and thus this symmetry of the bare mean-field state will be lifted by gauge fluctuations.
## III Gauge field fluctuations and effective low energy continuum field theory
The Gutzwiller projection is a non-trivial operation that substantially changes the character of the un-projected mean-field state. Computing analytically the properties of the projected state is, however, a highly non-trivial task. In a sense, this projection can be viewed as giving rise to the appearance of strong gauge field fluctuations around the mean-field state [49; 74; 21], and accounting for such fluctuations is necessary to capture, even qualitatively, the correct behavior of the phase of matter in question at low energies.
The mean-field description introduced in the previous section still conceals the emergence of low-energy dynamical gauge fields, which can be viewed as arising from fluctuations of the hopping amplitudes of the mean-field state in Eq. (20). While a description of these gauge field fluctuations is often performed by enforcing constraints and performing saddle point expansions in a path integral representation (see e.g. Refs.[80; 79]), we will devise here a more phenomenological approach to infer the field content and emergent gauge structure of the low energy field theory that describes the phase of matter for the itinerant liquids of JW/composite-fermions associated with the mean-field states constructed in the previous section.
We will include only the fluctuations of the phases of the complex hoppings \(t(\mathbf{r},\mathbf{r}^{\prime})\) of the mean-field Hamiltonian (see Eq.(20)), but not the fluctuations of their amplitudes, because we assume that the latter can be viewed as being gapped and thus not important at low energies. To capture the fluctuations of such phases, we introduce additional bosonic degrees of freedom associated with the non-zero hoppings, \(t(\mathbf{r},\mathbf{r}^{\prime})\), of the mean-field state from Eq.(20), that connect a pair of fermion lattice sites \(\mathbf{r},\mathbf{r}^{\prime}\). We denote the deviation of the phase from its mean-field value by \(A(\mathbf{r},\mathbf{r}^{\prime})\), and we promote the mean-field Hamiltonian to a new Hamiltonian including these phase fluctuation variables as follows:
\[\begin{split} H[t]&\mapsto H[t,A],\\ t(\mathbf{r},\mathbf{r}^{\prime})f^{\dagger}(\mathbf{r})f( \mathbf{r}^{\prime})&\to t(\mathbf{r},\mathbf{r}^{ \prime})f^{\dagger}(\mathbf{r})e^{iA(\mathbf{r},\mathbf{r}^{\prime})}f( \mathbf{r}^{\prime}).\end{split} \tag{44}\]
The scalar phase \(A(\mathbf{r},\mathbf{r}^{\prime})\) can be interpreted as a lattice version of \(\int_{\mathbf{r}^{\prime}}^{\mathbf{r}}\mathbf{A}\cdot d\mathbf{x}\). Notice that hermiticity demands that \(A(\mathbf{r},\mathbf{r}^{\prime})=-A(\mathbf{r}^{\prime},\mathbf{r})\) and \(t(\mathbf{r},\mathbf{r}^{\prime})=t^{*}(\mathbf{r}^{\prime},\mathbf{r})\). The above Hamiltonian describes the coupling of the matter fields to the gauge fields, and therefore we need to provide another Hamiltonian for the "pure" gauge field sector. This Hamiltonian can be obtained by demanding invariance under a generalized version of the lattice UV gauge structure together with simple symmetry considerations. In Sec. III.1 we will review this construction first for the case of usual Abrikosov-Schwinger partons, and subsequently we will apply it to the case of the extended parton constructions for quantum spin-ice.
### Review of Gauge field fluctuations for \(U(1)\) spin liquids from standard parton constructions
In this section we will derive the effective field theory governing a U(1) spin liquid associated with the standard Abrikosov-Schwinger parton mean field states (see
the discussion around Eq.(18)). The conclusion of this section is simple and well established: namely, when the spin-liquid state associated with a given mean-field parton state is stable, the low-energy deconfined gauge structure is given by the invariant gauge group (IGG) [49; 74]. We will illustrate this for a mean-field parton state with a global U(1) particle-conservation symmetry, and thus a U(1) IGG, leading to a low-energy U(1) gauge group minimally coupled to the parton fermions (i.e. a standard U(1) spin-liquid). We wish, however, to rederive these results here in what is hopefully a more conceptually intuitive construction, so that we can use it as a template of reasoning for deriving the new results on the emergent low-energy gauge structure of our extended parton constructions of JW/composite-fermion states in the next section.
We begin by promoting the phases of the hoppings into dynamical degrees of freedom, and the mean-field Hamiltonian from Eq.(18) into the following Hamiltonian capturing the matter-field coupling:
\[H[t,A]\doteq\sum_{s,s^{\prime}}\sum_{\mathbf{r},\mathbf{r}^{\prime}}t_{ss^{ \prime}}(\mathbf{r},\mathbf{r}^{\prime})e^{iA(\mathbf{r},\mathbf{r}^{\prime}) }f^{\dagger}_{s}(\mathbf{r})f_{s^{\prime}}(\mathbf{r}^{\prime}), \tag{45}\]
Here \(A(\mathbf{r},\mathbf{r}^{\prime})\) is viewed as a dynamical compact periodic phase taking values \(A(\mathbf{r},\mathbf{r}^{\prime})\in[0,2\pi)\). We would now like to define an extension of the local UV parton gauge symmetry which acts not only on the fermions but also on the dynamical gauge fields \(A(\mathbf{r},\mathbf{r}^{\prime})\). As discussed around Eq.(18), the local U(1) transformations of the parton gauge group are generated by the local fermion occupations \(n(\mathbf{r})\), which transform the fermion bilinears as:
\[f^{\dagger}_{s}(\mathbf{r})f_{s}(\mathbf{r}^{\prime})\xrightarrow{\text{ Gauge}}e^{-i[\theta(\mathbf{r})-\theta(\mathbf{r}^{\prime})]}f^{\dagger}_{s}( \mathbf{r})f_{s}(\mathbf{r}^{\prime}) \tag{46}\]
Therefore, in order to leave the Hamiltonian from Eq. (45) invariant, we demand that these transformations act on the dynamical phase gauge degrees of freedom as follows:
\[A(\mathbf{r},\mathbf{r}^{\prime})\xrightarrow{\text{Gauge}}A(\mathbf{r}, \mathbf{r}^{\prime})+\theta(\mathbf{r})-\theta(\mathbf{r}^{\prime}) \tag{47}\]
For simplicity, from now on we will assume that the hoppings only connect nearest neighbour sites \(\mathbf{r}\) and \(\mathbf{r}^{\prime}=\mathbf{r}+\mathbf{e}_{i}\) (with \(i=x,y\)) and we will label the bond connecting them by \((\mathbf{r},i)\), and the gauge fields by \(A(\mathbf{r},i)\). To implement the transformation from Eq.(47) quantum-mechanically, we introduce a canonically conjugate variable to the vector potentials denoted by \(E(\mathbf{r},i)\), and take these variables to satisfy the following commutation relations:
\[\begin{cases}[A(\mathbf{x},i),E(\mathbf{y},j)]&=-i\delta_{\mathbf{x},\mathbf{ y}}\delta_{ij}\\ [E(\mathbf{x},i),E(\mathbf{y},j)]&=0\\ [A(\mathbf{x},i),A(\mathbf{y},j)]&=0\end{cases} \tag{48}\]
Since \(A(\mathbf{x},i)\) is an angle, \(E(\mathbf{x},i)\) is an angular momentum with integer-valued spectrum. It is then easy to show, that the generalized UV gauge transformations acting on matter and dynamical phase gauge fields are generated by exponentials of the following operators:
\[G(\mathbf{r})=n(\mathbf{r})-\nabla\cdot E(\mathbf{r}). \tag{49}\]
where:
\[\nabla\cdot E(\mathbf{r})=E(\mathbf{r},x)+E(\mathbf{r},y)-E(\mathbf{r}- \mathbf{e}_{x},x)-E(\mathbf{r}-\mathbf{e}_{y},y) \tag{50}\]
We will demand that the combined effective Hamiltonian of matter and gauge fields is invariant under the above local gauge group, and we will then interpret the values of \(G(\mathbf{r})\) as a constraint that can be consistently imposed on the states in order to represent the subspace of physical interest. The subspace of physical interest will be that for which \(G(\mathbf{r})=0\) for all \(\mathbf{r}\), and therefore this constraint can be viewed as a lattice version of Gauss's law (see Fig.20 for a depiction).
Let us now determine the simplest operators, made only from gauge fields, which commute with every \(G(\mathbf{r})\). It is easy to verify that one of them is the magnetic field operator associated with the curl of \(A\) around a plaquette:
\[B(\mathbf{r})\doteq A(\mathbf{r},x)+A(\mathbf{r}+\mathbf{e}_{x},y)-A( \mathbf{r}+\mathbf{e}_{y},x)-A(\mathbf{r},y) \tag{51}\]
Here we view the plaquette of interest as being northeast from lattice site \(\mathbf{r}\), and thus we are using this as a label of the plaquette as well. The canonically conjugate variable to \(B(\mathbf{r})\) can be shown to be the lattice curl of \(E\):
\[\nabla\times E(\mathbf{r})\doteq E(\mathbf{r},x)+E(\mathbf{r}+\mathbf{e}_{x},y)-E(\mathbf{r}+\mathbf{e}_{y},x)-E(\mathbf{r},y)\]
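Since all commutators here descend from the canonical pairs in Eq.(48), checking that two linear combinations of \(A\)'s and \(E\)'s commute reduces to checking that their coefficient vectors have zero overlap, \([\sum_{a}c_{a}A_{a},\sum_{b}d_{b}E_{b}]=-i\,\mathbf{c}\cdot\mathbf{d}\). The following minimal sketch (with an assumed link-indexing convention) verifies in this way that \(B(\mathbf{r})\) from Eq.(51) commutes with every divergence from Eq.(50), which is the content of Fig.21.

```python
import numpy as np

L = 4  # L x L periodic square lattice; links labeled (x, y, i), i=0 for x, 1 for y

def idx(x, y, i):
    return ((x % L) * L + (y % L)) * 2 + i

def B_coeffs(x, y):
    """Coefficient vector of B(r) = A(r,x) + A(r+ex,y) - A(r+ey,x) - A(r,y), Eq.(51)."""
    c = np.zeros(2 * L * L)
    c[idx(x, y, 0)] += 1; c[idx(x+1, y, 1)] += 1
    c[idx(x, y+1, 0)] -= 1; c[idx(x, y, 1)] -= 1
    return c

def divE_coeffs(x, y):
    """Coefficient vector of (div E)(r), Eq.(50)."""
    c = np.zeros(2 * L * L)
    c[idx(x, y, 0)] += 1; c[idx(x, y, 1)] += 1
    c[idx(x-1, y, 0)] -= 1; c[idx(x, y-1, 1)] -= 1
    return c

# [sum_a c_a A_a, sum_b d_b E_b] = -i (c . d): commutation <=> zero overlap.
overlaps = [B_coeffs(1, 1) @ divE_coeffs(x, y) for x in range(L) for y in range(L)]
print(np.allclose(overlaps, 0))  # True: B(r) commutes with every divergence
```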
Figure 20: Depiction of generator of generalized gauge transformations (see Eq.(49)) acting on the matter (residing on blue sites) and vector potentials (residing on links), relevant for the emergent lattice U(1) gauge theory of standard Abrikosov-Schwinger partons.
Notice that at any site \(\mathbf{r}\), \(\nabla\times E(\mathbf{r})\) and \((\nabla\cdot E)(\mathbf{r})\) are two independent degrees of freedom. Following a reasoning analogous to the one we used to define the action of gauge transformations on gauge fields, we extend the symmetries of Tables 2 and 4 onto the gauge fields by requiring that the interaction Hamiltonian of Eq.(45) remains invariant. Importantly, the action of symmetries on gauge fields is independent of the specific extended projective symmetry group implementation for the fermionic matter, because the "projective" factors are already fully taken into account in the fermion transformation rules and the choice of mean-field hopping amplitudes. Moreover, under space transformations, the vector potential \(A(\mathbf{r},i)\) transforms as a vector directed along the bond \((\mathbf{r},i)\). Its transformation under time reversal, \(\Theta\), can be fixed by demanding that the exponent in Eq.(45) is left invariant:
\[e^{iA(\mathbf{r},i)}=\Theta e^{iA(\mathbf{r},i)}\Theta^{-1}=e^{-i\Theta A( \mathbf{r},i)\Theta^{-1}}, \tag{52}\]
thus, \(\Theta A(\mathbf{r},i)\Theta^{-1}=-A(\mathbf{r},i)\).
So far we have kept track of the compactification of the \(A\) field. When the low energy phase is deconfined, it is appropriate to simplify the description by neglecting the compactification and viewing the fields \(A\) as taking values on the real axis. With this simplification, and after enforcing the symmetries, it is easy to see that the simplest Hamiltonian that is bilinear in the local gauge-invariant fields \(E\) and \(B\) is the standard Maxwell Hamiltonian on the lattice, given by:
\[H_{\text{Gauge}}=\frac{\epsilon}{2}\sum_{\mathbf{r}}(E^{2}(\mathbf{r},x)+E^{ 2}(\mathbf{r},y))+\frac{1}{2\mu}\sum_{\mathbf{r}}B^{2}(\mathbf{r}). \tag{53}\]
where \(\epsilon\) and \(\mu\) are constants. The above Hamiltonian can be diagonalized in terms of the "normal modes" of the pure gauge sector in the absence of coupling to fermionic matter. Since we have two independent scalar degrees of freedom per unit cell, associated with \(A(\mathbf{r},x)\) and \(A(\mathbf{r},y)\), but one non-dynamical constraint per unit cell (since \(\nabla\cdot E(\mathbf{r})\) commutes with \(H\)), there is only one truly dynamical harmonic oscillator degree of freedom per unit cell, associated with the magnetic field \(B(\mathbf{r})\). Its equations of motion can be determined easily from the Hamiltonian using the commutators from Eq.(48), and read as follows:
\[\begin{split}\frac{dB(\mathbf{r})}{dt}&=-\nabla\times E(\mathbf{r})\\ \frac{d}{dt}\nabla\times E(\mathbf{r})&=\frac{4}{\mu\epsilon}B(\mathbf{r})-\frac{1}{\mu\epsilon}\sum_{\boldsymbol{\xi}=\pm\mathbf{e}_{x},\pm\mathbf{e}_{y}}B(\mathbf{r}+\boldsymbol{\xi}).\end{split} \tag{54}\]
The above can be solved by expanding fields in crystal momentum basis (Fourier transform), to obtain:
\[\frac{d^{2}B(\mathbf{q})}{dt^{2}}+\frac{1}{\mu\epsilon}[4-2\cos(q_{x})-2\cos(q _{y})]B(\mathbf{q})=0, \tag{55}\]
and therefore the dispersion of the normal modes is (illustrated in Fig.22):
\[\omega^{2}(\mathbf{q})=\frac{1}{\epsilon\mu}[4-2\cos(q_{x})-2\cos(q_{y})]. \tag{56}\]
The above dispersion features a linearly dispersing photon-like mode centered at momentum \((q_{x},q_{y})=(0,0)\) with a speed of (see Fig.22):
\[v=\frac{1}{\sqrt{\mu\epsilon}}.\]
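As a quick numerical check (a sketch with \(\mu=\epsilon=1\) and unit lattice spacing, both assumed normalizations), the dispersion of Eq.(56) is indeed linear near \(\mathbf{q}=0\) with slope \(v=1/\sqrt{\mu\epsilon}\):

```python
import numpy as np

mu = eps = 1.0
qx = np.linspace(-np.pi, np.pi, 101)
qy = np.zeros_like(qx)

# Eq.(56): omega^2(q) = (1/(mu*eps)) * [4 - 2 cos qx - 2 cos qy]
omega = np.sqrt((4 - 2*np.cos(qx) - 2*np.cos(qy)) / (mu*eps))

# Photon-like mode: linear dispersion with speed v = 1/sqrt(mu*eps) near q = 0
v = 1/np.sqrt(mu*eps)
small = np.abs(qx) < 0.1
print(np.allclose(omega[small], v*np.abs(qx[small]), rtol=1e-3))  # True
```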
This photon would be minimally coupled to the fermionic matter through Eq.(44). We see, therefore, that our phenomenological procedure is able to describe the low energy field content expected for a \(U(1)\) spin liquid associated with the standard Abrikosov-Schwinger parton construction [49; 74]. Let us pause to consider what protects the gaplessness of this photon mode. Once deconfinement is presumed, so that it is valid to replace vector potentials by continuum real-valued variables, the lattice Faraday law from Eq.(54) can be re-interpreted as a continuity equation:
\[\frac{\partial B}{\partial t}(\mathbf{r},t)+\nabla\cdot\varepsilon=0, \tag{57}\]
where \(\varepsilon\) is a dual electric field. It is a rotated version of the previously defined electric field, so that its lattice
Figure 21: The solid black lines depict the sum of vector potentials that enter the definition of the magnetic flux operator \(B(\mathbf{r}-\mathbf{e}_{x})\) (from Eq.(51)). The orange lines depict the sums of electric fields that enter the divergence operator (from Eq.(50)). The fact that these two operators commute can be visualized by noting that the number of segments in which parallel black and orange arrows overlap equals the number of segments in which anti-parallel arrows overlap.
divergence is centered on the plaquettes, and is defined as:
\[\varepsilon_{i}=\epsilon_{ij}E_{j},\]
where \(\epsilon_{ij}\) is the 2D Levi-Civita symbol. The photon can be viewed as a Goldstone mode of a spontaneously broken global U(1) 1-form symmetry associated with the conservation of magnetic flux, as is usually discussed in boson-vortex dualities in 2+1D [81; 82; 83]. In the absence of gapless fermionic matter, and due to the compact nature of the gauge fields, the above photon would ultimately become gapped at low energies due to Polyakov confinement [84], because the global conservation law of magnetic flux would be explicitly broken by fluctuations associated with local magnetic flux creation and destruction events.
### Gauge field fluctuations for \(U(1)\) spin liquids from extended parton constructions in 2D quantum spin-ice
Let us now generalize the previous construction to elucidate the low energy emergent gauge structure associated with the extended parton composite Fermi liquid states of quantum spin-ice models discussed in Sec.II.3. Just as we did for the Abrikosov-Schwinger fermions, we begin by writing the mean-field Hamiltonian of the composite fermions and introduce a real-valued variable, denoted by \(A(\mathbf{r},\mathbf{r}^{\prime})\), that captures the fluctuations of the phase of the hopping amplitude connecting a pair of fermion sites \((\mathbf{r},\mathbf{r}^{\prime})\). For concreteness we will focus on the fluctuations of the mean-field states described in Sec.II.3.4, which had non-zero hoppings only for \((\mathbf{r},\mathbf{r}^{\prime})\) being nearest neighbour sites, so that the resulting mean-field Hamiltonian, analogously to Eq.(45), reads as:
\[\begin{split} H(t,A)=\sum_{\mathbf{R}}& it^{*}e^{-iA_{1}(\mathbf{R}+\mathbf{R}_{2})}f_{a}^{\dagger}( \mathbf{R}-\mathbf{R}_{1}+\mathbf{R}_{2})f_{b}(\mathbf{R})+it^{*}e^{iA_{3}( \mathbf{R})}f_{a}^{\dagger}(\mathbf{R})f_{b}(\mathbf{R})\\ +& te^{-iA_{4}(\mathbf{R})}f_{a}^{\dagger}(\mathbf{ R}-\mathbf{R}_{1})f_{b}(\mathbf{R})+te^{iA_{2}(\mathbf{R}+\mathbf{R}_{2})}f_{a}^{ \dagger}(\mathbf{R}+\mathbf{R}_{2})f_{b}(\mathbf{R})+h.c.\end{split} \tag{58}\]
where the convention for the labelling of gauge fields is depicted in Fig.23. As before, we promote the above phases into angular quantum-rotor bosonic degrees of freedom, with associated canonically conjugate degrees of freedom denoted by \(E(\mathbf{r},\mathbf{r}^{\prime})\), with the same commutation relations described in Eq.(48). However, the first crucial difference that appears for the extended partons is that the UV \(U(1)\) gauge transformations do not act as in Eq.(46) for the Abrikosov-Schwinger fermions. Instead the UV \(U(1)\) gauge transformations are generated
Figure 22: Left: Dispersion relation of the standard emergent photon of a U(1) spin liquid of Abrikosov-Schwinger fermions, from Eq.(56). Right: Cut of the dispersion relations along \(q_{y}=0\), with the dashed line illustrating the linearized photon dispersion near \((q_{x},q_{y})=(0,0)\).
by the spin-ice charge operators from Eqs.(7) and (14), or equivalently by the total number of fermions in the links connected to vertex \(\mathbf{R}\), denoted by \(n_{\rm ice}(\mathbf{R})\), which in Bravais lattice notation reads as (see Fig.23):
\[n_{\rm ice}(\mathbf{R})=n_{a}(\mathbf{R})+n_{b}(\mathbf{R})+n_{a}(\mathbf{R}- \mathbf{R}_{2})+n_{b}(\mathbf{R}-\mathbf{R}_{1}). \tag{59}\]
Therefore, the generator of the generalized lattice gauge transformations analogous to the one from Eq.(49), which also acts on the dynamical phase degrees of freedom, is a sum of the corresponding four generators from Eq.(49), and is given by:
\[G_{\rm ice}(\mathbf{R})\doteq n_{\rm ice}(\mathbf{R})-\sum_{\mathbf{r}\in \mathbf{R}}(\nabla\cdot E)(\mathbf{r}), \tag{60}\]
where \(\mathbf{r}\in\mathbf{R}\) denotes the four spin sites that contribute to the ice-rule associated to the vertex \(\mathbf{R}\), as depicted in Fig.23, and the lattice divergence \((\nabla\cdot E)(\mathbf{r})\) is defined in the same way as in Eq.(50).
As before we demand that \(G_{\rm ice}\) commutes with every term in the Hamiltonian and interpret the physical Hilbert space as the one satisfying the constraint \(G_{\rm ice}(\mathbf{R})=0\) for every \(\mathbf{R}\), which can be re-written as a Gauss law of the form:
\[(\nabla\cdot E)_{\rm ice}(\mathbf{R})=n_{\rm ice}(\mathbf{R}), \tag{61}\]
where \((\nabla\cdot E)_{\rm ice}\) is given by (see Fig.24):
\[(\nabla\cdot E)_{\rm ice}(\mathbf{R})=\sum_{\mathbf{r}\in\mathbf{R}}(\nabla \cdot E)(\mathbf{r}). \tag{62}\]
We can also write a canonically conjugate partner to the above gauge constraint operator, given by:
\[(\nabla\cdot A)_{\rm ice}(\mathbf{R})=\sum_{\mathbf{r}\in\mathbf{R}}(\nabla \cdot A)(\mathbf{r}). \tag{63}\]
Let us now construct the analogue of the Maxwell Hamiltonian from Eq.(53). To do so, we need to find all the linearly independent gauge field operators that commute with the gauge field part of the constraint operator \(G_{\rm ice}(\mathbf{R})\) from Eq.(60), namely with \((\nabla\cdot E)_{\rm ice}(\mathbf{R})\) and its canonical partner \((\nabla\cdot A)_{\rm ice}(\mathbf{R})\). Since the Bravais unit cell contains four scalar vector potential degrees of freedom (see Fig.23), but there is one Gauss law constraint per unit cell, we expect three independent harmonic oscillator modes per cell and therefore three dynamical gauge field bands. To find a basis for such modes, we notice that since \((\nabla\cdot E)_{\rm ice}(\mathbf{R})\) is a sum of the divergences from the previous section on Abrikosov-Schwinger fermions (see Eq.(50)), the gauge invariant operators we discussed in the previous section would also be gauge invariant in the new spin-ice construction. These include the magnetic operators \(B(\mathbf{r})\) from Eq.(51), but now there are two such operators per spin-ice Bravais unit cell, one associated with the spin-ice vertex and one with the spin-ice plaquette, which we denote respectively by \(B_{V}(\mathbf{R})\) and \(B_{P}(\mathbf{R})\); together with their canonically conjugate partners, they are explicitly given by (see Fig.23):
\[\begin{split} B_{V}(\mathbf{R})&=A_{1}(\mathbf{R})+A_{2}(\mathbf{R})-A_{3}(\mathbf{R})-A_{4}(\mathbf{R}),\\ (\nabla\times E)_{V}(\mathbf{R})&=E_{1}(\mathbf{R})+E_{2}(\mathbf{R})-E_{3}(\mathbf{R})-E_{4}(\mathbf{R}),\\ B_{P}(\mathbf{R})&=A_{3}(\mathbf{R}-\mathbf{R}_{2})+A_{4}(\mathbf{R}+\mathbf{R}_{1}-\mathbf{R}_{2})-A_{1}(\mathbf{R}+\mathbf{R}_{1})-A_{2}(\mathbf{R}),\\ (\nabla\times E)_{P}(\mathbf{R})&=E_{3}(\mathbf{R}-\mathbf{R}_{2})+E_{4}(\mathbf{R}+\mathbf{R}_{1}-\mathbf{R}_{2})-E_{1}(\mathbf{R}+\mathbf{R}_{1})-E_{2}(\mathbf{R}),\end{split} \tag{64}\]
where \(B_{V}(\mathbf{R})\) can be viewed as a lattice curl centered around the vertex \(\mathbf{R}\), and \(B_{P}(\mathbf{R})\) as a curl centered around the plaquette immediately to the right of the vertex \(\mathbf{R}\) (see Fig.25). However, there are certain additional operators containing only gauge fields that commute with every \((\nabla\cdot E)_{\rm ice}(\mathbf{R})\), but which would not be gauge invariant under the convention of the previous section, namely they would not commute with all the divergences of electric fields defined in Eq.(50). These operators and their canonically conjugate partners (see Fig.25) can be taken to be:
\[\begin{split} B_{x}(\mathbf{R})&=A_{3}(\mathbf{R}- \mathbf{R}_{2})-A_{1}(\mathbf{R}+\mathbf{R}_{1}),\\ E_{x}(\mathbf{R})&=E_{3}(\mathbf{R}-\mathbf{R}_{2} )-E_{1}(\mathbf{R}+\mathbf{R}_{1}),\\ B_{y}(\mathbf{R})&=A_{4}(\mathbf{R}+\mathbf{R}_{1} -\mathbf{R}_{2})-A_{2}(\mathbf{R}),\\ E_{y}(\mathbf{R})&=E_{4}(\mathbf{R}+\mathbf{R}_{1} -\mathbf{R}_{2})-E_{2}(\mathbf{R}),\end{split} \tag{65}\]
where the \(B_{x}(\mathbf{R}),B_{y}(\mathbf{R})\) fields can be viewed as centered around the plaquette of the spin-ice model immediately to the right of the vertex \(\mathbf{R}\) (see Fig.25). Notice
Figure 23: Bottom left: convention for labeling gauge fields. These reside at the links (blue lines) that connect the JW/composite fermion sites (located at the solid dots). Top right: depiction of the operators entering the generator of generalized spin-ice gauge transformations, \(n_{\rm ice}(\mathbf{R})\) from Eq.(59), which is centered at the spin-ice vertices.
that \(B_{P}(\mathbf{R})=B_{x}(\mathbf{R})+B_{y}(\mathbf{R})\). Therefore, the set of linearly independent dynamical fields could in principle be chosen to be \(B_{x}(\mathbf{R}),B_{y}(\mathbf{R}),B_{V}(\mathbf{R})\). There is, however, a much better choice of local gauge invariant fields that greatly simplifies the dynamics and the final physical picture. The idea is that instead of \(B_{V}(\mathbf{R})\), we would like to construct a local magnetic field strength that fits more naturally within the spin-ice gauge structure, which we will denote by \(B_{\rm ice}(\mathbf{R})\). This quantity and its canonical partner can be chosen as follows:
\[\begin{split} B_{\rm ice}(\mathbf{R})&=B_{x}( \mathbf{R})+B_{x}(\mathbf{R}+\mathbf{R}_{1}+\mathbf{R}_{2})+B_{y}(\mathbf{R}+ \mathbf{R}_{2})+B_{y}(\mathbf{R}+\mathbf{R}_{1})+2B_{V}(\mathbf{R}+\mathbf{R}_ {1}),\\ (\nabla\times E)_{\rm ice}(\mathbf{R})&=E_{x}( \mathbf{R})+E_{x}(\mathbf{R}+\mathbf{R}_{1}+\mathbf{R}_{2})+E_{y}(\mathbf{R}+ \mathbf{R}_{2})+E_{y}(\mathbf{R}+\mathbf{R}_{1})+2(\nabla\times E)_{V}( \mathbf{R}+\mathbf{R}_{1}).\end{split} \tag{66}\]
Figure 24 illustrates the terms that enter into \(B_{\rm ice}(\mathbf{R})\), making clearer why it has a natural interpretation as a spin-ice lattice curl. Notice that \(B_{\rm ice}(\mathbf{R})\) is naturally viewed as centered around the vertex \(\mathbf{R}+\mathbf{R}_{1}\) (see Fig.24), but it will be convenient to keep its position label as \(\mathbf{R}\), as we will see later on. The three fields \(B_{x}(\mathbf{R}),B_{y}(\mathbf{R}),B_{\rm ice}(\mathbf{R})\) and their canonical conjugate partners \(E_{x}(\mathbf{R}),E_{y}(\mathbf{R}),(\nabla\times E)_{\rm ice}(\mathbf{R})\) commute with the gauge constraint field, \((\nabla\cdot E)_{\rm ice}(\mathbf{R})\), and its canonical partner, \((\nabla\cdot A)_{\rm ice}(\mathbf{R})\), and thus form a basis for the three independent modes of physical gauge fluctuations. The advantage of this basis over the \(B_{x}(\mathbf{R}),B_{y}(\mathbf{R}),B_{V}(\mathbf{R})\) basis is that these fields form a set of decoupled canonical coordinates, namely their mutual commutators vanish:
\[\begin{split}[B_{x}(\mathbf{R}),E_{y}(\mathbf{R}^{\prime})]& =[B_{x}(\mathbf{R}),(\nabla\times E)_{\rm ice}(\mathbf{R}^{\prime })]=0,\\ [B_{y}(\mathbf{R}),E_{x}(\mathbf{R}^{\prime})]&=[B_{ y}(\mathbf{R}),(\nabla\times E)_{\rm ice}(\mathbf{R}^{\prime})]=0,\\ [B_{\rm ice}(\mathbf{R}),E_{x}(\mathbf{R}^{\prime})]& =[B_{\rm ice}(\mathbf{R}),E_{y}(\mathbf{R}^{\prime})]=0.\end{split}\]
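These vanishing commutators can again be checked by the coefficient-vector bookkeeping used earlier: since \(E_{x},E_{y},(\nabla\times E)_{\rm ice}\) carry the same coefficient patterns as their \(B\) partners, each commutator reduces to an overlap of two coefficient vectors built from Eqs.(64)-(66). A minimal sketch (assuming integer Bravais coordinates with \(\mathbf{R}_{1}=(1,0)\) and \(\mathbf{R}_{2}=(0,1)\), an illustrative convention) is:

```python
import numpy as np
from collections import defaultdict

L = 6  # periodic L x L Bravais lattice; fields carry a component i = 1..4

def add(vec, comp, R, c):
    vec[(comp, (R[0] % L, R[1] % L))] += c

def B_V(R):   # Eq.(64)
    v = defaultdict(float)
    add(v, 1, R, +1); add(v, 2, R, +1); add(v, 3, R, -1); add(v, 4, R, -1)
    return v

def B_x(R):   # Eq.(65): A3(R-R2) - A1(R+R1)
    v = defaultdict(float)
    add(v, 3, (R[0], R[1]-1), +1); add(v, 1, (R[0]+1, R[1]), -1)
    return v

def B_y(R):   # Eq.(65): A4(R+R1-R2) - A2(R)
    v = defaultdict(float)
    add(v, 4, (R[0]+1, R[1]-1), +1); add(v, 2, R, -1)
    return v

def B_ice(R):  # Eq.(66)
    v = defaultdict(float)
    for (w, c) in [(B_x(R), 1), (B_x((R[0]+1, R[1]+1)), 1),
                   (B_y((R[0], R[1]+1)), 1), (B_y((R[0]+1, R[1])), 1),
                   (B_V((R[0]+1, R[1])), 2)]:
        for k, val in w.items():
            v[k] += c * val
    return v

def overlap(a, b):  # [sum a A, sum b E] = -i (a . b): zero overlap <=> commuting
    return sum(a[k] * b.get(k, 0.0) for k in a)

R0 = (2, 2)
checks = []
for Rp in [(x, y) for x in range(L) for y in range(L)]:
    checks += [overlap(B_x(R0), B_y(Rp)), overlap(B_x(R0), B_ice(Rp)),
               overlap(B_y(R0), B_ice(Rp))]
print(np.allclose(checks, 0))  # True: the fields are mutually decoupled canonical pairs
```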
The action of the microscopic lattice space symmetries on these fields is the same as in the case of Abrikosov-Schwinger fermions, and the additional pure gauge group transformations that enter into the extended projective symmetry group implementation on the fermions do not affect the gauge fields; therefore the fields \(A_{i}(\mathbf{R})\) transform as ordinary vectors according to the directions specified by the sites they connect, as depicted in Fig.23. From this, the transformations of the dynamical fields under space symmetries follow easily. The action of time
Figure 24: Left: depiction of generalized spin-ice electric field divergence operator, \((\nabla\cdot E)_{\rm ice}=\sum_{\mathbf{r}\in\mathbf{R}}(\nabla\cdot E)( \mathbf{r})\) from Eqs.(60),(62). The red arrows depict the convention for adding electric fields. Right: depiction of spin-ice magnetic field operator \(B_{\rm ice}(\mathbf{R}-\mathbf{R}_{1})\) from Eq.(66). The blue arrows depict the convention for adding vector potentials. The blue crosses mark the location of the spin-ice vertices.
reversal (\(\Theta\) in Table 2) can be also inferred analogously to Eq.(52), and one concludes that:
\[\begin{split}\Theta A_{i}(\mathbf{R})\Theta^{-1}&=-A_{ i}(\mathbf{R})\\ \Theta E_{i}(\mathbf{R})\Theta^{-1}&=E_{i}(\mathbf{R} )\end{split} \tag{67}\]
where \(i=1,2,3,4\) are the components depicted in Fig.23, and the transformations of \(E_{i}(\mathbf{R})\) can be inferred from its canonical commutator with \(A_{i}(\mathbf{R})\). Let us now consider the action of the microscopic particle-hole conjugation of hard-core bosons, denoted by \(X\) (see Table 2). From its action on JW/composite-fermions (see Eq.(35)) we obtain that the phases dressing the mean-field Hamiltonian should transform as:
\[Xe^{iA(\mathbf{r},\mathbf{r}^{\prime})}X^{\dagger}=e^{iA(\mathbf{r}^{\prime}, \mathbf{r})}=e^{-iA(\mathbf{r},\mathbf{r}^{\prime})} \tag{68}\]
where we used that \(A(\mathbf{r}^{\prime},\mathbf{r})=-A(\mathbf{r},\mathbf{r}^{\prime})\) (hermiticity). Therefore the fields transform as:
\[\begin{split}XA_{i}(\mathbf{R})X^{\dagger}&=-A_{i} (\mathbf{R})\\ XE_{i}(\mathbf{R})X^{\dagger}&=-E_{i}(\mathbf{R} )\end{split} \tag{69}\]
It is interesting to note that under the natural microscopic time-reversal symmetry of spin-\(\frac{1}{2}\), denoted by \(\mathcal{T}\) (see Sec.II.3.3), it follows from Eq.(33) and Eqs.(67),(69) that the gauge fields transform as:
\[\begin{split}\mathcal{T}A_{i}(\mathbf{R})\mathcal{T}^{-1}& =A_{i}(\mathbf{R})\\ \mathcal{T}E_{i}(\mathbf{R})\mathcal{T}^{-1}&=-E_{i} (\mathbf{R})\end{split} \tag{70}\]
and therefore, interestingly, all the magnetic fields \(B_{x}(\mathbf{R}),B_{y}(\mathbf{R}),B_{\text{ice}}(\mathbf{R})\) are even and the electric fields are odd under this time-reversal, which is opposite to the standard situation in QED. This is a manifestation of the pseudo-scalar transformation of the JW/composite-fermions under this symmetry, as discussed in Sec.II.3.3 and Ref.[77]. Similar considerations also apply to other space symmetries such as mirrors, which in order to be implemented as natural spin-\(\frac{1}{2}\) symmetries need to be dressed by the hard-core boson particle-hole conjugation \(X\), which leads to transformations on gauge fields opposite to those of ordinary QED (e.g. the electric field transforming as a pseudo-vector under mirrors).
We are now in a position to write a simple bilinear Maxwell-like model Hamiltonian for the pure gauge field part invariant under all microscopic symmetries of the RK model, which we write as:
\[H_{\text{Gauge}}=\frac{\epsilon}{2}\sum_{\mathbf{R}}\sum_{i=1}^{4}E_{i}^{2}( \mathbf{R})+\frac{\chi_{B}}{2}\sum_{\mathbf{R}}B_{\text{ice}}^{2}(\mathbf{R} )+\frac{\chi_{P}}{2}\sum_{\mathbf{R}}(B_{x}^{2}(\mathbf{R})+B_{y}^{2}(\mathbf{ R})) \tag{71}\]
Here we have again ignored for simplicity the compactification of the gauge fields, and \(\epsilon,\chi_{B},\chi_{P}\) are phenomenological
Figure 25: Left: depiction of \(B_{x}\) operators (from Eq.(65)), as pairs of blue arrows. Notice that whenever the blue arrows (vector potentials) of a \(B_{x}\) operator overlap with the red arrows (electric fields) of a \((\nabla\cdot E)_{\text{ice}}\) operator, there is always an equal number of parallel and antiparallel arrows, illustrating that these operators commute. Right: analogous depictions for the \(B_{y}\) operators (from Eq.(65)). The blue crosses mark the location of the spin-ice vertices.
coupling constants. The equations of motion for the Hamiltonian from Eq.(71) are:
\[\begin{split}\frac{d^{2}B_{x}}{dt^{2}}(\mathbf{R})&=-2\frac{\chi_{P}}{\epsilon}B_{x}(\mathbf{R}),\\ \frac{d^{2}B_{y}}{dt^{2}}(\mathbf{R})&=-2\frac{\chi_{P}}{\epsilon}B_{y}(\mathbf{R}),\\ \frac{d^{2}B_{\rm ice}}{dt^{2}}(\mathbf{R})&=-\frac{2\chi_{B}}{\epsilon}\Big{[}4B_{\rm ice}(\mathbf{R})-\sum_{\boldsymbol{\xi}=\pm\mathbf{R}_{1},\pm\mathbf{R}_{2}}B_{\rm ice}(\mathbf{R}+\boldsymbol{\xi})\Big{]}.\end{split} \tag{72}\]

The first two equations describe flat bands of gapped modes at frequency \(\omega=\sqrt{2\chi_{P}/\epsilon}\), while the last one has the same structure as Eq.(54) and therefore yields a photon-like mode, gapless at long wavelengths.
\[H(t,A) =\sum_{\mathbf{R}}it^{*}(1-iA_{1}(\mathbf{R}+\mathbf{R}_{2}))f_{a}^ {\dagger}(\mathbf{R}-\mathbf{R}_{1}+\mathbf{R}_{2})f_{b}(\mathbf{R})+it^{*}(1+iA _{3}(\mathbf{R}))f_{a}^{\dagger}(\mathbf{R})f_{b}(\mathbf{R}) \tag{75}\] \[\qquad+t(1-iA_{4}(\mathbf{R}))f_{a}^{\dagger}(\mathbf{R}- \mathbf{R}_{1})f_{b}(\mathbf{R})+t(1+iA_{2}(\mathbf{R}+\mathbf{R}_{2}))f_{a}^{ \dagger}(\mathbf{R}+\mathbf{R}_{2})f_{b}(\mathbf{R})+h.c+O(A^{2})\]
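Setting \(A=0\) in Eq.(75) and passing to momentum space gives the bare mean-field Bloch Hamiltonian, whose spectrum can be checked numerically. The short sketch below (again our own illustration, assuming \(\mathbf{R}_{1}=(1,0)\), \(\mathbf{R}_{2}=(0,1)\) and real hopping \(t\), i.e. the \(\Theta_{x}\) state) confirms that the two-band spectrum \(\pm|h_{ab}(\mathbf{k})|\) closes only at the two valleys quoted next.

```python
import numpy as np

# Sketch (our own illustration): off-diagonal Bloch element h_ab(k) obtained from
# Eq.(75) at A = 0, using f_a^dag(R + d) f_b(R) -> e^{-i k.d} f_a^dag(k) f_b(k).
# Assumed conventions: R1 = (1, 0), R2 = (0, 1), real hopping t = 1 (Theta_x PSG).
t = 1.0
R1, R2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def h_ab(k):
    return (1j * t * np.exp(1j * k @ (R1 - R2)) + 1j * t            # the two it* terms
            + t * np.exp(1j * k @ R1) + t * np.exp(-1j * k @ R2))   # the two t terms

# The two-band spectrum is E(k) = +-|h_ab(k)|; it closes only at the two valleys:
for k in ([np.pi, 0.0], [0.0, np.pi], [1.0, 2.0]):
    print(k, abs(h_ab(np.array(k))))   # ~0 at (pi,0) and (0,pi), finite otherwise
```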
As discussed in Sec. II.3.4, the fermions have gapless Dirac nodes at the two valleys \((\pi,0)\) and \((0,\pi)\) while the gauge field has gapless photon-like modes at \((0,0)\) and \((\pi,\pi)\). Therefore we expect that the dominant effects at low energy include:
1. Intra-valley scattering within each Dirac cone mediated by exchange of long-wavelength gauge field fluctuations with momenta near \((0,0)\).
2. Inter-valley scattering process connecting the two Dirac cones mediated by the exchange of gauge fluctuations with momenta near \((\pi,\pi)\).
These two kinds of processes are depicted in Fig.27. Therefore, in the spirit of \(k\cdot p\) theory, we define the following fields by expanding the fermion and gauge fields around their respective gapless points:
\[\Psi(\mathbf{p}) \doteq\begin{pmatrix}f_{a}(\mathbf{p}+(\pi,0))\\ f_{b}(\mathbf{p}+(\pi,0))\\ f_{a}(\mathbf{p}+(0,\pi))\\ f_{b}(\mathbf{p}+(0,\pi))\end{pmatrix} \tag{76}\] \[A_{j}^{0}(\mathbf{p}) \doteq A_{j}(\mathbf{p})\qquad j\in\{1,2,3,4\}\] \[A_{j}^{\pi}(\mathbf{p}) \doteq A_{j}(\mathbf{p}+(\pi,\pi))\qquad j\in\{1,2,3,4\}\]
where \(\mathbf{p}\) is understood to be "small" with respect to the size of the Brillouin zone, so that we can expand the Hamiltonian (75) to first order (for the sublattice index conventions see Fig.23). Details of the derivation of the small-momentum expansion can be found in Appendix C; here we summarize the final results.
#### iii.3.1 \(\boldsymbol{p}=(0,0)\) scattering terms
The Hamiltonian density describing processes of the first type for the state with \(\Theta_{x}\) extended PSG (\(t\in\mathbb{R}\)) is:
\[H=v\Psi^{\dagger}(\mathbf{x})\left[(p_{x}-A_{x}^{0}(\mathbf{x}))\tau^{x}+(p_{ y}-A_{y}^{0}(\mathbf{x}))\tau^{y}\rho^{z}\right]\Psi(\mathbf{x}),\]
and for the state with \(\Theta_{y}\) extended PSG (\(t\in i\mathbb{R}\)) is:
\[H=v\Psi^{\dagger}(\mathbf{x})\left[(p_{x}-A_{x}^{0}(\mathbf{x}))\tau^{y}+(p_{ y}-A_{y}^{0}(\mathbf{x}))\tau^{x}\rho^{z}\right]\Psi(\mathbf{x}), \tag{77}\]
where the convention of momenta is the same as in Eq.(41), and \(\tau^{i}\), \(\rho^{i}\) denote Pauli matrices in \(\{a,b\}\) sublattice and on \(\{(\pi,0),(0,\pi)\}\) valley spaces respectively, and we have defined continuum vector potential fields as follows:
\[A_{x}^{0}(\mathbf{x}) \doteq\frac{A_{1}^{0}(\mathbf{x})+A_{3}^{0}(\mathbf{x})}{\sqrt{2} |\mathbf{R}_{1}|}, \tag{78}\] \[A_{y}^{0}(\mathbf{x}) \doteq\frac{A_{2}^{0}(\mathbf{x})+A_{4}^{0}(\mathbf{x})}{\sqrt{2} |\mathbf{R}_{1}|}.\]
Therefore, we see that the long-wavelength gauge fluctuations that are gapless near \((0,0)\) simply behave as the standard minimal coupling of a photon-like mode to the matter fields (compare with the mean field Hamiltonian from Eq.(41)).
#### iii.3.2 \(\boldsymbol{p}=(\pi,\pi)\) scattering terms
The contribution to the Hamiltonian density accounting for processes of the second type, for the state with \(\Theta_{x}\) extended PSG (\(t\in\mathbb{R}\)), is:
\[\delta H=-v\Psi^{\dagger}(\mathbf{x})\left[B_{x}^{\pi}(\mathbf{x})\tau^{x} \rho^{1}+B_{y}^{\pi}(\mathbf{x})\tau^{x}\rho^{2}\right]\Psi(\mathbf{x}), \tag{79}\]
and for the state with \(\Theta_{y}\) extended PSG (\(t\in i\mathbb{R}\)) is:
\[\delta H=-v\Psi^{\dagger}(\mathbf{x})\left[B_{x}^{\pi}(\mathbf{x})\tau^{y} \rho^{1}-B_{y}^{\pi}(\mathbf{x})\tau^{y}\rho^{2}\right]\Psi(\mathbf{x}), \tag{80}\]
Figure 27: Illustration of the two types of fermion scattering processes arising from their coupling to gauge fields. The gauge modes near \(q=(0,0)\) mediate “intra-valley” fermion scattering processes (depicted by orange circles), and the gauge modes near \(q=(\pi,\pi)\) mediate “inter-valley” scattering processes (depicted by red straight arrows).
where the continuum vector potential fields are defined as follows:
\[\begin{split} B^{\pi}_{x}(\mathbf{x})&=\frac{A^{\pi}_{ 3}(\mathbf{x})-A^{\pi}_{1}(\mathbf{x})}{\sqrt{2}|\mathbf{R}_{1}|},\\ B^{\pi}_{y}(\mathbf{x})&=\frac{A^{\pi}_{4}(\mathbf{ x})-A^{\pi}_{2}(\mathbf{x})}{\sqrt{2}|\mathbf{R}_{1}|}.\end{split} \tag{79}\]
Notice that the fields \(B^{\pi}_{x},B^{\pi}_{y}\) are the continuum limits of the fields defined in Eq. (65) expanded around momentum \((\pi,\pi)\). Therefore, remarkably, what we are finding here is that, to linear order in vector potentials, there is no coupling to the linearly dispersing gapless photon modes near \((\pi,\pi)\); instead, the inter-valley scattering processes are mediated only by the gauge fields associated with the \(B_{x}\) and \(B_{y}\) modes, which are fully gapped throughout the entire Brillouin zone. Therefore, at energies low compared to the gap of the \(B_{x},B_{y}\) modes and to the band-width of the photon modes, we have two emergent massless photon modes and two massless Dirac fermions. But the fermions carry gauge charge only under the \((0,0)\) photon and appear gauge neutral under the \((\pi,\pi)\) photon.
### \(U(1)\times U(1)\) Gauge structure
In this section we will explain why the occurrence of two gapless photon modes, and the gauge coupling of the Dirac composite fermions to only one of them, which we encountered in Sections III.2 and III.3, is not accidental. We will show that there are two emergent \(U(1)\) gauge structures with independent local Gauss laws and two global flux conservation laws, as if we had two copies of ordinary lattice QED.
To see this it is convenient to split the Bravais lattice of vertices of the spin-ice model, which are located at vectors \(\mathbf{R}\), into two sublattices denoted by \(\Lambda_{A}\) and \(\Lambda_{B}\), as depicted in Fig. 29. The sublattice \(\Lambda_{B}\) can be obtained by displacing \(\Lambda_{A}\) by either the Bravais vector \(\mathbf{R}_{1}\) or \(\mathbf{R}_{2}\), and vice-versa. Therefore, the Bravais unit vectors of the lattice \(\Lambda_{A}\) can be taken to be \(\{\mathbf{R}_{1}-\mathbf{R}_{2},\mathbf{R}_{1}+\mathbf{R}_{2}\}\), and similarly for \(\Lambda_{B}\) (see Fig. 29). Notice that the operators that measure the divergence of the dynamical emergent fields, \((\nabla\cdot E)_{\text{ice}}(\mathbf{R})\) defined in Eq. (62) and illustrated in Fig.24, behave as two independent divergences obeying separate Gauss laws. Namely, when we sum \((\nabla\cdot E)_{\text{ice}}(\mathbf{R})\) over \(\mathbf{R}\) restricted to a region of points residing only on sublattice \(\Lambda_{A}\), we will get a sum of electric fields residing only at the boundary of such a region and normal to the boundary, as expected for a lattice divergence, and similarly for regions of points contained only in sublattice \(\Lambda_{B}\). Moreover, we can also restrict the operators \(G_{\text{ice}}(\mathbf{R})\) to reside over either of the sublattices, and in this way we can view the \(U(1)\) gauge group as a product \(U(1)_{A}\times U(1)_{B}\). We can assign a pair of charges \((q_{A},q_{B})\) with \(q_{A,B}\in\mathbb{Z}\) to matter operators (namely those constructed as products of fermion creation/destruction operators) under this \(U(1)_{A}\times U(1)_{B}\) gauge group. In particular, the JW/composite-fermion creation operator transforms as a charge \((q_{A},q_{B})=(1,1)\) under such sublattice gauge groups.
There are also two independent global flux conservation symmetries (when ignoring gauge field compactification), one associated with sublattice \(\Lambda_{A}\) and the other with sublattice \(\Lambda_{B}\), which are responsible for the gaplessness of the two photons. This can be seen by adding the operators \(dB_{\text{ice}}(\mathbf{R})/dt\), defined in Eq.(66) and illustrated in Fig.24, over some region of \(\mathbf{R}\) that only contains points in the sublattice \(\Lambda_{A}\), resulting in a boundary operator that can be viewed as a line integral of the operator \((\nabla\times E)_{\text{ice}}(\mathbf{R})\) from Eq.(66). This can be interpreted as a conservation law analogous to the lattice Faraday law of QED from Eq.(57), except that now there are two such conservation laws, one for the \(\Lambda_{A}\) and another one for the \(\Lambda_{B}\) sublattice.
Moreover, our choice of Maxwell Hamiltonian in Eq.(71) has actually been made so that the two photons of the \(U(1)_{A}\times U(1)_{B}\) gauge structure are also dynamically decoupled. This can be seen by noticing that the commutator of \(B_{\text{ice}}(\mathbf{R})\) and \((\nabla\times E)_{\text{ice}}(\mathbf{R}^{\prime})\) vanishes whenever \(\mathbf{R}\) and \(\mathbf{R}^{\prime}\) belong to different \(\Lambda_{A}\),\(\Lambda_{B}\) sublattices. The set of operators \(B_{\text{ice}}(\mathbf{R})\) and their canonical partners \((\nabla\times E)_{\text{ice}}(\mathbf{R})\), with \(\mathbf{R}\) restricted to a given sublattice \(\Lambda_{A}\),\(\Lambda_{B}\), have indeed exactly the same equations of motion as the ordinary QED with a single photon that we reviewed in Sec.III.1. If we expand these operators in the crystal momentum basis of each \(\Lambda_{A}\), \(\Lambda_{B}\) sublattice, associated with Bravais vectors \(\{\mathbf{R}_{1}-\mathbf{R}_{2},\mathbf{R}_{1}+\mathbf{R}_{2}\}\), then we obtain the following decoupled equations of motion:
Figure 28: Depiction of the expected infrared effective low energy theory, which is a U(1) compact QED in 2+1 dimensions minimally coupled to two massless Dirac fermions. The Dirac fermions are centered at \((0,\pi)\) and \((\pi,0)\), and they are minimally coupled to the single U(1) photon gapless at \((0,0)\). The photon at \((\pi,\pi)\) likely undergoes Polyakov style confinement, hence disappearing at low energies.
\[\frac{d^{2}B_{\rm ice}^{A}({\bf k})}{dt^{2}}=-\omega^{2}({\bf k})B_{\rm ice }^{A}({\bf k})\] \[\frac{d^{2}B_{\rm ice}^{B}({\bf k})}{dt^{2}}=-\omega^{2}({\bf k})B_{ \rm ice}^{B}({\bf k})\]
where each of the above equations is now identical to the ordinary Maxwell theory on the square lattice from Eq.(55), with the dispersion \(\omega^{2}({\bf k})\) given by the same expression as in Eq.(56). Here the wavevector \({\bf k}\) is defined with respect to the Bravais lattice spanned by the vectors \(\{{\bf R}_{1}-{\bf R}_{2},{\bf R}_{1}+{\bf R}_{2}\}\), and therefore its Brillouin zone is half the size of the Brillouin zone associated with the full translational symmetry of the lattice.
We therefore see that we have two decoupled copies of standard lattice QED, featuring linearly dispersing photon modes at \({\bf k}=(0,0)\) for each of the sublattices \(\Lambda_{A}\) and \(\Lambda_{B}\). The underlying model has a translational symmetry that exchanges these two sublattices. Therefore, in the lattice momentum convention that exploits the full lattice translational symmetry, which was employed in deriving the dispersions from Eq.(71), those two modes combine into a symmetric one and an antisymmetric one 12, giving rise respectively to the \({\bf q}=(0,0)\) mode and the \({\bf q}=(\pi,\pi)\) mode in Fig.26. Now, since the fermion carries charge \((q_{A},q_{B})=(1,1)\) for the gauge fields associated with the two sublattices, it will therefore carry a net gauge charge under the sublattice-symmetric combination of those fields, associated with the photon at \({\bf q}=(0,0)\), and carry zero charge under their sublattice-antisymmetric (staggered) combination, associated with the photon at \({\bf q}=(\pi,\pi)\), explaining the result we encountered in the previous section by direct calculation.
Footnote 12: Namely, with staggered alternating signs \(+1,-1\) on the \(\Lambda_{A}\) and \(\Lambda_{B}\) sublattices.
While the above structure is certainly remarkable, its appearance can be intuitively understood by simply appealing to the interplay of the local conservation laws of the spin-ice models and the nature of the Jordan-Wigner composite fermion. Notice that the creation of a Jordan-Wigner composite fermion, which involves the reversal of the z-direction of a single spin, necessarily violates the two ice rules associated with the two vertices connected by the link on which that spin resides. One of these vertices is located in the \(\Lambda_{A}\) sublattice and the other in the \(\Lambda_{B}\) sublattice. Thus it is natural to see the Jordan-Wigner composite fermion as an extended dipole-like object which has two charges located at the ends of the link that connects the two vertices (see Fig.(23)), which will be charged under the sublattice-symmetric local gauge transformations, but will be a neutral dipole under the staggered antisymmetric gauge transformations13. This is why we have called it an "extended parton", to emphasize the distinction with a "point-like parton", such as the Abrikosov-Schwinger fermion.
Footnote 13: Notice that interestingly the global operator associated with the staggered sum of the spin-ice charges over all the lattice in a periodic torus is identically zero. This global subgroup of the staggered gauge group acts therefore trivially within the physical Hilbert space.
## IV Summary and discussion
We have built upon the idea that the standard Jordan-Wigner transmutation that maps spin-\(\frac{1}{2}\) degrees of freedom onto spinless fermions in a 2D lattice is exactly equivalent to another celebrated statistical transmutation of attaching a \(2\pi\) flux to a spinless hardcore boson that maps these onto spinless composite fermions. In one-dimensional chains, this Jordan-Wigner transformation has the property that it maps local Hamiltonians of spins that are symmetric under a global parity onto local Hamiltonians of fermions. However, in 2D models simply imposing a global symmetry is not enough to preserve locality on both the _physical side_ (the spin representation) and the _dual side_ (the fermion representation). Nevertheless, this should not be viewed as a _bug_ but rather as a _feature_ of the mapping: the non-locality is expressing the fact that the fermion is not the underlying microscopic local particle of the Hilbert space of interest, but instead it is a non-local composite-fermion object obtained from attaching a \(2\pi\) flux to the underlying microscopic particles.
One ad-hoc approach to handle the above inherent non-locality of Jordan-Wigner/Composite-Fermions in 2D, that is often used in mean-field treatments, is to simply ignore the detailed structure of non-locality by replacing the gauge fields associated with the flux attachment by averaged "smeared" values that can be chosen to match the net background magnetic field, which is given by the composite fermion density. However, in this work we have advanced a completely different route to capture this non-locality of the Jordan-Wigner/Composite-fermions. Namely, we have exploited the fact that Hamiltonians of spin-\(\frac{1}{2}\) degrees of freedom that respect certain local symmetries do remain local in their dual Jordan-Wigner/Composite-Fermion representation. The local symmetries that we have focused on are the \(U(1)\) symmetries associated with ice rules in 2D quantum spin-ice models, which allow one to map Rokshar-Kivelson-like models of spins onto local models of Jordan-Wigner/Composite-fermions. The local gauge symmetry structure in these 2D models therefore plays a role analogous to that of the global symmetries in 1D, keeping the models local in the _physical_ (spin) and _dual_ (fermion) representations.
The main difficulty for constructing interesting quantum disordered 2D states within our approach is that quantum spin-ice models with RK-like Hamiltonians would necessarily map onto interacting Hamiltonians of fermions (e.g. the plaquette resonance term maps onto a quartic fermion interaction). Therefore, we don't have the luxury of 1D, where non-trivial spin models can be exactly mapped onto purely free fermion models. More fundamentally, we have seen that even though Slater determinants of fermions can be viewed as zeroth-order mean-field approximations to the ground states of quantum spin-ice Hamiltonians (which only satisfy the ice rules in a global averaged sense), such Slater determinants necessarily violate the exact local ice rules, and therefore are not satisfactory approximations to their true ground states satisfying the local ice rules. This obstacle can, however, be naturally overcome by acting on these Slater determinants with a Gutzwiller projector that enforces the local ice rules, making such projected states satisfactory trial ground states of 2D quantum spin-ice Hamiltonians. Computing local spin operators exactly, such as those that enter the RK Hamiltonian, is however a hard analytic task, but it should be possible to efficiently implement these constraints numerically, as has been done successfully in previous studies of the more common Gutzwiller projected states of Abrikosov-Schwinger fermions (see e.g. [85; 86; 87; 88]). This is an interesting direction that we hope future studies will further explore.
However, while explicit analytic calculations of ground state energies for these states is challenging, it is possible to develop a precise understanding of the implementation of the global physical symmetries of the spin model in their dual Jordan-Wigner/Composite-Fermion representation, which is one of the central themes in this study. For the RK-like models such global symmetries include lattice space symmetries, time reversal and on-site spin symmetries (e.g. unitary particle-hole conjugation of hard-core bosons). While the implementation of these symmetries is simple and standard in the physical spin-\(\frac{1}{2}\) degrees of freedom, their implementations in the dual Jordan-Wigner/composite-Fermion degrees of freedom can look fairly unusual, which is not surprising because of the non-local nature of the operators creating the composite-Fermion particles. However, because the Jordan-Wigner transformation is an explicit operator map, it is straightforward to determine the exact symmetry action on the Jordan-Wigner/composite-Fermions.
Nevertheless, as a result of the additional local symmetry structure that we have imposed on the Jordan-Wigner/composite-fermions in spin-ice models, a kind of freedom appears in how the symmetry is implemented that bears a resemblance to the problem of implementing physical symmetries on the standard parton constructions of Abrikosov-Schwinger fermions. The implementation of symmetries on Gutzwiller projected states of Abrikosov-Schwinger fermions leads naturally to the notion of projective symmetry groups [49; 74]. A remarkable fact about such projective symmetry group implementations is that a given specific microscopic symmetry acting on the physical spins can be implemented in many inequivalent ways on the parton fermions, but these distinct implementations can lead to sharply physically distinct quantum disordered spin liquids of the underlying physical spins (all still obeying the same microscopic symmetries) [49; 74]. We have seen that an analogous situation arises in our construction of Jordan-Wigner/composite-fermion states that are Gutzwiller projected to satisfy the ice rules of 2D RK-like models. Namely, gauge inequivalent symmetry implementations on the Jordan-Wigner/composite-fermions can act identically on all the gauge invariant operators within a given subspace with definite values of the ice rules, but will lead to sharply physically distinct quantum disordered states of the Jordan-Wigner/composite-fermions. This freedom of symmetry implementations also turns out to be a very valuable resource. For example, it is very difficult to enforce the \(\pi/2\) rotational symmetry on the mean field states by using the fully microscopically explicit "bare" action of this symmetry on the Jordan-Wigner creation operators that includes the full string ordering of the 2D lattice. However, we have seen that there are alternative projective symmetry implementations of the \(\pi/2\) rotation symmetry that act as effectively local operations on the Jordan-Wigner/composite-fermions, and which have exactly the same action on all the spin-ice gauge invariant operators, which therefore lead to satisfactory and much simpler implementations of this microscopic symmetry on the Gutzwiller projected states.

Figure 29: Separation of the lattice of vertices into \(\Lambda_{A}\) and \(\Lambda_{B}\) sub-lattices, which allows one to understand the \(U(1)_{A}\times U(1)_{B}\) gauge structure. The JW/Composite-fermions are located at the dots and carry equal charge \((q_{A},q_{B})=(1,1)\) under these \(U(1)_{A}\times U(1)_{B}\) gauge transformations. The photon that is gapless near \((0,0)\) (see Fig.26) corresponds to the sublattice-symmetric gauge transformations, for which the fermions are charged. This is why the JW/composite-fermions are minimally coupled to this photon at low energies (see Fig.28). The photon that is gapless near \((\pi,\pi)\) (see Fig.26) corresponds to the sublattice-asymmetric (i.e. staggered) gauge transformations. Under these asymmetric gauge transformations the JW/composite-fermion is not charged (it behaves instead as a gauge dipole), and this is why it is not minimally coupled to the \((\pi,\pi)\) photon.
We have not attempted to classify all the possible spin liquid states that can result from this _extended parton construction_ of Jordan-Wigner/composite-fermions. From the precedents with Abrikosov-Schwinger fermions [49; 74] it is only natural to expect that it will also offer a diverse and colorful variety of possibilities, which we hope future studies can investigate. We have instead focused on constructing interesting concrete examples that satisfy the following criteria: (1) a projective symmetry implementation of all the physical global symmetries of the classic RK model for 2D quantum spin-ice, applicable to the six-vertex and quantum dimer subspaces; (2) an implementation that allows for a non-zero value of the nearest neighbor hopping of fermions. The first demand guarantees that the composite fermion liquid is a fully symmetric spin liquid that does not break any of the symmetries of the model. Notice in particular that we have enforced time reversal for both six-vertex and quantum dimers and also the particle-hole symmetry of the six-vertex model, which are often neglected in ad-hoc mean-field constructions of composite fermions based on flux smearing at fractional filling of the lattice. The second requirement is desirable to make the states potentially energetically competitive trial ground states of microscopic RK-like Hamiltonians with short-range couplings, since in these models a big portion of the energy density is typically determined by optimal short distance correlations.
We have successfully constructed two explicit examples of projective symmetry implementations of Jordan-Wigner/composite-fermions based on this _extended parton construction_. For the quantum six-vertex model (realized when the Jordan-Wigner/composite-fermions are at half-filling of the lattice) these states feature two massless Dirac cones centered at \((\pi,0)\) and \((0,\pi)\), and thus the state is a putative composite fermion Dirac spin liquid. For the quantum dimer model (realized when the Jordan-Wigner/composite-fermions are at quarter-filling of the lattice) these states display a Fermi surface covering half of the Brillouin zone, and thus the state is a putative composite Fermi liquid state. This Fermi surface is perfectly nested when the mean field state only includes nearest neighbor composite fermion hopping, but further neighbor hoppings remove the perfect nesting and could stabilize this state. Because of this strong tendency towards nesting instabilities, this composite Fermi liquid could be a useful parent state for understanding the descending ordered states and their competition in the RK model, which is another interesting direction for future studies.
We have also developed a simplified description of the gauge field fluctuations around these mean field states, aimed at qualitatively capturing the nature of the low energy field theories emerging in the infrared limit (i.e. low energies and long wavelengths compared to lattice scales), and particularly the nature of the potentially deconfined low energy gauge structure. As is well known from Abrikosov-Schwinger fermions, the low energy gauge structure can be different from the UV parton gauge structure. We have seen that the low energy gauge structure differs from the UV spin-ice gauge structure, although they have some precise relations. To determine this structure, we have performed an analysis in two stages.
In the first stage, for a given bare mean field Hamiltonian (not Gutzwiller projected) which has some specific non-zero hopping elements of the Jordan-Wigner/composite-fermions on the lattice, we consider trial Hamiltonians in which only the phases of these non-zero hopping elements are allowed to fluctuate. The spirit behind this is that the amplitude fluctuations should be generically gapped but the fluctuations of the phases could possibly be soft. We then promote these fluctuating phases to become local quantum bosonic degrees of freedom residing on the links connecting the spin sites (or equivalently the links associated with Jordan-Wigner/composite-fermion hopping). Such fluctuating phases of the hoppings can be viewed as emergent vector potentials and their canonically conjugate momenta as emergent electric fields. We then generalize the action of the UV gauge symmetry group, which is generated by the local operators determining the spin-ice rule constraints, to act not only on the Jordan-Wigner/composite-fermions but also on these bosonic phase/vector potential degrees of freedom, by demanding that the combinations of the fermion bilinear operators and exponentials of the vector potential degrees of freedom associated with hoppings are invariant under the UV spin-ice gauge group; therefore, in a sense, the phase fluctuations dress the mean field state so as to become locally gauge invariant and thus become a more satisfactory approximation to the fully Gutzwiller projected trial state. With these gauge transformation rules, we then write a simple lattice model for the leading bilinear order Hamiltonian in powers of gauge fields, namely the analogue of the usual Maxwell action, that is consistent with the microscopic global symmetries of the model, while neglecting their compactification (whose potential impact is to be re-considered at the end of the analysis,
see below.). For the RK-like models of 2D spin-ice this pure gauge field Hamiltonian features four vector potentials per Bravais unit cell, but there is a local constraint analogous to the zero divergence of the electric field, leading to three truly dynamical gauge fields with associated energy bands. Out of these three, two are fully gapped over the entire Brillouin zone (and thus unimportant at low energies), while one band features two linearly dispersing \(U(1)\) photon-like modes that are gapless at \((0,0)\) and \((\pi,\pi)\) momentum in the Brillouin zone. This suggests a \(U(1)\times U(1)\) low energy gauge structure when compactification is neglected (but see below for a discussion of its important impacts).
In the second stage we determine the minimal coupling of the Jordan-Wigner/composite-fermion to this \(U(1)\times U(1)\) low energy gauge structure. To do so, we Taylor expand the exponential coupling of the Jordan-Wigner/composite-fermions to the gauge fluctuation fields to linear order in vector potentials, giving us the lattice analogue of the minimal \(j\cdot A\) coupling. We have found that the fermions couple minimally only to the \(U(1)\) gauge field associated with the massless photon at \((0,0)\) momentum, while they have zero gauge charge under the \((\pi,\pi)\) photon. There is a simple intuitive picture, closely related to the UV gauge structure of the spin-ice, that sheds light on this seemingly peculiar low energy structure. The creation of a composite fermion, which locally reverses one spin along z, violates the two ice rules associated with the vertices attached to such a reversed spin. In the lattice gauge theory convention, the lattice of spin-ice vertices is separated into two sublattices and the Gauss law is conventionally taken as a staggered ice rule with alternating signs on the sublattices. In this convention, the spin reversal is viewed as creating a dipole pair of gauge charges which is, however, in total gauge neutral. Similarly, in our construction we could separate the spin-ice rules into two sublattices, and we could say that the Jordan-Wigner/composite-fermion is charged under a symmetric combination of the ice rules of the two sublattices and is a neutral dipole-pair object under a staggered antisymmetric combination. These two sublattices are related by a lattice translational symmetry, and since we enforce such symmetry, we find that the photon at \((0,0)\) is associated with the sublattice-symmetric combination of these ice rules while the sublattice staggered antisymmetric combination is associated with the photon at \((\pi,\pi)\). Because the creation of the Jordan-Wigner/composite-fermion violates the ice rules symmetrically, the particle only carries gauge charge under the symmetric photon at \((0,0)\), but behaves as a neutral gauge dipolar object with respect to the antisymmetric photon at \((\pi,\pi)\).
We have therefore obtained a low energy gauge structure of two massless Dirac fermions minimally coupled to a \(U(1)\) massless photon, and neutral under another \(U(1)\) massless photon. We would now like to reconsider qualitatively the impact of gauge field compactification on this low energy structure. The photon that "sees" the fermions as neutral dipoles propagates in this medium as if it were an insulating dielectric liquid. This is not very different from how a photon would "see" gapped insulating charged fermionic matter. Monopole fluctuations are therefore expected to be relevant and lead to Polyakov-style confinement for this \(U(1)\) sublattice-antisymmetric photon at \((\pi,\pi)\), as is generically expected in 2+1D for a compact \(U(1)\) gauge field when the matter that carries its gauge charge is gapped. We thus expect that compactification gaps out this \((\pi,\pi)\) photon and that it is not relevant at low energies. However, since the fermion is already a short-distance neutral dipole object with respect to this gauge field, such gauge confinement is not expected to confine the fermions themselves. We are thus left with a low energy structure in which we have two gapless Dirac nodes minimally coupled to a single \(U(1)\) photon, the one gapless at momentum \((0,0)\). The ultimate details of the infrared behavior and strict stability of such \(N=2\) QED in 2+1 dimensions are still an open problem [62; 63; 64], but provided such a theory flows to a stable fixed point, our state would therefore be an analogue of a Dirac spin liquid of composite fermions (but with a pseudoscalar symmetry implementation [77], see below). We hope that future studies can investigate these qualitative arguments on the nature of the low energy effective field theory in more detail.
Finally, we would like to comment on the connections between our construction and the pseudo-scalar spin liquids introduced in Ref. [77]. The Jordan-Wigner/composite-fermion can be naturally understood to behave as a pseudo-scalar spinon under symmetries that reverse the direction of the \(z\)-spin, because the \(z\)-spin on a site is equivalent to the Jordan-Wigner/composite-fermion occupation of that site. Therefore the ordinary spin time reversal symmetry that squares to \(-1\), and space operations such as mirrors that reverse the \(z\)-spin, act as particle-hole conjugations on the Jordan-Wigner/composite-fermion. The emergent magnetic and electric fields also have transformation laws opposite to those of ordinary QED, for example being even and odd respectively under such spin time-reversal operation. Therefore we see that the \(U(1)\) gauge structure is pseudo-scalar in the sense described in Ref.[77]. This can also be understood in very direct microscopic terms. For example, the magnetic field operator associated with the simplest gauge invariant loop of quantum spin-ice is the four-spin operator composed of alternating spin raising and lowering operators in a plaquette (properly symmetrized so as to make the analogue of the \(\sin(B)\) combination of lattice gauge theory). It is easy to see that this operator is even under the previously mentioned mirrors and time reversal operations. This magnetic field operator can be viewed as a measure of a local correlation for the XY projection of the spins to spiral around a small closed loop, and thus is physically very different from those of more traditional U(1) spin liquids, such as the triple product correlator associated with spin chirality around triangles [89; 90; 91], which transforms in a similar way to the usual magnetic field
experienced by electrons under point group and time-reversal operations. Therefore, our current construction of Jordan-Wigner/composite-fermions provides a different and perhaps more intuitive way to describe certain pseudoscalar spin liquids, which illuminates the kind of correlations associated with the appearance of their magnetic fields, namely a kind of tendency towards short-distance spin-spiraling on loops. Understanding these connections more precisely is another interesting avenue of future research, which could help understand how to realize such spin liquids in real materials, such as \(\alpha\)-RuCl\({}_{3}\), where the oscillations of thermal conductivity seen in experiments [92; 93; 94; 95] are consistent with the expected quantum oscillations of a pseudo-scalar U(1) spin-liquid [77].
###### Acknowledgements.
We are thankful to Hong-Hao Tu, Debasish Banerjee, Karlo Penc, Nic Shannon, Xue-Feng Zhang and Roderich Moessner for stimulating discussions. I.S. is specially thankful to Zhenjiu Wang for discussions and performing unpublished analysis in a preliminary stage of the project. L.G. would like to thank Davide Morgante for his help with Ti\(k\)Z pictures. We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG) through research grant project number 518372354.
|
2309.15573 | The Maximum Cover with Rotating Field of View | Imagine a polygon-shaped platform $P$ and only one static spotlight outside
$P$; which direction should the spotlight face to light most of $P$? This
problem occurs in maximising the visibility, as well as in limiting the
uncertainty in localisation problems. More formally, we define the following
maximum cover problem: "Given a convex polygon $P$ and a Field Of View (FOV)
with a given centre and inner angle $\phi$; find the direction (an angle of
rotation $\theta$) of the FOV such that the intersection between the FOV and
$P$ has the maximum area". In this paper, we provide the theoretical foundation
for the analysis of the maximum cover with a rotating field of view. The main
challenge is that the function of the area $A_{\phi}(\theta)$, with the angle
of rotation $\theta$ and the fixed inner angle $\phi$, cannot be approximated
directly. We found an alternative way to express it by various compositions of
a function $A_{\theta}(\phi)$ (with a restricted inner angle $\phi$ and a fixed
direction $\theta$). We show that $A_{\theta}(\phi)$ has an analytical solution
in the special case of a two-sector intersection and later provide a
constructive solution for the original problem. Since the optimal solution is a
real number, we develop an algorithm that approximates the direction of the
field of view, with precision $\varepsilon$, and complexity
$\mathcal{O}(n(\log{n}+(\log{\varepsilon})/\phi))$. | Igor Potapov, Jason Ralph, Theofilos Triommatis | 2023-09-27T11:06:07Z | http://arxiv.org/abs/2309.15573v1 | # The Maximum Cover with Rotating Field of View
###### Abstract
Imagine a polygon-shaped platform \(P\) and only one static spotlight outside \(P\); which direction should the spotlight face to light most of \(P\)? This problem occurs in maximising the visibility, as well as in limiting the uncertainty in localisation problems. More formally, we define the following maximum cover problem: "Given a convex polygon \(P\) and a Field Of View (FOV) with a given centre and inner angle \(\phi\); find the direction (an angle of rotation \(\theta\)) of the FOV such that the intersection between the FOV and \(P\) has the maximum area". In this paper, we provide the theoretical foundation for the analysis of the maximum cover with a rotating field of view. The main challenge is that the function of the area \(A_{\phi}(\theta)\), with the angle of rotation \(\theta\) and the fixed inner angle \(\phi\), cannot be approximated directly. We found an alternative way to express it by various compositions of a function \(A_{\theta}(\phi)\) (with a restricted inner angle \(\phi\) and a fixed direction \(\theta\)). We show that \(A_{\theta}(\phi)\) has an analytical solution in the special case of a two-sector intersection and later provide a constructive solution for the original problem. Since the optimal solution is a real number, we develop an algorithm that approximates the direction of the field of view, with precision \(\varepsilon\), and complexity \(\mathcal{O}(n(\log n+(\log\varepsilon)/\phi))\).
_Keywords--_ Computational Geometry, Area Optimisation, Rotated FOV, Maximum Cover
## 1 Introduction
The use of antennas, sensors and cameras in "smart" or autonomous systems motivates the study of various visibility problems [11, 14, 16, 21] with applications in computer graphics, motion planning, and other areas. The best-known visibility problems are the art gallery problem, region visibility, point or edge visibility, and viewshed, see [1, 5, 7, 8, 10, 15, 23]. Point or edge visibility is the decision problem of checking whether these objects are visible from a viewpoint in the context of a given set of obstacles. In the art gallery problem, the objective is to find the minimal number of locations to place guards (with restricted or unrestricted Field of View, FOV) within a polygonal room to observe the room's whole area [2, 22].
In this paper, we study the problem of finding the maximal visibility area from a viewpoint with a rotating FOV. Imagine a polygon-shaped platform \(P\) and only one static spotlight outside of \(P\). Which direction should the spotlight face to light most of \(P\)? More formally, we define the following problem: "Given a polygon \(P\) and a Field Of View (FOV) with a given centre and inner angle \(\phi\); find the direction (as an angle \(\theta\)) of the FOV such that the intersection between the FOV and \(P\) has the maximum area". This problem occurs in maximising the visibility, as well as in limiting the uncertainty in localisation problems. The occurrence in the former is straightforward to understand. However, the occurrence in the latter is more subtle, because we assume an object inside the polygon which we need to detect by maximising the probability of detection in the following scan, without prior knowledge of its position. In [26], the geometric approach for passive localisation of static emitters is based on the problem of finding the maximum intersection of a polygon and a rotating FOV. For a passive sensor, a measurement is an angle with an error that points to the direction of a transmission's origin point. The angle with its angular error creates a cone of possible locations for the emitter. After consecutive iterations, a sensor computes a polygon by intersecting multiple measurements from different positions. A sensor needs to make a decision to move to its next position from a given finite set. The choice is made by evaluating all the available positions according to an objective function. In a myopic (greedy) decision-making strategy, a sensor moves by minimising the maximum uncertainty on its subsequent measurement, achieved
by evaluating the maximal intersection of polygons that contain the emitters' position and FOVs with centres that represent the sensors' available positions, see Figure 1. Experimental results in [26] were based on a heuristic to estimate the intersection. Here we provide an algorithm with a proven guarantee and precision.
There are also several related problems in the literature. One example is finding the intersection between two static polyhedra in the three-dimensional space, which has an algorithm that is linear in the number of vertices [9]. In [12], the authors allow some flexibility and aim to compute the maximum overlap of two convex polygons under translations. The problem of approximating the intersection in the general case under the operation of translation has been recently solved in [17]. The closest formulation to our problem is the Maximum Cover under Rotation (MCR): Given a set of finite points \(S\), a point \(r\) on the plane, compute an angle \(\theta\in[0,2\pi)\) such that, after counterclockwise rotation of a polygon \(P\) by \(\theta\) around \(r\), the number of points of \(S\) contained in \(P\) is maximized. The problem is 3SUM-hard, but it has efficient solutions with respect to the number of points in \(S\) and vertices in \(P\) [3].
However, the problem we study is quite different from the one mentioned above. On the one hand, we consider a polygon, essentially an infinite set of points instead of a finite one; on the other hand, the Field of View is a cone in 2D, a specific shape, and the centre of rotation is its vertex. One might assume that expressing the area of the intersection as a function of rotations would be enough to provide an approximation through the use of a numerical method. Unfortunately, a naive application of numerical methods to find the maximum of \(A_{\phi}(\theta)\), with the angle of rotation \(\theta\) and the fixed inner angle \(\phi\), would not guarantee the maximum, as we do not know the number and distribution of its extreme points.
In this paper, we design an algorithm with a mathematical guarantee and provide the theoretical foundation for analysis of the maximum cover with a rotating field of view. We show an alternative way to express the maximum cover by various compositions of a function \(A_{\theta}(\phi)\) (with a variable inner angle \(\phi\) and a fixed direction \(\theta\)) that has an analytical solution. The core component of the solution is to find the maximal intersection of a fixed sector (field of view with infinite radius) and a rotated one under a restricted rotation angle 1. Surprisingly, the function of the area, even in such a restricted case, is non-monotonic. Nonetheless, it is possible to find the maximal intersection as shown in Section 3 by using functions that calculate the area with a fixed rotation angle and the inner angle as a variable. Later, we show how to express more complex shapes of the intersection of a polygon and a rotated sector as a combination of multiple two-sector intersections. Finally, we complete the solution by identifying how an infinite number of intersections can be decomposed into a finite number of equivalence classes and propose at the same time a partitioning algorithm as well as a solution for each equivalence class. Moreover, our solution can be directly applied to special cases of non-convex polygons. Since the optimal solution is a real-value number, we develop an algorithm that approximates the direction of the field of view, with precision \(\varepsilon\), and complexity \(\mathcal{O}(n(\log n+(\log\varepsilon)/\phi))\).
Footnote 1: We consider the domain of the angle of rotations to be restricted, that is, a closed interval that is a proper subset of \([0,2\pi]\), since the area of intersection of two sectors without restrictions can be infinite.
## 2 The Maximum Intersection Problem
Figure 1: In (a) and (b), the sensor (blue triangle) calculates the worst-case uncertainty, which corresponds to the maximum cover, for two positions of "East" and "North" (green triangles) to select the one which minimises it, which in this case is "North". In (c) and (d), the sensors explore by minimising the maximum uncertainty in each step, which makes the areas of uncertainty (the polygons) smaller.

To begin with, we introduce the notation we will use throughout the paper. Let \(P\) and \(Q\) be two points; we will denote by \(P\!Q\) the angle of the slope of the line that \(P\) and \(Q\) define. In other words, the slope of the line that \(P\) and \(Q\) define is \(\tan\left(P\!Q\right)\). Throughout this paper, when we mention angles we mean the positive (counterclockwise) angles, and we will use the notation \(A\!BC\) to denote the positive angle with apex \(B\). Moreover, we denote a convex polygon \(\mathcal{P}=(P_{1},\ldots,P_{n})\) as the list of its vertices \(P_{i}\in\mathbb{R}^{2}\), \(i\in\{1,\ldots,n\}\), in counter-clockwise order. A field of view in the 3D space is in essence a cone, and we assume that its height tends to infinity. Since we study the problem on a 2D plane, the field of view is actually a sector of a circle with a radius that tends to infinity. So we formulate the sector in the following way:
**Definition 1**.: _A **sector**\(S[C,\varepsilon_{r},\varepsilon_{\ell}]\) is the set of points that lie inside an angle \(0<R\!C\!L<\pi\) that is formed by two half-lines \(\varepsilon_{r}\) and \(\varepsilon_{\ell}\), \(R\in\varepsilon_{r}\) and \(L\in\varepsilon_{\ell}\), that share a common endpoint \(C\), called the centre of the sector. We will call \(\varepsilon_{r}\) and \(\varepsilon_{\ell}\) the **right** and the **left semi-line** of the sector respectively._
As we are interested in studying the sector under rotation, we introduce an alternative definition that is based on the angles of the rays' slopes.
**Definition 2**.: _A **sector**\(S[C,\varepsilon_{r},\varepsilon_{\ell}]\) defined by two semi-lines \(\varepsilon_{r},\varepsilon_{\ell}\) with gradients \(\theta\), \(\theta+\phi\) and a common endpoint \(C\) can be represented by another triplet \(S(C,\theta,\phi)\), where the angle \(\phi\) is the **inner angle of the sector** and the angle \(\theta\) is the **direction of the sector**._
Note that \(\varepsilon_{r}\) is a half-line that extends from \(C\), and the direction \(\theta\in[0,2\pi]\) corresponds to exactly one semi-line, because if \(x\) is the horizontal line that passes through \(C\), then \(\theta=X\!C\!R\) where \(X\in x\) and \(R\in\varepsilon_{r}\). Now we are ready to formulate the problem properly.
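For concreteness, a sector in the sense of Definition 2 can be represented directly in code; the following is a minimal sketch (all names are our own, not from the paper):

```python
import math
from dataclasses import dataclass

# Minimal sketch (names are ours): a sector S(C, theta, phi) per Definition 2.
@dataclass
class Sector:
    C: tuple      # centre (x0, y0)
    theta: float  # direction: slope angle of the right semi-line, in [0, 2*pi]
    phi: float    # inner angle, in (0, pi)

    def right_dir(self):
        return (math.cos(self.theta), math.sin(self.theta))

    def left_dir(self):
        return (math.cos(self.theta + self.phi), math.sin(self.theta + self.phi))

    def contains(self, p):
        """p lies inside the positive angle swept counterclockwise from the
        right semi-line to the left semi-line."""
        ang = math.atan2(p[1] - self.C[1], p[0] - self.C[0])
        return (ang - self.theta) % (2 * math.pi) <= self.phi
```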
**Problem 1**.: _Given a convex polygon \(\mathcal{P}=(P_{1},\ldots,P_{n})\), a point \(C=(x_{0},y_{0})\) outside of the polygon on the Euclidean plane and \(0<\phi<\pi\), find the direction \(\theta\in[0,2\pi]\) such that the intersection \(S(C,\theta,\phi)\cap\mathcal{P}\) has the maximum area._
Let \(\mathcal{P}\) be a convex set, and let \(S(C,\theta,\phi)\) be a sector with its centre \(C\) outside of \(\mathcal{P}\). We will say that the sector \(S(C,\theta,\phi)\)**contains**\(\mathcal{P}\) if \(\mathcal{P}\cap S(C,\theta,\phi)=\mathcal{P}\); **fully intersects**\(\mathcal{P}\) if both semi-lines of \(S(C,\theta,\phi)\) intersect an edge or a vertex of \(\mathcal{P}\); **partially intersects**\(\mathcal{P}\) if only one of the two semi-lines of \(S(C,\theta,\phi)\) intersects an edge or a vertex of \(\mathcal{P}\); **does not intersect**\(\mathcal{P}\) if none of the two semi-lines of \(S(C,\theta,\phi)\) intersects an edge or a vertex of \(\mathcal{P}\).
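These four cases can be told apart algorithmically by testing whether each semi-line hits the polygon's boundary; the rough sketch below (our own illustration, reusing the hypothetical `Sector` class above and a counter-clockwise vertex list) shows the idea:

```python
def ray_hits_polygon(C, d, poly, eps=1e-12):
    """Does the half-line C + s*d (s >= 0) intersect the boundary of poly?
    poly is a list of vertices in counter-clockwise order."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        den = d[0] * ey - d[1] * ex              # cross(d, edge direction)
        if abs(den) < eps:
            continue                             # ray parallel to this edge
        # Solve C + s*d = (x1, y1) + u*(ex, ey) for the parameters s and u.
        s = ((x1 - C[0]) * ey - (y1 - C[1]) * ex) / den
        u = ((x1 - C[0]) * d[1] - (y1 - C[1]) * d[0]) / den
        if s >= 0 and 0 <= u <= 1:
            return True
    return False

def classify(sector, poly):
    hits = (ray_hits_polygon(sector.C, sector.right_dir(), poly)
            + ray_hits_polygon(sector.C, sector.left_dir(), poly))
    if hits == 2:
        return "fully intersects"
    if hits == 1:
        return "partially intersects"
    # Neither semi-line meets the boundary: since C is outside P, the polygon
    # is either entirely inside the sector or entirely outside of it.
    return "contains" if sector.contains(poly[0]) else "does not intersect"
```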
## 3 Studying the Area of Intersection
As Figure 3 shows, it is intuitive to think that if a sector is rotated towards a "corner", then the area of intersection should decrease. In other words, in many cases it is easy to assume that the area of intersection as a function of rotations is monotonic. But this is not the case, as there are examples where the function has local extreme points, a crucial fact especially when the domain of the rotations is restricted (i.e. is a bounded interval). In this section, we study some fundamental cases under restricted rotations to extract the formulae of the area of the respective intersection. We show that the function of the area of the intersection of a rotating sector and a static one is \(A(\theta,\phi)\), which depends on two values - the direction of the rotating field of view \(\theta\) and its inner angle \(\phi\). The straightforward approach would be to consider the function \(A_{\phi}(\theta)\), where \(\phi\) is constant and \(\theta\) is variable. However, maximising the area through the function \(A_{\phi}(\theta)\) leads to the analysis of polynomials of trigonometric functions with rational exponents. The direct maximisation of this non-convex function is difficult, as there are no constructive criteria to check the number of possible solutions that would guarantee finding the maximum value. Instead, we found a more elegant way to solve the problem by expressing the function \(A_{\phi}(\theta)\) by a composition of \(A_{\theta}(\phi)\) functions with an inner angle \(\phi\) and a fixed direction \(\theta\). The key is that the function \(A_{\theta}(\phi)\) has two local extreme points that can be calculated analytically, and expressing \(A_{\phi}(\theta)\) as a composition of \(A_{\theta}(\phi)\) functions allows us to identify the intervals with only one solution in each one, where the application of classical numerical algorithms yields the maximum. Finally, we prove that the function of any intersection's area is expressed as \(A(\theta,\phi)\) or as a linear combination of \(A_{\theta}(\phi)\) functions.
Figure 2: The intersection between a polygon \(P\) and a sector \(S(C,\theta,\phi)\) with centre \(C\), inner angle \(\phi\) and direction \(\theta\). The intersection becomes a quadrilateral from a pentagon as \(S\) is rotated clockwise.
### The Intersection Area Function
The intersection of a fixed sector and a rotating one, when the rotating sector fully intersects the other one, is given in the following theorem.
**Theorem 1**.: _Let two sectors on the plane, \(S(C,\theta,\phi)\), \(S(K,\theta_{K},\phi_{K})\) with \(C\notin S(K,\theta_{K},\phi_{K})\), and \(\mathcal{R}\subseteq[0,2\pi]\times(0,\pi)\). The area of the bounded intersection \(S(C,\theta,\phi)\cap S(K,\theta_{K},\phi_{K})\) is_
\[A(\theta,\phi)=\frac{d_{1}\sin\phi\,\cos^{2}(\theta_{K}+\phi_{K})}{2\sin{( \theta+\phi-\theta_{K}-\phi_{K})}\sin{(\theta-\theta_{K}-\phi_{K})}}-\frac{d_ {2}\sin\phi\,\cos^{2}(\theta_{K})}{2\sin{(\theta+\phi-\theta_{K})}\sin{(\theta -\theta_{K})}} \tag{1}\]
_for every \((\theta,\phi)\in\mathcal{R}\), where \(\phi_{K},\theta_{K}\in(-\pi/2,\pi/2)\), and \(d_{1},d_{2}\in\mathbb{R}\) are constants representing distances (as in Figure 4(a))._
The proof of Theorem 1 can be found in Section 6. We derive this equation by expressing two of the four points of the intersection, which is a quadrilateral, as the intersections of the left semi-line of the rotating sector with the left and right semi-lines of the static one. We do the same for the other two points of the quadrilateral, by using the right semi-line of the rotating sector. Then we use the shoelace formula to calculate the quadrilateral's area using the four points we identified and simplify the expression. 2 In the above theorem, if \(\varepsilon_{y}\) is the vertical line that passes through \(C=(x_{C},y_{C})\), and \(E^{\prime}\), \(E\) are the intersections of \(\varepsilon_{y}\) with the left and the right semi-line of \(S(K,\theta_{K},\phi_{K})\) respectively, then \(d_{1}=sign(x_{C}-x_{K})|CE|^{2}\), \(d_{2}=sign(x_{K}-x_{C})|CE^{\prime}|^{2}\), where \(sign(x)=1\) if \(x\geq 0\), and \(sign(x)=-1\), if \(x<0\). Alternatively, a static sector consists of two intersecting lines. If a rotating sector intersects two parallel lines, then the analysis of Theorem 1 remains sound.
Footnote 2: Apart from the analytical proof of equation (1), various tests have been performed, in a simulation environment, to affirm its validity.
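A direct transcription of equation (1) is handy for numerical experiments; the following is a sketch (function and variable names are ours, not from the paper):

```python
import math

def area_two_sectors(theta, phi, theta_K, phi_K, d1, d2):
    """Direct transcription of equation (1): area of the bounded intersection
    of S(C, theta, phi) and S(K, theta_K, phi_K). Here d1, d2 are the signed
    squared distances defined in the text; valid only on the region R where
    the rotating sector fully intersects the static one."""
    s = math.sin(phi)
    term1 = (d1 * s * math.cos(theta_K + phi_K) ** 2
             / (2 * math.sin(theta + phi - theta_K - phi_K)
                  * math.sin(theta - theta_K - phi_K)))
    term2 = (d2 * s * math.cos(theta_K) ** 2
             / (2 * math.sin(theta + phi - theta_K)
                  * math.sin(theta - theta_K)))
    return term1 - term2
```

Setting `phi_K = 0` recovers the parallel-lines special case stated in Corollary 1 below.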
Figure 4: (a) The area \(A(\theta,\phi)\) of intersection of two sectors \(S(C,\theta,\phi)\cap S(K,\theta_{K},\phi_{K})\). The function \(A(\theta,\phi)\) is defined between the lines \(\varepsilon_{\theta_{max}}\) and \(\varepsilon_{\theta_{min}}\). Finally, if \(\varepsilon_{y}\) is a vertical line that passes through \(C\), then \(E^{\prime}=\varepsilon_{y}\cap KP_{2}\), \(E=\varepsilon_{y}\cap KP_{1}\) and \(d_{1}=|CE|^{2}\), \(d_{2}=|CE^{\prime}|^{2}\).
(b) The intersection of a rotating sector with two parallel lines, which can be considered a special case of Theorem 1, where equation (1) holds.
Figure 3: An example showing that the area of the intersection \(A(S(O,\theta,\pi/12)\cap S^{\prime})\) is not a monotonic function. One might assume intuitively that as the sector \(S(O,\theta,\pi/12)\) rotates counterclockwise the area of intersection should decrease, but there are cases where the function has local extreme points.
**Corollary 1** (Intersection of Two Parallel Lines and a Sector).: _If two parallel lines intersect with a sector, then equation (1) holds for \(\phi_{K}=0\) and \(\mathcal{R}=(\theta_{K},\pi+\theta_{K}-\phi)\times(0,\pi)\), that is_
\[A(\theta,\phi)=\frac{(d_{1}-d_{2})\sin\phi\ \cos^{2}(\theta_{K})}{2\sin( \theta+\phi-\theta_{K})\sin(\theta-\theta_{K})} \tag{2}\]
In Proposition 1, we show that the original function \(A_{\phi}(\theta)\) could be standardised and expressed as an exponential polynomial function (polynomials with non-integer powers). However, the direct maximisation of these non-convex functions is difficult, see [18]. The main difficulty is that there are no constructive criteria to check the number of possible solutions of \(dA_{\phi}/d\theta=0\), which means there is no guarantee of finding the global maximum value within a given precision and computation time by applying naively general numerical methods. The domain of function \(A(\theta,\phi)\) is \(\mathcal{R}=\mathcal{D}\times\mathcal{I}\subseteq[0,2\pi]\times(0,\pi)\). We will denote the restriction of \(A(\theta,\phi)\) at \(\mathcal{D}\) and \(\mathcal{I}\) respectively, as \(A_{\phi}:\mathcal{D}\rightarrow\mathbb{R}\), and \(A_{\theta}:\mathcal{I}\rightarrow\mathbb{R}\).
**Proposition 1**.: _The function \(A_{\phi}(\theta)\) is a rational function of the form \(P(x)/Q(x)\), where \(P(x)\), and \(Q(x)\) are exponential polynomials (polynomials with non-integer powers)._
The proof of Proposition 1 can be found in Section 6. Even though it is hard to find the local extreme points of \(A_{\phi}(\theta)\), we found a way to identify its global maximum indirectly following the analysis of the function \(A_{\theta}(\phi)\). The function \(A_{\theta}(\phi)\) has a symmetry that allows the cancellation of terms and gives us the possibility to calculate the explicit analytical form of its extreme points, see Lemma 1.
**Lemma 1**.: _The function \(A_{\theta}(\phi)\) has at most two local extreme points in \(\mathcal{I}\subseteq(0,\pi)\), and they can be explicitly calculated._
Proof.: \[\frac{\partial A}{\partial\phi}= \frac{d_{1}^{2}\cos^{2}\omega\sin(\theta-\omega)(\cos\phi\sin( \theta+\phi-\omega)-\sin\phi\cos(\theta+\phi-\omega))}{2\sin^{2}(\theta+\phi- \omega)\sin^{2}(\theta-\omega)}-\] \[-\frac{d_{2}^{2}\cos^{2}\beta\sin(\theta-\beta)(\cos\phi\sin( \theta+\phi-\beta)-\sin\phi\cos(\theta+\phi-\beta))}{2\sin^{2}(\theta+\phi- \beta)\sin^{2}(\theta-\beta)}\]
\[\frac{\partial A}{\partial\phi}= \frac{d_{1}^{2}\cos^{2}\omega\sin^{2}(\theta-\omega)}{2\sin^{2}(\theta+\phi-\omega)\sin^{2}(\theta-\omega)}-\frac{d_{2}^{2}\cos^{2}\beta\sin^{2}(\theta-\beta)}{2\sin^{2}(\theta+\phi-\beta)\sin^{2}(\theta-\beta)}\] \[\frac{\partial A}{\partial\phi}=0\Rightarrow\frac{d_{1}^{2}\cos^{2}\omega}{2\sin^{2}(\theta+\phi-\omega)}=\frac{d_{2}^{2}\cos^{2}\beta}{2\sin^{2}(\theta+\phi-\beta)}\]
\[\frac{\sin^{2}(\theta+\phi-\beta)}{\sin^{2}(\theta+\phi-\omega)}=\frac{d_{2}^ {2}\cos^{2}\beta}{d_{1}^{2}\cos^{2}\omega}\Rightarrow\left|\frac{\sin(\theta+ \phi-\beta)}{\sin(\theta+\phi-\omega)}\right|=\frac{d_{2}\cos\beta}{d_{1}\cos\omega}\]
\[\frac{\sin\phi\cos(\theta-\beta)+\cos\phi\sin(\theta-\beta)}{\sin\phi\cos( \theta-\omega)+\cos\phi\sin(\theta-\omega)}=\pm\frac{d_{2}\cos\beta}{d_{1} \cos\omega}\]
If \(\phi\neq\pi/2\) we have the following two solutions:
\[\phi_{1}=\arctan\left(\frac{d_{2}\cos\beta\sin(\theta-\omega)-d_{1}\cos \omega\sin(\theta-\beta)}{d_{1}\cos\omega\cos(\theta-\beta)-d_{2}\cos\beta\cos( \theta-\omega)}\right)\]
\[\phi_{2}=\arctan\left(-\frac{d_{2}\cos\beta\sin(\theta-\omega)+d_{1}\cos \omega\sin(\theta-\beta)}{d_{1}\cos\omega\cos(\theta-\beta)+d_{2}\cos\beta\cos( \theta-\omega)}\right)\]
Note that the tangent function defined in \([0,\pi/2)\cup(\pi/2,\pi]\) is an injection which guarantees that \(\phi_{1}\) and \(\phi_{2}\) are unique. Moreover, if \(\phi=\pi/2\) then
\[\frac{\partial A}{\partial\phi}=0\Leftrightarrow\frac{\cos(\theta-\beta)}{\cos (\theta-\omega)}=\pm\frac{d_{2}\cos\beta}{d_{1}\cos\omega}\]
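For completeness, a minimal numerical sketch of the closed-form candidates \(\phi_{1},\phi_{2}\) from the proof above; the function name is ours, and we assume the arctan denominators are non-zero.

```
import math

def lemma1_extreme_points(theta, omega, beta, d1, d2):
    # The two arctan expressions phi_1, phi_2 from the proof of Lemma 1.
    a = d1 * math.cos(omega)
    b = d2 * math.cos(beta)
    s_o, c_o = math.sin(theta - omega), math.cos(theta - omega)
    s_b, c_b = math.sin(theta - beta), math.cos(theta - beta)
    phi1 = math.atan((b * s_o - a * s_b) / (a * c_b - b * c_o))
    phi2 = math.atan(-(b * s_o + a * s_b) / (a * c_b + b * c_o))
    return phi1, phi2
```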
From Lemma 1 we calculate the roots \(\phi_{1}<\phi_{2}\) of the equation \(dA_{\theta}/d\phi=0\). In the interval \((\phi_{1},\phi_{2})\) the function \(dA_{\theta}/d\phi\) is either positive or negative, which means that \(A_{\theta}(\phi)\) is strictly increasing or decreasing there. In the following section, we express the area of intersection as a linear combination of \(A_{\theta}(\phi)\) functions. Using the local extreme points of each \(A_{\theta}(\phi)\), we can identify the intervals where there is at most one solution of the linear combination, which leads to an effective method of finding a global maximum for the original \(A_{\phi}(\theta)\) function (see Lemma 2).
### Approximating the Maximum Area Under Restricted Rotations
The objective of maximising the area of the intersection of two sectors without restricting the domain of rotations is an ill-posed optimisation problem: there are unbounded intersections, so the maximum is infinite. Even if we disregard those, the natural domain of rotations where equation (1) is well-defined is an open set \((\theta_{min},\theta_{max})\), which means that we can create a strictly increasing sequence \(A_{\phi}(\theta_{i})\) that tends to infinity as \(\theta_{i}\) tends to either \(\theta_{min}\) or \(\theta_{max}\). For these reasons, we study the maximisation of \(A(\theta,\phi)\) under restricted rotations, i.e., we consider \(\theta\) to belong to a closed and bounded subset of \(\mathbb{R}\), which guarantees the existence of a maximum.
**Problem 2**.: _Given an interval \([a,b]\), a fixed sector \(S(K,\theta_{K},\phi_{K})\) and a sector \(S(C,\theta,\phi)\) with the centre \(C\notin S(K,\theta_{K},\phi_{K})\), calculate the area of the intersection when \(\theta\in[a,b]\), and \(S(C,\theta,\phi)\) fully intersects \(S(K,\theta_{K},\phi_{K})\)._
From equation (1), one can verify that \(A_{\phi}(\theta)\) is not a convex function, which means that it may have multiple extreme points inside a given interval \([a,b]\). Since finding the local extreme points of \(A_{\phi}(\theta)\) analytically is a non-trivial problem (see Proposition 1), we first conceptualise the change in direction as an increase or a decrease of two different sectors in order to express \(A_{\phi}(\theta)\) as a combination of \(A_{\theta}(\phi)\) functions. Secondly, we apply numerical methods to approximate the solutions of the resulting equations. The method we consider is the Newton–Raphson method [4, 13, 24]. It is easy to modify Newton–Raphson to return a negative sentinel value if it does not converge after a constant number of iterations; a minimal sketch of such a modification is given below. Keep in mind that if Newton–Raphson converges, then the time complexity needed to approximate the solution of an equation \(f(x)=0\) up to \(\varepsilon>1\) digits of accuracy, that is \(|apx-opt|<10^{-\varepsilon}\), is \(F(\varepsilon)\cdot\log\varepsilon\), where \(F(\varepsilon)\) is the complexity of computing \(f/f^{\prime}\) up to \(\varepsilon\) precision.
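The sentinel value \(-1.0\) and the iteration cap below are our illustrative choices.

```
def newton_raphson(f, df, x0, eps=1e-10, max_iter=50):
    # Standard Newton-Raphson iteration on f, with a negative sentinel
    # returned when the method fails to converge within max_iter steps.
    x = x0
    for _ in range(max_iter):
        d = df(x)
        if d == 0.0:
            return -1.0      # derivative vanished: give up on this start
        step = f(x) / d
        x -= step
        if abs(step) < eps:
            return x         # converged to a root of f
    return -1.0              # did not converge: report failure
```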
We initially focus on the case where the angle of rotation \(\theta\) is bounded by the inner angle \(\phi\), i.e., \(\theta\in[\theta_{0}-\phi,\theta_{0}+\phi]\), because it allows us to express \(A_{\phi}(\theta)\) with only two functions \(A_{\theta}\) and a constant \(A(\theta_{0},\phi)\); see Lemma 2 and Figure 5. In the following lemma, not only do we express the function \(A_{\phi}\) as a summation of \(A_{\theta}\) functions, but we also divide the domain \([\theta_{0},\theta_{0}+\phi]\) into a finite number of intervals, in each of which there exists at most one root of the first derivative of the area function, so as to identify every possible local maximum. In the end, we obtain the maximum by selecting the largest of all local maximums.
**Lemma 2**.: _For \(\phi\in(0,\pi)\), and \(\theta\in[\theta_{0},\theta_{0}+\phi]\), the function \(A_{\phi}(\theta)\) is expressed as:_
\[A_{\phi}(\theta)=A(\theta_{0},\phi)+A_{(\theta_{0}+\phi)}(\theta-\theta_{0})-A_{\theta_{0}}(\theta-\theta_{0}), \tag{3}\]
_where \(A(\theta_{0},\phi)\) is constant. The maximum value of \(A_{\phi}(\theta)\) can be approximated with precision \(\varepsilon>1\), in time \(\mathcal{O}(\log\varepsilon)\)._
Proof.: Let \(S^{\prime}\) be a fixed sector, let \(S(C,\theta_{0},\phi)\) be a rotating sector at direction \(\theta_{0}\) with right and left semi-lines \(\varepsilon_{r_{1}}\), \(\varepsilon_{\ell_{1}}\), and let \(S(C,\theta,\phi)\), \(\theta\in[\theta_{0},\theta_{0}+\phi]\), be its rotation, with right and left semi-lines \(\varepsilon_{r_{2}}\), \(\varepsilon_{\ell_{2}}\), respectively. We can express the rotated intersection at direction \(\theta\) by using the initial one, see Figure 5,
\[S[C,\varepsilon_{r_{2}},\varepsilon_{\ell_{2}}]\cap S^{\prime}=\big{(}\left(S[C,\varepsilon_{r_{1}},\varepsilon_{\ell_{1}}]\cup S[C,\varepsilon_{\ell_{1}},\varepsilon_{\ell_{2}}]\right)\cap S^{\prime}\big{)}\setminus S[C,\varepsilon_{r_{1}},\varepsilon_{r_{2}}]\]
\[S(C,\theta,\phi)\cap S^{\prime}=\big{(}\left(S(C,\theta_{0},\phi)\cup S(C,\theta_{0}+\phi,\theta-\theta_{0})\right)\cap S^{\prime}\big{)}\setminus S(C,\theta_{0},\theta-\theta_{0})\]
This means that the area of the intersection can be expressed as
\[A_{\phi}(\theta)=A(\theta_{0},\phi)+A_{(\theta_{0}+\phi)}(\theta- \theta_{0})-A_{\theta_{0}}(\theta-\theta_{0}), \theta\in[\theta_{0},\theta_{0}+\phi]\]
Figure 5: The sector \(S(C,\theta_{0},\phi)\) has as borders the blue lines \(\varepsilon_{\ell_{1}}\), and \(\varepsilon_{r_{1}}\) while the sector \(S(C,\theta,\phi)\) has as borders the red ones \(\varepsilon_{\ell_{2}}\), and \(\varepsilon_{r_{2}}\). The line \(\varepsilon_{\ell_{1}}\) partitions \(S(C,\theta,\phi)\cap S(K,\theta_{K},\phi_{K})\) into two quadrilaterals.
To find the local extreme points of the function \(A_{\phi}(\theta)\), we use the above equation and the fact that the function \(A_{\theta}(\phi)\) has at most two local extreme points (Lemma 1).
Let \(f\) be a continuous function on an interval \([a,b]\subseteq\mathcal{R}\), and let \(x_{0}\in[a,b]\) be the only root of \(f\) in \([a,b]\). From the intermediate value theorem [25] it follows that \(f\) does not change sign inside the intervals \([a,x_{0}]\) and \([x_{0},b]\). Let \(\theta_{1},\theta_{2}\) be the roots of \(dA^{L}_{\theta}/d\phi=0\), and \(\theta_{3},\theta_{4}\) be the roots of \(dA^{R}_{\theta}/d\phi=0\). As mentioned above, by the intermediate value theorem [25], the values \(\theta_{1},\ldots,\theta_{4}\) partition the domain \([\theta_{0},\theta_{0}+\phi]\) into at most five intervals in which the functions \(f_{L}=dA^{L}_{\theta}/d\phi\) and \(f_{R}=dA^{R}_{\theta}/d\phi\) are either positive or negative. Table 1 shows an example (in the table's notation, with roots \(\phi_{1},\ldots,\phi_{4}\)) of how the monotonicity of \(A^{L}_{\phi}\) and \(A^{R}_{\phi}\) remains intact inside the intervals \([a,\phi_{1}],[\phi_{1},\phi_{2}],[\phi_{2},b]\) and \([a,\phi_{3}],[\phi_{3},\phi_{4}],[\phi_{4},b]\), respectively.
Without loss of generality, assume that \(\theta_{1}<\ldots<\theta_{4}\), which partitions the domain \([\theta_{0},\theta_{0}+\phi]\) into at most five intervals \([\theta_{0},\theta_{1}]\cup[\theta_{1},\theta_{2}]\cup[\theta_{2},\theta_{3}]\cup[\theta_{3},\theta_{4}]\cup[\theta_{4},\theta_{0}+\phi]\). By running a modified Newton–Raphson method, which returns a negative number if it does not converge after a constant number of iterations, in the intervals where \(f_{L}\cdot f_{R}<0\), we can find all the values \(r_{1},\ldots,r_{k}\), \(k\leq 5\), of possible local maximum points of equation (3). The local maximum of a function inside a given closed interval is attained either at a root of the derivative of the function or at the boundary of the interval. Hence, by also checking the endpoints \(\theta_{0},\theta_{0}+\phi\), we obtain the maximum of the set of values \(\{A_{\phi}(r_{1}),\ldots,A_{\phi}(r_{k}),A_{\phi}(\theta_{0}),A_{\phi}(\theta_{0}+\phi)\}\). The running time is \(\mathcal{O}(\log\varepsilon)\) because we run the Newton–Raphson method at most five times, and we can evaluate the derivative of \(f_{L}+f_{R}\) in constant time by plugging into the analytical formula. Furthermore, all the remaining evaluations can also be done in constant time.
Next, we show how to find a maximal intersection for unrestricted rotation with a direction \(\theta\in[a,b]\supseteq[\theta_{0},\theta_{0}+\phi]\).
**Theorem 2**.: _Given an interval \([a,b]\subseteq[0,\pi]\) with \(z=|b-a|\), and two sectors \(S^{\prime}\) and \(S(C,\theta,\phi)\); the direction of the maximum area of intersection \(S(C,\theta,\phi)\cap S^{\prime}\) where \(\theta\in[a,b]\) can be \(\varepsilon\)-approximated in time \(\mathcal{O}((z\,\log\varepsilon)/\phi)\)._
Proof.: We can partition the interval \([a,b]\) into \(k>1\) intervals of length \(\phi\), that is, \([a,b]=\left[a,a+\phi\right]\cup\ldots\cup\left[a+(k-1)\phi,b\right]\). For each interval \([a+i\phi,a+(i+1)\phi]\), \(i\in\{0,\ldots,k-1\}\), we can find all the local extreme points using Lemma 2: we run Newton–Raphson up to five times and then select the maximum value \(M_{i}\). Then we select the maximum \(\max\left(M_{0},\ldots,M_{k-1}\right)\). If the length of the given interval \([a,b]\) is \(z\), then we have \(\lceil z/\phi\rceil\) intervals, and in each interval we run Newton–Raphson at most 5 times with \(\varepsilon\) accuracy. Given that the area function and its derivative can be evaluated in constant time, in the worst case we obtain \(\mathcal{O}(z\,\log\varepsilon/\phi)\). A minimal code sketch of this search follows.
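The sketch below reuses the `newton_raphson` helper above. For brevity it starts one Newton–Raphson run per sub-interval from its midpoint (a simplification of the five sign-based intervals of Lemma 2), uses a finite-difference Jacobian, and `area`/`d_area` stand for \(A_{\phi}(\theta)\) and its derivative.

```
import math

def finite_difference(g, t, h=1e-6):
    # Numerical derivative of g, used as the Jacobian for newton_raphson.
    return (g(t + h) - g(t - h)) / (2 * h)

def maximize_over_interval(area, d_area, a, b, phi, eps=1e-10):
    # Theorem 2: split [a, b] into pieces of length phi, search each piece
    # for a stationary point of `area`, and keep the best direction found.
    k = max(1, math.ceil((b - a) / phi))
    best_theta, best_val = a, area(a)
    for i in range(k):
        lo, hi = a + i * phi, min(a + (i + 1) * phi, b)
        candidates = [lo, hi]
        r = newton_raphson(d_area, lambda t: finite_difference(d_area, t), (lo + hi) / 2, eps)
        if r != -1.0 and lo <= r <= hi:
            candidates.append(r)
        for t in candidates:
            v = area(t)
            if v > best_val:
                best_theta, best_val = t, v
    return best_theta, best_val
```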
### \(\mathcal{LMR}\) Intersection and the Global Objective Function
In this section, we decompose the area of intersection of a polygon \(\mathcal{P}\) and a sector \(S[C,\varepsilon_{r},\varepsilon_{\ell}]\) as a summation of multiple areas of intersections of two sectors. Notice that if \(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}]\) does not contain any vertices of \(\mathcal{P}\), then this case is identical to the intersection of two sectors.
In the case that \(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}]\) contains one vertex \(P_{0}\in\mathcal{P}\), or contains two vertices collinear with \(C\), we can express the area \(\mathit{Area}(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}])\) as \(\mathit{Area}(\mathcal{P}\cap S[C,\varepsilon_{r},CP_{0}])+\mathit{Area}(\mathcal{P}\cap S[C,CP_{0},\varepsilon_{\ell}])\), see Figure 6(a). We will refer to \(\mathit{Area}(\mathcal{P}\cap S[C,\varepsilon_{r},CP_{0}])\) and \(\mathit{Area}(\mathcal{P}\cap S[C,CP_{0},\varepsilon_{\ell}])\) as the right and left areas, respectively. Now we can use equation (1), so \(\mathit{Area}(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}])=A^{L}(\theta_{1},\phi_{1})+A^{R}(\theta_{2},\phi_{2})\).
If \(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}]\) contains two non-collinear vertices \(P_{1},P_{2}\in\mathcal{P}\), then using the lines \(\mathit{CP}_{1}\) and \(\mathit{CP}_{2}\) we can express the area of intersection \(\mathit{Area}(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}])\) as \(\mathit{Area}(\mathcal{P}\cap S[C,\varepsilon_{r},CP_{2}])+\mathit{Area}(\mathcal{P}\cap S[C,CP_{1},\varepsilon_{\ell}])+\mathit{Area}(\mathcal{P}\cap S[C,CP_{1},CP_{2}])\) (see Figure 6(b)). The area of \(S[C,\mathit{CP}_{1},\mathit{CP}_{2}]\) remains constant for certain rotations, and we will call it the middle area. Similarly, using equation (1), the area of intersection is \(\mathit{Area}(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}])=A^{L}(\theta_{1},\phi_{1})+A^{R}(\theta_{2},\phi_{2})+A^{M}\), where \(A^{M}\) is constant unless the number of vertices that \(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}]\) contains changes. The same argument applies when \(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}]\) contains vertices \(P_{1},\ldots,P_{k}\), \(k>2\). The
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(\phi\) & \([a,\phi_{1}]\) & \([\phi_{1},\phi_{2}]\) & \([\phi_{2},\phi_{3}]\) & \([\phi_{3},\phi_{4}]\) & \([\phi_{4},b]\) \\ \hline \(dA^{L}/d\phi\) & \(+\) & \(-\) & \(+\) & \(+\) & \(+\) \\ \(A^{L}_{\phi}\) & \(\nearrow\) & \(\searrow\) & \(\nearrow\) & \(\nearrow\) & \(\nearrow\) \\ \(dA^{R}/d\phi\) & \(-\) & \(-\) & \(-\) & \(+\) & \(-\) \\ \(A^{R}_{\phi}\) & \(\searrow\) & \(\searrow\) & \(\searrow\) & \(\nearrow\) & \(\searrow\) \\ \hline \end{tabular}
\end{table}
Table 1: Assume that \(\phi_{1}\) and \(\phi_{2}\) are the roots of \(dA^{L}/d\phi=0\), and \(\phi_{3}\) and \(\phi_{4}\) are the roots of \(dA^{R}/d\phi=0\). The sign of the respective derivative changes only if \(\phi\) crosses one of its roots. In the worst case, we need to search for a local extreme point in every interval.
only difference is that the middle area will be \(\sum_{i=1}^{k-1}\mathit{Area}(S[C,CP_{i},CP_{i+1}])\). This is a different decomposition from the one we presented in the previous section, and it enables the identification of the extreme points.
In the next section, we will partition the domain of directions \(\mathcal{D}_{G}\subseteq[0,2\pi]\) into intervals \([\partial_{i},\partial_{i+1}]\) where only the left and right areas change (see Remark 1) for every rotation \(\theta\in[\partial_{i},\partial_{i+1}]\). This means that the area of the intersection \(S(C,\theta,\phi)\cap\mathcal{P}\) in each interval is
\[Area(S(C,\theta,\phi)\cap\mathcal{P})=f_{i}(\theta) =Area(L)+Area(M)+Area(R)\] \[=A_{\phi}^{L}(\theta)+A_{\phi}^{R}(\theta)+Area(M) \theta\in[\partial_{i},\partial_{i+1}]\]
where \(Area(M)\) is a constant, and it can be calculated either using the shoelace formula [19] or as the sum of its quadrilateral sections. Now we can properly define the objective function of the area \(f:\mathcal{D}_{G}\rightarrow\mathbb{R}\) as
\[f(\theta)=\begin{cases}f_{1}(\theta)=A_{\phi}^{L_{1}}(\theta)+A_{\phi}^{R_{1}}(\theta)+Area(M_{1})&\theta\in[\partial_{1},\partial_{2}]\\ \qquad\qquad\qquad\qquad\qquad\vdots\\ f_{i}(\theta)=A_{\phi}^{L_{i}}(\theta)+A_{\phi}^{R_{i}}(\theta)+Area(M_{i})&\theta\in[\partial_{i},\partial_{i+1}]\\ \qquad\qquad\qquad\qquad\vdots\\ f_{\mathcal{Q}-1}(\theta)=A_{\phi}^{L_{\mathcal{Q}-1}}(\theta)+A_{\phi}^{R_{\mathcal{Q}-1}}(\theta)+Area(M_{\mathcal{Q}-1})&\theta\in[\partial_{\mathcal{Q}-1},\partial_{\mathcal{Q}}]\end{cases} \tag{4}\]
## 4 Partitioning P into finite LMR cells
In every optimisation problem, the optimal value is obtained by the minimisation or maximisation of a given objective function [24]; in this problem, the objective function is the area of the intersection. One of the challenges of this problem is to express the area of the intersection as a function of rotations in a systematic way, because the intersection can take many shapes (see Figure 2). In this section, we present a partition of the polygon into a sequence of quadrilaterals. Using them as a point of reference, not only can we express the area of intersection in a systematic way, but we also prove that there are finitely many independent sub-problems. Let us now explain how to decompose an infinite set of intersections into a finite number of independent sub-problems. First, we partition the polygon into quadrilateral sections by a set of lines from a point \(C\) to the polygon vertices. Then every intersection \(S(C,\theta,\phi)\cap\mathcal{P}\) can be written as a union of three unique convex sets \(L\), \(M\), and \(R\) (Left, Middle, Right), where \(L\) and \(R\) are subsets of the polygon's sections. By defining an equivalence relation on the \(L\), \(M\), \(R\) sets, we partition all intersections \(S(C,\theta,\phi)\cap\mathcal{P}\) into finite families. So, we can obtain the maximal intersection by selecting the maximum over all maximums in the equivalence classes. More formally:
**Definition 3**.: _A **partition** of a set \(S\) is a collection of nonempty subsets of \(S\) such that every element of \(S\) is in exactly one of the subsets. The subsets are the **cells** of the partition._
**Notation** (Counterclockwise Vertices' Angular Ordering).: _Let \(\mathcal{P}=(P_{1},\ldots,P_{n})\) be a polygon with \(n\) vertices, \(C\) a point outside of \(\mathcal{P}\), and consider the semi-lines \(CP_{i}\) that extend from \(C\). We will denote by \(\{P_{k_{i}}\}_{i=1}^{n}\) the sequence of the vertices sorted by strictly increasing angle, i.e., \(P_{k_{i}}<P_{k_{i+1}}\) if \(\widehat{CP_{k_{i}}}<\widehat{CP_{k_{i+1}}}\) (see Figure 7(a))._
**Definition 4**.: _Let \(\mathcal{P}=(P_{1},\ldots,P_{n})\) be a polygon with \(n\) vertices, \(C\) a point outside of \(\mathcal{P}\), and consider the semi-lines \(CP_{i}\) that extend from \(C\). We will call the sequence \(\{S_{i}\}_{i=1}^{m}\), \(m\leq n-1\), the **vertex partitioning of \(\mathcal{P}\) from \(C\)**, and a set \(S_{i}\) a **section** of \(\mathcal{P}\), where \(S_{i}=S[C,CP_{k_{i}},CP_{k_{i+1}}]\cap\mathcal{P}\)._
Figure 6: The cases where the intersection \(\mathcal{P}\cap S[C,\varepsilon_{r},\varepsilon_{\ell}]\) contains two vertices collinear with \(C\), contains two non-collinear vertices, and contains more than two vertices.
Notice that the sequence \(S_{i}\) partitions the polygon \(\mathcal{P}\) using the angular position of the vertices of \(\mathcal{P}\) from the point \(C\). Sorting the semi-lines \(CP_{i}\) into a strictly increasing sequence means that we exclude semi-lines where \(CP_{i}\) coincides with \(CP_{i+1}\), so the number of sections is at most \(n-1\).
We can partition the intersection into three sets, Left, Middle, and Right, where during a "small rotation" only the Left and Right are quadrilaterals and change, while the Middle remains constant. In Lemma 3, we show that these sets can be expressed uniquely when defined as in Definition 5; an example is illustrated in Figure 6.
**Definition 5**.: _Let \(S(C,\theta,\phi)\) be a sector that either fully or partially intersects a polygon \(\mathcal{P}\), \(K=\mathcal{P}\cap S(C,\theta,\phi)\), and \(\{S_{i}\}_{i=1}^{m}\) the vertex partitioning of \(\mathcal{P}\) from \(C\). Let us define three sets \(L\), \(M\), \(R\) (Left, Middle, Right) for the intersection \(K\):_
* _\(L=\emptyset\), or \(L=K\cap S_{i}\) for \(i=\max\left\{q\in\{1,\ldots,m\}:S_{q}\cap K\neq\emptyset\right\}\), or \(L=K\) if \(K\subseteq S_{i}\), \(i\in\{1,\ldots,m\}\);_
* _\(R=\emptyset\), or \(R=K\cap S_{j}\), for \(i\neq j=\min\left\{q\in\{1,\ldots,m\}:S_{q}\cap K\neq\emptyset\right\}\);_
* _\(M=K\setminus(L\cup R)\)._
**Lemma 3**.: _Every intersection \(K=\mathcal{P}\cap S(C,\theta,\phi)\) is a union of three unique sets \(L,M,R\) as defined in Definition 5._
Proof.: We only need to consider how many elements the set \(H=\left\{q\in\{1,\ldots,m\}:S_{q}\cap K\neq\emptyset\right\}\) contains. If \(|H|=1\), then \(K\subseteq S_{i}\) for some \(i\in\{1,\ldots,m\}\), and the triplet \((L,M,R)=(K,\emptyset,\emptyset)\). If \(|H|=2\), then \(i=\max(H)\) and \(j=\min(H)\), so the triplet \((L,M,R)=(K\cap S_{i},\emptyset,K\cap S_{j})\). If \(|H|>2\), then \(i=\max(H)\) and \(j=\min(H)\), so the triplet \((L,M,R)=(K\cap S_{i},K\setminus(L\cup R),K\cap S_{j})\).
It is apparent that \(K=L\cup M\cup R\), and the uniqueness of the sets stems from the uniqueness of the minimum and the maximum element of \(H\). Finally, notice that the only ambiguous case is when \(|H|=1\), but then we select \(L=K\).
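A minimal sketch of the Lemma 3 case analysis on the index set \(H\), returning section indices rather than the sets themselves; the function name is ours.

```
def lmr_indices(hit_sections):
    # Given H = {q : S_q intersects K}, return the Left index i = max(H),
    # the Right index j = min(H), and the Middle indices strictly between.
    H = sorted(hit_sections)
    if len(H) == 1:
        return H[0], None, []           # K is inside one section: L = K
    i, j = H[-1], H[0]
    return i, j, list(range(j + 1, i))  # middle sections S_{j+1}, ..., S_{i-1}

# Example: K touches sections 2, 3, 4, 5 -> L in S_5, R in S_2, M = S_3 u S_4.
print(lmr_indices({2, 3, 4, 5}))
```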
The result of Lemma 3 means that there is a bijection between an intersection \(\mathcal{P}\cap S(C,\theta,\phi)\) and its decomposition \(L\), \(M\), \(R\), so an equivalence relation on these sets not only partitions them but partitions the intersections as well. Two intersections, expressed through their decompositions \(L\), \(M\), \(R\), share the same branch of equation (4) if both their left sets are subsets of the same section and, at the same time, both their right sets are subsets of the same section.
**Definition 6**.: _Let two intersections \(K_{1}=S(C,\theta_{1},\phi)\cap\mathcal{P}=L_{1}\cup M_{1}\cup R_{1}\), \(K_{2}=S(C,\theta_{2},\phi)\cap\mathcal{P}=L_{2}\cup M_{2}\cup R_{2}\), and \(\{S_{i}\}_{i=1}^{m}\) be the vertex partitioning of \(\mathcal{P}\) from \(C\). We define the relation \(\mathcal{LMR}\), and we will say that \(K_{1}\) and \(K_{2}\) are \(\mathcal{LMR}\)**related** if and only if the following statements are both true, for \(j<i\in\{1,\ldots,n\}\):_
* \(L_{1}\subseteq L_{2}\subseteq S_{i}\) _or_ \(L_{2}\subseteq L_{1}\subseteq S_{i}\) _or_ \(L_{1}=L_{2}=\emptyset\)__
* \(R_{1}\subseteq R_{2}\subseteq S_{j}\) _or_ \(R_{2}\subseteq R_{1}\subseteq S_{j}\) _or_ \(R_{1}=R_{2}=\emptyset\)__
**Remark 1**.: _Notice that if two intersections \(L_{1},M_{1},R_{1}\), and \(L_{2},M_{2},R_{2}\) are \(\mathcal{LMR}\) related, then \(M_{1}=M_{2}\) because either \(M_{1}=M_{2}=\emptyset\) or if we consider \(H=\left\{q\in\{1,\ldots,m\}:S_{q}\cap K\neq\emptyset\right\}\), \(i=\max(H)\), and \(j=\min(H)\) then \(M_{1}=M_{2}=\cup_{k=j+1}^{i-1}S_{k}\)._
**Lemma 4**.: _The relation \(\mathcal{LMR}\) is an equivalence relation._
Figure 7: (a) The vertex partitioning of \(\mathcal{P}\) from \(C\). The vertices of \(\mathcal{P}\) are sorted counterclockwise, and the sequence \(S_{1},\ldots,S_{n-1}\) is sorted from right to left. (b) The LMR partition with \(L\subseteq S_{4}\), \(R\subseteq S_{2}\), and \(M=S_{3}\) is valid for every \(\theta\in[\widehat{CP_{k-1}},\theta_{7}]\), where \(\theta_{7}=\widehat{CP_{k+2}}-\phi\). When \(\theta=\theta_{7}\), \(\varepsilon_{r}\) will cross \(CP_{k+2}\).
Proof.: Let \(L_{1},M_{1},R_{1}\); \(L_{2},M_{2},R_{2}\); \(L_{3},M_{3},R_{3}\) be the partitions of three different intersections, and \(\{S_{i}\}_{i=1}^{m}\) as Lemma 4 assumes. We will denote a partition as a pair \((L_{i},R_{i})\), \(i\in\{1,2,3\}\), and write \((L_{i},R_{i})\sim(L_{j},R_{j})\) when the partitions \((L_{i},R_{i})\) and \((L_{j},R_{j})\) belong to the same \(\mathcal{LMR}\) family. From Definition 6 it directly follows that the reflexive (\((L_{i},R_{i})\sim(L_{i},R_{i})\)) and the symmetric (if \((L_{i},R_{i})\sim(L_{j},R_{j})\) then \((L_{j},R_{j})\sim(L_{i},R_{i})\)) properties hold. All we need is to prove the transitive property. If \((L_{1},R_{1})\sim(L_{2},R_{2})\) and \((L_{2},R_{2})\sim(L_{3},R_{3})\), then we know that both \(L_{1}\) and \(L_{3}\) are subsets of a section \(S_{i}\), which leads to the fact that either \(L_{1}\subseteq L_{3}\) or \(L_{3}\subseteq L_{1}\) or \(L_{1}=L_{3}=\emptyset\). By the same logic, both \(R_{1}\) and \(R_{3}\) are subsets of a section \(S_{j}\), which leads to the fact that either \(R_{1}\subseteq R_{3}\) or \(R_{3}\subseteq R_{1}\) or \(R_{1}=R_{3}=\emptyset\), which leads to \((L_{1},R_{1})\sim(L_{3},R_{3})\).
The intersections \(S(C,\theta_{1},\phi)\cap\mathcal{P}\) and \(S(C,\theta_{2},\phi)\cap\mathcal{P}\) are not \(\mathcal{LMR}\) related if, during the rotation from \(\theta_{1}\) to \(\theta_{2}\), either the left or the right semi-line of the sector \(S(C,\theta,\phi)\) crosses one of the \(CP_{i}\) lines of the polygon \(\mathcal{P}\), \(i\in\{1,\ldots,n\}\). In other words, if an interval \([\theta_{s},\theta_{f}]\) contains an angle of rotation \(\theta\) of the form \(\widehat{CP_{i}}-\phi\) or \(\widehat{CP_{i}}\), then the intersections \(S(C,\theta_{s},\phi)\cap\mathcal{P}\) and \(S(C,\theta_{f},\phi)\cap\mathcal{P}\) are not \(\mathcal{LMR}\) related. Also, it is known [20] that an equivalence relation on a set \(S\) yields a partition of \(S\).
**Corollary 2**.: _The equivalence relation \(\mathcal{LMR}\) naturally partitions the domain of rotations into intervals \([\theta_{i},\theta_{i+1}]\), where the sequence \(\{\theta_{i}\}_{i=1}^{\Omega}\) is the merged sorted list of the two strictly increasing sequences of angles \(\{\widehat{CP_{k_{i}}}\}_{i=1}^{n}\) and \(\{\widehat{CP_{k_{i}}}-\phi\}_{i=1}^{n}\)._
**Corollary 3**.: _The number of \(\mathcal{LMR}\) cells \(Q\) is at most \(2n\)._
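Computing the cell boundaries of Corollary 2 is a simple merge of two sorted angle lists; a minimal sketch, assuming the vertex angles are already sorted counterclockwise:

```
import heapq

def lmr_breakpoints(vertex_angles, phi):
    # Merged sorted list of the angles CP_{k_i} and CP_{k_i} - phi,
    # giving the at most 2n breakpoints of the LMR cells (Corollary 3).
    shifted = [a - phi for a in vertex_angles]
    return list(heapq.merge(shifted, vertex_angles))

print(lmr_breakpoints([0.3, 0.8, 1.4, 2.0], phi=0.5))
```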
If a sector \(S(C,\theta_{0},\phi)\) intersects either fully or partially a polygon \(\mathcal{P}=(P_{1},\ldots,P_{n})\), then from Lemma 3 there exists a partition \(L\), \(M\), \(R\) of \(\mathcal{P}\cap S(C,\theta_{0},\phi)\). The partition \(L\), \(M\), \(R\) belongs to a unique \(\mathcal{LMR}\) cell over the interval of rotations \([\theta_{i},\theta_{i+1}]\), \(i\in\{1,\ldots,Q-1\}\), and from Remark 1, the change of the intersection area is equal to the change in the sum of the two quadrilaterals \(L\) and \(R\) for every rotation \(\theta\in[\theta_{i},\theta_{i+1}]\), as the area of \(M\) remains constant.
In the following section, we study the area of the intersection when it is a quadrilateral as a function of rotations. We will present a rotational sweep algorithm that approximates the maximum intersection by obtaining the maximum of all the approximated local maximums in the intervals \([\theta_{i},\theta_{i+1}]\). Finally, we provide an analysis of the intersection for \(\mathcal{LMR}\) cells.
## 5 Maximum Intersection Algorithm
First, we need to compute the sections of the polygon \(\mathcal{P}\). We can do that using a rotational sweep on the vertices of \(\mathcal{P}\) from the centre of the sector \(C\), computing all the \(CP_{i}\) lines, \(i\in\{1,\ldots,n\}\), and their angles \(\widehat{CP_{i}}\). Then we can also compute \(\widehat{CP_{i}}-\phi\) and merge these values with the ordered list of the \(\widehat{CP_{i}}\) to create the sequence \(\{\theta_{i}\}_{i=1}^{\Omega}\) that makes up the intervals \([\theta_{i},\theta_{i+1}]\) of the independent problems.
If we consider that the intersection \(K=\mathcal{P}\cap S(C,\theta,\phi)\) is a quadrilateral then we need to know the edges of the polygon that contribute to \(K\) to be able to evaluate equation (1). We can identify the upper and lower edges of each section by using an algorithm that goes through the upper and lower hull of \(\mathcal{P}\) from centre \(C\) using the counter-clockwise order \((P_{1},\ldots,P_{n})\) and the order \((P_{k_{1}},\ldots,P_{k_{n}})\).
```
0: A polygon \((P_{i})_{i=1}^{n}\), and a rearrangement \((P_{k_{j}})_{j=1}^{n}\) ordered by their \(\widehat{CP_{k_{j}}}\) values
0: Two lists of the upper and lower edge of each section of the polygon.
1: Set upper_Edges = \(\{P_{k_{1}},P_{k_{1}+1}\}\), lower_Edges = \(\{P_{k_{1}},P_{k_{1}-1}\}\)
2: Set cu = \(k_{1}+1\) // The index of the left vertex of the current upper edge
3:for\(j=2\) to \(n\)do
4: // Check if the change is in the upper or the lower edge
5:if\(P_{k_{j}}\) coincides with \(P_{cu}\)then
6: Set upper_Edges = upper_Edges \(\cup\{P_{k_{j}+1}\}\), cu = \(k_{j}+1\)
7:else
8: Set lower_Edges = lower_Edges \(\cup\{P_{k_{j}-1}\}\)
9:endif
10:endfor
11:return upper_Edges and lower_Edges
```
**Algorithm 1**_Identifying the Lower and Upper Edge of Each Section_
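A runnable Python sketch of Algorithm 1 follows; the wrap-around handling of vertex indices modulo \(n\) is our assumption.

```
def section_edges(n, k):
    # n: number of polygon vertices; k: list of vertex indices sorted by
    # their angle from C (k[0] corresponds to k_1 in Algorithm 1).
    upper = [k[0], (k[0] + 1) % n]   # left vertices of the upper edges
    lower = [k[0], (k[0] - 1) % n]   # left vertices of the lower edges
    cu = (k[0] + 1) % n              # left vertex of the current upper edge
    for j in range(1, n):
        if k[j] == cu:               # the sweep crossed the current upper edge
            cu = (k[j] + 1) % n
            upper.append(cu)
        else:                        # otherwise the lower edge advances
            lower.append((k[j] - 1) % n)
    return upper, lower
```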
Now we are ready to present an algorithm that approximates the maximum intersection with accuracy \(\varepsilon>1\), i.e., \(|apx-opt|<10^{-\varepsilon}\).
To find the maximum in line 6, we examine all the local extreme points where \(df_{i}/d\theta=0\), plus the values \(f(\theta_{i})\) and \(f(\theta_{i+1})\).
**Theorem 3**.: _Given a convex polygon \(\mathcal{P}=(P_{1},\ldots,P_{n})\) with \(n\) vertices, and a sector \(S(C,\theta,\phi)\) where \(\theta\in[0,2\pi]\), then Algorithm 2 approximates up to \(\varepsilon\) accuracy the direction \(\theta_{max}\) such that the area of \(S(C,\theta_{max},\phi)\cap\mathcal{P}\) is maximised, in time \(\mathcal{O}(n(\log n+\log\varepsilon/\phi))\)._
Proof.: Let \(\{\theta_{i}\}_{i=1}^{Q}\) be the sequence of boundary directions between which an LMR cell does not change. The area of the intersection \(\mathcal{P}\cap S(\theta,\phi)\) for each cell of the partition, \(i\in\{1,\ldots,Q\}\), is given by equation (4)
\[f(\theta)=f_{i}(\theta),\qquad\theta\in[\theta_{i},\theta_{i+1}],\quad i\in\{1,\ldots,Q\}\]
We prove that Algorithm 2 returns a value \(\theta^{*}\) that maximises \(f\), that is, \(f(\theta^{*})\geq f(\theta)\) for all \(\theta\in\mathcal{D}_{G}\). But first, we need to guarantee that such a \(\theta^{*}\) exists.
**Lemma 5**.: _There is at least one point \(\theta^{*}\in\mathcal{D}_{G}\), so the function \(f\) has a global maximum._
Proof.: Let us consider the piecewise function \(f=f_{i}(\theta)\), \(\theta\in[\theta_{i},\theta_{i+1}]\), \(i\in\{1,\ldots,Q\}\), which is continuous since each branch \(f_{i}\) is continuous in the interval \([\theta_{i},\theta_{i+1}]\). Also, the domain of \(f\) is compact because \(\mathcal{D}_{G}=\bigcup_{i=1}^{Q}[\theta_{i},\theta_{i+1}]\), i.e., \(\mathcal{D}_{G}\) is a finite union of closed and bounded subsets of \(\mathbb{R}\). Hence \(f:\mathcal{D}_{G}\to\mathbb{R}\) is a continuous function defined on the compact set \(\mathcal{D}_{G}\). So from the extreme value theorem [25], there is at least one point \(\theta^{*}\in\mathcal{D}_{G}\) such that \(f(\theta^{*})=\max\left(f\right)\). _End of proof of Lemma 5_
The sequence \(\{\theta_{i}\}_{i=1}^{Q}\) partitions the domain \(\mathcal{D}_{G}\); if we find the local maximum of each cell, then the maximum of the local maximums is the global maximum. This is what Algorithm 2 does at lines 4-8.
To find a local maximum, that is, a maximum in each cell, we search in \(Z_{i}=\left\{r\in[\theta_{i},\theta_{i+1}]:\frac{df}{d\theta}(r)=0\right\}\cup\{\theta_{i}\}\cup\{\theta_{i+1}\}\). Each \(f_{i}\) is differentiable because it is of the form of equation (4), where \(A_{\phi}^{L_{i}}\) is function (1), which is differentiable, and \(A_{\phi}^{R_{i}}\) is also function (1) with different arguments. There are three cases for the function \(f_{i}\) (note that the sector may intersect the polygon \(\mathcal{P}\) only partially), and we need to be able to find the maximum of the function \(f\) in each case:
* **Case 1:** If \(S(C,\theta,\phi)\) contains \(\mathcal{P}\) (i.e., \(\phi>\theta_{Q}-\theta_{1}\)), the maximum intersection is the polygon itself. In this case, we can return \(\theta_{1}\).
* **Case 2:** If \(\mathcal{P}\cap S(C,\theta,\phi)\) is a subset of a section \(S_{i}\), \(i\in\{1,\ldots,n\}\), then from Theorem 2 we can approximate the maximum up to \(\varepsilon\) accuracy.
* **Case 3:** All three parts of the Left, Middle, Right decomposition are present: \[f_{i}(\theta)=A_{\phi}^{L_{i}}(\theta)+A_{\phi}^{R_{i}}(\theta)+Area(M_{i})\qquad\theta\in[\theta_{i},\theta_{i+1}]\] In this case we can still apply the same technique as in Theorem 2. We rewrite the functions \(A_{\phi}^{L_{i}}(\theta)+A_{\phi}^{R_{i}}(\theta)\) as four functions \(A_{\theta}(\phi)\) as in Lemma 2, and we can partition the domain \([\theta_{i},\theta_{i+1}]\) into at most 9 cells and then run the Newton–Raphson method in every one of them to \(\varepsilon\)-approximate the extreme points of \(f_{i}\).
Regarding the running time: partitioning \(\mathcal{P}\) from \(C\), i.e., computing and sorting \(\{\widehat{CP_{i}}\}_{i=1}^{n}\), takes \(\mathcal{O}(n\log n)\). Algorithm 1 computes the upper and lower edges of each section, which takes time \(\mathcal{O}(n)\). Now in lines 4-7 of Algorithm 2, the algorithm either finds or approximates the local maximum of each partition \(LMR_{i}\). In case 2, the algorithm runs Newton–Raphson at most 5 times, and in case 3 it runs as Theorem 2 states in time \(\mathcal{O}(|b-a|\,\log\varepsilon/\phi)\), where the length of the interval cannot be more than \(\pi\) because \(\mathcal{P}\) is a convex polygon. So the running time of the algorithm is \(\mathcal{O}(n\log n+n\log\varepsilon/\phi)=\mathcal{O}(n(\log n+\log\varepsilon/\phi))\). A high-level sketch of this driver loop is given below.
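The sketch combines the breakpoints of Corollary 2 with the per-cell search of Theorem 2; `cell_funcs[i]` is assumed to hold the pair \((f_{i},df_{i}/d\theta)\) of the \(i\)-th branch of equation (4).

```
def maximum_intersection(breakpoints, cell_funcs):
    # Scan every LMR cell [theta_i, theta_{i+1}], maximise its branch f_i
    # with maximize_over_interval (defined above), and keep the overall best.
    best_theta, best_val = None, float("-inf")
    cells = zip(breakpoints, breakpoints[1:])
    for (lo, hi), (f, df) in zip(cells, cell_funcs):
        width = max(hi - lo, 1e-12)
        theta, val = maximize_over_interval(f, df, lo, hi, phi=width)
        if val > best_val:
            best_theta, best_val = theta, val
    return best_theta, best_val
```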
**Conclusion:** The designed methods of finding the maximal intersection of a convex polygon with a rotating FOV can be applied directly to the special case of non-convex polygons where a rotating FOV has only one intersection component and does not split the intersection into several disconnected parts. In this case, the presented methods still work because there is no restriction on gradients in the independent subproblems, and the polygon can be decomposed into the already studied equivalence classes. On the other hand, the intersection of a non-convex polygon with a rotating FOV could create disconnected areas. However, the area functions are still applicable. The distinctive difference is that, for every equivalence class, many intersection components may appear, which leads to the calculation of the summation of multiple Left and Right functions. Finally, to complete the solution in this case, one must also address the separate computational geometry problem of identifying these disconnected areas.
## 6 Technical Calculations and Proofs
In this section, we provide proofs for Theorem 1 and Proposition 1. Before proving Theorem 1, we prove a formula that calculates the area of any convex polygon.
**Lemma 6**.: _The area of a polygon \(\mathcal{P}=(P_{1},\ldots,P_{n})\), is given by the following formula:_
\[poly\_area((x_{1},y_{1}),\ldots,(x_{n},y_{n})) =\frac{1}{2}\cdot\left(x_{n}\ y_{1}+\sum_{i=1}^{n-1}x_{i}\ y_{i+1}-x_{1}y_{n}-\sum_{i=1}^{n-1}x_{i+1}\ y_{i}\right)\] \[=\frac{1}{2}\left(det(P_{n},P_{1})+\sum_{i=1}^{n-1}det(P_{i},P_{i+1})\right) \tag{5}\]
Proof.: We will prove this formula with the use of induction on the number of vertices of a polygon.
**Base of the Induction:** First, with the following claim, we prove the base of the induction for \(n=3\). The area of a triangle with coordinates \(A(x_{1},y_{1})\),\(B(x_{2},y_{2})\) and \(C(x_{3},y_{3})\) is:
\[poly\_area(ABC)=\frac{1}{2}(x_{1}y_{2}+x_{2}y_{3}+x_{3}y_{1}-x_{1}y_{3}-x_{2}y _{1}-x_{3}y_{2})\]
Indeed, by defining the two vectors \(v_{1}=B-A\) and \(v_{2}=C-A\), it is well known (see [6]) that the area of the triangle \(ABC\) is
\[\frac{1}{2}det(v_{1},v_{2})=\frac{1}{2}\left|\begin{array}{cc}(x_{2}-x_{1}) &(x_{3}-x_{1})\\ (y_{2}-y_{1})&(y_{3}-y_{1})\end{array}\right|=\frac{1}{2}(x_{1}y_{2}+x_{2}y_{ 3}+x_{3}y_{1}-x_{1}y_{3}-x_{2}y_{1}-x_{3}y_{2})\]
**Induction Hypothesis:** Let us assume that the polygon \(P_{1},\ldots,P_{n}\) with \(n\) vertices in counterclockwise order has an area given by:
\[poly\_area(P_{1},\ldots,P_{n})=\frac{1}{2}\left(det(P_{n},P_{1})+\sum_{i=1}^{n-1}det(P_{i},P_{i+1})\right)\]
**Induction Step:** The convex polygon \(P_{1},\ldots,P_{n+1}\) splits into the polygon \(P_{1},\ldots,P_{n}\) and the triangle \(P_{1}P_{n}P_{n+1}\), so by the hypothesis and the base case,
\[poly\_area(P_{1},\ldots,P_{n+1})=\frac{1}{2}\left(det(P_{n},P_{1})+\sum_{i=1}^{n-1}det(P_{i},P_{i+1})\right)+\frac{1}{2}\big{(}det(P_{1},P_{n})+det(P_{n},P_{n+1})+det(P_{n+1},P_{1})\big{)}\]
Since \(det(P_{1},P_{n})=-det(P_{n},P_{1})\), these two terms cancel, leaving
\[poly\_area(P_{1},\ldots,P_{n+1})=\frac{1}{2}\left(det(P_{n+1},P_{1})+\sum_{i=1}^{n}det(P_{i},P_{i+1})\right)\]
which is formula (5) for \(n+1\) vertices.
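Equation (5) translates directly into code; a minimal sketch:

```
def poly_area(vertices):
    # Shoelace formula of Lemma 6 for counterclockwise vertices (x_i, y_i);
    # the wrap-around index handles the det(P_n, P_1) term.
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return s / 2.0

# A counterclockwise unit square has area 1.
print(poly_area([(0, 0), (1, 0), (1, 1), (0, 1)]))
```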
Proof of Theorem 1.: The fact that one of the sectors has a fixed direction means that it can be defined using two intersecting lines \(\varepsilon_{1}\) and \(\varepsilon_{2}\). We will denote with \(\varepsilon_{1}\) (resp. \(\varepsilon_{2}\)) the left (resp. the right) semi-line of \(S(K,\theta_{K},\phi_{K})\). Notice that the slopes of \(\varepsilon_{1},\varepsilon_{2}\) are of angle \(\theta_{K}\) and \(\theta_{K}+\phi_{K}\) respectively. So we can define the semi-lines of \(S(K,\theta_{K},\phi_{K})\) using a point, and a slope. The theorem is proven through the following lemma.
**Lemma 7**.: _Let \(B=(x_{b},y_{b})\) and \(D=(x_{d},y_{d})\) be two points on the plane, \(\beta\leq\omega\in(-\pi/2,\pi/2)\) two positive angles, and a sector \(S(C,\theta,\phi)\) with center \(C=(x_{0},y_{0})\) and inner angle \(\phi\in(0,\pi)\). The line \(\varepsilon_{1}\) is defined by the point \(D\), and the slope \(\tan\omega\); and \(\varepsilon_{2}\) from \(B\), and \(\tan\beta\) respectively. The quadrilateral's area created by \(\varepsilon_{1}\cap\varepsilon_{2}\cap S\), is given from the following function:_
\[A(\theta,\phi)=\frac{d_{1}\sin\phi\ \cos^{2}\omega}{2\sin(\theta+\phi- \omega)\sin(\theta-\omega)}+\frac{d_{2}\sin\phi\ \cos^{2}\beta}{2\sin(\theta+\phi- \beta)\sin(\theta-\beta)}\qquad\qquad(\theta,\phi)\in\mathcal{R} \tag{6}\]
_where \(\mathcal{R}=(\theta_{min},\theta_{max}-\phi)\times(0,\pi)\), and \(K\) is the intersection point \(\varepsilon_{1}\cap\varepsilon_{2}\), with_
\[\theta_{min}=\begin{cases}\omega&\text{if }\frac{x_{0}-x_{b}}{|x_{0}-x_{b}|}>0\\ \widehat{CK}&\text{if }\frac{x_{0}-x_{b}}{|x_{0}-x_{b}|}<0\end{cases}\qquad\qquad\theta_{max}=\begin{cases}\widehat{CK}&\text{if }\frac{x_{0}-x_{b}}{|x_{0}-x_{b}|}>0\\ \beta&\text{if }\frac{x_{0}-x_{b}}{|x_{0}-x_{b}|}<0\end{cases}\]
\[d_{1}=\frac{x_{0}-x_{b}}{|x_{0}-x_{b}|}(\tan\omega(x_{0}-x_{d})+y_{d}-y_{0}) ^{2},\qquad\qquad d_{2}=\frac{x_{b}-x_{0}}{|x_{b}-x_{0}|}(\tan\beta(x_{0}-x_{b })+y_{b}-y_{0})^{2}\]
Proof of Lemma 7.: Without loss of generality, we prove the lemma in the case where \(\frac{x_{0}-x_{b}}{|x_{0}-x_{b}|}>0\). The equations of the lines \(\varepsilon_{1}\), \(\varepsilon_{2}\), \(\varepsilon_{\ell}\), and \(\varepsilon_{r}\) (see Figure 8) are:
\[\varepsilon_{1}:\ y=\tan\omega\ (x-x_{d})+y_{d} \varepsilon_{2}:\ y=\tan\beta\ (x-x_{b})+y_{b}\] \[\varepsilon_{r}:\ y=\tan\theta\ (x-x_{0})+y_{0} \varepsilon_{\ell}:\ y=\tan\left(\theta+\phi\right)\ (x-x_{0})+y_{0}\]
Line \(\varepsilon_{2}\) passes through the point \(E=(x_{0},y_{e})\), where \(y_{e}\) can be expressed as \(y_{e}=y_{0}+d_{1}\); respectively, \(\varepsilon_{1}\) passes through the point \(E^{\prime}=(x_{0},y_{e}^{\prime})\), where \(y_{e}^{\prime}=y_{0}+d_{1}+d_{2}\), with \(d_{1},d_{2}>0\). Hence
\[\varepsilon_{1}:\ y=\tan\omega\ (x-x_{0})+y_{0}+d_{1}+d_{2} \varepsilon_{2}:\ y=\tan\beta\ (x-x_{0})+y_{0}+d_{1}\]
We begin by computing the coordinates of the points \(P_{1},P_{2},P_{3},P_{4}\) as the intersections of the lines \((\varepsilon_{r}\cap\varepsilon_{2})\),
Figure 8: The area of intersection of two sectors. There are the two lines \(\varepsilon_{1}\) and \(\varepsilon_{2}\), and the sector \(S(C,\theta,\phi)\) with the blue lines. The positive angles \(\omega\) and \(\beta\) correspond to the slopes of \(\varepsilon_{1}\) and \(\varepsilon_{2}\), respectively. The function \(A(\theta,\phi)\) is defined between the lines \(\varepsilon_{\theta_{max}}\) and \(\varepsilon_{\theta_{min}}\), which correspond to the angles \(\omega\) and \(\widehat{CK}\), where \(\theta\in\left(\omega,\widehat{CK}\right)\). Finally, \(E^{\prime}=\varepsilon_{y}\cap\varepsilon_{1}\), \(E=\varepsilon_{y}\cap\varepsilon_{2}\), and \(d_{1}=|CE|\), \(d_{2}=|EE^{\prime}|\).
\((\varepsilon_{r}\cap\varepsilon_{1})\), \((\varepsilon_{\ell}\cap\varepsilon_{1})\) and \((\varepsilon_{\ell}\cap\varepsilon_{2})\) respectively.
\[(\varepsilon_{r}\cap\varepsilon_{2}):\tan\theta\;(x-x_{0})+y_{0}= \tan\beta\;(x-x_{0})+y_{0}+d_{1}\Rightarrow(\tan\theta-\tan\beta)\;(x-x_{0})= d_{1}\] \[\Rightarrow x_{1}=x_{0}+\frac{d_{1}}{\tan\theta-\tan\beta}, y_{1}=y_{0}+\frac{d_{1}\tan\theta}{\tan\theta-\tan\beta}\] \[(\varepsilon_{r}\cap\varepsilon_{1}):\tan\theta\;(x-x_{0})+y_{0}= \tan\omega\;(x-x_{0})+y_{0}+d_{1}+d_{2}\Rightarrow\] \[\Rightarrow(\tan\theta-\tan\omega)\;(x-x_{0})=d_{1}+d_{2}\Rightarrow\] \[\Rightarrow x_{2}=x_{0}+\frac{d_{1}+d_{2}}{\tan\theta-\tan\omega}, y_{2}=y_{0}+\frac{(d_{1}+d_{2})\tan\theta}{\tan\theta-\tan\omega}\] \[(\varepsilon_{\ell}\cap\varepsilon_{1}):\tan(\theta+\phi)\;(x-x_ {0})+y_{0}=\tan\omega\;(x-x_{0})+y_{0}+d_{1}+d_{2}\Rightarrow\] \[\Rightarrow(\tan(\theta+\phi)-\tan\omega)\;(x-x_{0})=d_{1}+d_{2}\Rightarrow\] \[\Rightarrow x_{3}=x_{0}+\frac{d_{1}+d_{2}}{\tan(\theta+\phi)-\tan\omega}, y_{3}=y_{0}+\frac{(d_{1}+d_{2})\tan(\theta+\phi)}{\tan(\theta+\phi)-\tan\omega}\] \[(\varepsilon_{\ell}\cap\varepsilon_{2}):\tan(\theta+\phi)\;(x-x_ {0})+y_{0}=\tan\beta(x-x_{0})+y_{0}+d_{1}\Rightarrow\] \[\Rightarrow(\tan(\theta+\phi)-\tan\beta)(x-x_{0})=d_{1}\] \[\Rightarrow x_{4}=x_{0}+\frac{d_{1}}{\tan(\theta+\phi)-\tan\beta}, y_{4}=y_{0}+\frac{d_{1}\tan(\theta+\phi)}{\tan(\theta+\phi)-\tan\beta}\] \[(\varepsilon_{1}\cap\varepsilon_{2}):\tan\omega\;(x-x_{0})+y_{0}+d _{1}+d_{2}=\tan\beta(x-x_{0})+y_{0}+d_{1}\Rightarrow\] \[\Rightarrow x_{int}=x_{0}+\frac{d_{2}}{\tan\beta-\tan\omega}, y_{int}=y_{0}+d_{1}+\frac{d_{2}\tan\beta}{\tan\beta-\tan\omega}\]
Now we can use the shoelace formula from Lemma 6, which computes the area of a polygon with vertices \(P_{1}\ldots P_{n}\) ordered counterclockwise,
\[poly\_area(P_{1}P_{2}\dots P_{n})=\frac{1}{2}\left(det(P_{n},P_{1})+\sum_{i=1}^{n-1}det(P_{i},P_{i+1})\right) \tag{7}\]
to compute the area of the quadrilateral \(P_{1}P_{2}P_{3}P_{4}\)
\[2\cdot poly\_area((x_{1},y_{1}),\dots,(x_{4},y_{4}))=x_{1}y_{2}+x_{ 2}y_{3}+x_{3}y_{4}+x_{4}y_{1}-y_{1}x_{2}-y_{2}x_{3}-y_{3}x_{4}-y_{4}x_{1}=\] \[= y_{1}(x_{4}-x_{2})+y_{2}(x_{1}-x_{3})+y_{3}(x_{2}-x_{4})+y_{4}(x _{3}-x_{1}) \tag{8}\]
Substituting the coordinates computed above and grouping the terms pairwise:
\[y_{1}(x_{4}-x_{2})+y_{3}(x_{2}-x_{4})=(y_{3}-y_{1})(x_{2}-x_{4})=\left(\frac{(d_{1}+d_{2})\tan(\theta+\phi)}{\tan(\theta+\phi)-\tan\omega}-\frac{d_{1}\tan\theta}{\tan\theta-\tan\beta}\right)\left(\frac{d_{1}+d_{2}}{\tan\theta-\tan\omega}-\frac{d_{1}}{\tan(\theta+\phi)-\tan\beta}\right)\]
\[y_{2}(x_{1}-x_{3})+y_{4}(x_{3}-x_{1})=(y_{4}-y_{2})(x_{3}-x_{1})=\left(\frac{d_{1}\tan(\theta+\phi)}{\tan(\theta+\phi)-\tan\beta}-\frac{(d_{1}+d_{2})\tan\theta}{\tan\theta-\tan\omega}\right)\left(\frac{d_{1}+d_{2}}{\tan(\theta+\phi)-\tan\omega}-\frac{d_{1}}{\tan\theta-\tan\beta}\right)\]
Expanding both products, the mixed terms proportional to \(d_{1}(d_{1}+d_{2})\) cancel, leaving
\[(y_{3}-y_{1})(x_{2}-x_{4})+(y_{4}-y_{2})(x_{3}-x_{1})=\frac{(d_{1}+d_{2})^{2}(\tan(\theta+\phi)-\tan\theta)}{(\tan(\theta+\phi)-\tan\omega)(\tan\theta-\tan\omega)}+\frac{d_{1}^{2}(\tan\theta-\tan(\theta+\phi))}{(\tan(\theta+\phi)-\tan\beta)(\tan\theta-\tan\beta)}\]
By plugging our calculations into equation (8):
\[2\cdot poly\_area((x_{1},y_{1}),\dots,(x_{4},y_{4}))=(y_{3}-y_{1})(x_{2}-x_{4})+ (y_{4}-y_{2})(x_{3}-x_{1})\]
\[\Rightarrow poly\_area=\frac{(d_{1}+d_{2})^{2}(\tan(\theta+\phi)-\tan\theta)}{2(\tan(\theta+\phi)-\tan\omega)(\tan\theta-\tan\omega)}+\frac{d_{1}^{2}(\tan\theta-\tan(\theta+\phi))}{2(\tan(\theta+\phi)-\tan\beta)(\tan\theta-\tan\beta)}\]
If we express the lines \(\varepsilon_{1}\) and \(\varepsilon_{2}\) using the points \(D=(x_{d},y_{d})\) and \(B=(x_{b},y_{b})\) respectively, then, taking into account that \(E\in\varepsilon_{2}\) and \(E^{\prime}\in\varepsilon_{1}\),
\[y_{e}=(x_{0}-x_{b})\tan\beta+y_{b}\Rightarrow d_{1}=(x_{0}-x_{b})\tan\beta+y_{b}-y_{0},\qquad y_{e}^{\prime}=(x_{0}-x_{d})\tan\omega+y_{d}\Rightarrow d_{1}+d_{2}=(x_{0}-x_{d})\tan\omega+y_{d}-y_{0}\]
Thus obtaining the equation
\[A(C,B,D,\beta,\omega,\theta,\phi)= \frac{(\tan\omega(x_{0}-x_{d})+y_{d}-y_{0})^{2}(\tan\left(\theta+\phi\right)-\tan\theta)}{2(\tan\left(\theta+\phi\right)-\tan\omega)(\tan\theta-\tan\omega)}+\] \[+\frac{(\tan\beta(x_{0}-x_{b})+y_{b}-y_{0})^{2}(\tan\theta-\tan\left(\theta+\phi\right))}{2(\tan\left(\theta+\phi\right)-\tan\beta)(\tan\theta-\tan\beta)} \tag{9}\]
Now note the following equations
\[\tan a-\tan b=\frac{\sin a}{\cos a}-\frac{\sin b}{\cos b}=\frac{\sin\left(a-b \right)}{\cos a\cos b} \tag{10}\]
\[\frac{\tan a-\tan b}{(\tan a-\tan c)(\tan b-\tan c)}=\frac{\sin\left(a-b \right)\cos a\cos b\cos^{2}c}{\sin\left(a-c\right)\sin\left(b-c\right)\cos a \cos b}=\frac{\sin\left(a-b\right)\cos^{2}c}{\sin\left(a-c\right)\sin\left(b- c\right)} \tag{11}\]
By substituting the above identities appropriately into equation (9), we have
\[A(C,B,D,\beta,\omega,\theta,\phi)=\frac{(\tan\omega(x_{0}-x_{d})+y_{d}-y_{0})^{2}(\tan\left(\theta+\phi\right)-\tan\theta)}{2(\tan\left(\theta+\phi\right)-\tan\omega)(\tan\theta-\tan\omega)}+\] \[+\frac{(\tan\beta(x_{0}-x_{b})+y_{b}-y_{0})^{2}(\tan\theta-\tan(\theta+\phi))}{2(\tan\left(\theta+\phi\right)-\tan\beta)(\tan\theta-\tan\beta)}\] \[=\frac{(\tan\omega(x_{0}-x_{d})+y_{d}-y_{0})^{2}\sin\phi\cos^{2}\omega}{2\sin\left(\theta+\phi-\omega\right)\sin\left(\theta-\omega\right)}-\frac{(\tan\beta(x_{0}-x_{b})+y_{b}-y_{0})^{2}\sin\phi\cos^{2}\beta}{2\sin\left(\theta+\phi-\beta\right)\sin\left(\theta-\beta\right)}\]
_End of proof of Lemma 7_
By substituting \(\omega=\theta_{K}+\phi_{K}\), and \(\beta=\theta_{K}\) at (6) we have equation (1).
_End of proof of Theorem 1_
Here we provide a proof for Proposition 1.
Proof.: We set \(\omega=\theta_{K}+\phi_{K}\), and \(\beta=\theta_{K}\), so equation (1) is
\[A_{\phi}(\theta)=\frac{d_{1}\sin\phi\ \cos^{2}\omega}{2\sin\left(\theta+ \phi-\omega\right)\sin\left(\theta-\omega\right)}-\frac{d_{2}\sin\phi\ \cos^{2}\beta}{2\sin\left(\theta+\phi-\beta\right)\sin\left(\theta- \beta\right)}\]
To show that the function \(A\) can be expressed as a rational function of polynomials, we use determinants to expose the structure of the polynomials while avoiding lengthy calculations. Notice that
\[\sin(\theta+\phi-\omega)=\begin{vmatrix}\cos\omega&\cos\theta&\sin\theta\\ \sin\omega&\sin\theta&-\cos\theta\\ 0&\sin\phi&\cos\phi\end{vmatrix}\qquad\qquad\qquad\sin\left(\theta-\omega \right)=\begin{vmatrix}\sin\theta&\cos\theta\\ \sin\omega&\cos\omega\end{vmatrix}\]
\[\sin(\theta+\phi-\omega)\sin\left(\theta-\omega\right)=\begin{vmatrix}\cos \omega&\cos\theta&\sin\theta&0&0\\ \sin\omega&\sin\theta&-\cos\theta&0&0\\ 0&\sin\phi&\cos\phi&0&0\\ 0&0&0&\sin\theta&\cos\theta\\ 0&0&0&\sin\omega&\cos\omega\end{vmatrix} \tag{12}\]
Let \(x=\sin\theta\) and \(\cos\theta=\sqrt{1-x^{2}}\) then equation (12) is
\[\sin(\theta+\phi-\omega)\sin\left(\theta-\omega\right)=\begin{vmatrix}\cos \omega&\sqrt{1-x^{2}}&x&0&0\\ \sin\omega&x&-\sqrt{1-x^{2}}&0&0\\ 0&\sin\phi&\cos\phi&0&0\\ 0&0&0&x&\sqrt{1-x^{2}}\\ 0&0&0&\sin\omega&\cos\omega\end{vmatrix}\]
The determinant of the above equation will produce an equation of the following form
\[P_{1}(x)+P_{2}(x)\sqrt{1-x^{2}}\]
where \(P_{1}\) is a polynomial of order at most 2 and \(P_{2}\) is a polynomial of order 1. If we apply the same technique to \(\sin(\theta+\phi-\beta)\sin(\theta-\beta)\), we get the rational form of \(A_{\phi}(\theta)\)
\[A_{\phi}(\theta) =\frac{D_{1}}{P_{1}(x)+P_{2}(x)\sqrt{1-x^{2}}}+\frac{D_{2}}{P_{3}( x)+P_{4}(x)\sqrt{1-x^{2}}}\] \[=\frac{D_{1}P_{3}(x)+D_{1}P_{4}(x)\sqrt{1-x^{2}}+D_{2}P_{1}(x)+D_{ 2}P_{2}(x)\sqrt{1-x^{2}}}{(P_{1}(x)+P_{2}(x)\sqrt{1-x^{2}})(P_{3}(x)+P_{4}(x) \sqrt{1-x^{2}})}\]
|
2305.19922 | Representation-Driven Reinforcement Learning | We present a representation-driven framework for reinforcement learning. By representing policies as estimates of their expected values, we leverage techniques from contextual bandits to guide exploration and exploitation. Particularly, embedding a policy network into a linear feature space allows us to reframe the exploration-exploitation problem as a representation-exploitation problem, where good policy representations enable optimal exploration. We demonstrate the effectiveness of this framework through its application to evolutionary and policy gradient-based approaches, leading to significantly improved performance compared to traditional methods. Our framework provides a new perspective on reinforcement learning, highlighting the importance of policy representation in determining optimal exploration-exploitation strategies. | Ofir Nabati, Guy Tennenholtz, Shie Mannor | 2023-05-31T14:59:12Z | http://arxiv.org/abs/2305.19922v2 | # Representation-Driven Reinforcement Learning
###### Abstract
We present a representation-driven framework for reinforcement learning. By representing policies as estimates of their expected values, we leverage techniques from contextual bandits to guide exploration and exploitation. Particularly, embedding a policy network into a linear feature space allows us to reframe the exploration-exploitation problem as a representation-exploitation problem, where good policy representations enable optimal exploration. We demonstrate the effectiveness of this framework through its application to evolutionary and policy gradient-based approaches, leading to significantly improved performance compared to traditional methods. Our framework provides a new perspective on reinforcement learning, highlighting the importance of policy representation in determining optimal exploration-exploitation strategies.
Machine Learning, Reinforcement Learning
## 1 Introduction
Reinforcement learning (RL) is a field in machine learning in which an agent learns to maximize a reward through interactions with an environment. The agent maps its current state into action and receives a reward signal. Its goal is to maximize the cumulative sum of rewards over some predefined (possibly infinite) horizon (Sutton and Barto, 1998). This setting fits many real-world applications such as recommendation systems (Li et al., 2010), board games (Silver et al., 2017), computer games (Mnih et al., 2015), and robotics (Polydoros and Nalpantidis, 2017).
A large amount of contemporary research in RL focuses on gradient-based policy search methods (Sutton et al., 1999; Silver et al., 2014; Schulman et al., 2015, 2017; Haarnoja et al., 2018). Nevertheless, these methods optimize the policy **locally** at specific states and actions. Salimans et al. (2017) have shown that such optimization methods may cause high variance updates in long horizon problems, while Tessler et al. (2019) have shown possible convergence to suboptimal solutions in continuous regimes. Moreover, policy search methods are commonly sample inefficient, particularly in hard exploration problems, as policy gradient methods usually converge to areas of high reward, without sacrificing exploration resources to achieve a far-reaching sparse reward.
In this work, we present Representation-Driven Reinforcement Learning (RepRL) - a new framework for policy-search methods, which utilizes theoretically optimal exploration strategies in a learned latent space. Particularly, we reduce the policy search problem to a contextual bandit problem, using a mapping from policy space to a linear feature space. Our approach leverages the learned linear space to optimally tradeoff exploration and exploitation using well-established algorithms from the contextual bandit literature (Abbasi-Yadkori et al., 2011; Agrawal and Goyal, 2013). By doing so, we reframe the exploration-exploitation problem to a representation-exploitation problem, for which good policy representations enable optimal exploration.
We demonstrate the effectiveness of our approach through its application to both evolutionary and policy gradient-based approaches - demonstrating significantly improved performance compared to traditional methods. Empirical experiments on the MuJoCo (Todorov et al., 2012) and MinAtar (Young and Tian, 2019) show the benefits of our approach, particularly in sparse reward settings. While our framework does not make the exploration problem necessarily easier, it provides a new perspective on reinforcement learning, shifting the focus to policy representation in the search for optimal exploration-exploitation strategies.
## 2 Preliminaries
We consider the infinite-horizon discounted Markov Decision Process (MDP). An MDP is defined by the tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},r,T,\beta,\gamma)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(T:\mathcal{S}\times\mathcal{A}\rightarrow\Delta(\mathcal{S})\) is the transition kernel, \(r:\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) is the reward function, \(\beta\in\Delta(\mathcal{S})\) is the initial state distribution, and \(\gamma\in[0,1)\) is the discount factor. A stationary policy \(\pi:\mathcal{S}\rightarrow\Delta(\mathcal{A})\) maps states
into a distribution over actions. We denote by \(\Pi\) the set of stationary stochastic policies, and the history of policies and trajectories up to episode \(k\) by \(\mathcal{H}_{k}\). Finally, we denote \(S=|\mathcal{S}|\) and \(A=|\mathcal{A}|\).
The return of a policy is a random variable defined as the discounted sum of rewards
\[G(\pi)=\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t}), \tag{1}\]
where \(s_{0}\sim\beta,a_{t}\sim\pi(s_{t}),s_{t+1}\sim T(s_{t},a_{t})\), and the policy's value is its mean, i.e., \(v(\pi)=\mathbb{E}[\,\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\mid\beta,\pi,T\,]\). An optimal policy maximizes the value, i.e., \(\pi^{*}\in\arg\max_{\pi\in\Pi}v(\pi)\).
We similarly define the per-state value function, \(v(\pi,s)\) as \(v(\pi,s)=\mathbb{E}[\,\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t})\mid s_{0}=s, \pi,T\,]\), and note that \(v(\pi)=\mathbb{E}_{s\sim\beta}[v(\pi,s)]\).
Finally, we denote the discounted state-action frequency distribution w.r.t. \(\pi\) by
\[\rho^{\pi}(s,a)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}Pr\bigg{(}s_{t}=s,a_{t} =a|\beta,\pi,T\bigg{)},\]
and let \(\mathcal{K}=\{\rho^{\pi}:\pi\in\Pi\}\).
### Linear Bandits
In this work, we consider the linear bandit framework as defined in Abbasi-Yadkori et al. (2011). At each time \(t\), the learner is given a decision set \(D_{t}\subseteq\mathbb{R}^{d}\), which can be adversarially and adaptively chosen. The learner chooses an action \(x_{t}\in D_{t}\) and receives a reward \(r_{t}\), whose mean is linear w.r.t \(x_{t}\), i.e., \(\mathbb{E}[\,r_{t}\mid x_{t}\,]=\langle x_{t},w\rangle\) for some unknown parameter vector \(w\in\mathbb{R}^{d}\).
A general framework for solving the linear bandit problem is the "Optimism in the Face of Uncertainty Linear bandit algorithm" (OFUL, Abbasi-Yadkori et al. (2011)). There, a linear regression estimator is constructed each round as follows:
\[\hat{w}_{t} =V_{t}^{-1}b_{t},\] \[V_{t} =V_{t-1}+x_{t}x_{t}^{\top},\] \[b_{t} =b_{t-1}+x_{t}y_{t}, \tag{2}\]
where \(y_{t},x_{t}\) are the noisy reward signal and chosen action at time \(t\), respectively, and \(V_{0}=\lambda I\) for some positive parameter \(\lambda>0\).
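A minimal sketch of the estimator in equation (2); the class name and defaults are ours.

```
import numpy as np

class LinearBanditEstimator:
    # Regularized least-squares estimate w_hat_t = V_t^{-1} b_t of eq. (2).

    def __init__(self, d, lam=1.0):
        self.V = lam * np.eye(d)   # V_0 = lambda * I
        self.b = np.zeros(d)

    def update(self, x, y):
        # Rank-one update with chosen action features x and reward signal y.
        self.V += np.outer(x, x)
        self.b += y * x

    def w_hat(self):
        return np.linalg.solve(self.V, self.b)
```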
It can be shown that, under mild assumptions, and with high probability, the self-normalizing norm \(\left\|\hat{w}_{t}-w\right\|_{V_{t}}\) can be bounded from above (Abbasi-Yadkori et al., 2011). OFUL then proceeds by taking an optimistic action \((x_{t},\bar{w}_{t})\in\arg\max_{x\in D_{t},\bar{w}\in\mathcal{C}_{t}}\langle x,\bar{w}\rangle\), where \(\mathcal{C}_{t}\) is a confidence set induced by the aforementioned bound on \(\left\|\hat{w}_{t}-w\right\|_{V_{t}}\). In practice, a softer version is used in Chu et al. (2011), where an action is selected optimistically according to
\[x_{t}\in\arg\max_{x\in D_{t}}\langle x,\hat{w}_{t}\rangle+\alpha\sqrt{x^{T}V_ {t}^{-1}x},\] (OFUL)
where \(\alpha>0\) controls the level of optimism.
Alternatively, linear Thompson sampling (TS, Abeille & Lazaric (2017)) shows it is possible to converge to an optimal solution with sublinear regret, even with a constant probability of optimism. This is achieved through the sampling of a parameter vector from a normal distribution, which is determined by the confidence set \(\mathcal{C}_{t}\). Specifically, linear TS selects an action according to
\[x_{t}\in\arg\max_{x\in D_{t}}\langle x,\tilde{w}_{t}\rangle,\;\tilde{w}_{t} \sim\mathcal{N}\big{(}\hat{w}_{t},\sigma^{2}V_{t}^{-1}\big{)},\] (TS)
where \(\sigma>0\) controls the level of optimism. We note that for tight regret guarantees, both \(\alpha\) and \(\sigma\) need to be chosen to respect the confidence set \(\mathcal{C}_{t}\). Nevertheless, it has been shown that tuning these parameters can improve performance in real-world applications (Chu et al., 2011).
## 3 RL as a Linear Bandit Problem
Classical methods for solving the RL problem attempted to use bandit formulations (Fox & Rolph, 1973). There, the set of policies \(\Pi\) reflects the set of arms, and the value \(v(\pi)\) is the expected bandit reward. Unfortunately, such a solution is usually intractable due to the exponential number of policies (i.e., bandit actions) in \(\Pi\).
Alternatively, we consider a linear bandit formulation of the RL problem. Indeed, it is known that the value can be expressed in linear form as
\[v(\pi)=\mathbb{E}_{(s,a)\sim\rho^{\pi}}[r(s,a)]=\langle\rho^{\pi},r\rangle. \tag{3}\]
Figure 1: RepRL scheme, composed of four stages: representing the parameters, constructing a decision set, choosing the best arm using an off-the-shelf linear bandit algorithm, and collecting data with the chosen policy.

Here, any \(\rho^{\pi}\in\mathcal{K}\) represents a possible action in the linear bandit formulation (Abbasi-Yadkori et al., 2011). Notice that \(|\mathcal{K}|=|\Pi|\), as any policy \(\pi\in\Pi\) can be written as \(\pi(a|s)=\frac{\rho^{\pi}(s,a)}{\sum_{a^{\prime}}\rho^{\pi}(s,a^{\prime})}\), rendering the problem intractable. Nevertheless, this formulation can be relaxed using a lower dimensional embedding of \(\rho^{\pi}\) and \(r\). As such, we make the following assumption.
**Assumption 3.1** (Linear Embedding).: There exists a mapping \(f:\Pi\rightarrow\mathbb{R}^{d}\) such that \(v(\pi)=\langle f(\pi),w\rangle\) for all \(\pi\in\Pi\) and some unknown \(w\in\mathbb{R}^{d}\).
We note that Assumption 3.1 readily holds when \(d=SA\) for \(f(\pi)\equiv\rho^{\pi}\) and \(w=r\). For efficient solutions, we consider environments for which the dimension \(d\) is relatively low, i.e., \(d\ll SA\).
Note that neural bandit approaches also consider linear representations (Riquelme et al., 2018). Nevertheless, these methods use **mappings from states \(\mathcal{S}\mapsto\mathbb{R}^{d}\)**, whereas we consider **mapping entire policies \(\Pi\mapsto\mathbb{R}^{d}\)** (i.e., embedding the _function_\(\pi\)). Learning a mapping \(f\) can be viewed as trading the effort of finding good exploration strategies in deep RL problems to finding a good representation. We emphasize that we do not claim it to be an _easier_ task, but rather a _different_ viewpoint of the problem, for which possible new solutions can be derived. Similar to work on neural-bandits (Riquelme et al., 2018), finding such a mapping requires alternating between representation learning and exploration.
### RepRL
We formalize a representation-driven framework for RL, inspired by linear bandits (Section 2.1) and Assumption 3.1. We parameterize the policy \(\pi\) and mapping \(f\) using neural networks, \(\pi_{\theta}\) and \(f_{\phi}\), respectively. Here, a policy \(\pi_{\theta}\) is represented in lower-dimensional space as \(f_{\phi}(\pi_{\theta})\). Therefore, searching in policy space is equivalent to searching in the parameter space. With slight abuse of notation, we will denote \(f_{\phi}(\pi_{\theta})=f_{\phi}(\theta)\).
Pseudo code for RepRL is presented in Algorithm 1. At every episode \(k\), we map the policy's parameters \(\theta_{k-1}\) to a latent space using \(f_{\phi_{k-1}}(\theta_{k-1})\). We then use a construction algorithm, ConstructDecisionSet\((\theta_{k-1},\mathcal{H}_{k-1})\), which takes into account the history \(\mathcal{H}_{k-1}\), to generate a new decision set \(D_{k}\). Then, to update the parameters \(\theta_{k-1}\) of the policy, we select an optimistic policy \(\pi_{\theta_{k}}\in D_{k}\) using a linear bandit method, such as TS or OFUL (see Section 2.1). Finally, we rollout the policy \(\pi_{\theta_{k}}\) and update the representation network and the bandit parameters according to the procedure outlined in Equation (2), where \(x_{k}\) are the learned representations of \(f_{\phi_{k}}\). A visual schematic of our framework is depicted in Figure 1.
```
1:Init:\(\mathcal{H}_{0}\leftarrow\emptyset\), \(\pi_{\theta_{0}}\), \(f_{\phi_{0}}\) randomly initialized
2:for\(k=1,2,\dots\)do
3: Representation Stage: Map the policy network \(\pi_{\theta_{k-1}}\) using representation network \(f_{\phi_{k-1}}(\theta_{k-1})\).
4: Decision Set Stage:\(D_{k}\leftarrow\texttt{ConstructDecisionSet}(\theta_{k-1},\mathcal{H}_{k-1})\).
5:Bandit Stage: Use linear bandit algorithm to choose \(\pi_{\theta_{k}}\) out of \(D_{k}\).
6: Exploitation Stage: Rollout policy \(\pi_{\theta_{k}}\) and store the return \(G_{k}\) in \(\mathcal{H}_{k}\).
7: Update representation \(f_{\phi_{k}}\).
8: Update bandit parameters \(\hat{w}_{t},V_{t}\) (Equation (2)) with the updated representation.
9:endfor
```
**Algorithm 1** RepRL
In the following sections, we present and discuss methods for representation learning and decision set construction, and propose two implementations of RepRL in the context of evolutionary strategies and policy gradient. We note that RepRL is a framework for addressing RL through representation, and as such, any representation learning technique or decision set algorithm can be incorporated as long as the basic structure is maintained.
### Learning Representations for RepRL
We learn a linear representation of a policy using tools from variational inference. Specifically, we sample a representation from a posterior distribution \(z\sim f_{\phi}(z|\theta)\), and train the representation by maximizing the Evidence Lower Bound (ELBO) (Kingma and Welling, 2013), i.e., by minimizing the loss \(\mathcal{L}(\phi,\kappa)=-\mathbb{E}_{z\sim f_{\phi}(z|\theta)}[\log p_{\kappa}(G|z)]+D_{KL}(f_{\phi}(z|\theta)\|p(z)),\) where \(f_{\phi}(z|\theta)\) acts as the encoder of the embedding, and \(p_{\kappa}(G|z)\) is the return decoder or likelihood term.
The latent representation prior \(p(z)\) is typically chosen to be a zero-mean Gaussian distribution. In order to encourage linearity of the value (i.e., the return's mean) with respect to the learned representation, we choose the likelihood to be a Gaussian distribution with a mean that is linear in the representation, i.e., \(p_{\kappa}(G|z)=\mathcal{N}(\kappa^{\top}z,\sigma^{2})\). When the encoder is also chosen to be a Gaussian distribution, the loss function has a closed form. The choice of a linear decoder is crucial, as the value is supposed to be linear w.r.t. the learned embeddings. The parameters \(\phi\) and \(\kappa\) are the learned parameters of the encoder and decoder, respectively. Note that a deterministic mapping occurs when the function \(f_{\phi}(z|\theta)\) takes the form of the Dirac delta function. A schematic of the architectural framework is presented in Figure 2.
### Constructing a Decision Set
The choice of the decision set algorithm (line 4 of Algorithm 1) may have a great impact on the algorithm in terms
of performance and computational complexity. Clearly, choosing \(D_{k}=\Pi,\forall k\) will be infeasible in terms of computational complexity. Moreover, it may be impractical to learn a linear representation for all policies at once. We present several possible choices of decision sets below.
Policy Space Decision Set.One potential strategy is to sample a set of policies centered around the current policy
\[D_{k}=\{\theta_{k}+\epsilon_{i}\}_{i=1}^{N},\ \ \epsilon_{i}\sim\mathcal{N}(0, \nu^{2}I), \tag{4}\]
where \(\nu>0\) controls how local the policy search is. This approach is motivated by the assumption that the representation of policies in the vicinity of the current policy will exhibit linear behavior with respect to the value function due to their similarity to policies encountered by the learner thus far.
Latent Space Decision Set.An alternative approach involves sampling policies in their learned latent space, i.e.,
\[D_{k}=\{z_{k}+\epsilon_{i}\}_{i=1}^{N},\ \ \epsilon_{i}\sim\mathcal{N}(0,\nu^{2}I), \tag{5}\]
where \(z_{k}\sim f_{\phi}(z|\theta_{k})\). The linearity of the latent space ensures that this decision set will improve the linear bandit target (UCB or the sampled value in TS), which will subsequently lead to an improvement in the actual value. This approach enables optimal exploration w.r.t. linear bandits, as it uniformly samples the eigen directions of the precision matrix \(V_{t}\), rather than only sampling specific directions as may occur when sampling in the parameter space.
Unlike Equation (4), constructing the set in Equation (5) presents several challenges. First, in order to rollout the policy \(\pi_{\theta_{k}}\), one must construct an inverse mapping to extract the chosen policy from the selected latent representation. This can be done by training a decoder for the policy parameters \(q(\theta|z)\). Alternatively, we propose to use a decoder-free approach. Given a target embedding \(z^{*}\in\arg\max_{z\in D_{k}}\langle z,\hat{w}\rangle\), we search for a policy \(\theta^{*}\in\arg\max_{\theta}f_{\phi}(z^{*}|\theta)\). This optimization problem can be solved using gradient descent-based optimization algorithms by varying the inputs to \(f_{\phi}\). A second challenge for latent-based decision sets involves the realizability of such policies. That is, there may exist representations \(z\in D_{k}\) that do not correspond to any policy in \(\Pi\). Lastly, even for realizable policies, the restored \(\theta\) may be too far from the learned data manifold, leading to an overestimation of its value and a degradation of the overall optimization process. One way to address these issues is to use a small enough value of \(\nu\) during the sampling process, reducing the probability of the set members being outside the data distribution. We leave more sophisticated methods of latent-based decision sets for future work.
History-based Decision Set.An additional approach uses the history of policies up to episode \(k\) to design a decision set. Specifically, at episode \(k\) we sample around the set of policies observed so far, i.e.,
\[D_{k}=\bigcup_{\ell\in[k]}\{\theta_{\ell}+\epsilon_{\ell,i}\}_{i=1}^{N},\ \ \epsilon_{\ell,i}\sim\mathcal{N}(0,\nu^{2}I), \tag{6}\]
resulting in a decision set of size \(Nk\). After improving the representation over time, it may be possible to find a better policy near policies that have already been used and were missed due to poor representation or sampling mismatch. This method is quite general, as the history can be truncated to consider only a certain number of past episodes, rather than the complete set of policies observed so far. Truncating the history can help reduce the size of the decision set, making the search more computationally tractable.
In Section 5, we compare the various choices of decision sets. Nevertheless, we found that policy-space decision sets are a good first choice due to their simplicity, which leads to stable implementations. Further exploration of other decision sets is left as a topic for future research.
### Inner trajectory sampling
Figure 2: The diagram illustrates the structure of the networks in RepRL. The policy's parameters are fed into the representation network, which acts as a posterior distribution for the policy's latent representation. Sampling from this posterior, the latent representation is used by the bandit algorithm to evaluate the value that encapsulates the exploration-exploitation tradeoff.

Vanilla RepRL uses the return values of the entire trajectory. As a result, sampling the trajectories at their initial states is the natural choice for both the bandit update and representation learning. However, the discount factor diminishes learning signals beyond the \(\frac{1}{1-\gamma}\) effective horizon, preventing the algorithm from utilizing these signals, which may be critical in environments with long-term dependencies. On the other hand, using a discount factor \(\gamma=1\) would result in returns with a large variance, leading to poor learning. Instead of sampling from the initial state, we propose to use the discount factor and sample trajectories at various states during learning, enabling the learner to observe data from different locations along the trajectory. Under this sampling scheme, the estimated value would be an estimate of the following quantity:
\[\tilde{v}(\pi)=\mathbb{E}_{s\sim\rho^{\pi}}[v(\pi,s)].\]
In the following proposition we prove that optimizing \(\tilde{v}(\pi)\) is equivalent to optimizing the real value.
**Proposition 3.2**.: _For a policy \(\pi\in\Pi\), \(\tilde{v}(\pi)=\frac{v(\pi)}{1-\gamma}\)._
The proof can be found in Appendix C. That is, sampling along the trajectory from \(\rho^{\pi}\) approximates the scaled value, which, like \(v(\pi)\), exhibits linear behavior with respect to the reward function. Thus, instead of sampling the return defined in Equation (1), we sample \(\tilde{G}(\pi)=\sum_{t=0}^{\infty}\gamma^{t}r(s_{t},a_{t}),\) where \(s_{0}\sim\rho^{\pi},a_{t}\sim\pi(s_{t}),s_{t+1}\sim T(s_{t},a_{t})\), both during representation learning and bandit updates. Empirical evidence suggests that uniformly sampling from the stored trajectory produces satisfactory results in practice.
```
1:Input: initial policy \(\pi_{\theta}\), decision set size \(N\), history \(\mathcal{H}\).
2:for\(t=1,2,\dots,T\)do
3: Sample an evaluation set and collect their returns.
4: Update representation \(f_{t}\) and bandit parameters \((\hat{w}_{t},V_{t})\) using history.
5: Construct a decision set \(D_{t}\).
6: Use linear bandit algorithm to evaluate each policy in \(D_{t}\).
7: Update policy using ES scheme (Section 4.1).
8:endfor
```
**Algorithm 2** Representation Driven Evolution Strategy
## 4 RepRL Algorithms
In this section we describe two possible approaches for applying the RepRL framework, namely in Evolution Strategies (Wierstra et al., 2014) and Policy Gradients (Sutton et al., 1999).
### Representation Driven Evolution Strategy
Evolutionary Strategies (ES) are used to train agents by searching through the parameter space of their policy and sampling their return. In contrast to traditional gradient-based methods, ES uses a population of candidates evolving over time through genetic operators to find the optimal parameters for the agent. Such methods have been shown to be effective in training deep RL agents in high-dimensional environments (Salimans et al., 2017; Mania et al., 2018).
At each round, the decision set is chosen over the policy space with Gaussian sampling around the current policy, as described in Section 3.3. Algorithm 2 presents an ES implementation of RepRL. To improve the stability of the optimization process, we employ soft-weighted updates across the decision set. This type of update rule is similar to that used in ES algorithms (Salimans et al., 2017; Mania et al., 2018), and allows for an optimal exploration-exploitation trade-off, replacing the true sampled returns with the bandit's value. Moreover, instead of sampling the chosen policy, we evaluate it by also sampling around it, as done in ES-based algorithms. Each evaluation is used for the bandit parameter update and the representation learning process. Sampling the evaluated policies around the chosen policy helps the representation avoid overfitting to a specific policy and generalize better to unseen policies - an important property when selecting the next policy.
Unlike traditional ES, optimizing the UCB in the case of OFUL or sampling using TS can encourage the algorithm to explore unseen policies in the parameter space. This exploration is further stabilized by averaging over the sampled directions, rather than assigning the best policy in the decision set. This is particularly useful when the representation is still noisy, reducing the risk of instability caused by hard assignments. An alternative approach uses a subset of \(D_{t}\) with the highest bandit scores, as suggested in Mania et al. (2018), which biases the numerical gradient towards the direction with the highest potential return.
### Representation Driven Policy Gradient
RepRL can also be utilized as a regularizer for policy gradient algorithms. Pseudo code for using RepRL in policy gradients is shown in Algorithm 6. At each gradient step, a weighted regularization term \(d(\theta,\tilde{\theta})\) is added, where \(\tilde{\theta}\) are
the parameters output by RepRL with respect to the current parameters for a chosen metric (e.g., \(\ell_{2}\)):
\[\mathcal{L}_{\text{reg}}(\theta)=\mathcal{L}_{\text{PG}}(\theta)+\zeta d(\theta, \tilde{\theta}). \tag{7}\]
After collecting data with the chosen policy and updating the representation and bandit parameters, the regularization term is added to the loss of the policy gradient at each gradient step. The policy gradient algorithm can be either on-policy or off-policy; in our work, we experiment with an on-policy algorithm.
Similar to the soft update rule in ES, using RepRL as a regularizer can significantly stabilize the representation process. Applying the regularization term biases the policy toward an optimal exploration strategy in policy space. This can be particularly useful when the representation is still weak and the optimization process is unstable, as it helps guide the update toward more promising areas of the parameter space. In our experiments, we found that using RepRL as a regularizer for policy gradients improved the stability and convergence of the optimization process.
## 5 Experiments
In order to evaluate the performance of RepRL, we conducted experiments on various tasks in the MuJoCo (Todorov et al., 2012) and MinAtar (Young and Tian, 2019) domains. We also used a sparse version of the MuJoCo environments, where exploration is crucial. We used linear TS as our linear bandit algorithm, as it exhibited good performance during evaluation. The detailed network architecture and hyperparameters utilized in the experiments are provided in Appendix F.
Grid-World Visualization.Before presenting our results, we demonstrate the RepRL framework on a toy example. Specifically, we constructed a GridWorld environment (depicted in Figure 4) which consists of spatially changing, noisy rewards. The agent, initialized at the bottom left state \((x,y)=(1,1)\), can choose to take one of four actions: up, down, left, or right. To focus on exploration, the rewards were distributed unevenly across the grid. Particularly, the reward for every \((x,y)\) was defined by the Normal random variable \(r(x,y)\sim\mathcal{N}\big{(}\mu(x,y),\sigma^{2}\big{)}\), where \(\sigma>0\) and \(\mu(x,y)\propto R_{1}\exp\Big{\{}-\frac{(x-x_{1})^{2}+(y-y_{1})^{2}}{a_{1}}\Big{\}}+R_{2}\exp\Big{\{}-\frac{(x-x_{2})^{2}+(y-y_{2})^{2}}{a_{2}}\Big{\}}+R_{3}\,\mathbb{1}_{\{(x,y)=\text{goal}\}}\). That is, the reward consisted of Normally distributed noise, with mean defined by two spatial Gaussians, as shown in Figure 4, with \(R_{1}>R_{2}\), \(a_{1}<a_{2}\) and a goal state (depicted as a star), with \(R_{3}\gg R_{1},R_{2}\). Importantly, the values of \(R_{1},R_{2},R_{3},a_{1},a_{2}\) were chosen such that an optimal policy would take the upper route in Figure 4.
Comparing the behavior of RepRL and ES on the GridWorld environment, we found that RepRL explored the environment more efficiently, locating the optimal path to the goal. This emphasizes the varying characteristics of state-space-driven exploration vs. policy-space-driven exploration, which, in our framework, coincides with representation-driven exploration. Figure 3 illustrates a two-dimensional t-SNE plot comparing the learned latent representation of the policy with the direct representation of the policy weights.

Figure 4: GridWorld visualization experiment. Trajectories were averaged across 100 seeds at various times during training, where more recent trajectories have greater opacity. Background colors indicate the level of mean reward.

Figure 3: The two-dimensional t-SNE visualization depicts the policy representation in the GridWorld experiment. On the right, we observe the learned latent representation, while on the left, we see the direct representation of the policy’s weights. Each point in the visualization corresponds to a distinct policy, and the color of each point corresponds to a sample of the policy’s value.
Decision Set Comparison.We begin by evaluating the impact of the decision set on the performance of RepRL. For this, we tested the three decision sets outlined in Section 3.3. The evaluation was conducted using the Representation Driven Evolution Strategy variant on a sparse HalfCheetah environment. A history window of 20 policies was utilized when evaluating the history-based decision set. A gradient descent algorithm was employed to obtain the parameters that correspond to the selected latent code in the latent-based setting.
As depicted in Figure 8 in Appendix E, RepRL demonstrated similar performance for the varying decision sets on the tested domains. In what follows, we focus on policy space decision sets.
MuJoCo.We conducted experiments on the MuJoCo suite using RepRL. Our approach followed the setting of Mania et al. (2018), in which a linear policy was used and demonstrated excellent performance on MuJoCo tasks. We utilized the ES variant of our algorithm (Algorithm 2). We incorporated a weighted update between the gradients using the bandit value and the zero-order gradient of the sampled returns, taking advantage of sampled information and ensuring stable updates in areas where the representation is weak.
We first evaluated RepES on the standard MuJoCo baseline (see Figure 5). RepES either significantly outperformed or performed on par with ES. We also tested a modified, sparse variant of MuJoCo. In the sparse environment, a reward was given for covering each distance interval \(d\), where the reward function was defined as:
\[r(s,a)=\begin{cases}10-c(a),&|x_{\text{agent}}|\bmod d=0\\ -c(a),&\text{o.w.}\end{cases}\]
Here, \(c(a)\) is the control cost associated with utilizing action \(a\), and \(x_{\text{agent}}\) denotes the location of the agent along the \(x\)-axis. The presence of a control cost function incentivized the agent to maintain its position rather than actively explore the environment. The results of this experiment, as depicted in Figure 5, indicate that the RepRL algorithm outperformed both the ES and SAC algorithms in terms of achieving distant goals. However, it should be noted that the random search component of the ES algorithm occasionally resulted in successful goal attainment, albeit at a significantly lower rate in comparison to the RepRL algorithm.
MinAtar.We compared the performance of RepRL on MinAtar (Young & Tian, 2019) with the widely used policy gradient algorithm PPO (Schulman et al., 2017). Specifically, we compared PPO against its regularized version with RepRL, as described in Algorithm 6, and refer to it as RepPG. We parametrized the policy by a neural network. Although PPO collects chunks of rollouts (i.e., uses sub-trajectories), RepPG adjusted naturally due to the inner trajectory sampling (see Section 3.4). That is, the critic was used to estimate the value of the rest of the trajectory in cases where the rollouts were truncated by the algorithm.
Figure 5: MuJoCo experiments during training. The results are for the MuJoCo suite (top) and the modified sparse MuJoCo (bottom).
Results are shown in Figure 6. Overall, RepRL outperforms PPO on all tasks, suggesting that RepRL is effective at solving challenging tasks with sparse rewards, such as those found in MinAtar.
## 6 Related Work
**Policy Optimization:** Policy gradient methods (Sutton et al., 1999) have shown great success at various challenging tasks, with numerous improvements over the years; most notable are policy gradient methods for deterministic policies (Silver et al., 2014; Lillicrap et al., 2015), trust region based algorithms (Schulman et al., 2015, 2017), and maximum entropy algorithms (Haarnoja et al., 2018). Despite their popularity, traditional policy gradient methods are limited in continuous action spaces. Therefore, Tessler et al. (2019) suggest optimizing the policy over the policy distribution space rather than the action space.
In recent years, finite difference gradient methods have been rediscovered by the RL community. This class of algorithms uses numerical gradient estimation by sampling random directions (Nesterov and Spokoiny, 2017). A closely related family of optimization methods is Evolution Strategies (ES), a class of black-box optimization algorithms that perform a heuristic search by perturbing and evaluating the set members, keeping only the mutations with the highest scores until convergence. Salimans et al. (2017) used ES for RL as a zero-order gradient estimator for the policy, parameterized as a neural network. ES is robust to the choice of the reward function or the horizon length, and, unlike most state-of-the-art algorithms, it does not need value function approximation. Nevertheless, it suffers from low sample efficiency due to the potentially noisy returns and the usage of the final return value as the sole learning signal. Moreover, it is not effective in hard exploration tasks. Mania et al. (2018) improve ES by using only the most promising directions for gradient estimation.
**Policy Search with Bandits.** Fox and Rolph (1973) was one of the first works to utilize multi-arm bandits for policy search over a countable stationary policy set - a core approach for follow-up work (Burnetas and Katehakis, 1997; Agrawal et al., 1988). Nevertheless, the concept was set aside due to the difficulty of scaling it to large environments.
As an alternative, neural linear bandits (Riquelme et al., 2018; Xu et al., 2020; Nabati et al., 2021) train a neural network policy while interacting with the environment, using a chosen linear bandit method, and are closely related to the neural-bandits literature (Zhou et al., 2020; Kassraie and Krause, 2022). In contrast to this line of work, our work maps entire policy functions into a linear space, where linear bandit approaches can take effect. This induces an exploration strategy in policy space, as opposed to locally, in action space.
**Representation Learning.** Learning a compact and useful representation of states (Laskin et al., 2020; Schwartz et al., 2019; Tennenholtz and Mannor, 2019; Chandak et al., 2019), rewards (Barreto et al., 2017; Nair et al., 2018; Toro Icarte et al., 2019), and policies (Hausman et al., 2018; Eysenbach et al., 2018), has been at the core of a vast array of research. Such representations can be used to improve agents' performance by utilizing the structure of an environment more efficiently. Policy representation has been the focus of recent studies, including the work by Tang et al. (2022), which, similar to our approach, utilizes policy representation to learn a generalized value function. They demonstrate that the generalized value function can generalize across policies and improve value estimation for actor-critic algorithms, given certain conditions. In another study, Li et al. (2022) enhance the stability and efficiency of Evolutionary Reinforcement Learning (ERL) (Khadka and Tumer, 2018) by adopting a linear policy representation with a shared state representation between the evolution and RL components. In our research, we view the representation problem as an alternative solution to the exploration-exploitation problem in RL. Although this shift does not necessarily simplify the problem, it transfers the challenge to a different domain, offering opportunities for the development of new methods.

Figure 6: MinAtar experiments during training.
## 7 Discussion and Future Work
We presented RepRL, a novel representation-driven framework for reinforcement learning. By optimizing the policy over a learned representation, we leveraged techniques from the contextual bandit literature to guide exploration and exploitation. We demonstrated the effectiveness of this framework through its application to evolutionary and policy gradient-based approaches, leading to significantly improved performance compared to traditional methods.
In this work, we suggested reframing the exploration-exploitation problem as a representation-exploitation problem. By embedding the policy network into a linear feature space, good policy representations enable optimal exploration. This framework provides a new perspective on reinforcement learning, highlighting the importance of policy representation in determining optimal exploration-exploitation strategies.
As future work, one can incorporate RepRL into more involved representation methods, including pretrained large Transformers (Devlin et al., 2018; Brown et al., 2020), which have shown great promise recently in various areas of machine learning. Another avenue for future research is the use of RepRL in scenarios where the policy is optimized in latent space using an inverse mapping (i.e., decoder), as well as more involved decision sets. Finally, while this work focused on linear bandit algorithms, future work may explore the use of general contextual bandit algorithms (e.g., SquareCB; Foster & Rakhlin, 2020), which are not restricted to linear representations.
## 8 Acknowledgments
This work was partially funded by the Israel Science Foundation under Contract 2199/20.
|
2302.14279 | Critical behavior of Ising model by preparing thermal state on quantum
computer | We simulate the critical behavior of the Ising model utilizing a thermal
state prepared using quantum computing techniques. The preparation of the
thermal state is based on the variational quantum imaginary time evolution
(QITE) algorithm. The initial state of QITE is prepared as a classical product
state, and we propose a systematic method to design the variational ansatz for
QITE. We calculate the specific heat and susceptibility of the long-range
interacting Ising model and observe indications of the Ising criticality on a
small lattice size. We find the results derived by the quantum algorithm are
well consistent with the ones from exact diagonalization, both in the
neighbourhood of the critical temperature and the low-temperature region. | Xiaoyang Wang, Xu Feng, Tobias Hartung, Karl Jansen, Paolo Stornati | 2023-02-28T03:29:19Z | http://arxiv.org/abs/2302.14279v2 | # Critical behavior of Ising model by preparing thermal state on quantum computer
###### Abstract
We simulate the critical behavior of the Ising model utilizing a thermal state prepared using quantum computing techniques. The preparation of the thermal state is based on the variational quantum imaginary time evolution (QITE) algorithm. The initial state of QITE is prepared as a classical product state, and we propose a systematic method to design the variational ansatz for QITE. We calculate the specific heat and susceptibility of the long-range interacting Ising model and observe indications of the Ising criticality on a small lattice size. We find the results derived by the quantum algorithm are well consistent with the ones from exact diagonalization, both in the neighbourhood of the critical temperature and the low-temperature region.
## I Introduction
With the development of quantum devices and quantum algorithms, it is possible to solve problems on quantum computers that are hard for classical ones. Quantum computers have already been successfully applied in many fields, including quantum chemistry, condensed matter physics, and lattice field theory; see references [1; 2; 3; 4; 5; 6; 7] for some examples. With the growing number of qubits and improved fidelities of quantum devices, more realistic physical models can be tackled, and the potential of quantum computers can be explored. As an example application, in this article, we prepare the thermal state of the Ising model with a quantum algorithm at various temperatures, including points close to the critical temperature and the low-temperature region. To demonstrate the feasibility of our approach, we compare the quantum simulation results of the chosen physical quantities with the results from classical simulations.
Numerous algorithms have been proposed to enable a quantum computer to prepare a thermal state. These include the quantum thermal dynamic method, where the target system is coupled with a bath at equilibrium [8], the variational quantum algorithm based on the thermofield double state [9; 10], as well as many quantum imaginary time evolution (QITE) algorithms such as the one utilizing the Hubbard-Stratonovich transformation [11], QITE based on a variational ansatz (QITE-ansatz) [12], and QITE based on measurement (QITE-measure) [13]. The scope of our research is to focus on the usage of noisy intermediate-scale quantum (NISQ) devices [14; 15]. Given the presence of quantum noise, it is necessary to minimize the depth of the quantum circuits. We utilize the QITE-ansatz algorithm to generate thermal states in our research, as it has a relatively shallow circuit depth in comparison to the other algorithms mentioned previously. In the QITE-ansatz algorithm, the imaginary time evolution is carried out on a prior parameterized quantum circuit, and the parameters are evolved variationally. Thus, the parameterized quantum circuit is usually called the variational ansatz. The variational ansatz is designed for ground state preparation in most references utilizing QITE-ansatz, such as [12; 16; 17]. Here, for thermal state preparation, we propose to construct a variational ansatz converted from the quantum circuits utilized in QITE-measure [13]. The circuit in QITE-measure can also carry out imaginary time evolution, but the circuit depth is quite large. The circuit depth can be much reduced by converting the circuit into a variational ansatz. For example, when simulating the Ising model, the quantum circuits in QITE-measure have \(\sim 100\) layers, while the variational ansatz circuits used in this work have fewer than 10 layers.
In this article, we study the long-range interacting Ising model. Long-range interaction between spins is introduced naturally in trapped-ion spin systems [18], and its dynamics can be simulated utilizing quantum simulation algorithms. The long-range interaction also leads to interesting physics such as confinement [19] and meson scattering [20]. Meanwhile, the long-range interaction modifies the effective dimension of the system, which impacts its critical behavior. Here, we calculate the specific heat of the long-range interacting Ising model near the critical point and in the low-temperature region.
This article is organized as follows. In section II, we introduce the long-range interacting Ising model and the measurement method of relevant physical quantities on a quantum computer. In section III, we discuss the process of thermal state preparation using the QITE-ansatz algorithm in detail, especially the method of variational ansatz design. In section IV, we present the numerical results and discuss the observed indications of the criticality. Finally, in section V, we summarize the techniques used in this article and discuss possible extensions for future work.
## II Long-range interacting Ising model
We consider the \(D=2\) dimensional Ising model on a square lattice \(\Lambda\) with long-range interactions. The Hamiltonian reads
\[H=-\sum_{i>j\in\Lambda}\frac{J}{r_{ij}^{\alpha}}Z_{i}Z_{j}-h\sum_{i}Z_{i}, \tag{1}\]
where \(Z_{i}\) is the Pauli-\(Z\) operator on the \(i\)th spin. \(J\) is the bare coupling strength, and \(\alpha\) denotes the range of the interaction. \(h\) denotes the strength of the longitudinal external field. The distance \(r_{ij}\) is defined by the Manhattan distance under the periodic boundary condition (PBC): assuming the position of spin \(i\) on the square lattice is represented by the integer vector \(\vec{r}^{i}=(r_{1}^{i},\ldots,r_{D}^{i})\) and the volume of the lattice is \(|\Lambda|=N_{1}\times\ldots\times N_{D}\), then
\[r_{ij}=\sum_{d=1}^{D}\min(|r_{d}^{i}-r_{d}^{j}|,N_{d}-|r_{d}^{i}-r_{d}^{j}|). \tag{2}\]
This Hamiltonian is a generalization of the interaction part of the Hamiltonian introduced in reference [19]. It reduces to the original nearest-neighbor Ising model (NNIM) in the limit \(\alpha\rightarrow\infty\).
The state of the Ising system at a finite temperature is described by the density operator. Its equilibrium state is the Gibbs state of which the density operator reads
\[\rho=\frac{1}{Z_{\beta}}e^{-\beta H},\quad Z_{\beta}\equiv\text{tr}\big{(}e^{ -\beta H}\big{)}. \tag{3}\]
Here \(\beta\) is the inverse temperature \(\beta\equiv 1/(k_{B}T)\) and we define \(K\equiv J\beta\) for later convenience. For an arbitrary observable \(O\), its expectation value of the thermal state is given by
\[\langle O\rangle\equiv\text{tr}(\rho O). \tag{4}\]
This article targets the case where the expectation values are evaluated for different \(K\) and a zero external field \(h=0\).
Now we exhibit observables to compute the Ising model's specific heat and susceptibility. Analyzing these measures allows us to examine the critical behavior of the Ising model. The specific heat is defined by the changing rate of the internal energy in a unit volume when varying the temperature \(T\). It can be evaluated by the energy-fluctuation relation:
\[C_{v}\equiv\frac{1}{|\Lambda|}\frac{\partial\langle H\rangle}{\partial T}= \frac{1}{|\Lambda|T^{2}}\left[\langle H^{2}\rangle-\langle H\rangle^{2}\right], \tag{5}\]
where the last expression can be derived by taking the Gibbs state Eq. (3) to evaluate the expectation values.
Similarly, the susceptibility is defined by the changing rate of the magnetization in a unit volume with respect to the external field strength \(h\) (evaluated at \(h=0\)). The total magnetization is given by
\[\langle M\rangle\equiv\langle Z_{tot}\rangle, \tag{6}\]
where \(Z_{tot}\equiv\sum_{i}Z_{i}\), i.e., the sum of all the spins in the lattice. Then the susceptibility can be evaluated according to the susceptibility-fluctuation relation
\[\chi\equiv\frac{1}{|\Lambda|}\frac{\partial\langle M\rangle}{\partial h} \Big{|}_{h=0}=\frac{1}{|\Lambda|T}\left[\langle Z_{tot}^{2}\rangle-\langle Z _{tot}\rangle^{2}\right]. \tag{7}\]
In summary, evaluating the specific heat and susceptibility is equivalent to calculating the expectation values of the corresponding operators. The operators to be measured include
\[H^{2},H,Z_{tot}^{2},Z_{tot} \tag{8}\]
which can all be reduced to linear combinations of Pauli operators. To evaluate the expectation values of the above operators on quantum computers, we can generate the thermal state utilizing a quantum algorithm and then evaluate the expectation values of the Pauli operators. Notice that for the above operators, the elementary Pauli operators can be written as products of Pauli-\(Z\) operators, so they commute and can be measured simultaneously on the quantum computer. Combined with the fact that the Hamiltonian in Eq. (1) consists of only Pauli-\(Z\) operators, we can simplify the initial state to be evolved on the quantum computer. This enables us to simulate the system on a larger lattice. For general models, such as the Ising model with a transversal field, the simplification does not hold. More details can be found in section III.1.
## III Thermal state preparation with quantum imaginary time evolution
One can use the quantum imaginary time evolution (QITE) algorithm to prepare a thermal state, as demonstrated in previous studies [12; 13]. This section provides an explanation of the QITE-ansatz algorithm. The QITE-ansatz algorithm is designed to evolve an \(N_{q}\)-qubit quantum state \(\ket{\psi(0)}\) to
\[\ket{\psi(\tau)}=\frac{e^{-\tau H}\ket{\psi(0)}}{\sqrt{\bra{\psi(0)}e^{-2\tau H }\ket{\psi(0)}}}, \tag{9}\]
where \(\tau\) is a real number denoting imaginary time. The denominator is a normalization factor that keeps the evolved state normalized. Assuming we have a quantum circuit that carries out this normalized evolution, then by choosing the initial state to be the maximally mixed state (defined
as the density operator) \(\ket{\psi(0)}\bra{\psi(0)}=\mathbf{I}/\mathbf{d}\)[21] (\(\mathbf{I}\) is the identity operator of the \(\mathbf{d}\equiv 2^{N_{q}}\) dimensional Hilbert space), one finds the final state is the thermal state with inverse temperature \(\beta=2\tau\)
\[\ket{\psi(\tau)}\bra{\psi(\tau)}=\frac{1}{Z_{2\tau}}e^{-2\tau H},\quad Z_{2\tau }\equiv\operatorname{tr}\bigl{(}e^{-2\tau H}\bigr{)}. \tag{10}\]
The QITE-ansatz algorithm was proposed in references [12; 22]. This technique was originally used to project out the ground state of the Hamiltonian according to Eq. (9). It has been successfully applied in the fields of quantum chemistry, quantum field theory, and machine learning; see e.g. [1; 16; 17].
Following [22], we first review the QITE-ansatz algorithm within the density operator formalism. The density operator of Eq. (9) reads
\[\rho(\tau)=\frac{e^{-\tau H}\ket{\psi(0)}\bra{\psi(0)}e^{-\tau H}}{\bra{\psi( 0)}e^{-2\tau H}\ket{\psi(0)}}. \tag{11}\]
The mathematical description of a quantum state with the density operator is equivalent to that with the pure state. In particular, the expectation values of any observable \(O\) coincide
\[\operatorname{tr}(\rho(\tau)O)=\bra{\psi(\tau)}O\ket{\psi(\tau)}. \tag{12}\]
The imaginary time evolution of the density operator follows the von-Neumann equation [22]
\[\frac{\mathrm{d}\rho(\tau)}{\mathrm{d}\tau}=\mathcal{L}[\rho(\tau)], \tag{13}\]
where \(\mathcal{L}\) is the Liouville operator defined by \(\mathcal{L}(\rho)=-\{H,\rho\}+2\operatorname{tr}(\rho H)\rho\) with anti-commutator \(\{H,\rho\}=H\rho+\rho H\). As the Hilbert space of the whole \(N_{q}\) qubits is hard to explore with a quantum circuit, we utilize a density operator \(\hat{\rho}(\tau)=\ket{\phi(\tau)}\bra{\phi(\tau)}\) to approximate the target density operator \(\rho(\tau)\). The approximation \(\hat{\rho}(\tau)\) satisfies the following requirements: (1) It has the same initial state \(\hat{\rho}(0)=\rho(0)=\ket{\psi(0)}\bra{\psi(0)}\). (2) The evolution of \(\hat{\rho}(\tau)\) approximately satisfies the von-Neumann equation \(\mathrm{d}\hat{\rho}(\tau)/\mathrm{d}\tau-\mathcal{L}[\hat{\rho}(\tau)]=0\).
The approximation \(\hat{\rho}(\tau)\) is generated with a variational ansatz \(\ket{\phi(\vec{\theta}(\tau))}=U(\vec{\theta}(\tau))\ket{\psi(0)}\), where \(\vec{\theta}\) is a real variational parameter vector with \(N\) components. \(U(\vec{\theta})=U_{N}(\theta_{N})\dots U_{1}(\theta_{1})\) is a series of parameterised unitary quantum gates. According to the first requirement mentioned above, \(U(\vec{\theta}(0))\) should be the identity operator \(\mathbf{I}\). With the variational ansatz, the evolution of the quantum state is converted to the evolution of the variational parameters \(\vec{\theta}\). However, as the variational ansatz cannot explore the whole Hilbert space, \(\ket{\phi(\vec{\theta}(\tau))}\) can not fulfill the von-Neumann equation exactly. Instead, we demand that the von-Neumann equation is fulfilled sufficiently well according to the second requirement. The violation of the von-Neumann equation is measured by the McLachlan distance \(L^{2}\), which is defined by
\[L^{2}\equiv\left|\left|\frac{\mathrm{d}\hat{\rho}(\tau)}{\mathrm{d}\tau}- \mathcal{L}[\hat{\rho}(\tau)]\right|\right|^{2}, \tag{14}\]
where \(||A||^{2}=\operatorname{tr}(A^{\dagger}A)\) represents Frobenius norm. According to the differential chain rule, we have
\[L^{2}=\left|\left|\sum_{\mu}\frac{\partial\hat{\rho}(\theta)}{\partial\theta_ {\mu}}\dot{\theta}_{\mu}-\mathcal{L}(\hat{\rho})\right|\right|^{2}. \tag{15}\]
So that the McLachlan distance is a quadratic function of the time derivatives of the variational parameters \(\dot{\theta}_{\mu}\equiv\partial\theta_{\mu}/\partial\tau\). \(L^{2}\) can be minimized with the variational principle, which leads to
\[\delta L^{2}=0\Rightarrow\frac{\partial L^{2}}{\partial\theta_{\mu}}=\sum_{ \nu}M_{\mu\nu}\dot{\theta}_{\nu}-V_{\mu}=0, \tag{16}\]
where
\[M_{\mu\nu} \equiv 2\operatorname{Re}\left[\frac{\partial\bra{\phi(\vec{ \theta})}}{\partial\theta_{\mu}}\frac{\partial\ket{\phi(\vec{\theta})}}{ \partial\theta_{\nu}}\right], \tag{17}\] \[V_{\mu} \equiv -2\operatorname{Re}\left[\frac{\partial\bra{\phi(\vec{\theta})}}{ \partial\theta_{\mu}}H\ket{\phi(\vec{\theta})}\right].\]
Here \(M\) is a \(N\times N\) matrix while \(V\) is a \(N\) dimensional vector. Following [12; 23], one can construct some specific quantum circuits to measure \(M\) and \(V\), which cost \(\mathcal{O}(N^{2})\) quantum device calls and one additional ancilla qubit.
After deriving \(M\) and \(V\), we can construct the following linear equations
\[\sum_{\nu}M_{\mu\nu}\dot{\theta}_{\nu}=V_{\mu}. \tag{18}\]
Then one can solve for the time derivative of the variational parameters \(\dot{\theta}_{\nu}|_{\tau=\tau_{0}}\) at a given imaginary time \(\tau_{0}\), utilizing methods such as pseudo-inverse [12]. The variational parameters at the next time slice \(\tau_{0}+\delta\tau\) are given according to the Euler method
\[\vec{\theta}(\tau_{0}+\delta\tau)\simeq\vec{\theta}(\tau_{0})+\dot{\vec{\theta}} \delta\tau, \tag{19}\]
where \(\dot{\theta}_{\nu}=\sum_{\mu}M_{\nu\mu}^{-1}V_{\mu}\).
The computational complexity of the QITE-ansatz grows polynomially with the number of variational parameters \(N\). In each time slice, the time complexity of solving the linear equations grows polynomially with \(N\), while the matrix \(M\) and vector \(V\) can also be evaluated using quantum computers within polynomial time. Thus, as long as \(N\) grows polynomially with the system size \(N_{q}\), the time complexity of the QITE-ansatz grows polynomially with \(N_{q}\), and the method can be extended to large-scale quantum systems. The following subsections will introduce how to prepare the maximally mixed state and choose an appropriate variational ansatz.
### Initial state preparation
Here we introduce how to prepare the initial state as the maximally mixed state \(\mathbf{I}/\mathbf{d}\). Quantum circuits are suitable for generating pure states. We need some strategies to generate mixed states utilizing pure states. As discussed in [24], there are two strategies: the ancilla pair state (APS) and the classical product state (CPS). Both strategies can be used to prepare the maximally mixed state \(\mathbf{I}/\mathbf{d}\). However, preparing \(\mathbf{I}/\mathbf{d}\) with APS doubles the number of qubits to \(2N_{q}\) [17]. It also introduces some complexities in variational ansatz design to evolve the pair state.
Instead, we can prepare the maximally mixed state via CPS, which reduces the required qubits to \(N_{q}\). The maximally mixed state \(\mathbf{I}/\mathbf{d}\) describes that the probabilities of sampling every basis vector from a given orthogonal basis are the same, where each basis vector is a pure state. As the maximally mixed state is unitarily invariant \(U(\mathbf{I}/\mathbf{d})U^{-1}=\mathbf{I}/\mathbf{d}\), the orthogonal basis can be chosen arbitrarily. To generate the thermal state, it is recommended in [24] to use a basis formed by classical product states, such as \(\{\ket{+},\ket{-}\}^{\otimes N_{q}}\), where \(\{\cdot\}^{\otimes N_{q}}\) represents a set generated by the \(N_{q}\) times tensor product of each element in \(\{\cdot\}\). For example,
\[\{\ket{+},\ket{-}\}^{\otimes 2}=\{\ket{++},\ket{+-},\ket{-+},\ket{--}\}. \tag{20}\]
Here \(\ket{+}\),\(\ket{-}\) represent the eigenvectors of the Pauli-\(X\) operator
\[X\ket{+}=\ket{+},\quad X\ket{-}=-\ket{-}. \tag{21}\]
If we use the classical product state as the initial state, the thermal expectation value \(\left\langle O\right\rangle\) can not be measured straightforwardly due to the normalization factor in Eq. (9). Assume that we take the orthogonal basis as \(\{\ket{i}\}\). Evolving all basis vectors \(\ket{i}\) for imaginary time \(\tau\), one gets the expectation values of an observable \(O\), which read
\[\left\langle i(\tau)\right|O\ket{i(\tau)}=\frac{\bra{i}e^{-\tau H}Oe^{-\tau H }\ket{i}}{\bra{i}e^{-2\tau H}\ket{i}}. \tag{22}\]
Usually, the denominators would be different for different basis vectors \(\ket{i}\). To derive the thermal expectation value \(\left\langle O\right\rangle\) in Eq. (4), we should multiply the above expectation values with coefficients \(\{p_{i}\}\)
\[\left\langle O\right\rangle=\sum_{i}p_{i}\left\langle i(\tau)\right|O\ket{i( \tau)}, \tag{23}\]
where \(p_{i}\) is defined by
\[p_{i}\equiv\frac{\bra{i}e^{-2\tau H}\ket{i}}{Z_{2\tau}}. \tag{24}\]
Here \(\{p_{i}\}\) can be treated as a probability distribution, as they are all positive and satisfy the normalization condition \(\sum_{i}p_{i}=1\). To evaluate the thermal expectation value of the operator \(O\), as mentioned in [13], we do not need to calculate all the \(\{p_{i}\}\) (which would be impossible, as the number of \(p_{i}\) grows exponentially with the number of qubits). With the minimally entangled typical thermal state (METTS) algorithm proposed by Stoudenmire and White [25], one can sample \(\{\ket{i}\}\) according to the distribution \(\{p_{i}\}\). The thermal expectation value \(\left\langle O\right\rangle\) is the average of the expectation of \(O\) with the time-evolved sampled vectors. In conclusion, though imaginary time evolution with CPS as initial states requires a number of qubits equal to the system size, one has to evolve different initial states \(\ket{i}\) to acquire statistics. On the other hand, imaginary time evolution with APS as an initial state doubles the number of qubits while evolving only one initial state.
However, the situation gets simplified when we consider the classical Ising model and the observables in Eq. (8), which consist of Pauli-\(Z\) operators. The observables can be generally expressed as
\[O=\sum_{m}h_{m}\tilde{Z}_{m}. \tag{25}\]
Here \(\tilde{Z}_{m}\) represents the tensor product of \(Z\) operators at some sites and identity operators at the others, such as \(\tilde{Z}_{m}=Z_{N_{q}-1}\dots I_{1}Z_{0}\). In Appendix A, we prove that the thermal expectation value of \(O\) can be calculated according to
\[\left\langle O\right\rangle=\sum_{m}h_{m}\left\langle\tilde{+}(\tau)\right| \tilde{Z}_{m}\left|\tilde{+}(\tau)\right\rangle, \tag{26}\]
where \(\left|\tilde{+}(\tau)\right\rangle\) is the imaginary-time-evolved state according to Eq. (9). The state is initialized as \(\left|\tilde{+}(0)\right\rangle=\left|\tilde{+}\right\rangle\), where \(\left|\tilde{+}\right\rangle\equiv\left|+\right\rangle^{\otimes N_{q}}\) is the \(N_{q}\)-fold tensor product of \(\ket{+}\) in Eq. (21). Thus for the Ising model, we only need to calculate the imaginary time evolution with the initial state \(\left|\tilde{+}\right\rangle\).
In this work, we use \(\left|\tilde{+}\right\rangle\) as the initial state to present our results. For general models, such as the Ising model with a transversal field, the above simplification does not hold. We need to sample the classical product states using the METTS algorithm or utilize the ancilla pair state.
### Variational ansatz design
Choosing a proper variational ansatz is a cornerstone for the success of the QITE-ansatz algorithm [15]. In most literature on QITE-ansatz, the variational ansatz is designed to prepare the ground state of a Hamiltonian, and it is suitable to evolve some specific initial states, such as the unitary coupled cluster ansatz evolving Hartree-Fock states [1]. Focusing on thermal state preparation and the initial state introduced in the previous section, we propose to construct a variational ansatz
converted from quantum circuits utilized in the QITE-measure algorithm proposed by Motta et. al. [13].
We briefly introduce how to construct the quantum circuits used in the QITE-measure algorithm. The goal of QITE-measure is also evolving an initial state \(\ket{\psi(0)}\) according to Eq. (9). Consider evolving the state \(\ket{\psi(\tau_{0})}\) for a small time slice \(\Delta\tau\)
\[\ket{\psi(\tau_{0}+\Delta\tau)}=\frac{e^{-\Delta\tau H}\ket{\psi(\tau_{0})}}{\sqrt{\bra{\psi(\tau_{0})}e^{-2\Delta\tau H}\ket{\psi(\tau_{0})}}}. \tag{27}\]
As this transformation maps one normalized state to another, we can always find a Hermitian operator \(\hat{A}(\tau_{0})\) such that
\[\ket{\psi(\tau_{0}+\Delta\tau)}=e^{-i\Delta\tau\hat{A}(\tau_{0})}\ket{\psi(\tau _{0})}, \tag{28}\]
and \(\hat{A}(\tau_{0})\) can be expanded in a complete Pauli basis
\[\hat{A}(\tau_{0})=\sum_{i_{1}\dots i_{N_{q}}}a_{i_{1}\dots i_{N_{q}}}^{(\tau_{ 0})}\sigma_{i_{1}}\dots\sigma_{i_{N_{q}}}\equiv\sum_{I}a_{I}^{(\tau_{0})} \tilde{\sigma}_{I}, \tag{29}\]
where the expansion coefficients \(a_{i_{1}\dots i_{N_{q}}}^{(\tau_{0})}\) are real due to the Hermiticity of \(\hat{A}(\tau_{0})\), and \(\sigma_{i_{j}}=I,X,Y,Z\), corresponding to \(i_{j}=0,1,2,3\), is the single-qubit Pauli operator on site \(j\); we call the tensor product of single-qubit Pauli operators, \(\tilde{\sigma}_{I}\), a Pauli string. For this reason, the single-qubit Pauli operator is sometimes called a Pauli letter [26]. For each imaginary time \(\tau_{0}\), one can calculate all the expansion coefficients \(a_{I}^{(\tau_{0})}\) by evaluating the expectation values of some observables with respect to the quantum state \(\ket{\psi(\tau_{0})}\). The observables are compositions of Pauli strings and the Hamiltonian (see more details in [13]). Notice that the transformation in Eq. (28) can be approximated by
\[e^{-i\Delta\tau\sum_{I}a_{I}^{(\tau_{0})}\tilde{\sigma}_{I}}=\prod_{I}e^{-i \Delta\tau a_{I}^{(\tau_{0})}\tilde{\sigma}_{I}}+\mathcal{O}(\Delta\tau^{2}), \tag{30}\]
where the product consists of several Pauli exponentials of the form \(e^{-i\theta\tilde{\sigma}_{I}}\), and the Pauli exponential can be realized with quantum gates in a standard way [27]. Thus, the whole quantum circuit used in the QITE-measure can be constructed using several Pauli exponentials for each time slice. At the last time slice, the accumulated circuit depth is proportional to the final imaginary time \(\tau\).
Notice that if a system has \(N_{q}\) qubits, the total number of Pauli strings on these qubits is \(4^{N_{q}}\). Thus the number of Pauli exponentials required for evolving each time slice seems exponential as a function of system size according to Eq. (29). However, the situation gets simplified when the Hamiltonian \(H\) consists of some local interaction terms
\[H=\sum_{m}H_{m}, \tag{31}\]
where each \(H_{m}\) acts on a local set of qubits, and the number of \(H_{m}\) is polynomial as a function of system size. For example, \(H_{m}\propto Z_{i}Z_{j}\) and the number of \(H_{m}\) is \(\mathcal{O}(N_{q}^{2})\) in case of long-range interacting Ising model. Though the local terms \(H_{m}\) may not commute, the imaginary time evolution \(e^{-\Delta\tau H}\) can be decomposed by
\[e^{-\Delta\tau H}=\prod_{m}e^{-\Delta\tau H_{m}}+\mathcal{O}(\Delta\tau^{2}). \tag{32}\]
Then the previous steps in QITE-measure can be implemented for each \(e^{-\Delta\tau H_{m}}\). As shown in [13], when the Hamiltonian consists of local terms and the correlation length of the system is finite, the expansion in Eq. (29) for each \(H_{m}\) can be implemented with Pauli strings on a support only constantly larger than the support of \(H_{m}\) (the support of a Pauli string is the set of qubits on which its Pauli letters are not the identity). The correlation length of a system is finite when its Hamiltonian is outside the critical region. Thus the support of the Pauli strings has no dependence on the system size, and the total number of Pauli exponentials \(e^{-i\theta\tilde{\sigma}_{I}}\) is a polynomial function of the system size, at least when the Hamiltonian is sufficiently far away from the critical point.
Compared with the QITE-ansatz, the precision of the QITE-measure is not limited by the variational ansatz. However, the circuit depth grows linearly with the evolution time \(\tau\). Thus this algorithm would be very sensitive to coherent or incoherent noise in real quantum devices and can only be applied to small spin systems [28].
Quantum circuits constructed in QITE-measure can be naturally converted into a variational ansatz with the following steps: (1) using all the necessary Pauli exponentials at one time slice as one layer of the variational ansatz; (2) sequentially repeating the layer several times in the quantum circuit; (3) converting all the expansion coefficients \(a_{I}^{(\tau_{0})}\) into undetermined parameters, which are initially zero and to be evolved according to the QITE-ansatz algorithm. The number of repetitions of this layer is called the depth of the variational ansatz, also referred to as the number of layers.
The behavior of this variational ansatz can be analyzed with the help of QITE-measure. Assume the variational ansatz in QITE-ansatz has the same circuit layers as the quantum circuits in QITE-measure. Because the states prepared in QITE-measure can all be explored by the variational ansatz, one can expect QITE-ansatz using this circuit to perform at least as well as QITE-measure. The systematic error of the QITE-measure circuit is of the first-order Trotter type, i.e., \(\text{error}\sim\mathcal{O}(\Delta\tau)\)[13]. By equating the longest circuit depth used in QITE-measure with the depth of the variational ansatz, it can be deduced that in the worst case the variational ansatz leads to an error of \(\mathcal{O}(1/L)\), where \(L\) is the number of layers.
In the numerical simulations, we find that the circuit depth required in QITE-ansatz is much smaller than that required in QITE-measure. For example, in our numerical simulation of the Ising model, if the imaginary time of the final state is \(\tau=0.5\) with step size \(\Delta\tau=0.002\), QITE-measure requires \(\tau/\Delta\tau=250\) layers. In contrast, to reach sufficiently good precision using the variational ansatz, we find the number of layers required is at most \(L=N_{d}\) for the 2-D nearest neighbor Ising model, where \(N_{d}\) is the side length of the Ising lattice. More details on the number of circuit layers required are given in Appendix B.
The variational ansatz can be simplified further by exploiting special structure in the Hamiltonian and the initial state. In the numerical simulations, we notice that some of the variational parameters are always zero during the whole evolution, corresponding to the same set of Pauli strings across all layers. We call the Pauli strings in this set _irrelevant_; the other Pauli strings, corresponding to non-zero variational parameters, are _relevant_. As the irrelevant Pauli exponentials are the identity, they can be removed a priori when constructing the variational ansatz. These irrelevant Pauli strings can be identified from the symmetry and special structure of the Hamiltonian and the initial state. For example, if all the entries of the Hamiltonian and the initial state are real, then the corresponding unitary operator \(e^{-i\Delta\tau\hat{A}}\) must also be real, so all Pauli strings with an even number of Pauli-\(Y\) letters are irrelevant.
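This symmetry-based pruning amounts to a one-line filter over the \(4^{N_{q}}\) Pauli strings; a small sketch (the function name is ours):

```python
from itertools import product

def real_sector_strings(num_qubits):
    """Keep only Pauli strings with an odd number of Y letters: these have
    purely imaginary matrix entries, so e^{-i*theta*string} is real, as
    required when the Hamiltonian and initial state are both real."""
    return ["".join(p) for p in product("IXYZ", repeat=num_qubits)
            if "".join(p).count("Y") % 2 == 1]

print(real_sector_strings(2))
# ['IY', 'XY', 'YI', 'YX', 'YZ', 'ZY'], the 6 strings of the two-qubit example
```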
We demonstrate the above construction of the variational ansatz using the example of a two-qubit (\(N_{q}=2\)) Ising system. There are \(4^{2}=16\) Pauli strings on the two-qubit system. Assume we have the initial state \(\left|++\right\rangle\) and the system Hamiltonian \(H=-Z_{1}Z_{0}\). Because all the entries of the Hamiltonian and the initial state are real, eliminating Pauli strings with an even number of Pauli-\(Y\) letters leaves 6 Pauli strings: \(I_{1}Y_{0},X_{1}Y_{0},Y_{1}I_{0},Y_{1}X_{0},Z_{1}Y_{0},Y_{1}Z_{0}\). Evolving one layer with these 6 Pauli strings using QITE-ansatz, we further find that 4 of them are irrelevant. This leaves only two relevant Pauli strings for the imaginary time evolution
\[Z_{1}Y_{0},Y_{1}Z_{0}. \tag{33}\]
One can verify that
\[\begin{split} e^{-\Delta\tau H}\left|++\right\rangle&=e^{\Delta\tau Z_{1}Z_{0}}\left|++\right\rangle\\ &\propto e^{-ia_{1}^{(0)}Z_{1}Y_{0}}e^{-ia_{2}^{(0)}Y_{1}Z_{0}}\left|++\right\rangle,\end{split} \tag{34}\]
with expansion coefficients
\[a_{1}^{(0)}=a_{2}^{(0)}=\frac{1}{2}\tan^{-1}(\tanh\Delta\tau). \tag{35}\]
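Eqs. (34) and (35) can be sanity-checked with a few lines of linear algebra. The sketch below scans the common rotation angle and confirms that the optimal magnitude matches \(\frac{1}{2}\tan^{-1}(\tanh\Delta\tau)\); we compare magnitudes only, since the sign of the angle depends on qubit-ordering and phase conventions:

```python
import numpy as np
from scipy.linalg import expm

Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
plus2 = np.ones(4) / 2.0  # the state |++>

dtau = 0.1
target = expm(dtau * np.kron(Z, Z)) @ plus2
target /= np.linalg.norm(target)

# Fidelity of e^{-ia Z⊗Y} e^{-ia Y⊗Z} |++> against the target, scanned over a
angles = np.linspace(-np.pi / 4, np.pi / 4, 4001)
fids = [np.abs(np.vdot(target, expm(-1j * a * np.kron(Z, Y))
                       @ expm(-1j * a * np.kron(Y, Z)) @ plus2))
        for a in angles]
best = angles[int(np.argmax(fids))]
print(abs(best), 0.5 * np.arctan(np.tanh(dtau)))  # magnitudes agree, Eq. (35)
```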
Figure 1: Example of circuits for the imaginary time evolution of Ising systems. The basic building blocks of the circuits are defined as \(U_{ZY}(\theta)\equiv e^{-i\theta ZY},U_{YZ}(\theta)\equiv e^{-i\theta YZ}\). (**a**) The quantum circuit in the QITE-measure algorithm to carry out the imaginary time evolution \(e^{\tau Z_{1}Z_{0}}\left|++\right\rangle\). \(\Delta\tau\) is the length of one time slice. (**b**) The variational ansatz converted from the QITE-measure circuit. \(L\) is the number of circuit layers, and \(\theta_{i},\theta_{i}^{\prime},i\in[1,L]\) are free variational parameters. (**c**) The variational ansatz for the nearest neighbor 1-D Ising chain under periodic boundary conditions. Each layer consists of one layer of ZY-Pauli exponentials and one layer of YZ-Pauli exponentials, as shown in the dashed box. The figure shows the case of two layers, and measurements are denoted by black boxes at the end of the circuit.
In the QITE-measure algorithm, evolving the initial state to an arbitrary time \(\tau\) requires the quantum circuit shown in figure 1**a**, which has \(\tau/\Delta\tau\) layers. The variational ansatz with \(L\) layers for the two-qubit Ising system is constructed as shown in figure 1**b**. In this circuit, \(\{\theta_{1},\theta^{\prime}_{1}\dots\theta_{L},\theta^{\prime}_{L}\}\) are all variational parameters, initialized to zero and evolved according to the QITE-ansatz algorithm.
## IV Numerical results
In this section, we apply the previous variational ansatz design procedure to the long-range interacting Ising model, where we will prepare CPS as the initial state. Equipped with the thermal state, we can calculate the specific heat \(C_{v}\) and susceptibility \(\chi\) as a function of \(K\equiv J\beta\). Our numerical simulations are carried out on the Qiskit noiseless statevector quantum simulator [29].
The initial state and variational ansatz are chosen as described in section III. To calculate the thermal expectation values of the Ising model, we only need to calculate the imaginary time evolution of the product state \(|\tilde{+}\rangle\). With this initial state, and for every local interaction term \(Z_{i}Z_{j}(\forall i,j\in\Lambda)\) in the Ising model, we have the corresponding relevant Pauli strings
\[Z_{i}Y_{j},Y_{i}Z_{j}. \tag{36}\]
Then we can construct the variational ansatz for the target Ising Hamiltonian. An example of a variational ansatz for the nearest-neighbour Ising chain under periodic boundary conditions is shown in figure 1**c**. Each layer of the variational ansatz consists of one layer of ZY-Pauli exponentials and one layer of YZ-Pauli exponentials, as shown in the dashed box. The figure shows the case of two layers, and we will use two layers in the following numerical simulations unless specified otherwise. Note that here we assume the imaginary time evolution of each local interaction term \(e^{\tau Z_{i}Z_{j}}\) can be realized with the Pauli exponentials \(e^{-i\theta Z_{i}Y_{j}}e^{-i\theta^{\prime}Y_{i}Z_{j}}\), which have the same support as \(Z_{i}Z_{j}\). These two Pauli exponentials suffice in the 2-qubit case, as indicated by Eq. (34), but they do not when the system size is large or when the system approaches the critical point, as explained in the previous section. This means the expressivity of this variational ansatz is not sufficient to carry out the whole imaginary time evolution \(e^{-\tau H}\) exactly. The limited expressivity leads to systematic errors, which will affect the numerical results.
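Generating the set of relevant strings for a chain is a short loop; the sketch below writes the labels in Qiskit's little-endian convention (an implementation detail we assume, since Eq. (36) itself is convention-free):

```python
def ising_chain_strings(num_qubits):
    """Relevant Pauli strings Z_iY_j and Y_iZ_j of Eq. (36) for every
    nearest-neighbour bond (i, j) of a 1-D chain with periodic boundary
    conditions; labels are little-endian (rightmost letter is qubit 0)."""
    labels = []
    for i in range(num_qubits):
        j = (i + 1) % num_qubits
        for a, b in (("Z", "Y"), ("Y", "Z")):
            s = ["I"] * num_qubits
            s[i], s[j] = a, b
            labels.append("".join(reversed(s)))
    return labels

print(ising_chain_strings(4))  # 8 strings, one ZY and one YZ per bond
```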
Figure 2: Specific heat (left column) and susceptibility (right column) as a function of \(K\) in the 2-D (upper row) and 3-D (lower row) nearest-neighbour Ising model (\(\alpha=\infty\)). ED represents results from exact diagonalization. We see that the results from the noiseless quantum simulation are close to exact diagonalization, especially in the region far from the critical point. The black dashed lines in the four panels are the exact critical temperatures \(K_{c}\) of the corresponding dimension in the infinite volume limit. The solid grey line in the upper left panel shows the movement of the peak as the system size is enlarged, with a fit inspired by finite size scaling (FSS). As the lattice size increases, the peaks of the specific heat and the leaps of the susceptibility become more pronounced, and the transition points approach the exact critical point.

First, we present the numerical results of the nearest neighbor Ising model (NNIM), i.e., taking the limit \(\alpha\to\infty\) in Eq. (1). With the nearest-neighbor interaction, there are \(N=2D|\Lambda|L\) parameters in the variational ansatz. In two and three-dimensional NNIMs, there is a second-order phase transition in the infinite volume limit, where the critical points are \(K_{c}=\ln\bigl{(}1+\sqrt{2}\bigr{)}/2\approx 0.441\)[30] and \(0.222\)[31] for dimension \(D=2,3\), respectively. The specific heat and susceptibility hence diverge near the critical point in the infinite volume limit. Figure 2 shows the specific heat and susceptibility for various \(K\) values obtained via QITE-ansatz. The lattice size is \(2\times 2,3\times 3,4\times 4\) for the 2-D system, marked by triangular-down, circle and triangular-up, respectively, and \(2\times 2\times 2,3\times 3\times 2\) for the 3-D system, marked by triangular-down and circle, respectively. In the evolution of the variational parameters, we use the Euler method with step length \(\delta\tau=0.002\) as in Eq. (19), chosen such that further shrinking the step length has no impact on the numerical results (we use this step length throughout the following simulations). We see that the QITE results converge well to the results from exact diagonalization (ED) when the system size is small, for both 2-D and 3-D systems. For the \(4\times 4\) and \(3\times 3\times 2\) lattices, the specific heat curves deviate from the ED curves near the critical point, which results from the limited expressivity of the variational ansatz. The expressivity can be improved by increasing the number of ansatz layers and by using longer Pauli strings for each local interaction term beyond \(Z_{i}Y_{j},Y_{i}Z_{j}\). More detailed error analyses are given in Appendix B.
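The Euler update of the variational parameters mentioned above is a single line of linear algebra. The following sketch assumes the usual variational formulation in which the matrix \(M\) and vector \(V\) of Eq. (17) determine \(\dot{\theta}\) through a linear system; the exact sign and normalization conventions are fixed by Eq. (17), which we do not restate here:

```python
import numpy as np

def euler_step(theta, M, V, dtau=0.002):
    """One explicit Euler update of the variational parameters (Eq. (19)),
    with the derivative obtained from the linear system M @ theta_dot = V.
    Least squares is used because M is often singular or ill-conditioned."""
    theta_dot, *_ = np.linalg.lstsq(M, V, rcond=None)
    return theta + dtau * theta_dot
```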
Indications of the Ising criticality can be observed in figure 2. The critical temperatures of the 2-D and 3-D systems in the infinite volume limit are denoted by the black dashed lines. Near the critical points, the values of the specific heat and susceptibility increase, and there are peaks in the specific heat as a function of \(K\). For the 2-D NNIM with volume \(N_{d}\times N_{d}\), we denote the position of the peak as \(K_{c}(N_{d})\). For larger system sizes, \(K_{c}(N_{d})\) moves slowly towards the infinite volume critical point \(K_{c}\). To guide the eye along this movement, we draw the grey solid line in the upper left panel of figure 2. The analytic expression of the grey solid line is inspired by finite size scaling [32].
Figure 3 presents the behavior of the specific heat for the 2-D long-range interacting Ising model with finite \(\alpha\). Compared with the nearest neighbor interaction, the long-range model introduces more \(Z_{i}Z_{j}\) interactions and requires more variational parameters: there are \(N=|\Lambda|(|\Lambda|-1)L\) parameters in the variational ansatz. The system size in the figure is \(|\Lambda|=3\times 3\), with \(\alpha=1,2,3\) and the nearest neighbor case \(\alpha=\infty\), marked by triangular-up, cross, triangular-down and circle, respectively. We see that for various \(\alpha\) and \(K\), the QITE-ansatz results and the ED results are consistent. Moreover, the peak of the specific heat shifts towards higher temperature (smaller \(K\)) for a larger interaction range (smaller \(\alpha\)). This behavior is reasonable, since the long-range interaction effectively raises the system's dimension, and a higher system dimension leads to a higher critical temperature, e.g., the critical temperature of the 3-D NNIM is higher than that of the 2-D NNIM.
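For completeness, the coupling pattern of the long-range model is easy to tabulate. This is a sketch assuming couplings that decay as \(1/r^{\alpha}\) with the Euclidean lattice distance; the exact normalization and boundary convention are fixed by Eq. (1) of the paper, which we treat as given:

```python
import numpy as np

def long_range_couplings(side, alpha):
    """Pairwise couplings J_ij proportional to 1/r^alpha between all sites of
    a side x side square lattice (open boundaries assumed for the distance).
    Each non-zero pair contributes a Z_iZ_j term, hence the relevant strings
    Z_iY_j, Y_iZ_j and the |Lambda|(|Lambda|-1)L parameter count."""
    coords = [(x, y) for x in range(side) for y in range(side)]
    n = len(coords)
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            r = np.hypot(coords[i][0] - coords[j][0],
                         coords[i][1] - coords[j][1])
            J[i, j] = J[j, i] = r ** (-alpha)
    return J
```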
## V Discussion
This work discussed the possibility of using the imaginary time evolution algorithm to prepare the thermal state of the Ising model on NISQ devices. We numerically calculate the specific heat and susceptibility of the long-range interacting Ising model with the prepared thermal state. We find that the results using the quantum algorithm are consistent with the ones from exact diagonalization for various temperatures, including the critical and low-temperature regions.
We presented a systematic procedure to design a variational ansatz for thermal state preparation. This ansatz is inherited from the quantum circuits used in the QITE-measure algorithm, and we show that it outperforms the original circuit designed using QITE-measure. The variational ansatz can be further simplified according to the symmetry of the Hamiltonian and the initial state.
The ideas proposed in this work can be applied to study the critical behavior of other classical models, such as the \(Q\)-state Potts model, which would be difficult to simulate using the Monte-Carlo algorithm when \(Q\) is very large. Additionally, according to the correspondence of the \(D\) dimensional quantum model to the \(D+1\) dimensional classical model [33], the algorithm can also be used to study quantum phase transition.
Figure 3: Specific heat as a function of \(K\) in the 2-D long-range interacting Ising model with interaction range \(\alpha=1,2,3,\infty\), where smaller \(\alpha\) indicates a larger interaction range. The system size is \(|\Lambda|=3\times 3\). ED represents results from exact diagonalization. We see that for various \(\alpha\) and \(K\), the QITE results and the ED results are consistent. The black dashed line denotes the exact critical point of the 2-D NNIM in the infinite volume limit. As \(\alpha\) decreases, the peak of the specific heat curve shifts left, indicating that the effective dimension is raised for a larger interaction range.
## Acknowledgements
We thank Xiao Yuan, Jinzhao Sun, Lena Funcke, Stefan Kuhn and Yahui Chai for helpful discussions. X.W. and X.F. were supported in part by NSFC of China under Grants No. 12125501, No. 12070131001, and No. 12141501, and National Key Research and Development Program of China under No. 2020YFA0406400. PS acknowledges support from: ERC AdG NOQIA; Ministerio de Ciencia y Innovation Agencia Estatal de Investigaciones (PGC2018-097027-B-I00/10.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI, QUANTERA MAQS PCI2019-111828-2, QUANTUM DYNAMITE PCI2022-132919, Proyectos de I+D+I "Retos Colaboracion" QUSPIN RTC2019-007196-7); MICIIN with funding from European Union NextGenerationEU(PRTR-C17.I1) and by Generalitat de Catalunya; Fundacio Cellex; Fundacio Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2022-1-0042); EU Horizon 2020 FET-OPEN OPTOlogic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 -- NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal "QuantumGaudi" project; European Union's Horizon 2020 research and innovation program under the Marie-Sklodowska-Curie grant agreement No 101029393 (STREDCH) and No 847648 ("La Caixa" Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012, LCF/BQ/PR21/11840013). Views and opinions expressed in this work are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them.
## Appendix A Simplification of thermal state preparation in classical field theory
The Hamiltonian of a classical field theory is naturally diagonal and can be written as a linear combination of Pauli-\(Z\) operators; examples include the Ising model considered in the main text and the \(Q\)-state Potts model. Such a Hamiltonian has energy eigenstates that can be encoded in the computational basis of qubits, and all the Pauli-\(Z\) operators commute with each other. To compute the expectation values of such a Hamiltonian's thermal state, we only need imaginary time evolution of the initial state \(\left|\bar{+}\right\rangle\equiv\left|+\right\rangle^{\otimes N_{q}}\), where \(N_{q}\) is the number of the system's qubits and \(\left|+\right\rangle=(\left|0\right\rangle+\left|1\right\rangle)/\sqrt{2}\). A similar idea has been proposed in a tensor network algorithm targeting the classical Ising model [34]. The above statement is proved as follows.
The thermal expectation values \(\left\langle O\right\rangle\) as defined in Eq. (4) can be expanded with an arbitrary orthogonal basis \(\left\{\left|i\right\rangle\right\}\)
\[\left\langle O\right\rangle=\frac{\sum_{i}\left\langle i\right|e^{-\tau H}Oe^ {-\tau H}\left|i\right\rangle}{Z_{2\tau}}, \tag{10}\]
where
\[Z_{2\tau}=\sum_{i}\left\langle i\right|e^{-2\tau H}\left|i\right\rangle. \tag{11}\]
We choose the orthogonal basis of Pauli-\(X\) operators, \(\left\{\left|i\right\rangle\right\}=\left\{\left|+\right\rangle,\left|-\right\rangle\right\}^{\otimes N_{q}}\). Notice that all vectors in this set can be generated by applying Pauli-\(Z\) operators to the single basis vector \(\left|\bar{+}\right\rangle\). For example
\[Z_{2}Z_{1}\left|+\right\rangle_{2}\left|+\right\rangle_{1}\left|+\right\rangle _{0}=\left|-\right\rangle_{2}\left|-\right\rangle_{1}\left|+\right\rangle_{0}. \tag{12}\]
The Hamiltonian consists of Pauli-\(Z\) operators, so it commutes with all the Pauli-\(Z\) operators. Thus, all terms in the partition function are equal
\[\left\langle i\right|e^{-2\tau H}\left|i\right\rangle=\left\langle\bar{+} \right|e^{-2\tau H}\left|\bar{+}\right\rangle, \tag{13}\]
for all \(\left|i\right\rangle\in\left\{\left|+\right\rangle,\left|-\right\rangle\right\} ^{\otimes N_{q}}\), and we have \(Z_{2\tau}=2^{N_{q}}\left\langle\bar{+}\right|e^{-2\tau H}\left|\bar{+}\right\rangle\). Further, notice that all the observables concerning specific heat and susceptibility in Eq. (8) consist of Pauli-\(Z\) operators, which can be formally written as
\[O=\sum_{m}h_{m}\tilde{Z}_{m}, \tag{14}\]
where \(\tilde{Z}_{m}\) denotes the tensor product of \(Z\) operators at some sites and identity operators at others. Similar to Eq. (13), all terms in the numerator of Eq. (10) are equal
\[\left\langle i\right|e^{-\tau H}\tilde{Z}_{m}e^{-\tau H}\left|i\right\rangle= \left\langle\bar{+}\right|e^{-\tau H}\tilde{Z}_{m}e^{-\tau H}\left|\bar{+} \right\rangle, \tag{15}\]
for all \(\left|i\right\rangle\in\left\{\left|+\right\rangle,\left|-\right\rangle\right\} ^{\otimes N_{q}}\). Thus we have
\[\left\langle\tilde{Z}_{m}\right\rangle =\frac{2^{N_{q}}\left\langle\bar{+}\right|e^{-\tau H}\tilde{Z}_{m}e^ {-\tau H}\left|\bar{+}\right\rangle}{Z_{2\tau}} \tag{16}\] \[=\frac{\left\langle\bar{+}\right|e^{-\tau H}\tilde{Z}_{m}e^{-\tau H }\left|\bar{+}\right\rangle}{\left\langle\bar{+}\right|e^{-2\tau H}\left| \bar{+}\right\rangle}.\]
In conclusion, the thermal expectation value of an observable \(O=\sum_{m}h_{m}\tilde{Z}_{m}\) with a thermal state of a classical Hamiltonian can be derived according to imaginary time evolution on initial state \(\left|\bar{+}\right\rangle\),
\[\left\langle O\right\rangle =\sum_{m}h_{m}\langle\tilde{Z}_{m}\rangle \tag{17}\] \[=\sum_{m}h_{m}\frac{\left\langle\bar{+}\right|e^{-\tau H}\tilde{Z} _{m}e^{-\tau H}\left|\bar{+}\right\rangle}{\left\langle\bar{+}\right|e^{-2\tau H }\left|\bar{+}\right\rangle}\] \[=\sum_{m}h_{m}\left\langle\bar{+}(\tau)\right|\tilde{Z}_{m}\left| \bar{+}(\tau)\right\rangle,\]
where \(\left|\bar{+}(\tau)\right\rangle\) is the imaginary time evolved state according to Eq. (9), initialized as \(\left|\bar{+}(0)\right\rangle=\left|\bar{+}\right\rangle\). Thus we prove the statement in Eq. (26).
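The identity is straightforward to verify numerically for a small system; the following sketch checks it for a 3-site Ising ring (an example of our choosing), identifying \(\beta=2\tau\):

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

# Check of the Appendix A identity on a 3-site Ising ring,
# H = -(Z0Z1 + Z1Z2 + Z2Z0).
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
kron = lambda ops: reduce(np.kron, ops)

H = -(kron([Z, Z, I2]) + kron([I2, Z, Z]) + kron([Z, I2, Z]))
O = kron([Z, Z, I2])  # a diagonal observable built from Pauli-Z letters
tau = 0.7

rho = expm(-2 * tau * H)
thermal = np.trace(rho @ O) / np.trace(rho)   # Tr(e^{-2 tau H} O) / Z

plus3 = kron([np.ones(2) / np.sqrt(2)] * 3)   # the state |+++>
psi = expm(-tau * H) @ plus3
psi /= np.linalg.norm(psi)
print(thermal, psi @ O @ psi)  # the two values coincide
```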
## Appendix B Error analysis and circuit layers estimation
There are four main sources of error when implementing the QITE-ansatz algorithm on real quantum devices [35]:
* The variational ansatz has limited expressivity. The imaginary time evolution proceeds on the manifold spanned by the variational ansatz, so the evolved wave function deviates from the true wave function in Eq. (9), leading to systematic errors in the expectation values of the observables.
* Errors arise from the numerical integration using the Euler method as in Eq. (19).
* Noisy quantum gates, state preparation and measurement in quantum devices result in systematic errors when evaluating expectation values and estimating \(M\) and \(V\) (See Eq. (17)).
* A finite number of shots results in statistical errors when evaluating expectation values, \(M\) and \(V\).
The last two error sources exist in general for any quantum algorithm. In the following, we only analyze the errors specific to the QITE algorithm, i.e., the first two items.
The errors arising from the limited expressivity of the variational ansatz were briefly discussed in the main text. There are two ways to improve the expressivity: the first is to increase the number of ansatz layers, and the second is to use longer Pauli strings in the expansion of Eq. (29) for each local interaction term in the Hamiltonian. It is not hard to see that by extending the number of layers to infinity and taking the expansion over the whole system, the variational ansatz can carry out the evolution \(e^{-\tau H}\) exactly. In the following, we numerically investigate how these two aspects affect the performance in calculating the specific heat of the 2-D NNIM.
The limitation of a finite number of ansatz layers can be observed by tuning the number of layers \(L\). In figure 4, we compute the average absolute error of the 2-D NNIM specific heat as a function of \(L\) for lattice volumes \(|\Lambda|=3\times 3,4\times 4\). The average absolute error is defined by
\[\overline{\Delta C_{v}}\equiv\frac{1}{|K_{\text{max}}-K_{\text{min}}|}\int_{K _{\text{min}}}^{K_{\text{max}}}\mathrm{d}K\ |C_{v}-C_{v}^{ED}|, \tag{20}\]
where \(C_{v}\) is the specific heat from the quantum simulator, and \(C_{v}^{ED}\) is that from exact diagonalization. Here we take the integration range \([K_{\text{min}},K_{\text{max}}]=[0,1]\). The errors of the specific heat decrease rapidly as \(L\) increases and saturate to a plateau beyond a certain transition layer \(L^{*}\), which we analyze below. When \(L>L^{*}\), the remaining average absolute error of the specific heat mainly stems from the finite length of the Pauli string expansion.
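On a discrete grid of \(K\) values, Eq. (B1) reduces to a one-line numerical quadrature; a sketch:

```python
import numpy as np

def avg_abs_error(K_grid, Cv_sim, Cv_exact):
    """Average absolute error of the specific heat over [K_min, K_max],
    Eq. (B1), approximated by the trapezoidal rule on a discrete K grid."""
    diff = np.abs(np.asarray(Cv_sim) - np.asarray(Cv_exact))
    return np.trapz(diff, K_grid) / (K_grid[-1] - K_grid[0])
```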
Here we provide an empirical explanation of the transition layer \(L^{*}\) observed in figure 4, which also helps to estimate how many layers are needed when constructing a variational ansatz for simulating the NNIM. As shown in figure 5, the variational ansatz generates correlation in the spin system. In the best case, the correlation between two neighboring spins is generated by one unitary transformation, such as \(e^{-i\theta ZY}\) in the Ising case; in the worst case, we need a whole layer of the variational ansatz, such as \(e^{-i\theta ZY}e^{-i\theta^{\prime}YZ}\), to generate such correlation. The transition layer \(L^{*}\) indicates the lowest number of circuit layers needed to generate correlation between the two most distant spins in the D-dimensional nearest neighbor lattice system. Thus for the D-dimensional NNIM with volume \(N_{d}^{D}\) and PBC, as the Manhattan distance between the two most remote spins is \(DN_{d}/2\) (equal to the number of yellow arrows in figure 5, where \(D=2\) and \(N_{d}=3,4\), respectively), the transition layer lies in the range
\[\frac{DN_{d}}{2G}\leq L^{*}\leq\frac{DN_{d}}{2}, \tag{21}\]
which corresponds to the best and worst cases mentioned above. Here \(G\) is the number of Pauli exponentials in one layer, i.e., the number of relevant Pauli operators for each local interaction term. The transition layers in figure 4 are in accord with this range, i.e., \(N_{d}/2\leq L^{*}\leq N_{d}\), and we see that a larger number of layers brings almost no improvement to the average absolute error of the specific heat. Thus we say \(L^{*}\) layers are enough for the variational ansatz to simulate the NNIM. This estimate of the number of ansatz layers can be generalized to more complicated short-range interacting models.
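The bound is trivial to evaluate, and with \(G=2\) (one ZY and one YZ sub-layer per layer) it reproduces the ranges quoted in the caption of figure 4:

```python
def transition_layer_bounds(D, Nd, G=2):
    """Predicted range for the transition layer L*: D*Nd/(2G) <= L* <= D*Nd/2.
    G = 2 for this Ising ansatz (one ZY and one YZ sub-layer per layer)."""
    return D * Nd / (2 * G), D * Nd / 2

print(transition_layer_bounds(2, 3))  # (1.5, 3.0), as quoted for Nd = 3
print(transition_layer_bounds(2, 4))  # (2.0, 4.0), as quoted for Nd = 4
```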
Figure 4: The average absolute error of the specific heat as a function of the number of variational ansatz layers. We use the 2-D nearest neighbor Ising model with two volumes \(|\Lambda|=N_{d}\times N_{d}=3\times 3,4\times 4\). The limitation of the variational ansatz can be well controlled by increasing the number of layers. Once the number of layers exceeds the transition layer \(L^{*}\), the error shows no obvious change. Theoretically, we predict \(L^{*}\in[1.5,3]\) for \(N_{d}=3\) and \(L^{*}\in[2,4]\) for \(N_{d}=4\).
Comparing the number of variational ansatz layers required by Eq. (21) with the number of circuit layers used in QITE-measure, one finds the former is much smaller. This can be partially explained using the example of the two-qubit Ising system shown in the main text. For the QITE-measure circuit of figure 1**a**, due to the commutativity of the relevant Pauli operators, \([ZY,YZ]=0\), it is equivalent to the circuit shown in figure 6, which consists of only two Pauli exponentials whose rotation angles are the sums of all the coefficients of the corresponding Pauli exponentials in figure 1**a**. Therefore, if we use one layer of the circuit in figure 1**b** with \(\theta_{1}=a_{1}^{(0)}+\ldots+a_{1}^{(\tau-\Delta\tau)}\) and \(\theta_{1}^{\prime}=a_{2}^{(0)}+\ldots+a_{2}^{(\tau-\Delta\tau)}\), the QITE-measure circuit can be reproduced without loss of precision. Thus, compared with the QITE-measure circuit, the number of variational ansatz layers used in our simulation can be significantly reduced.
Numerical integration errors can be controlled via a more elaborate integration algorithm. In the main text, we use the Euler method, which accumulates a global error of \(\mathcal{O}(\delta\tau)\) at the final step. One could instead use a higher-order scheme such as the 4th-order Runge-Kutta method, which accumulates a global error of \(\mathcal{O}(\delta\tau^{4})\) at the final step. In our simulations, as the numerical integration error is not the dominant systematic error, the Euler method is sufficiently good.
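If the integration error ever became dominant, swapping the Euler update for a classical RK4 step would be straightforward; a sketch, with \(\dot{\theta}=f(\theta)\) obtained by solving the linear system of Eq. (17) at each evaluation point:

```python
def rk4_step(theta, theta_dot_fn, dtau):
    """One classical 4th-order Runge-Kutta step for theta_dot = f(theta),
    accumulating a global error of O(dtau^4) instead of Euler's O(dtau)."""
    k1 = theta_dot_fn(theta)
    k2 = theta_dot_fn(theta + 0.5 * dtau * k1)
    k3 = theta_dot_fn(theta + 0.5 * dtau * k2)
    k4 = theta_dot_fn(theta + dtau * k3)
    return theta + (dtau / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
```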
|
2309.04318 | Generating the Ground Truth: Synthetic Data for Soft Label and Label
Noise Research | In many real-world classification tasks, label noise is an unavoidable issue
that adversely affects the generalization error of machine learning models.
Additionally, evaluating how methods handle such noise is complicated, as the
effect label noise has on their performance cannot be accurately quantified
without clean labels. Existing research on label noise typically relies on
either noisy or oversimplified simulated data as a baseline, into which
additional noise with known properties is injected. In this paper, we introduce
SYNLABEL, a framework designed to address these limitations by creating
noiseless datasets informed by real-world data. SYNLABEL supports defining a
pre-specified or learned function as the ground truth function, which can then
be used for generating new clean labels. Furthermore, by repeatedly resampling
values for selected features within the domain of the function, evaluating the
function and aggregating the resulting labels, each data point can be assigned
a soft label or label distribution. These distributions capture the inherent
uncertainty present in many real-world datasets and enable the direct injection
and quantification of label noise. The generated datasets serve as a clean
baseline of adjustable complexity, into which various types of noise can be
introduced. Additionally, they facilitate research into soft label learning and
related applications. We demonstrate the application of SYNLABEL, showcasing
its ability to precisely quantify label noise and its improvement over existing
methodologies. | Sjoerd de Vries, Dirk Thierens | 2023-09-08T13:31:06Z | http://arxiv.org/abs/2309.04318v2 | # Generating the Ground Truth: Synthetic Data for Soft Label and Label Noise Research
###### Abstract
Most real-world classification tasks suffer from label noise to some extent. Such noise in the data adversely affects the generalization error of learned models and complicates the evaluation of noise-handling methods, as their performance cannot be accurately measured without clean labels. In label noise research, typically either noisy or incompletely simulated data are accepted as a baseline, into which additional noise with known properties is injected. In this paper, we propose SYNLABEL, a framework that aims to improve upon the aforementioned methodologies. It allows for creating a noiseless dataset informed by real data, by either pre-specifying or learning a function and defining it as the ground truth function from which labels are generated. Furthermore, by resampling a number of values for selected features in the function domain, evaluating the function and aggregating the resulting labels, each data point can be assigned a soft label or label distribution. Such distributions allow for direct injection and quantification of label noise. The generated datasets serve as a clean baseline of adjustable complexity into which different types of noise may be introduced. We illustrate how the framework can be applied, how it enables quantification of label noise and how it improves over existing methodologies.
## 1 Introduction
Classification models are of great interest to the research community and machine learning practitioners alike. When applied to real-world problems, these models are confronted with noisy data, with noise defined as anything that obscures the relationship between the dependent and independent variables [1]. Label noise can have detrimental effects on classifier performance, model complexity, learning rates and effect size estimation [12]. Therefore, a lot of research is conducted into prediction methods that are robust to such noise and into pre-processing steps for filtering label noise from data [13, 14].
When algorithms are evaluated with regard to their ability to handle label noise, typically, existing real-world datasets are assumed to be the ground truth, after which artificial noise is injected into the labels [1, 1]. Alternatively, data is entirely simulated, often by modelling relatively simple relationships between the dependent and independent variables [1, 1]. Lastly, a limited number of curated datasets are publicly available for which the label noise has been quantified by expert annotators [21, 12, 13].
Running experiments using any of the above types of datasets has drawbacks associated with it: either the relationships in the data are not sufficiently complex and therefore not realistic (simulated data), or there are no clean labels available to evaluate on and noise has to be injected into already noisy data (real-world data), or the noise cannot be tailored, such that any method is only tested on a very specific noise pattern (curated data). Creating a curated dataset takes considerable effort as well. While noisy real-world data are available in abundance, the lack of a clean evaluation set cannot be easily overcome. In [12] it is stated, in the context of method comparison, that the presence of label noise in the validation data causes any estimates to be off by an unknown amount. The authors mention this as an important open research question, one which we address in this work.
In this paper we aim to improve upon the aforementioned experimental strategies. We present the SYNLABEL framework: Synthetic Labels As Baseline for Experiments with Label noise. SYNLABEL facilitates the construction of artificial tabular datasets for performance evaluation of methods dealing with label noise. We propose to define a pre-specified or learned function as the ground truth relationship. Then, by applying this ground truth function to any input data contained in its domain, noiseless labels are generated. As we are interested in testing methods on known noise, rather than finding the best model for a specific real-world problem, this function is not required to exactly represent the original data. The generated ground truth set can be further transformed into a partial ground truth set for which each data point is accompanied by a soft label: first, a number of specified variables from the ground truth dataset are hidden. Then, by learning or specifying a (conditional) prior distribution, resampling values from it and combining these sampled values with fixed values for the variables that were not hidden, a posterior distribution is generated via the ground truth relationship. Although the prior distribution over the hidden variables is almost guaranteed not to match the exact underlying real-world generative distribution of the original dataset, as before this is not necessary for our purpose. The advantage of constructing a set with
soft labels compared to dealing with hard labels is that it allows for explicit quantification and direct injection of label noise. Furthermore, these sets can be used for problems from different domains that can be mapped to a label distribution problem, e.g. learning from crowds or with confidence scores. The sets with clean hard or soft labels serve as a starting point from which further transformations can be applied to the data in order to generate the specific noise of interest, allowing for any such added noise to be quantified.
In summary, the key contributions of this paper are:
* The SYNLABEL framework which facilitates the generation of experimental datasets for label noise research.
* A method for constructing a ground truth dataset informed by real-world data for the purpose of evaluation.
* A method for converting hard into soft labels by resampling values for features that are hidden from the model.
* An analysis showing the advantages of using soft labels for quantification and injection of label noise.
## 2 Related Work
Systematic experiments in label noise research require a dataset with both clean labels as a baseline to evaluate on, as well as noisy labels. Based on how these sets are obtained, the experiments in the label noise field can be placed into three categories: (1) an existing dataset for which the labels have been manually corrected is used, or artificial noise is injected into the labels of either (2) a clean dataset which has been simulated or (3) an existing real-world dataset.
The first type of experiment uses a curated real-world dataset for which noisy labels have been corrected, such as Clothing1M [13], Food-101N [14] and WebVision [15]. The ground truth labels are generally decided upon by a panel of experts. Nevertheless, even among experts high inter-observer disagreement may occur [20].
In both the second and third category of experiments, first a baseline is established after which noise is injected into the labels. This injected noise is constructed based on a selected noise model. In order of decreasing commonality and increasing complexity, the injected noise can be classified as [1]: Noisy Completely At Random (NCAR) [1], Noisy At Random (NAR) [16] or Noisy Not At Random (NNAR) [15, 14, 13].
When synthetic data is constructed, the true underlying function is known by design, e.g. data is sampled from Gaussian distributions or constructed using rule-based generation [17]. Clean labels are then generated from this function, to which noise can be added. The largest downside is that these simulated sets generally lack the complex interactions that one expects between variables in real data.
In case of real-world data, when there are no resources available to curate the sets, the true labels remain unknown and thus the level of natural noise cannot be quantified [17]. The importance of using controllable artificial data, especially in the context of noise, was already mentioned in [1], as it enables systematic research into different aspects of a domain. While the use of real-world data is the least labour intensive experimental method, the effects of any further added label noise cannot be separated from the inherent noise present and thereby the overall noise level cannot be sufficiently controlled.
In summary, while different types of label noise experiments exist, they each suffer from shortcomings. Furthermore, when using hard labels, noise cannot be specified for any individual label beyond the label being correct or not.
Recently, [11] proposed a framework for generating instance-dependent label noise. Our work differs from theirs in a few ways: they introduce a particular type of classifier-based instance-dependent noise, whereas our framework allows for any type of noise injection. Furthermore, a dataset to be used with their work requires the labels to be clean, while we present methods for generating such data with either clean hard or soft labels.
## 3 The SYNLABEL Framework
We present the Synthetic Labels As Baseline for Experiments with Label noise (SYNLABEL) framework, shown in Figure 1, which facilitates generating synthetic tabular datasets for use in label noise experiments.
The SYNLABEL framework defines different types of datasets and transformations between them. Each dataset consists of input variables \(X\), labels \(y\) and a functional relationship between the two, \(y=f(X)\). The two types of ground truth datasets depicted in the Unobservable part of Figure 1 are generally unobtainable for real-world problems, as their functional relationship is defined to be exact, i.e. noiseless. In practice, an Observable dataset is available and often the task at hand is precisely to discover a relationship between \(X\) and \(y\) that generalizes well. The datasets can be further categorized based on whether the output is a single unambiguous class, also known as a hard label, or a discrete probability distribution over the label space, commonly referred to as a soft label.
The user is encouraged to utilize the framework to construct a noiseless ground truth dataset based on a known, possibly learned, functional relationship. Any further noise applied to this dataset can be quantified exactly for each individual data point. This allows for analysing method performance on a specific type of noise in isolation. We stress that SYNLABEL is not meant for optimizing models for a specific dataset, but rather for evaluating and comparing methods in the presence of label noise. In the following we describe the different components of the framework.
### Notation
In the SYNLABEL framework, a dataset \(D\) is made up of objects \(o\) denoted by \(D:o_{i}=\{X_{i},y_{i}\}\). The framework is meant for any deterministic classification task:
**Definition 1**: _A deterministic classification task is a task for which, given that all information (\(X^{G}\)) required to determine the outcome \(y^{G}\) is available, there is a true deterministic function \(f^{G}(X^{G})=y^{G}\)._
In other words, given that we know all of the information \(X_{i}\) relevant to the task, \(y_{i}\) is unambiguously assigned one true class through the function \(f^{G}\) for all \(y_{i}\in y^{G}\). A dataset with corresponding classification task for which Definition 1 holds is defined as a Ground Truth (\(G\)) dataset \(D^{G}\):
**Definition 2**: _A Ground Truth (\(G\)) dataset \(D^{G}:o_{i}=\{X_{i}^{G},f^{G}(X_{i}^{G})=y_{i}^{G}\}\) is a dataset for which any input \(X_{i}^{G}\) in the domain of \(f^{G}\) is mapped to its deterministic hard label \(y_{i}^{G}\) by the true function \(f^{G}\)._
Note that a dataset almost never satisfies this definition unless it is simulated. SYNLABEL offers the tools to generate such a dataset based on a noisy dataset.
When not all features required for deterministic classification are available, yet the true classification function is known, the dataset is referred to as a Partial Ground Truth (\(PG\)) dataset \(D^{PG}\), defined as:
**Definition 3**: _A Partial Ground Truth (\(PG\)) dataset \(D^{PG}:o_{i}=\{X_{i}^{PG},f^{G}(X_{i}^{PG})=y_{i}^{PG}\}\) is a dataset for which any input \(X_{i}^{PG}\) in the domain of \(f^{G}\) is mapped to its soft label \(y_{i}^{PG}\) by the true function \(f^{G}\)._
Here \(X^{PG}\subset X^{G}\) contains the available data for the task, while any unavailable features are contained in \(X^{PG^{\prime}}\subset X^{G}\) such that \(X^{PG}\cup X^{PG^{\prime}}=X^{G}\). Since some of the information (\(X^{PG^{\prime}}\)) needed for an exact classification is missing, \(f^{G}(X^{PG})\) produces discrete label distributions or soft labels \(y^{PG}\), with quantifiable uncertainty, instead of hard labels.
In the (Partial) Ground Truth sets the input \(X\) is mapped to the output \(y\) by the true underlying function \(f^{G}\). Such datasets with corresponding mapping are practically not obtainable in real life, i.e. they are unobservable.
Data that is observed in practice, \(D^{O}\), has different characteristics: noise can be present both in the observed input data \(X^{O}\) and in the corresponding label \(y^{O}\). Often, \(y^{O}\) is not measured directly and is instead annotated by an expert or system, based on their own non-deterministic, noisy functional relationship \(f^{E}\). Such an expert may have the same information available as is available for the classification task, i.e. \(X^{E}\subseteq X^{O}\), or additional relevant information \(X^{O^{\prime}}\) may be available: \(X^{E}\subseteq(X^{O}\cup X^{O^{\prime}})\).
An annotator, either implicitly or explicitly, assigns probabilities to the different candidate labels corresponding to an object \(o_{i}\), producing an Observed Soft Label (OS) set with corresponding outcome \(y^{OS}\). This set is often discretized into an Observed Hard Label (OH) set with label \(y^{OH}\) based on some decision function \(f_{dec}\). Depending on whether the intermediate label distribution is preserved (\(OS\)) or not (\(OH\)), the final dataset becomes either \(D^{OS}\):
**Definition 4**: _An Observed Soft Label (\(OS\)) dataset \(D^{OS}:o_{i}=\{X_{i}^{O},y_{i}^{OS}\}\) is a dataset for which the input \(X^{O}\) is associated with soft labels \(y^{OS}\)._
or more commonly \(D^{OH}\):
**Definition 5**: _An Observed Hard Label (\(OH\)) dataset \(D^{OH}:o_{i}=\{X_{i}^{O},y_{i}^{OH}\}\) is a dataset for which the input \(X^{O}\) is associated with hard labels \(y^{OH}\)._
Note that there are no function-related requirements for these sets. While a function \(f^{O}\) may be learned from these data, it is nearly guaranteed not to match the true functional relationship for the corresponding deterministic classification task. An overview of the different datasets defined in Definition 2-5 is presented in Table 1.
## 4 Data Transformations
At the core of the SYNLABEL framework lie the different operations that enable a user to transform the datasets from one type to another. These allow a user to obtain both a (Partial) Ground Truth dataset for validation as well as realistic datasets that contain the specific type of label noise for which an experiment is to be conducted.
Figure 1: A schematic overview of the SYNLABEL framework. The white boxes represent data, either input \(X\) or labels \(y\). The gray boxes represent a type of dataset, which includes a function relating the input to the output, although this function may not be defined in case of observable data. The arrows represent the different transformations and functions as specified above. \(Rs\): Resampling. \(f()\): a function.

The different types of datasets can be transformed in two directions. The direction in which data is most often transformed, to obtain observed sets containing varying label noise, is down the chain: Ground Truth \((G)\rightarrow\) Partial Ground Truth \((PG)\rightarrow\) Observed Soft Label \((OS)\rightarrow\) Observed Hard Label \((OH)\). Transformations in this direction ensure that the objects that are contained in each set remain coupled, i.e. two objects \(o_{i}\) in different sets following a transformation down the chain still represent the same entity and remain linked to the original relationship that governs the ground truth data from which they originate. The reverse direction, up the chain, is made possible by using learned functions. In this case, however, a distinct dataset is generated, that is, the objects in the original set are not the same as those in the transformed set, since a new ground truth relationship is defined. The objects become decoupled.
Objects remain coupled following an arbitrary number of transformations both up and down the chain only when all sets are identical, thus \(D^{G}=D^{PG}=D^{OS}=D^{OH}\). This implies that the soft labels, \(y^{PG}_{i}\) and \(y^{G}_{i}\), have probability 1 for the class in \(y^{OH}_{i}\) and \(y^{G}_{i}\) and probability 0 for the other classes and that the true functional relationship \(f^{G}\) required for the (Partial) Ground Truth is known.
The transformations and functions and their corresponding types are shown in Figure 1. A transformation can either be an identity transformation, through which the data are not altered, or a non-identity transformation which adds noise to the data. A function can either be a true function or some other function, e.g. a learned function or specified decision function, denoted by any function.
### Down the chain
In the following we describe the supported transformations down the chain. These transitions allow the objects \(o\) to remain coupled between the different datasets.
#### From Ground Truth to Partial Ground Truth
An identity transformation exists between \(X\) in \(D^{G}\) and in \(D^{PG}\). The variables contained in \(X^{G}\) have identical values in \(D^{PG}\), but they may be split into two sets, \(X^{PG}\) and \(X^{PG^{\prime}}\). Additionally, the function \(f^{G}\) describes the true relationship between \(X\) and \(y\): \(f^{G}(X^{G}_{i})\) is equal to the true class of object \(o_{i}\) with absolute certainty. \(f^{G}(X^{PG}_{i})\) is equal to the true soft label of \(o_{i}\), given that some information contained in \(X^{G}_{i}\) is now contained in \(X^{PG^{\prime}}_{i}\) and thus missing from \(X^{PG}_{i}\), whereby the classification task becomes ambiguous.
To obtain \(D^{PG}\), \(X^{PG^{\prime}}\) cannot simply be ignored and a function \(f^{PG}\) learned on \(\{X^{PG},y^{G}\}\), as the resulting function would not equal the original relationship \(f^{G}\). This follows from the fact that the variables in \(X^{PG^{\prime}}\) would have to be irrelevant to the task, and such variables are not contained in \(X^{G}\) by definition, and thus not in \(X^{PG^{\prime}}\) by extension. The exception is when \(X^{PG^{\prime}}\) is empty, resulting in the special case \(X^{PG}=X^{G}\) and \(D^{G}=D^{PG}\). Otherwise, the missing information from \(X^{PG^{\prime}}\) will cause a different function to be learned. The labels produced by \(f^{PG}\) are then guaranteed to differ from \(y^{G}\) for some objects in the domain of \(f^{G}\). Two such labels would both have to be the true label for such an object, which is contradictory, hence the sets become decoupled.
In addition to the identity transformation, \(X^{PG}=X^{G}\), there is a transformation that we call feature hiding, which preserves the coupling between the objects in both sets when \(X^{PG}\neq X^{G}\) and which preserves the truth relationship \(f^{G}\). Instead of trying to learn a function from the known features \(X^{PG}\), as before, we use \(f^{G}\) and the information we have about the missing features in \(X^{PG^{\prime}}\) to construct \(y^{PG}\) as follows: first, we resample \(n\) values, indexed by \(j\), for the features contained in \(X^{PG^{\prime}}\) for each object \(i\), in accordance with a probability density function constructed from \(X^{G}\). This can be either a conditional density function, \(P(X^{PG^{\prime}}|X^{PG})\) or even \(P(X^{PG^{\prime}}|X^{PG}\cup y^{G})\), or a marginal density function, \(P(X^{PG^{\prime}})\). These sampled values \(X^{PG^{\prime}}_{i,j}\) are then joined with the known values \(X^{PG}_{i}\) for each object \(o_{i}\), after which \(f^{G}\) is applied to all combinations of resampled and known values to obtain a corresponding label. We then aggregate the obtained labels into a soft label:
\[y^{PG}_{i,c}=\sum_{j=1}^{n}\frac{\mathds{1}_{c}(f^{G}(X^{PG^{\prime}}_{i,j} \cup X^{PG}_{i}))}{n}, \tag{1}\]
which returns the probability of class \(c\) for object \(i\), with \(\mathds{1}_{c}\) the indicator function. As \(n\rightarrow\infty\), an exact soft label for \(y^{PG}_{i}\) is obtained given the density function.
The values in \(X^{G}\) could be sampled from an infinite number of distributions. Therefore, it is impossible to determine the exact prior distribution for \(X^{PG^{\prime}}\) from which we should resample. On the other hand, any distribution from which \(X^{G}\) could possibly be sampled is a valid choice and allows for generating soft labels that reflect the ground truth given that specific distribution. Resampling from any valid distribution thus allows for the creation of a new \(D^{PG}\) from a \(D^{G}\) by which the coupling between objects remains intact.
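A minimal NumPy sketch of Eq. (1) is given below. It illustrates the resample-evaluate-aggregate loop only and is not the package's actual API; the sampler `sample_hidden` stands in for whichever marginal or conditional prior the user has chosen:

```python
import numpy as np

def feature_hiding(X_known, sample_hidden, f_G, n_classes, n_samples=1000,
                   rng=None):
    """Sketch of Eq. (1): resample the hidden features of each object from a
    chosen prior, evaluate the ground truth function on every completed input
    and aggregate the resulting hard labels into a soft label.

    `sample_hidden(x_known, n, rng)` returns an (n, n_hidden) array drawn
    from the prior; `f_G` maps input rows to integer class indices."""
    rng = rng or np.random.default_rng(0)
    y_soft = np.zeros((len(X_known), n_classes))
    for i, x in enumerate(X_known):
        hidden = sample_hidden(x, n_samples, rng)
        full = np.hstack([np.tile(x, (n_samples, 1)), hidden])
        labels = f_G(full)
        y_soft[i] = np.bincount(labels, minlength=n_classes) / n_samples
    return y_soft
```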
#### From Partial Ground Truth to Observed Soft Label
The following non-identity transformations from \(D^{PG}\) to \(D^{OS}\) serve to produce experimental datasets tailored to a variety of different classification tasks, with corresponding (Partial) Ground Truth labels being available to evaluate on:
* \(X^{PG}\) to \(X^{O}\). By using any other relationship than the identity transformation, \(X^{O}\) may be altered directly by for instance applying Gaussian noise to (some of) the variables in \(X^{PG}\). This can result in label noise when \(X^{O}\) is used to generate \(y^{OS}\) further on via \(X^{E}\) and \(f^{E}\).
* \(y^{PG}\) to \(y^{OS}\). If \(y^{OS}\) is not established based on the variables in \(X^{O}\) and/or \(X^{O^{\prime}}\), but is measured directly, noise can be introduced directly to \(y^{PG}\). This facilitates the NCAR and NAR noise models.
* \(X^{PG}\) and/or \(X^{PG^{\prime}}\) and possibly \(y^{PG}\) to \(y^{OS}\). If \(y^{OS}\) is determined based on \(X^{PG}\) and/or \(X^{PG^{\prime}}\) and \(y^{PG}\), noise can be generated according to the NNAR model.
* \(X^{O}\) to \(X^{E}\). If, in contrast to the previous transformation, \(y^{OS}\) is decided upon through \(f^{E}\), for instance by an expert panel or system, using information \(X^{E}\) which is based on \(X^{O}\), the labels \(y^{OS}\) can be manipulated by adding noise to \(X^{O}\).
* \(X^{PG^{\prime}}\) to \(X^{O^{\prime}}\) to \(X^{E}\). If, as in the previous transformation, \(y^{OS}\) is decided upon by an expert through \(f^{E}\) using \(X^{E}\), and this expert has more relevant information \(X^{O^{\prime}}\) available than is contained in \(X^{O}\) alone, noise can be added to \(y^{OS}\) by using a non-identity transformation between \(X^{PG^{\prime}}\) and \(X^{O^{\prime}}\). An example of \(X^{O^{\prime}}\) would be textual descriptions that can be used by a physician when labelling for the presence of some disease, which are not readily available to be used by a classification model.
* \(X^{E}\) to \(y^{OS}\). If, as per the previous two transformations, \(y^{OS}\) is obtained through \(f^{E}\) based on \(X^{E}\), the labels may be transformed by adjusting the annotation function \(f^{E}\).

\begin{table}
\begin{tabular}{l c c} \hline \hline Dataset type & Label & Function \\ \hline Ground Truth (\(G\)) & Hard & True \\ Partial Ground Truth (\(PG\)) & Soft & True \\ Observed Soft Label (\(OS\)) & Soft & Any \\ Observed Hard Label (\(OH\)) & Hard & Any \\ \hline \hline \end{tabular}
\end{table}
Table 1: The different dataset types defined in SYNLABEL.
The specific transformation used to add label noise to the data is decided upon by the user. The transformation should be such that the resulting dataset is suited toward the classification task for which a method is to be validated. In case the user is interested in soft label research instead of label noise research, the identity transformations may also be applied such that \(X^{O}\) = \(X^{PG}\) and \(y^{OS}\) = \(y^{PG}\).
#### From Observed Soft Label to Observed Hard Label

Given \(X^{O}\) and \(y^{OS}\), transformation to \(D^{OH}\) is straightforward. A decision function \(f_{dec}\) needs to be defined, which converts the soft labels into hard labels. Examples of such a function would be sampling the soft label distribution or simply selecting the class with the highest probability, although many more decision functions are possible. Note that this function may be stochastic, allowing for random tie breaks in case of equal probabilities, as there are no truth requirements for this functional relationship.
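The two example decision functions mentioned above are one-liners; a sketch (the names are ours):

```python
import numpy as np

def f_dec_argmax(y_soft):
    """Deterministic decision function: the most probable class, with ties
    broken by the lowest class index so the mapping stays deterministic."""
    return np.argmax(y_soft, axis=1)

def f_dec_sample(y_soft, rng=None):
    """Stochastic decision function: draw a hard label from each soft label."""
    rng = rng or np.random.default_rng(0)
    return np.array([rng.choice(len(p), p=p) for p in y_soft])
```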
### Back up
Transformations down the chain, i.e. \(D^{G}\to D^{PG}\to D^{OS}\to D^{OH}\), can be used to generate arbitrarily many observed datasets from a single ground truth set by adding some specified noise. Before this can be done, however, a ground truth dataset has to be constructed. This can be achieved using a real-world observed dataset by utilizing a transformation up the chain. Such transformations have to respect the constraints posed on the different datasets per Definition 2-5, summarized in Table 1, which generally forces the objects in the different sets to become decoupled:
* When transforming from \(D^{PG}\) to \(D^{G}\), a decision function \(f_{dec}\) must be applied to \(y^{PG}\) to transform the soft labels into hard labels. Unless \(X^{PG^{\prime}}\) is empty and thus \(X^{PG}\) = \(X^{G}\), \(X^{PG^{\prime}}\) contains information that is taken into account by \(f^{G}\) by definition. This information then has to result in different labels for \(f_{dec}(f^{G}(X^{PG}\cup X^{PG^{\prime}}))\) compared to \(f_{dec}(f^{G}(X^{PG}))\) for some values in the domain of \(f^{G}\). Due to the missing information in \(X^{PG^{\prime}}\), this transformation then decouples the objects.
* When transforming from \(D^{OS}\) or \(D^{OH}\) to \(D^{G}\) or \(D^{PG}\), the true function \(f^{G}\) for the task would need to be discovered. This is generally impossible, as observed data is always finite and, in theory, infinitely many functions can be found that describe it perfectly, with no further information available that enables the selection of the true governing function. Furthermore, duplicate instances in \(X^{O}\) can have different hard or soft labels, whereas \(y^{G}\) is constrained to contain deterministic hard labels.
As stated before, the exception is when sets along the chain are identical and thus noiselessly observed with a known ground truth function, which is generally not the case.
It is possible, however, to construct a different ground truth dataset based upon the observed dataset, such that its requirements are fulfilled. This will cause the objects in the new dataset to become decoupled from those in the original dataset. While this is an issue when the task of interest is to find the best model for a specific dataset, when the aim is to create a suitable dataset for validation of a method for a certain type of label noise this is of no concern.
To construct a new \(D^{PG}\), we have to meet Definition 3, i.e. we must obtain both soft labels and the true relationship between \(X\) and \(y\). Since the latter is generally not possible given \(X^{O}\) and \(y^{OS}\) or \(y^{OH}\), we propose a different approach: first a function is learned based on \(X^{O}\) and \(y^{OS}\) or more commonly \(y^{OH}\), i.e. \(f^{O}\). We then set this function equal to \(f^{G}\), \(X^{PG}\) to be \(X^{O}\) and \(y^{PG}\) to be \(f^{G}(X^{PG})\). In effect, we disregard the original labels and obtain new soft labels that are generated by the application of the selected ground truth function to the input data.
Furthermore, by imposing a deterministic decision function \(f_{dec}\) upon \(f^{G}\), a new \(D^{G}\) can be constructed in a similar manner from \(X^{PG}\) by setting \(y^{G}=f_{dec}(f^{G}(X^{PG}))\). Note that \(f_{dec}\) has to be deterministic: if, for a binary problem, \(f^{G}(X^{PG})\) returns probability 0.5 for both classes, the decision may not be taken randomly. Again, the transformation results in a new dataset consisting of different, decoupled objects: the items in the newly constructed \(D^{G}\) are not necessarily the same as those in \(D^{PG}\), and the truth function \(f^{G}\) is altered as well: \(f^{G}_{new}=f_{dec}(f^{G})\). From the newly constructed \(D^{G}\), any of the transformations described in Section 4 can be applied to construct any number of new datasets for experimentation.
The SYNLABEL framework together with all of the transformations previously described has been implemented and made available publicly on GitHub to encourage its use: [https://github.com/sjoerd-de-vries/SYNLABEL](https://github.com/sjoerd-de-vries/SYNLABEL).
## 5 Application of the Framework
In the following we demonstrate how SYNLABEL might be used in practice by the developer of a noise-robust algorithm, and we highlight differences with existing methods for label noise experimentation. We believe this to be more valuable than a comparison of different algorithms on generated datasets (one of the main uses of the framework), as such results would speak to algorithmic performance rather than to the quality of the framework. The dataset used here
and in Section 6 is the Keel Vehicle Silhouette set [16], which consists of 18 features and 4 classes.
**Constructing a Ground Truth** To thoroughly evaluate the performance of an algorithm, we need a number of datasets for which the noise has been quantified. Some curated datasets are available, for which this has been done by experts. Not all noisy instances may have been identified, however, and the type of noise these sets contain is fixed and might not be of the type that interests us.
Alternatively we could generate, i.e. simulate, clean datasets from scratch and add the exact noise we are interested in later, e.g. by sampling from different normal distributions or constructing concentric circles. Once more this approach is less than optimal, as the resulting datasets typically do not capture the complexity of real-world data.
By using SYNLABEL and the transformations up the chain defined in Section 4, we can construct a \(D^{G}\) based on observed, noisy data \(D^{O}\). Most commonly, real datasets contain hard labels and as such we take \(D^{O}=D^{OH}=\{X^{O},y^{OH}\}\). We then transform this dataset as follows: based on \(D^{OH}\) a function \(f^{O}\) is learned. We set \(X^{G}\) to be \(X^{O}\), \(f^{G}\) to be \(f^{O}\) with a deterministic decision function \(f_{dec}\) applied to it: \(f^{G}=f_{dec}(f^{O})\). Then we simply set \(y^{G}=f^{G}(X^{O})\), as for the simulated data, to construct a real-world data inspired Ground Truth dataset.
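A compact sketch of this pipeline (our code, assuming scikit-learn; we use a synthetic stand-in where the paper uses the Keel Vehicle Silhouette set itself):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for an observed dataset D^OH (the Vehicle set has
# 18 features and 4 classes).
X_O, y_OH = make_classification(n_samples=500, n_features=18,
                                n_informative=8, n_classes=4,
                                random_state=0)

# Learn f^O from the observed data, then set f^G = f_dec(f^O).
f_O = RandomForestClassifier(random_state=0).fit(X_O, y_OH)

# The new Ground Truth dataset D^G: same inputs, relabelled by f^G
# (argmax over predict_proba equals the classifier's predict here).
X_G = X_O
y_G = np.argmax(f_O.predict_proba(X_G), axis=1)
```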
The properties of this dataset depend on the \(f^{G}\) used. If a simple linear model is used, the resulting dataset will likely not contain the complexities expected to be present in a real world system. On the other hand, if an overfit neural network is used, the relationships may well be overly complicated. As model expressivity is varied, baselines of corresponding complexity are constructed, approximating real-world problem difficulty to different extents.
**Partial Ground Truth via Feature Hiding** Having generated \(D^{G}\) as a baseline set for evaluation, we then generate an additional set with soft labels. These allow for alternative, more direct ways of quantifying and injecting label noise compared to hard labels, as shown in Section 6.
\(D^{PG}\) can be constructed by applying feature hiding, as specified by Equation 1. First, we define which features to hide, i.e. add to \(X^{PG^{\prime}}\), and thereby which features remain in \(X^{PG}\). Next, we specify the method for constructing the prior distribution from which the data in \(X^{PG^{\prime}}\) is resampled. Several methods are implemented in SYNLABEL by default and these can easily be extended to include custom methods. Then we specify the number of samples drawn and apply the transformation to obtain soft labels \(y^{PG}\).
In Figure 2 we show how resampling from prior distributions for \(X^{PG^{\prime}}\) constructed via different methods results in different levels of uncertainty in the obtained posterior distributions via Equation 1, as measured by the Shannon entropy, for different numbers of features hidden. As expected, sampling according to a conditional density function \(P(X^{PG^{\prime}}|X^{PG})\) or even \(P(X^{PG^{\prime}}|X^{PG}\cup y^{G})\), in this case constructed using MICE [20], produces soft labels with lower entropy than using a marginal density function \(P(X^{PG^{\prime}})\) does.
**Introducing Noise** Now that we have obtained both a baseline dataset with hard labels \(D^{G}\) and one with soft labels \(D^{PG}\), we need to introduce the specific type of label noise we are interested in so that we can compare methods on the resulting sets. This noise may be added via any of the transformations that have been described in detail in Section 4. Note that such noise injection can also be applied to a noisy real-world dataset. In this case, however, we would add noise on top of pre-existing, unspecified noise, which makes it impossible to study the effect of the added noise in isolation, as we show in Section 6.
## 6 Quantifying Label Noise
An inherent advantage of having soft labels available is that noise can be quantified by measures such as the Shannon entropy of the resulting distribution, or when two distributions \(P\) and \(Q\) are to be compared, the total variation distance:
\[D_{TV}(P,Q)=\frac{1}{2}||P-Q||_{1}. \tag{2}\]
The latter enables us to apply any noise generation method directly on \(D^{PG}\) to generate \(D^{OS}\) and to quantify the strength of the label noise by evaluating \(D_{TV}(D^{PG},D^{OS})\). As most classification tasks are concerned with hard labels rather than soft labels, a decision function \(f_{dec}\) can be utilized to transform \(y^{OS}\) into \(y^{OH}\) and thereby generate \(D^{OH}\). Such a decision function could, for instance, sample a class in proportion to the label distribution or select the class with the highest probability. When a classifier has to be evaluated against some introduced noise, either its probabilistic output can be compared directly to \(y^{PG}\), or its hard label to \(y^{G}\) or \(y^{PG}\), depending on whether any noise due to partly unobserved data is of interest.
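A direct implementation of Equation (2) on arrays of soft labels (our helper; the mean over instances corresponds to the Mean Total Variation Distance reported below):

```python
import numpy as np

def mean_total_variation(P, Q):
    """Mean total variation distance between two arrays of label
    distributions of shape (n_instances, n_classes): Equation (2)
    applied per instance, then averaged."""
    return 0.5 * np.abs(P - Q).sum(axis=1).mean()
```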
To illustrate the importance of a clean baseline and how the use of label distributions allows both quantification of noise and direct noise injection, we conducted experiments
Figure 2: The level of label noise generated by feature hiding as measured by the mean entropy of the resulting soft labels for different probability density estimation methods and different numbers of features hidden. Average over 50 runs. KDE: Kernel Density Estimation. MICE: Multivariate Imputation by Chained Equations.
with the Keel Vehicle Silhouette dataset, of which the results are shown in Figure 3. To obtain \(D^{G}\) from this observed dataset, we trained a Random Forest classifier on the original labels and set it equal to \(f^{G}\), a transformation up the chain.
On the left side the Mean Total Variation Distance is shown, either with respect to the labels in \(D^{G}\) or, in the case of \(\Delta_{3}\), with respect to \(D^{PG}\); it measures how often a label is expected to change due to the noise injection. We observe that the level of noise introduced by applying a uniform noise matrix \(T_{r}\) to a \(D^{PG}\) that has been constructed via feature hiding (\(FH\)) from \(D^{G}\), i.e. \(T_{r}(FH(D^{G}))\), differs from the sum \(\Delta_{1}+\Delta_{3}\) of the separately added noise, where \(\Delta_{1}\) is the noise introduced by constructing \(D^{PG}=FH(D^{G})\) and \(\Delta_{3}\) that of \(T_{r}(D^{PG})\). The same applies when this noise is added to \(D^{G}\) directly, \(\Delta_{2}=T_{r}(D^{G})\), and then added to the noise introduced by \(FH\): \(\Delta_{1}+\Delta_{2}\). This illustrates how label noise applied to a set for which the baseline noise is unknown cannot be retrospectively isolated and properly quantified.
\(T_{r}(D^{OH})\) is the result of sampling hard labels from \(D^{PG}\) (100 times) and then applying the uniform flipping probability via \(T_{r}\) to each sample (100 times as well), while for \(T_{r}(FH(D^{G}))\) the noise matrix is simply applied to \(y^{PG}\) directly. As desired, \(T_{r}(D^{OH})=T_{r}(FH(D^{G}))\), which shows that direct injection of noise into the label distribution renders repeated sampling from \(D^{PG}\), followed by repeated application of random noise functions to individual objects, redundant, in this case saving 10,000 repeated actions. Furthermore, the noise level is exact instead of an estimate. In addition, some types of noise are more naturally applied directly to a label distribution.
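The following sketch (our illustration; `y_PG` and `T` are hypothetical inputs, with `T` row-stochastic) contrasts the exact direct injection with the Monte Carlo procedure it replaces; the two agree in expectation:

```python
import numpy as np

def inject_noise_directly(y_PG, T):
    """Exact noisy soft labels y^OS = y^PG @ T, where T[i, j] is the
    probability that true class i is observed as class j."""
    return y_PG @ T

def inject_noise_by_sampling(y_PG, T, n_label_samples=100,
                             n_flip_samples=100, seed=0):
    """Monte Carlo version: repeatedly sample hard labels from y^PG,
    then repeatedly flip them via T, and count the outcomes."""
    rng = np.random.default_rng(seed)
    n, k = y_PG.shape
    counts = np.zeros((n, k))
    for _ in range(n_label_samples):
        hard = np.array([rng.choice(k, p=p) for p in y_PG])
        for _ in range(n_flip_samples):
            flipped = np.array([rng.choice(k, p=T[c]) for c in hard])
            counts[np.arange(n), flipped] += 1
    return counts / (n_label_samples * n_flip_samples)
```

The matrix product is the expectation of the sampling procedure, which is why the two measurements coincide while the direct route avoids the nested loops entirely.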
Finally, \(ID(T_{r}(D^{OH}))\), where ID stands for instance-dependent (NNAR), is added to illustrate that similar results are obtained for a more complex type of noise. The uniform \(T_{r}\) from before is still used, but we applied it twice as often to the objects with the largest ratio of distance to their nearest neighbour of the same label to distance to a neighbour of the other label, as in (Garcia et al., 2019).
On the right hand side the mean entropy of the label distributions is shown and class-conditional noise is added. As entropy is not a measure between distributions, \(\Delta_{3}\) equals \(T_{r}(FH(D^{G}))\), and is omitted. The same patterns can be observed as for \(D_{TV}\), demonstrating that noise added to pre-existing, unknown noise cannot be studied in isolation.
## 7 Conclusion
In this work, we present the SYNLABEL framework which facilitates the generation of synthetic data for label noise experiments. Standard procedure would have the user utilize the framework to generate a ground truth dataset inspired by a real-world dataset by learning a classification function from the real-world data, setting it to be the ground truth and applying it to the input data to obtain new hard labels. This dataset can be transformed into a set with soft labels by hiding a number of the input features contained in the domain of the selected function and resampling values from learned or pre-specified distributions for these hidden features, evaluating the ground truth function on the resulting data and aggregating the resulting labels. This method called feature hiding adds measurable uncertainty into the labels, which we show can be useful for direct injection and more thorough quantification of label noise. These ground truth sets provide a clean baseline to evaluate method performance on, to which any noise of interest may be added. Conducting experiments using datasets generated by the framework offers advantages over the three types of datasets typically used in label noise research: the generated data are more complex than data simulated from scratch, provide a clean baseline for evaluation which is lacking from real-world data and allow for the noise to be controlled, in contrast to curated data, in addition to being constructed at a low cost.
Figure 3: Different noise measures for varying noise rates. Left: the mean \(D_{TV}\). Feature hiding was done by sampling from a marginal distribution constructed via Kernel Density Estimation (KDE). Uniform noise (NCAR) was added through \(T_{r}\). Right: the mean entropy. Feature hiding was done by sampling from a conditional distribution constructed using MICE. Random class-conditional noise (NAR) was introduced by a randomly generated \(T_{r}\), with equal probabilities on the main diagonal. \(T_{r}\): transition matrix, ID: instance-dependent (NNAR). FH: feature hiding. \(\Delta_{1}\): noise introduced by FH. \(\Delta_{2}\): noise introduced by applying \(T_{r}\) to \(D^{G}\). \(\Delta_{3}\): noise introduced by applying \(T_{r}\) to \(D^{PG}\). |
2309.05006 | Uniform algebras and distinguished varieties | In this article, we point out the connections between the distinguished
varieties introduced by Agler and McCarthy with certain uniform algebras on
bidisc studied by Samuelsson and Wold. We also prove analogues of
Samuelsson-Wold result for the domains in $\mathbb{C}^2$ that are the images of
the bidisc under certain proper polynomial map on $\mathbb{C}^2$. We also give
a description of polynomial convex hull of graph of anti-holomorphic polynomial
over the distinguished boundary of such domains. We mention the case for the
symmetrized bidisc as an example. | Sushil Gorai, Golam Mostafa Mondal | 2023-09-10T11:41:32Z | http://arxiv.org/abs/2309.05006v1 | # Uniform algebras and distinguished varieties
###### Abstract.
In this article, we point out the connections between the distinguished varieties introduced by Agler and McCarthy with certain uniform algebras on bidisc studied by Samuelsson and Wold. We also prove analogues of Samuelsson-Wold result for the domains in \(\mathbb{C}^{2}\) that are the images of the bidisc under certain proper polynomial map on \(\mathbb{C}^{2}\). We also give a description of polynomial convex hull of graph of anti-holomorphic polynomial over the distinguished boundary of such domains. We mention the case for the symmetrized bidisc as an example.
Key words and phrases: Polynomial convexity; Uniform approximation; Wermer maximality theorem; Symmetrized bidisc; Distinguished variety.

2020 Mathematics Subject Classification: Primary 32E30, 32E20; Secondary 47A25.
## 1. Introduction
This article connects the theory of distinguished varieties, a well-explored topic in operator theory, with the notion of uniform algebras generated by holomorphic polynomials and certain pluriharmonic functions; the latter is also a very well-studied object in several complex variables. In particular, we observe that the obstruction to uniform approximation of all continuous functions on the distinguished boundary of certain domains in \(\mathbb{C}^{2}\) by elements of the algebra generated by the holomorphic polynomials in \(z_{1}\) and \(z_{2}\) and some pluriharmonic functions is the presence of a certain distinguished variety in the domain on which the pluriharmonic functions become holomorphic. Before making these precise, let us briefly describe the theory of distinguished varieties and the theory of uniform algebras one by one.
In a seminal paper [4], Agler and McCarthy introduced the notion of distinguished variety in the bidisc \(\mathbb{D}^{2}\) as follows: A non-empty set \(V\) in \(\mathbb{C}^{2}\) is said to be a _distinguished variety_ if there exists a polynomial \(p\) in \(\mathbb{C}[z,w]\) such that
\[V=\{(z,w)\in\mathbb{D}^{2}:p(z,w)=0\}\]
and such that
\[\overline{V}\cap\partial\mathbb{D}^{2}=\overline{V}\cap\mathbb{T}^{2}. \tag{1.1}\]
Here, \(\partial\mathbb{D}^{2}\) represents the boundary of \(\mathbb{D}^{2}\), and \(\mathbb{T}^{2}\) is the distinguished boundary of \(\mathbb{D}^{2}\). A distinguished variety is an algebraic variety that exits the bidisc through the distinguished boundary. The set \(\overline{V}\) is the closure of \(V\) within \(\overline{\mathbb{D}}^{2}\). We will use \(\partial V\) to denote the set described by (1.1). From a topological standpoint, \(\partial V\) represents the boundary of \(V\) within the zero set of \(p\), rather than its boundary in all of \(\mathbb{C}^{2}\).
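A simple example (ours, for illustration) is the diagonal of the bidisc, obtained from \(p(z,w)=z-w\):

\[V=\{(z,w)\in\mathbb{D}^{2}:z=w\},\qquad\overline{V}\cap\partial\mathbb{D}^{2}=\{(\lambda,\lambda):\lambda\in\mathbb{T}^{1}\}\subset\mathbb{T}^{2},\]

so \(V\) satisfies (1.1) and exits the bidisc through the distinguished boundary.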
Let \(K\) be a compact subset of \(\mathbb{C}^{n}\). The _polynomially convex hull_ of \(K\) is the set \(\widehat{K}:=\{z\in\mathbb{C}^{n}:|p(z)|\leq\sup_{K}|p|\ \text{for every holomorphic polynomial}\ p\ \text{on}\ \mathbb{C}^{n}\}\). We say that \(K\) is _polynomially convex_ if \(\widehat{K}=K\). For instance, \(\overline{\mathbb{D}}\) is polynomially convex, while the circle \(\mathbb{T}^{1}\) is not: by the maximum principle, \(\widehat{\mathbb{T}^{1}}=\overline{\mathbb{D}}\). Let \(\mathcal{C}(\mathbb{T}^{1})\) denote the set of all
continuous complex-valued functions on \(\mathbb{T}^{1}.\) Let \(\mathcal{A}\) denote the set of all \(f\in\mathcal{C}(\mathbb{T}^{1})\) which are boundary values of functions holomorphic on \(\mathbb{D}\) and continuous on \(\overline{\mathbb{D}}.\) In [23], the following question was asked:
_if \(g\in\mathcal{C}(\mathbb{T}^{1})\setminus\mathcal{A},\) does the closed algebra generated by \(g\) and \(\mathcal{A}\) equal \(\mathcal{C}(\mathbb{T}^{1})?\)_
In [23], it is shown that if \(g\) is real-valued or if \(g\) satisfies a Lipschitz condition, the algebra generated by \(g\) and \(\mathcal{A}\) equals \(\mathcal{C}(\mathbb{T}^{1}).\) Wermer [33] settled this question by proving the following:
**Result 1.1** (Wermer).: _If \(\mathcal{B}\) is any closed subalgebra of \(\mathcal{C}(\mathbb{T}^{1})\) with \(\mathcal{A}\subset\mathcal{B}\subset\mathcal{C}(\mathbb{T}^{1}).\) Then either \(\mathcal{A}=\mathcal{B}\) or \(\mathcal{B}=\mathcal{C}(\mathbb{T}^{1}).\)_
A uniform algebra \(\mathcal{U}\) defined on a compact subset \(K\) is said to be a _maximal subalgebra_ of \(\mathcal{C}(K)\) if, for any other subalgebra \(\mathcal{B}\) of \(\mathcal{C}(K)\) such that \(\mathcal{U}\subset\mathcal{B}\subset\mathcal{C}(K),\) it holds that either \(\mathcal{U}=\mathcal{B}\) or \(\mathcal{B}=\mathcal{C}(K)\). Result 1.1 is known as the _Wermer Maximality Theorem._ A related result, also due to Wermer, is the following [34]: Let \(g\in C^{1}(\overline{\mathbb{D}}).\) Assume that the graph \(\mathsf{Gr}_{\overline{\mathbb{D}}}(g)\subset\mathbb{C}^{2}\) of \(g\) is polynomially convex. Let \(E:=\{z\in\overline{\mathbb{D}}:\frac{\partial g}{\partial\bar{z}}(z)=0\}.\) Then
\[[z,g;\overline{\mathbb{D}}]=\{f\in C(\overline{\mathbb{D}}):f|_{E}\in \mathcal{O}(E)\}.\]
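To see what this description says in the simplest case, take \(g(z)=\bar{z}\) (our illustrative choice): the graph \(\{(z,\bar{z}):z\in\overline{\mathbb{D}}\}\) lies in the totally real plane \(\{w=\bar{z}\}\) and is therefore polynomially convex, and

\[E=\Big\{z\in\overline{\mathbb{D}}:\frac{\partial\bar{z}}{\partial\bar{z}}(z)=0\Big\}=\emptyset,\qquad\text{so}\qquad[z,\bar{z};\overline{\mathbb{D}}]=\{f\in\mathcal{C}(\overline{\mathbb{D}}):f|_{\emptyset}\in\mathcal{O}(\emptyset)\}=\mathcal{C}(\overline{\mathbb{D}}),\]

in agreement with the Stone-Weierstrass theorem.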
It is natural to ask for versions of these results in higher dimensions. In higher dimensions the question has no answer as clean as the Wermer maximality theorem; the natural aim is a generalization of the second result of Wermer, even for the algebra generated by polynomials and a pluriharmonic function. For a domain \(\Omega\subset\mathbb{C}^{n},\) let \(PH(\Omega)\) denote the class of all pluriharmonic functions on \(\Omega.\) The works of Cirka [32], Izzo [14, 15], Samuelsson and Wold [28], and Izzo, Samuelsson, and Wold [16] focused on the study of uniform algebras generated by holomorphic and pluriharmonic functions in higher dimensions. Samuelsson and Wold [28] proved the following results in the case of the bidisc \(\mathbb{D}^{2}.\)
**Result 1.2** (Samuelsson-Wold).: _Let \(h_{j}\in PH(\mathbb{D}^{2})\cap\mathcal{C}^{1}(\overline{\mathbb{D}}^{2})\) for \(j=1,\cdots,N.\) Then either there exists a holomorphic disc in \(\overline{\mathbb{D}}^{2}\) where all \(h_{j}\)'s are holomorphic, or \([z_{1},z_{2},h_{1},\cdots,h_{N};\overline{\mathbb{D}}^{2}]=\mathcal{C}( \overline{\mathbb{D}}^{2}).\)_
The following result can be thought of an analogue of the Wermer maximality theorem in case of the bidisc.
**Result 1.3** (Samuelsson-Wold).: _Let \(f_{j}\in\mathcal{C}(\mathbb{T}^{2})\) for \(j=1,\cdots,N\) with \(N\geq 1\), and assume that each \(f_{j}\) extends to a pluriharmonic function on \(\mathbb{D}^{2}\). Then either \([z_{1},z_{2},f_{1},\cdots,f_{N};\mathbb{T}^{2}]=\mathcal{C}(\mathbb{T}^{2})\), or there exists a non-trivial algebraic variety \(Z\subset\mathbb{C}^{2}\) with \(\overline{V}\setminus V\subset\mathbb{T}^{2},\) and the pluriharmonic extensions of the \(f_{j}\)'s are holomorphic on \(Z,\) where \(V=Z\cap(\overline{\mathbb{D}^{2}}\setminus\mathbb{T}^{2}).\)_
_Remark 1.4_.: In Result 1.3, if the functions \(f_{1},\ldots,f_{N}\) are not all holomorphic on some analytic disc lying in \(\partial\mathbb{D}^{2}\) and \([z_{1},z_{2},f_{1},\cdots,f_{N};\mathbb{T}^{2}]\neq\mathcal{C}(\mathbb{T}^{2}),\) then the algebraic variety that exists is a distinguished variety. As mentioned earlier, by a result of Agler and McCarthy [4], every distinguished variety in the bidisc is of the form \(\{(z,w)\in\mathbb{D}^{2}:\det(\Psi(z)-wI)=0\}\) for some matrix-valued holomorphic function \(\Psi\) on \(\mathbb{D}\). Therefore, the variety that exists in Result 1.3 is also of the above-mentioned determinantal form. We do not know what connection there is between the matrix-valued function \(\Psi\) in [4] and the pluriharmonic functions in Result 1.3.
_Remark 1.5_.: It might occur that the variety in Result 1.3 appears in the boundary of the bidisc. In this case, the variety is not a distinguished variety, but such variety can also be explained from the operator theoretic point of view from a result due to Das and Sarkar [11, Theorem 4.3]. From the proof of Result 1.3 it is clear that the form of such variety is \(\{\lambda\}\times\mathbb{D}\) or \(\mathbb{D}\times\{\lambda\}\) for some \(\lambda\in\partial\mathbb{D}\), which matches with the description in [11, Theorem 4.3].
Consider the domain \(\Omega=\phi(\mathbb{D}^{2})\) in \(\mathbb{C}^{2}\), where \(\phi=(p_{1},p_{2}):\mathbb{C}^{2}\to\mathbb{C}^{2}\) is a proper polynomial map (see Section 3), and note that the distinguished boundary of \(\Omega\) for the algebra \(\mathcal{A}(\Omega)\) is \(\Gamma_{\Omega}=\phi(\mathbb{T}^{2}).\) We prove the following generalizations of Result 1.2 and Result 1.3 for this domain.
**Theorem 1.6**.: _Let \(h_{j}\in PH(\Omega)\cap\mathcal{C}^{1}(\overline{\Omega})\) for \(j=1,\cdots,N,\) and \(\phi^{-1}(\overline{\Omega})\subset\overline{\mathbb{D}}^{2}\). Then, either there exists a holomorphic disc in \(\overline{\Omega}\) where all \(h_{j}\)'s are holomorphic, or \([z_{1},z_{2},h_{1},\cdots,h_{N};\overline{\Omega}]=\mathcal{C}(\overline{ \Omega}).\)_
**Theorem 1.7**.: _Let \(f_{j}\in\mathcal{C}(\Gamma_{\Omega})\) for \(j=1,\cdots,N,\ N\geq 1,\) and assume that each \(f_{j}\) extends to a pluriharmonic function on \(\Omega\) and that \(\phi^{-1}(\Gamma_{\Omega})\subset\mathbb{T}^{2}\). If \(f_{j}\) is not holomorphic on any analytic disc present in the boundary \(\partial\Omega\) for at least one \(j\), then either_
\[[z_{1},z_{2},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}),\]
_or there exists a distinguished variety \(V\) in \(\Omega\) such that the pluriharmonic extensions of the \(f_{j}\)'s are holomorphic on \(V.\)_
As a corollary we can extend Result 1.2 and Result 1.3 to the symmetrized bidisc. Recall that the symmetrized bidisc \(\mathbb{G}_{2}\) is the image of the bidisc under the _symmetrization map_\(\Pi:(z_{1},z_{2})\to(z_{1}+z_{2},z_{1}z_{2})\) i.e.,
\[\mathbb{G}_{2}=\{(z_{1}+z_{2},z_{1}z_{2}):|z_{1}|<1,|z_{2}|<1\}.\]
Since \(\Pi^{-1}(\Pi(\overline{\mathbb{D}}^{2}))=\Pi^{-1}(\overline{\mathbb{G}}_{2})=\overline{\mathbb{D}}^{2}\), by using Result 2.1, we get that \(\overline{\mathbb{G}}_{2}\) is polynomially convex. If \(f:\mathbb{G}_{2}\to\mathbb{C}\) is a holomorphic function on \(\mathbb{G}_{2}\), then \(f\circ\Pi:\mathbb{D}^{2}\to\mathbb{C}\) is a symmetric function on \(\mathbb{D}^{2}.\) Therefore, if \(\mathcal{A}(\overline{\mathbb{G}}_{2})\) is the algebra of functions that are holomorphic on \(\mathbb{G}_{2}\) and continuous on \(\overline{\mathbb{G}}_{2}\), then the distinguished boundary \(\Gamma_{\mathbb{G}_{2}}\) of \(\mathbb{G}_{2}\) is the image \(\Pi(\mathbb{T}^{2})\) of the torus \(\mathbb{T}^{2}\) (the distinguished boundary of \(\mathbb{D}^{2}\)). Since \(\mathbb{G}_{2}\) is neither convex (not even biholomorphic to any convex domain [10]) nor smooth (not even a Lipschitz domain [8]), many results in the theory of several complex variables do not apply to \(\mathbb{G}_{2}.\) Several authors have studied this domain over the last three decades, and it has been shown to be a domain with a highly rich complex geometry and function theory: see, among many other articles, [31, 20, 25, 17, 12, 10, 3, 2, 1, 6, 29].
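A quick way to see the identity \(\Pi^{-1}(\Pi(\overline{\mathbb{D}}^{2}))=\overline{\mathbb{D}}^{2}\) used above is to note that the fibres of \(\Pi\) are unordered pairs of roots of a monic quadratic:

\[\Pi^{-1}(s,p)=\{(\lambda_{1},\lambda_{2}),(\lambda_{2},\lambda_{1})\},\qquad\text{where}\ \lambda^{2}-s\lambda+p=(\lambda-\lambda_{1})(\lambda-\lambda_{2}).\]

Thus \(\Pi^{-1}(\Pi(z_{1},z_{2}))=\{(z_{1},z_{2}),(z_{2},z_{1})\}\), which lies in \(\overline{\mathbb{D}}^{2}\) (respectively \(\mathbb{T}^{2}\)) whenever \((z_{1},z_{2})\) does; in particular, \(\Pi\) is a proper map of topological degree \(2\).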
There are significant similarities and contrasts between its geometry and function theory and those of the bidisc. Here we observe that Result 1.2 and Result 1.3 continue to hold if the bidisc is replaced by the symmetrized bidisc. More precisely:
**Corollary 1.8**.: _Let \(h_{j}\in PH(\mathbb{G}_{2})\cap\mathcal{C}^{1}(\overline{\mathbb{G}}_{2})\) for \(j=1,\cdots,N.\) Then either there exists a holomorphic disc in \(\overline{\mathbb{G}}_{2}\) where all \(h_{j}\)'s are holomorphic, or_
\[[z_{1},z_{2},h_{1},\cdots,h_{N};\overline{\mathbb{G}}_{2}]=\mathcal{C}( \overline{\mathbb{G}}_{2}).\]
**Corollary 1.9**.: _Let \(f_{j}\in\mathcal{C}(\Gamma_{\mathbb{G}_{2}})\) for \(j=1,\cdots,N,N\geq 1\) and assume that each \(f_{j}\) extends to a pluriharmonic function on \(\mathbb{G}_{2}.\) If \(f_{j}\) is not holomorphic on any analytic disc present in the boundary \(\partial\mathbb{G}_{2}\) for at least one \(j\), then either_
\[[z_{1},z_{2},f_{1},\cdots,f_{N};\Gamma_{\mathbb{G}_{2}}]=\mathcal{C}(\Gamma_ {\mathbb{G}_{2}}),\]
_or there exists a distinguished variety \(V\) in \(\mathbb{G}_{2}\) such that the pluriharmonic extensions of the \(f_{j}\)'s are holomorphic on \(V.\)_
_Remark 1.10_.: In view of a result by Pal and Shalit [24], we see that the variety that appears in Corollary 1.9 has the form of the zero set of a certain determinant. However, we do not know whether a similar determinantal form can also be given for the distinguished varieties that appear in Theorem 1.7.
## 2. Technical Results
In this section, we provide some known results and some preliminary lemmas that will be utilized to prove our results.
**Result 2.1** ([30]).: _If \(F:\mathbb{C}^{n}\to\mathbb{C}^{n}\) is a proper holomorphic map, and if \(K\subset\mathbb{C}^{n}\) is a compact set, then the set \(K\) is polynomially convex if and only if the set \(F^{-1}(K)\) is polynomially convex, and \(\mathcal{P}(K)=\mathcal{C}(K)\) if and only if \(\mathcal{P}(F^{-1}(K))=\mathcal{C}(F^{-1}(K)).\)_
**Result 2.2** (Remmert Proper Mapping theorem [26, 27]).: _Let \(M,N\) be complex spaces, and \(f:M\to N\) is a proper holomorphic map. If \(Z\) is an analytic subvariety in \(M\) then \(f(Z)\) is also an analytic subvariety in \(N.\) Moreover, if \(Z\) is irreducible then \(f(Z)\) is also irreducible subvariety of \(N.\)_
The following result is from the book [9, Page 29].
**Result 2.3**.: _(Chirka) Let \(\Omega_{1}\subset\mathbb{C}^{p},\Omega_{2}\subset\mathbb{C}^{m},\) are open subsets such that \(\Omega=\Omega_{1}\times\Omega_{2},\)\(p+m=n,\) and \(\text{proj}_{1}:(z,w)\to z.\) Let \(V\) be an analytic subset in \(\Omega\) such that \(\text{proj}_{1}:V\to\Omega_{1}\) is a proper map. Then \(\text{proj}_{1}(V)\) is an analytic subset in \(\Omega_{1}.\) Moreover, if \(\Omega=\mathbb{C}^{n},\)\(\Omega_{1}=\mathbb{C}^{p},\) and \(V\) is an algebraic subset in \(\mathbb{C}^{n},\) then \(\text{proj}_{1}(V)\) is also an algebraic subset in \(\mathbb{C}^{p}.\)_
The following lemma is well-known to experts. Since we have not found an explicit mention of this lemma in the literature, we include it here for completeness.
**Lemma 2.4**.: _Let \(\Psi:\mathbb{C}^{n}\to\mathbb{C}^{n}\) be a proper polynomial map. Let \(Z\) be an algebraic variety in \(\mathbb{C}^{n},\) then \(\Psi(Z)\) is also an algebraic variety in \(\mathbb{C}^{n}.\)_
Proof.: Consider the algebraic variety \(V=\{(\Psi(z),z):z\in Z\}\) in \(\mathbb{C}^{n}\times\mathbb{C}^{n}\) and \(\Omega_{1}=\Omega_{2}=\mathbb{C}^{n}.\) We now show that \(\text{proj}_{1}:V\to\Omega_{1}\) is a proper map. Let \(K\subset\mathbb{C}^{n}\) be a compact subset of \(\mathbb{C}^{n}.\) Then \(\text{proj}_{1}^{-1}\{K\}=(K\times\mathbb{C}^{n})\cap V=\{(\xi,\eta)\in K\times\mathbb{C}^{n}:(\xi,\eta)\in V\}=\{(\Psi(\eta),\eta)\in K\times\mathbb{C}^{n}:\eta\in Z\},\) which is compact since \(\Psi\) is a proper map. Therefore, \(\text{proj}_{1}:V\to\Omega_{1}\) is a proper map. Hence, by Result 2.3, we conclude that \(\text{proj}_{1}(V)=\Psi(Z)\) is an algebraic variety.
_Remark 2.5_.: The case \(\Psi=\Pi\) is available in [24, Lemma 3.1].
Let \(\Psi:\mathbb{C}^{n}\to\mathbb{C}^{n}\) be a proper holomorphic polynomial map. Let \(\Omega:=\Psi(\mathbb{D}^{n})\) be a domain such that \(\Psi^{-1}(\Psi(\mathbb{D}^{n}))\subset\mathbb{D}^{n},\)\(\Psi^{-1}(\Psi(\partial\mathbb{D}^{n}))\subset\partial\mathbb{D}^{n},\) and \(\Psi^{-1}(\Psi(\mathbb{T}^{n}))\subset\mathbb{T}^{n}.\) The following lemma illustrates that every distinguished variety in \(\Omega\) can be derived from a distinguished variety in \(\mathbb{D}^{n}.\)
**Lemma 2.6**.: _Let \(Z\subset\Omega\). Then \(Z\) is a distinguished variety in \(\Omega\) if and only if there is a distinguished variety \(V\) in \(\mathbb{D}^{n}\) such that \(\Psi(V)=Z.\)_
Proof.: Given that \(\Psi\) is a proper map, it implies that \(\Psi\) is onto, and therefore, \(\Psi(\Psi^{-1}(Z))=Z\). Additionally, it can be easily demonstrated that \(\Psi^{-1}(Z)\) is an algebraic variety. Let us define \(V:=\Psi^{-1}(Z)\). Now, we need to prove the following: \(V\cap\partial\mathbb{D}^{n}\subset V\cap\mathbb{T}^{n}.\)
Consider an element \(\alpha\in V\cap\partial\mathbb{D}^{n}.\) This implies that \(\alpha\in\Psi^{-1}(Z)\cap\partial\mathbb{D}^{n}.\) Hence, we have \(\Psi(\alpha)\in Z\cap\Psi(\partial\mathbb{D}^{n})\). Since \(Z\) is a distinguished variety, we can conclude that \(\Psi(\alpha)\in Z\cap\Psi(\mathbb{T}^{n})\). Consequently, we can deduce that \(\alpha\) lies in \(\Psi^{-1}(Z\cap\Psi(\mathbb{T}^{n}))=\Psi^{-1}(Z)\cap\Psi^{-1}(\Psi(\mathbb{T}^{n}))\). Combining this with our assumption \(\Psi^{-1}(\Psi(\mathbb{T}^{n}))\subset\mathbb{T}^{n}\), we get that \(V\cap\partial\mathbb{D}^{n}\subset V\cap\mathbb{T}^{n}.\)
Conversely, let us assume that \(V\) is a subset of \(\mathbb{D}^{n}\) and is a distinguished variety. By using Lemma 2.4, we can conclude that \(\Psi(V)\) is an algebraic variety in \(\Omega\). Now, we claim that \(Z=\Psi(V)\) is a distinguished variety in \(\Omega\). Suppose \(\alpha\in Z\cap\Psi(\partial\mathbb{D}^{n})=\Psi(V)\cap\Psi(\partial\mathbb{D}^{n}).\) We need to show that \(\alpha\) also lies in \(\Psi(\mathbb{T}^{n})\). Since \(\alpha\in Z\cap\Psi(\partial\mathbb{D}^{n}),\) there exist \(\eta_{1}\in V\) and \(\eta_{2}\in\partial\mathbb{D}^{n}\) such that \(\Psi(\eta_{1})=\Psi(\eta_{2})=\alpha\). Consequently, \(\eta_{2}\) belongs to \(\Psi^{-1}(\Psi(\partial\mathbb{D}^{n})),\) which is a subset of \(\partial\mathbb{D}^{n}\). Thus, we have \(\eta_{2}\in V\cap\partial\mathbb{D}^{n}.\) Since \(V\) is a distinguished variety, \(V\cap\partial\mathbb{D}^{n}\subset V\cap\mathbb{T}^{n}\), so \(\alpha=\Psi(\eta_{2})\in\Psi(V\cap\mathbb{T}^{n})\subset\Psi(\mathbb{T}^{n}).\)
_Remark 2.7_.: The case \(\Omega=\mathbb{G}_{2}\) is available in [24, Lemma 3.1].
**Lemma 2.8**.: _Let \(g:G\subset\mathbb{C}^{N}\to\mathbb{C}^{N}\) be a proper holomorphic mapping and \(q:g(G)\to\mathbb{C}\) be a continuous function. If \(q\circ g:G\to\mathbb{C}\) is holomorphic, then \(q\) is holomorphic._
Proof.: Let us define \(\Omega:=g(G).\) Since \(g\) is proper holomorphic, \(\Omega\) is open. First, we assume \(z\in G\) and \(\det dg(z)\neq 0,\) where \(\det dg(z)\) is the determinant of the complex Jacobian matrix of \(g\) at \(z.\) Then there exists a neighborhood \(V\) of \(z\) and a neighborhood \(W\) of \(g(z)\) such that \(g^{-1}:W\to V\) is holomorphic. Therefore, \(q\circ g\circ g^{-1}=q\) is holomorphic at \(g(z).\) Next, we define \(X:=\{z\in G:\det dg(z)=0\}.\) Hence, \(q\) is holomorphic on \(\Omega\setminus g(X).\) Clearly, \(X\) is an analytic variety with \(\dim_{\mathbb{C}}X\leq(N-1).\) Since \(g\) is proper holomorphic mapping, by Result 2.2, \(g(X)\) is also an analytic variety in \(\Omega.\) Since \(q\) is continuous on \(\Omega\) and holomorphic on \(\Omega\setminus g(X),\) by Riemann's removable singularity theorem, we can say that \(q\) is holomorphic on \(\Omega.\)
Let \(\Psi:\mathbb{C}^{n}\to\mathbb{C}^{n}\) be a proper holomorphic map. Let \(\Omega:=\Psi(\mathbb{D}^{n})\) be a domain such that \(\Psi^{-1}(\Psi(\mathbb{D}^{n}))\subset\mathbb{D}^{n},\)\(\Psi^{-1}(\Psi(\partial\mathbb{D}^{n}))\subset\partial\mathbb{D}^{n},\) and \(\Psi^{-1}(\Psi(\mathbb{T}^{n}))\subset\mathbb{T}^{n}.\) We denote the distinguished boundary of \(\Omega\) for the algebra \(\mathcal{A}(\Omega)\) by \(\Gamma_{\Omega}.\) Clearly, \(\Gamma_{\Omega}\) is equal to \(\Psi(\mathbb{T}^{n}).\)
The following theorem might be of independent interest. We will use this in our proofs.
**Theorem 2.9**.: _Let \(N\geq 1\) and \(f_{1},\cdots,f_{N}\in\mathcal{C}(\Gamma_{\Omega}).\) Then \([z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{ \Omega})\) if and only if \(\mathsf{Gr}_{f}(\Gamma_{\Omega})\) is polynomially convex, where \(f=(f_{1},\cdots,f_{N}).\)_
Proof.: We denote \(X:=\mathsf{Gr}_{f}(\Gamma_{\Omega}).\) Since \([z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})\) implies \(\mathcal{P}(X)=\mathcal{C}(X),\) we get \(\widehat{X}=X.\)
Conversely, suppose that \(\widehat{X}=X.\) We consider the proper holomorphic map \(\Phi:\mathbb{C}_{z}^{n}\times\mathbb{C}_{w}^{N}\to\mathbb{C}_{z}^{n}\times \mathbb{C}_{w}^{N},\) define by
\[\Phi(z,w)=(\Psi(z),w).\]
Clearly,
\[\Phi^{-1}(X)=\mathsf{Gr}_{f\circ\Psi}(\mathbb{T}^{n})=:Y.\]
Since \(X\) is polynomially convex, \(Y\) is also polynomially convex (by Result 2.1). Let \(U\) be a neighborhood of \(\mathbb{T}^{n}\) such that \(z_{1}\neq 0\) on \(U.\) Define \(g(z_{1},z_{2},\cdots,z_{n})=\frac{1}{z_{1}}.\) Then \(g\) is holomorphic on \(U,\) and hence on \(U\times\mathbb{C}^{N}.\) Since \(Y\subset U\times\mathbb{C}^{N},\) by the _Oka-Weil_ approximation theorem, there exists a sequence of polynomials \(P_{j}\) in \(\mathbb{C}_{z}^{n}\times\mathbb{C}_{w}^{N}\) such that \(P_{j}(z,w)\to g\) uniformly on \(Y.\) This implies \(P_{j}(z,(f\circ\Psi)(z))\to g=\frac{1}{z_{1}}=\overline{z}_{1}\) uniformly on \(\mathbb{T}^{n}.\) Hence \(\overline{z}_{1}\in[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N}\circ\Psi;\mathbb{T}^{n}].\) By a similar method we can show that \(\overline{z}_{j}\in[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N}\circ\Psi;\mathbb{T}^{n}]\) for all \(j\in\{1,\cdots,n\}.\) Hence, \([z_{1},\cdots,z_{n},\overline{z}_{1},\cdots,\overline{z}_{n};\mathbb{T}^{n}]\subset[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N}\circ\Psi;\mathbb{T}^{n}].\) Therefore,
\[[z_{1},\cdots,z_{n},\overline{z}_{1},\cdots,\overline{z}_{n};\mathbb{T}^{n}]= \mathcal{C}(\mathbb{T}^{n})=[z_{1},\cdots,z_{n},f_{1}\circ\Psi,\cdots,f_{N} \circ\Psi;\mathbb{T}^{n}]. \tag{2.1}\]
Note that \(\mathcal{P}(X)=\mathcal{C}(X)\) if and only if \(\mathcal{P}(\Phi^{-1}(X))=\mathcal{C}(\Phi^{-1}(X))\) (see Result 2.1) i.e., \(\mathcal{P}(Y)=\mathcal{C}(Y).\) Therefore, using (2.1), we get that
\[[z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{ \Omega}).\]
**Corollary 2.10**.: _Let \(N\geq 1,\) and \(f_{1},\cdots,f_{N}\in\mathcal{C}(\Gamma_{\mathbb{G}_{n}}).\) Then \([z_{1},\cdots,z_{n},f_{1},\cdots,f_{N};\Gamma_{\mathbb{G}_{n}}]=\mathcal{C}( \Gamma_{\mathbb{G}_{n}})\) if and only if \(\mathsf{Gr}_{f}(\Gamma_{\mathbb{G}_{n}})\) is polynomially convex, where \(f=(f_{1},\cdots,f_{N}).\)_
In [18, 19], Jimbo explored the structure of polynomial hulls concerning graphs of antiholomorphic polynomials on the torus. For the sake of completeness, we include Jimbo's result from [19] here since we will use it multiple times in this paper. Let \(\mathbb{T}^{2}\) be the torus in \(\mathbb{C}^{2}\) and \(P\) be an arbitrary polynomial in \(\mathbb{C}^{2}.\) In [19], Jimbo gave a description for \(\widehat{\mathsf{Gr}_{\overline{P}}(\mathbb{T}^{2})}.\) Let the polynomial \(P(z_{1},z_{2})\) be of degree \(m\) in \(z_{1}\) and of degree \(n\) in \(z_{2}.\) We write
\[P(z_{1},z_{2})=\sum_{\begin{subarray}{c}0\leq i\leq m\\ 0\leq j\leq n\end{subarray}}a_{ij}z_{1}^{i}z_{2}^{j}.\]
Therefore, on \(\mathbb{T}^{2}\), we have
\[\overline{P(z_{1},z_{2})} =\frac{1}{z_{1}^{m}z_{2}^{n}}\sum_{\begin{subarray}{c}0\leq i\leq m \\ 0\leq j\leq n\end{subarray}}\overline{a}_{ij}z_{1}^{m-i}{z_{2}}^{n-j}\] \[=\frac{K(z_{1},z_{2})}{z_{1}^{m}z_{2}^{n}}=h(z_{1},z_{2}),\text{ where }K(z_{1},z_{2})=\sum_{\begin{subarray}{c}0\leq i\leq m\\ 0\leq j\leq n\end{subarray}}\overline{a}_{ij}z_{1}^{m-i}z_{2}^{n-j}.\]
Hence on \(\mathbb{T}^{2}\), we get that
\[\overline{P(z_{1},z_{2})}=h(z_{1},z_{2}),\text{ where }h(z_{1},z_{2})=\frac{K(z _{1},z_{2})}{z_{1}^{m}z_{2}^{n}}.\]
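For instance, for \(P(z_{1},z_{2})=z_{1}-2z_{2}\) (so \(m=n=1\)), on \(\mathbb{T}^{2}\) we get

\[\overline{P(z_{1},z_{2})}=\frac{1}{z_{1}}-\frac{2}{z_{2}}=\frac{z_{2}-2z_{1}}{z_{1}z_{2}},\qquad\text{so that}\ K(z_{1},z_{2})=z_{2}-2z_{1}.\]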
We define \(L:=\{z_{1}=0,|z_{2}|\leq 1\}\cup\{z_{2}=0,|z_{1}|\leq 1\}\) and
\[X=\left\{(z_{1},z_{2})\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{ 2}):\overline{P(z_{1},z_{2})}=h(z_{1},z_{2})\right\}. \tag{2.2}\]
We set
\[\triangle(z):=\left|\begin{matrix}\frac{\partial P(z)}{\partial z_{1}}&\frac{ \partial P(z)}{\partial z_{2}}\\ \frac{\partial h(z)}{\partial z_{1}}&\frac{\partial h(z)}{\partial z_{2}}\\ \end{matrix}\right|.\]
We can write
\[\triangle(z)=\frac{1}{z_{1}^{m+1}z_{2}^{n+1}}\prod_{j=1}^{l}q_{j}(z),\]
where each \(q_{j}\) is an irreducible polynomial in \(\mathbb{C}^{2}.\) We define the corresponding irreducible algebraic variety \(Z_{j}:=Z(q_{j})=\{z\in\mathbb{C}^{2}:q_{j}(z)=0\}.\) We assume \(\triangle(z)\not\equiv 0\) on \(X.\) Therefore, each \(q_{j}\) is a non-zero holomorphic polynomial in \(\mathbb{C}^{2}.\)
We denote \(Q_{j}=Z_{j}\cap\mathbb{T}^{2}.\)
**Result 2.11** (Jimbo).: _We let \(J=\{j\in\{1,\cdots,l\}:\emptyset\neq Q_{j}\neq\widehat{Q_{j}},\widehat{Q_{j}} \setminus L\subset X\}.\)_
1. _If_ \(J=\emptyset,\) _then_ \(\widehat{\mathsf{Gr}_{\overline{P}}(\mathbb{T}^{2})}=\mathsf{Gr}_{\overline{P }}(\mathbb{T}^{2}),\) _and_ \([z_{1},z_{2},\overline{P};\mathbb{T}^{2}]=\mathcal{C}(\mathbb{T}^{2});\)__
2. _If_ \(J\neq\emptyset,\) _then_ \[\widehat{\mathsf{Gr}_{\overline{P}}(\mathbb{T}^{2})}=\mathsf{Gr}_{\overline{ P}}(\mathbb{T}^{2})\cup\bigg{(}\cup_{j\in J}\mathsf{Gr}_{\overline{P}}( \widehat{Q_{j}})\bigg{)}.\]
## 3. Proof of Theorems 1.6 and 1.7
Note that the map \(\phi:\mathbb{C}^{2}\to\mathbb{C}^{2}\) is defined as \(\phi(z)=(p_{1}(z),p_{2}(z)).\) We consider the proper holomorphic map \(\widetilde{\Psi}:\mathbb{C}^{2+N}\to\mathbb{C}^{2+N},\) defined as follows:
\[\widetilde{\Psi}(z_{1},z_{2},w_{1},\cdots,w_{N})=(\phi(z_{1},z_{2}),w_{1}, \cdots,w_{N})\,, \tag{3.1}\]
where \((z_{1},z_{2})\in\mathbb{C}^{2},\) and \((w_{1},\cdots,w_{N})\in\mathbb{C}^{N}.\) Recall that \(\Omega=\phi(\mathbb{D}^{2})\) and \(\Gamma_{\Omega}=\phi(\mathbb{T}^{2}).\)
Proof of Theorem 1.6.: Write \(h:=(h_{1},\cdots,h_{N}):\overline{\Omega}\to\mathbb{C}^{N}.\) We claim that \(\widetilde{\Psi}^{-1}(\mathsf{Gr}_{h}(\overline{\Omega}))=\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\): let
\[(\alpha,\beta)\in\widetilde{\Psi}^{-1}(\mathsf{Gr}_{h}(\overline{ \Omega})) \implies\widetilde{\Psi}(\alpha,\beta)\in\mathsf{Gr}_{h}(\overline{ \Omega})\] \[\implies(\phi(\alpha),\beta)\in\mathsf{Gr}_{h}(\overline{\Omega})\] \[\implies\beta=h(\phi(\alpha))\text{ and }\phi(\alpha)\in \overline{\Omega}.\]
Now
\[\phi(\alpha)\in\overline{\Omega}\implies\alpha\in\phi^{-1}(\phi(\alpha))\subset \phi^{-1}(\overline{\Omega})\subset\overline{\mathbb{D}}^{2}.\]
Therefore \(\widetilde{\Psi}^{-1}(\mathsf{Gr}_{h}(\overline{\Omega}))\subset\mathsf{Gr}_{ h\circ\phi}(\overline{\mathbb{D}}^{2})\).
Conversely, let
\[(p,q)\in\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2}) \implies q=(h\circ\phi)(p)\text{ and }p\in\overline{\mathbb{D}}^{2}\] \[\implies q=h(\phi(p))\text{ and }\phi(p)\in\overline{\Omega}\] \[\implies(\phi(p),q)\in\mathsf{Gr}_{h}(\overline{\Omega})\] \[\implies\widetilde{\Psi}(p,q)\in\mathsf{Gr}_{h}(\overline{\Omega})\] \[\implies(p,q)\in\widetilde{\Psi}^{-1}\left(\mathsf{Gr}_{h}(\overline{\Omega})\right).\]
Hence \(\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\subset\widetilde{\Psi}^{-1}(\mathsf{Gr}_{h}(\overline{\Omega})).\) Therefore, \(\widetilde{\Psi}^{-1}(\mathsf{Gr}_{h}(\overline{\Omega}))=\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2}).\) Since \(\widetilde{\Psi}\) is a proper holomorphic mapping and \(\widetilde{\Psi}^{-1}(\mathsf{Gr}_{h}(\overline{\Omega}))=\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\), by Result 2.1, we can say that \(\mathcal{P}\left(\mathsf{Gr}_{h}(\overline{\Omega})\right)=\mathcal{C}\left(\mathsf{Gr}_{h}(\overline{\Omega})\right)\) if and only if \(\mathcal{P}\left(\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right)=\mathcal{C}\left(\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right).\)
We note that \(h\circ\phi\) is pluriharmonic on \(\mathbb{D}^{2}\) and continuous on \(\overline{\mathbb{D}}^{2}.\) Therefore, two cases hold.
**Case I:**\(\mathcal{P}\left(\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right)= \mathcal{C}\left(\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right).\) In this case we have \(\mathcal{P}\left(\mathsf{Gr}_{h}(\overline{\Omega})\right)=\mathcal{C}\left( \mathsf{Gr}_{h}(\overline{\Omega})\right).\)
**Case II:**\(\mathcal{P}\left(\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right)\neq\mathcal{C}\left(\mathsf{Gr}_{h\circ\phi}(\overline{\mathbb{D}}^{2})\right).\) Therefore, by Result 1.2, there exists an analytic disc \(g:\mathbb{D}\hookrightarrow\overline{\mathbb{D}}^{2}\) such that \((h_{j}\circ\phi)\circ g:\mathbb{D}\to\mathbb{C}\) is holomorphic for all \(j=1,\cdots,N.\) If we take \(\gamma:=\phi\circ g\), then clearly \(\gamma:\mathbb{D}\hookrightarrow\overline{\Omega}\) is an analytic disc in \(\overline{\Omega}\) such that \(h_{j}\) is holomorphic on \(\gamma(\mathbb{D})\) (by Lemma 2.8) for all \(j=1,\cdots,N.\) This proves the theorem.
Proof of Theorem 1.7.: Let \(h_{j}\) denote the pluriharmonic extension of \(f_{j}\) to \(\Omega\) and write \(h=(h_{1},\cdots,h_{N}):\overline{\Omega}\to\mathbb{C}^{N}.\) We have that \(\widetilde{\Psi}\) is a proper holomorphic mapping and \(\widetilde{\Psi}^{-1}(\mathsf{Gr}_{h}(\Gamma_{\Omega}))=\mathsf{Gr}_{h\circ\phi}(\mathbb{T}^{2}).\) Therefore, by Result 2.1, \(\mathsf{Gr}_{h}(\Gamma_{\Omega})\) is polynomially convex if and only if \(\mathsf{Gr}_{h\circ\phi}(\mathbb{T}^{2})\) is polynomially convex. We note that \(h\circ\phi\) is pluriharmonic on \(\mathbb{D}^{2}\) and continuous on \(\overline{\mathbb{D}}^{2}.\) Therefore, two cases hold.
**Case I:**\(\mathsf{Gr}_{h}(\Gamma_{\Omega})\) is polynomially convex. In view of Theorem 2.9, we have
\[[z_{1},z_{2},f_{1},\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}).\]
**Case II:**\(\mathsf{Gr}_{h}(\Gamma_{\Omega})\) is not polynomially convex. Consequently, \(\mathsf{Gr}_{h\circ\phi}(\mathbb{T}^{2})\) is not polynomially convex. Therefore, by Result 1.3, there exists a distinguished variety
\(Z\subset\mathbb{D}^{2}\) on which \(h_{j}\circ\phi\) is holomorphic for all \(j=1,\cdots,N.\) Since \(\phi\) is a proper holomorphic mapping, by Lemma 2.4, \(\phi(Z)\) is also an algebraic variety. Since \(\phi\) is proper holomorphic and \(h_{j}\circ\phi\) is holomorphic on \(Z\), \(h_{j}\) is also holomorphic on \(\phi(Z)\) (by Lemma 2.8). Since \(\phi\) sends distinguished varieties of \(\mathbb{D}^{2}\) to distinguished varieties of \(\Omega\) (Lemma 2.6), we have \(\phi(Z)\cap\partial\Omega\subset\Gamma_{\Omega}.\)
## 4. Description of Polynomial Hull
In this section, we provide a description of the polynomial convex hull of the graph of an anti-holomorphic polynomial over the distinguished boundary of the domain \(\Omega,\) where \(\Omega\) is the image of the bidisc under certain proper polynomial map from \(\mathbb{C}^{2}\) to \(\mathbb{C}^{2}.\)
Let \(f=(f_{1},f_{2},\cdots,f_{n}):\mathbb{C}^{n}\to\mathbb{C}^{n}\) be a proper holomorphic map. Let
\[J_{f}(z)=\begin{vmatrix}\frac{\partial f_{1}}{\partial z_{1}}(z)&\frac{ \partial f_{1}}{\partial z_{2}}(z)&\cdots&\frac{\partial f_{1}}{\partial z_{ n}}(z)\\ \vdots&\vdots&\cdots&\vdots\\ \frac{\partial f_{n}}{\partial z_{1}}(z)&\frac{\partial f_{n}}{\partial z_{2} }(z)&\cdots&\frac{\partial f_{n}}{\partial z_{n}}(z)\end{vmatrix}.\]
The _critical locus_ of \(f\) is the complex analytic variety \(Z(J_{f})=\{z\in\mathbb{C}^{n}:J_{f}(z)=0\}\subset\mathbb{C}^{n}.\) The _branch locus_\(B(f)\) of \(f\) is the image of the critical locus. Since \(f\) is proper,
\[f:\mathbb{C}^{n}\setminus f^{-1}(B(f))\to\mathbb{C}^{n}\setminus B(f)\]
is a covering map of finite degree \(d;\)\(d\) is said to be the _topological degree_ of \(f.\)
**Definition 4.1**.: Two proper maps \(\phi,\tilde{\phi}:\mathbb{C}^{2}\to\mathbb{C}^{2}\) are said to be _equivalent_ if there exist \(f,g\in\operatorname{Aut}(\mathbb{C}^{2})\) such that \(\phi=f\circ\tilde{\phi}\circ g.\)
Consider two holomorphic polynomials, \(p_{1}\) and \(p_{2},\) defined in \(\mathbb{C}^{2}\). Let \(\phi(z)=(p_{1}(z),p_{2}(z))\) represent a proper holomorphic mapping from \(\mathbb{C}^{2}\) to \(\mathbb{C}^{2},\) equivalent to \(\tilde{\phi}(z_{1},z_{2})=(z_{1}^{m},z_{2}^{n})\) for some natural numbers \(m\) and \(n\). There is a characterization due to Lamy [21] (see also Bisi and Polizzi [7]) for \(m=1\) and \(n=2\) as follows: a proper polynomial map \(f:\mathbb{C}^{2}\to\mathbb{C}^{2}\) with a topological degree of \(2\) is equivalent to \(g(z_{1},z_{2})=(z_{1},z_{2}^{2}).\)
Let \(P(z_{1},z_{2})\) be any polynomial in \(\mathbb{C}^{2}\). We aim to calculate \(\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})}\). It is evident that \(\widetilde{\Psi}^{-1}(\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega}))=\mathsf{ Gr}_{\overline{P}\circ\phi}(\mathbb{T}^{2})=\mathsf{Gr}_{\overline{P}\circ\phi}( \mathbb{T}^{2})\) (\(\widetilde{\Psi}\) is given by (3.1)). Consequently, \(\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})=\widetilde{\Psi}\left(\mathsf{Gr }_{\overline{P}\circ\phi}(\mathbb{T}^{2})\right)\). In this scenario, the following result holds.
**Lemma 4.2**.: \(\widehat{\widetilde{\Psi}(Y)}=\widetilde{\Psi}\left(\widehat{Y}\right),\) _where \(Y=\mathsf{Gr}_{\overline{P}\circ\phi}(\mathbb{T}^{2}).\)_
Proof.: Since \(\widetilde{\Psi}\) is a proper holomorphic map, by using Result 2.1, we have that \(\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right)\) is polynomially convex. Since \(Y\subset\widetilde{\Psi}^{-1}(\widetilde{\Psi}(Y))\subset\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right),\) we get

\[\widehat{Y}\subset\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right).\]
This implies, \(\widetilde{\Psi}(\widehat{Y})\subset\widehat{\widetilde{\Psi}(Y)}\).
Next, we show that \(\widetilde{\Psi}^{-1}\left(\widetilde{\Psi}(\widehat{Y})\right)\subset\widehat{Y}\). To prove this, let \((\alpha_{1},\alpha_{2},\beta)\in\widetilde{\Psi}^{-1}\left(\widetilde{\Psi}( \widehat{Y})\right).\) Then there exists \((\xi_{1},\xi_{2},\eta)\in\widehat{Y}\) such that \(\widetilde{\Psi}(\alpha_{1},\alpha_{2},\beta)=\Psi(\xi_{1},\xi_{2},\eta)\). This implies, \(\phi(\alpha_{1},\alpha_{2})=\phi(\xi_{1},\xi_{2})\) and \(\beta=\eta\). Since, \(\phi\) is proper polynomial map and is equivalent to \(\tilde{\phi}(z_{1},z_{2})=(z_{1}^{m},z_{2}^{n})\), there exist \(f,g\in\operatorname{Aut}(\mathbb{C}^{2})\) such that \(\phi=f\circ\tilde{\phi}\circ g\). Then
\[\phi(\alpha_{1},\alpha_{2})=\phi(\xi_{1},\xi_{2})\] \[\Longrightarrow(f\circ\tilde{\phi}\circ g)(\alpha_{1},\alpha_{2} )=(f\circ\tilde{\phi}\circ g)(\xi_{1},\xi_{2})\] \[\Longrightarrow(\tilde{\phi}\circ g)(\alpha_{1},\alpha_{2})=( \tilde{\phi}\circ g)(\xi_{1},\xi_{2})\] \[\Longrightarrow g_{1}^{m}(\alpha_{1},\alpha_{2})=g_{1}^{m}(\xi_{1}, \xi_{2})\text{ and }g_{2}^{n}(\alpha_{1},\alpha_{2})=g_{2}^{n}(\xi_{1},\xi_{2}),\text{ where }g=(g_{1},g_{2}).\] \[\Longrightarrow(\alpha_{1},\alpha_{2})=g^{-1}\{(\lambda_{m}^{k}g_ {1}(\xi_{1},\xi_{2}),\lambda_{n}^{r}g_{2}(\xi_{1},\xi_{2})\}=(a_{k},b_{r}),\]
where \(\lambda_{l}=\cos\frac{2\pi}{l}+i\sin\frac{2\pi}{l}\), \(k\in\{0,\cdots,m-1\}\) and \(r\in\{0,\cdots,n-1\}\).
It remains to show that \((a_{k},b_{r},\eta)\in\widehat{Y}\). If possible, assume that \((a_{k},b_{r},\eta)\notin\widehat{Y}\) for some \(k\in\{0,\cdots,m-1\},r\in\{0,\cdots,n-1\}\). Then there exists a polynomial \(\chi\) in \(\mathbb{C}^{2}_{z}\times\mathbb{C}_{w}\) such that
\[|\chi(a_{k},b_{r},\eta)|>\sup_{Y}|\chi(z,w)|. \tag{4.1}\]
Let us define \(F(z_{1},z_{2}):=(\lambda_{m}^{k}z_{1},\lambda_{n}^{r}z_{2})\), and \(\tilde{F}(z_{1},z_{2},w):=((g^{-1}\circ F\circ\ g)(z),w)\). Since \(\phi^{-1}(\phi(\mathbb{T}^{2}))\subset\mathbb{T}^{2}\) (hence \((g^{-1}\circ F\circ\ g)(z)\in\mathbb{T}^{2}\) if \(z\in\mathbb{T}^{2}\)), using (4.1), we get that
\[|(\chi\circ\tilde{F})(\xi,\eta)|>\sup_{Y}|(\chi\circ\tilde{F})(z,w)|. \tag{4.2}\]
Since \(\tilde{F}\in\operatorname{Aut}(\mathbb{C}^{3})\), (4.2) says that \((\xi,\eta)\notin\widehat{Y}\) and this is a contradiction. Hence \((a_{k},b_{r},\eta)\in\widehat{Y}\). Therefore, \(\widetilde{\Psi}^{-1}\left(\widetilde{\Psi}(\widehat{Y})\right)=\widehat{Y}\). Since \(\widetilde{\Psi}\) is proper holomorphic map, by using Result 2.1, we can say that \(\widetilde{\Psi}(\widehat{Y})\) is polynomially convex. Therefore, \(\widehat{\widetilde{\Psi}(Y)}\subset\widetilde{\Psi}(\widehat{Y})\). This proves the lemma.
By using Lemma 4.2, we can say that
\[\widehat{\operatorname{\mathsf{Gr}}_{\overline{P}}(\Gamma_{\Omega})}= \widetilde{\Psi}\left(\widehat{\operatorname{\mathsf{Gr}}_{\overline{P\circ \phi}}(\mathbb{T}^{2})}\right).\]
Therefore, to give a description for \(\widehat{\operatorname{\mathsf{Gr}}_{\overline{P}}(\Gamma_{\Omega})}\), it is enough to compute \(\widehat{\operatorname{\mathsf{Gr}}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}\).
### Description of Hull on Symmetrized Bidisc
Let \(P(z_{1},z_{2})\) be any polynomial in \(\mathbb{C}^{2}\). Using Lemma 4.2, we calculate \(\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})}\). If we take \(p_{1}(z)=z_{1}+z_{2}\) and \(p_{2}(z)=z_{1}z_{2}\), then \(\phi=\Pi\) and \(\widetilde{\Psi}(z,w)=(\Pi(z),w)\) is a proper map from \(\mathbb{C}^{3}\) to \(\mathbb{C}^{3}\). It is easy to show that \(\Pi\) is a proper polynomial map of topological degree \(2\), and hence equivalent to \((z_{1},z_{2}^{2})\). Clearly, \(\widetilde{\Psi}^{-1}(\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}}))=\mathsf{Gr}_{\overline{P}\circ\Pi}(\mathbb{T}^{2})=\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\). Therefore, \(\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})=\widetilde{\Psi}\left(\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\right).\)
By Lemma 4.2, we get that
**Lemma 4.3**.: \(\widehat{\widetilde{\Psi}\left(Y\right)}=\widetilde{\Psi}\left(\widehat{Y}\right),\) _where \(Y=\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2}).\)_
By using Lemma 4.3, we can say that
\[\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})}=\widetilde{\Psi }\left(\widehat{\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}\right).\]
Therefore, to give a description for \(\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})},\) it is enough to compute \(\widehat{\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}.\)
## 5. Examples
**Example 5.1**.: Let \(P(z_{1},z_{2})=z_{1}-z_{2}.\) Then \([z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]\neq\mathcal{C}(\Gamma_{ \mathbb{G}_{2}}).\)
**Explanation** In view of Corollary 2.10, to demonstrate that \([z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]\neq\mathcal{C}(\Gamma_{\mathbb{G}_{2}}),\) it suffices to establish that the graph of \(\overline{P}\) over \(\Gamma_{\mathbb{G}_{2}}\) is not polynomially convex. To achieve this, it is sufficient to show that the graph of \(\overline{P\circ\Pi}\) over \(\mathbb{T}^{2}\) lacks polynomial convexity. Following the notation in Result 2.11, we define \(h(z)=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{1}{z_{1}z_{2}}.\) Then
\[\triangle(z)=\left|\begin{matrix}\frac{\partial(P\circ\Pi)}{\partial z_{1}}& \frac{\partial(P\circ\Pi)}{\partial z_{2}}\\ \frac{\partial h}{\partial z_{1}}&\frac{\partial h}{\partial z_{2}}\end{matrix} \right|=\left|\begin{matrix}1-z_{2}&1-z_{1}\\ \frac{-1}{z_{1}^{2}}+\frac{1}{z_{1}^{2}z_{2}}&\frac{-1}{z_{2}^{2}}+\frac{1}{z_ {2}^{2}z_{1}}\end{matrix}\right|=\frac{1}{z_{1}^{2}z_{2}^{2}}(z_{1}-z_{2})(z_ {1}-1)(z_{2}-1).\]
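This factorization can be checked symbolically; the following SymPy sketch (our verification aid, assuming SymPy is installed; it is not part of the paper) confirms it:

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
P_Pi = z1 + z2 - z1*z2            # (P o Pi)(z) for P(z1, z2) = z1 - z2
h = 1/z1 + 1/z2 - 1/(z1*z2)       # conjugate of P o Pi, written on T^2

# Determinant of the 2x2 matrix of partial derivatives defining Delta(z).
Delta = sp.Matrix([
    [sp.diff(P_Pi, z1), sp.diff(P_Pi, z2)],
    [sp.diff(h, z1),    sp.diff(h, z2)],
]).det()

claimed = (z1 - z2)*(z1 - 1)*(z2 - 1)/(z1**2 * z2**2)
assert sp.simplify(Delta - claimed) == 0   # the factorization checks out
```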
We define \(q_{1}:=z_{1}-1,\ q_{2}:=z_{2}-1,\ q_{3}:=z_{1}-z_{2},\) and \(Z_{j}=\{z\in\mathbb{C}^{2}:q_{j}(z)=0\},j=1,2,3.\) Therefore,
\[\Sigma =\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^ {2}):\triangle(z)=0\right\}\] \[=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{ 2})\right\}\cap[\cup_{j=1}^{3}Z_{j}],\]
and
\[X =\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{(P\circ\Pi)(z)}=h(z)\right\}\] \[=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\overline{z_{1}+z_{2}-z_{1}z_{2}}=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{1}{z_{1}z_{2}}\right\}.\]
Here \(Q_{j}=Z_{j}\cap\mathbb{T}^{2}.\) Clearly,
\[\widehat{Q_{1}} =\left\{z\in\mathbb{C}^{2}:z_{1}=1,|z_{2}|\leq 1\right\}\neq Q_{1};\] \[\widehat{Q_{2}} =\left\{z\in\mathbb{C}^{2}:z_{2}=1,|z_{1}|\leq 1\right\}\neq Q_{2};\] \[\widehat{Q_{3}} =\left\{z\in\mathbb{C}^{2}:z_{1}=z_{2},|z_{1}|\leq 1\right\}\neq Q_{3}.\]
It is evident that \(\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\subset X\) holds true only for \(j=1,2.\) On the other hand, we note that \((\frac{1}{2},\frac{1}{2})\in\widehat{Q_{3}}\setminus(\mathbb{T}^{2}\cup L),\) yet \((\frac{1}{2},\frac{1}{2})\notin X.\) Therefore, by Result 2.11, we deduce that:
\[\widehat{\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}=\mathsf{Gr}_{ \overline{P\circ\Pi}}(\mathbb{T}^{2})\cup\mathsf{Gr}_{\overline{P\circ\Pi}}( \widehat{Q_{1}})\cup\mathsf{Gr}_{\overline{P\circ\Pi}}(\widehat{Q_{2}}).\]
Hence
\[\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})} =\widetilde{\Psi}\left(\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\right)\cup\widetilde{\Psi}\left(\mathsf{Gr}_{\overline{P\circ\Pi}}(\widehat{Q_{1}})\right)\cup\widetilde{\Psi}\left(\mathsf{Gr}_{\overline{P\circ\Pi}}(\widehat{Q_{2}})\right)\] \[=\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})\cup\{(1+z,z,w):w=\overline{P(1+z,z)},z\in\overline{\mathbb{D}}\}\] \[=\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})\cup\{(1+z,z,1):z\in\overline{\mathbb{D}}\},\]

since \(\Pi(\widehat{Q_{1}})=\Pi(\widehat{Q_{2}})=\{(1+z,z):z\in\overline{\mathbb{D}}\}\) and \(P(1+z,z)=(1+z)-z=1.\)
**Example 5.2**.: \(P(z_{1},z_{2})=z_{1}-2z_{2}.\) Then \([z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]=\mathcal{C}(\Gamma_{ \mathbb{G}_{2}}).\)
**Explanation** In light of Corollary 2.10, in order to establish that \([z_{1},z_{2},\overline{P};\Gamma_{\mathbb{G}_{2}}]=\mathcal{C}(\Gamma_{ \mathbb{G}_{2}}),\) it is sufficient to demonstrate the polynomial convexity of the graph of \(\overline{P}\) over \(\Gamma_{\mathbb{G}_{2}}\). To accomplish this, it is enough to prove that the graph of \(\overline{P\circ\Pi}\) over \(\mathbb{T}^{2}\) is polynomially convex. Following the notation in Result 2.11, we have \(h(z)=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{2}{z_{1}z_{2}}.\)
\[\triangle(z)= \left|\begin{array}{cc}\frac{\partial(P\circ\Pi)}{\partial z_{ 1}}&\frac{\partial(P\circ\Pi)}{\partial z_{2}}\\ \frac{\partial h}{\partial z_{1}}&\frac{\partial h}{\partial z_{2}}\end{array} \right|=\left|\begin{array}{cc}1-2z_{2}&1-2z_{1}\\ \frac{-1}{z_{1}^{2}}+\frac{2}{z_{1}^{2}z_{2}}&\frac{-1}{z_{2}^{2}}+\frac{2}{z _{2}^{2}z_{1}}\end{array}\right|\] \[= \frac{1}{z_{1}^{2}z_{2}^{2}}(z_{1}+z_{2}-2-2z_{1}z_{2})(z_{2}-z_ {1}).\]
We define \(q_{1}:=z_{1}+z_{2}-2-2z_{1}z_{2},\ q_{2}:=z_{2}-z_{1},\) and \(Z_{j}=\{z\in\mathbb{C}^{2}:q_{j}(z)=0\},j=1,2.\) Therefore,

\[\Sigma=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\triangle(z)=0\right\}=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2})\right\}\cap\left[\cup_{j=1}^{2}Z_{j}\right],\]
and
\[X =\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{ 2}):\overline{(P\circ\Pi)(z)}=h(z)\right\}\] \[=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{ 2}):\overline{z_{1}+z_{2}-2z_{1}z_{2}}=\frac{1}{z_{1}}+\frac{1}{z_{2}}-\frac{2 }{z_{1}z_{2}}\right\}.\]
Here \(Q_{j}=Z_{j}\cap\mathbb{T}^{2}.\) We now claim that
\[\widehat{Q_{1}}=\{z\in\mathbb{C}^{2}:z_{1}+z_{2}-2z_{1}z_{2}-2=0,|z_{1}|=1,|z_{ 2}|=1\}=Q_{1}.\]
Clearly, \(\widehat{Q_{1}}\subset\{z\in\mathbb{C}^{2}:z_{1}+z_{2}-2z_{1}z_{2}-2=0,|z_{1}|\leq 1,|z_{2}|\leq 1\}.\) Let \((\alpha,\beta)\in\{z\in\mathbb{C}^{2}:z_{1}+z_{2}-2z_{1}z_{2}-2=0,|z_{1}|\leq 1,|z_{2}|\leq 1\}\setminus Q_{1}.\) First, we assume that \(|\beta|<1.\) Since \(\alpha+\beta-2\alpha\beta-2=0,\) we have
\[|2-\alpha|=|\beta||1-2\alpha|<|1-2\alpha|. \tag{5.1}\]
Let \(\alpha=u+iv.\) Then from (5.1), we get that
\[(2-u)^{2}+v^{2}<(1-2u)^{2}+4v^{2}\] \[\implies 4+u^{2}+v^{2}-4u<1+4(u^{2}+v^{2})-4u\] \[\implies u^{2}+v^{2}>1\text{ i.e., }|\alpha|>1.\]
Hence, we conclude that \((\alpha,\beta)\notin\widehat{Q_{1}}.\) In the case where \(|\alpha|<1,\) we can similarly demonstrate that \(|\beta|>1,\) leading to the same conclusion, \((\alpha,\beta)\notin\widehat{Q_{1}}.\) As a result, we establish that \(Q_{1}\) is polynomially convex.
Furthermore, consider \(\widehat{Q_{2}}=\{z\in\mathbb{C}^{2}:z_{1}=z_{2},|z_{1}|\leq 1\}\neq Q_{2}.\) Notably, \((\frac{1}{2},\frac{1}{2})\in\widehat{Q_{2}}\setminus(\mathbb{T}^{2}\cup L),\) while \((\frac{1}{2},\frac{1}{2})\notin X.\) Hence, by Result 2.11, we can deduce that:
\[\widehat{\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})}=\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2}).\]
This implies:
\[\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})}=\widetilde{\Psi}\left(\mathsf{Gr}_{\overline{P\circ\Pi}}(\mathbb{T}^{2})\right)=\mathsf{Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}}).\]
**Example 5.3**.: Let \(p_{1}(z_{1},z_{2})=2z_{1}+z_{2}^{2},\ p_{2}(z_{1},z_{2})=z_{1}-z_{2}^{2},\ P(z_{1},z_{2})=z_{1}-z_{2}\) and \(\phi(z_{1},z_{2})=(p_{1}(z_{1},z_{2}),p_{2}(z_{1},z_{2})).\) Therefore \(\Omega=\phi(\mathbb{D}^{2}).\) Then \([z_{1},z_{2},\overline{P};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}).\)
**Explanation** According to Theorem 2.9, it follows that \([z_{1},z_{2},\overline{P};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})\) if, and only if, \(\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})\) exhibits polynomial convexity. Furthermore, the polynomial convexity of \(\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})\) is equivalent to the polynomial convexity of \(\mathsf{Gr}_{\overline{P}\circ\phi}(\mathbb{T}^{2}).\)
Here \(\overline{P\circ\phi}=\overline{z_{1}+2z_{2}^{2}}=\frac{1}{z_{1}}+\frac{2}{z_ {2}^{2}}=:h(z)\) on \(\mathbb{T}^{2}.\)
\[\triangle(z)=\left|\begin{array}{cc}\frac{\partial(P\circ\phi)}{\partial z_{1}}&\frac{\partial(P\circ\phi)}{\partial z_{2}}\\ \frac{\partial h}{\partial z_{1}}&\frac{\partial h}{\partial z_{2}}\end{array}\right|=\left|\begin{array}{cc}1&4z_{2}\\ \frac{-1}{z_{1}^{2}}&\frac{-4}{z_{2}^{3}}\end{array}\right|=\frac{4}{z_{1}^{2}z_{2}^{3}}(z_{1}+z_{2}^{2})(z_{2}^{2}-z_{1}).\]
We define \(q_{1}:=z_{1}+z_{2}^{2},\ q_{2}:=z_{2}^{2}-z_{1},\) and \(Z_{j}=\{z\in\mathbb{C}^{2}:q_{j}(z)=0\},j=1,2.\) Therefore,
\[\Sigma =\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\triangle(z)=0\right\}\] \[=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2})\right\}\cap[\cup_{j=1}^{2}Z_{j}],\]
and
\[X =\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2 }):\overline{(P\circ\phi)(z)}=h(z)\right\}\] \[=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2 }):\overline{z_{1}+2z_{2}^{2}}=\frac{1}{z_{1}}+\frac{2}{z_{2}^{2}}\right\}.\]
Here \(Q_{j}=Z_{j}\cap\mathbb{T}^{2}.\) Clearly,
\[\widehat{Q_{1}} =\{z\in\mathbb{C}^{2}:z_{1}+z_{2}^{2}=0,|z_{1}|\leq 1,|z_{2}|\leq 1 \}\neq Q_{1},\ \ \text{and}\] \[\widehat{Q_{2}} =\{z\in\mathbb{C}^{2}:z_{2}^{2}-z_{1}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\}\neq Q _{2}.\]
It is easy to see that \(\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\nsubseteq X\) for \(j=1,2.\) Therefore, by Result 2.11, we get that
\[\widehat{\mathsf{Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}=\mathsf{Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2}).\]
Hence
\[\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})}=\Psi\left(\mathsf{Gr}_{ \overline{P\circ\phi}}(\mathbb{T}^{2})\right)=\mathsf{Gr}_{\overline{P}}( \Gamma_{\Omega}).\]
**Example 5.4**.: Let \(p_{1}(z_{1},z_{2})=z_{1}+z_{2},\ p_{2}(z_{1},z_{2})=z_{1}^{2}+z_{2}^{2},\ P(z_{1},z_{2})=z_{1}^{2}+z_{2}\) and \(\phi(z_{1},z_{2})=(p_{1}(z_{1},z_{2}),p_{2}(z_{1},z_{2})).\) Therefore \(\Omega=\phi(\mathbb{D}^{2}).\) Then \([z_{1},z_{2},\overline{P};\Gamma_{\Omega}]\neq\mathcal{C}(\Gamma_{\Omega}).\)
**Explanation** Based on Theorem 2.9, we can assert that \([z_{1},z_{2},\overline{P};\Gamma_{\Omega}]\neq\mathcal{C}(\Gamma_{\Omega})\) if, and only if, \(\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})\) lacks polynomial convexity. Furthermore, \(\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})\) possesses polynomial convexity if, and only if, \(\mathsf{Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})\) is polynomially convex. Therefore, it is enough to show that \(\mathsf{Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})\) is not polynomially convex.
Here \(P\circ\phi=2(z_{1}^{2}+z_{1}z_{2}+z_{2}^{2}).\) Hence,
\[\overline{P\circ\phi}=\overline{2(z_{1}^{2}+z_{1}z_{2}+z_{2}^{2})}=2\left( \frac{1}{z_{1}^{2}}+\frac{1}{z_{2}^{2}}+\frac{1}{z_{1}z_{2}}\right)=:h(z)\ \text{on}\ \mathbb{T}^{2}.\]
\[\triangle(z)=\left|\begin{array}{cc}\frac{\partial(P\circ\phi)}{\partial z_{1}}&\frac{\partial(P\circ\phi)}{\partial z_{2}}\\ \frac{\partial h}{\partial z_{1}}&\frac{\partial h}{\partial z_{2}}\end{array}\right|=\left|\begin{array}{cc}2(2z_{1}+z_{2})&2(2z_{2}+z_{1})\\ 2(\frac{-2}{z_{1}^{3}}-\frac{1}{z_{1}^{2}z_{2}})&2(\frac{-2}{z_{2}^{3}}-\frac{1}{z_{1}z_{2}^{2}})\end{array}\right|\]
\[=\frac{-16\alpha^{-1}}{z_{1}^{3}z_{2}^{3}}(z_{1}+z_{2})(z_{2}-z_{1})(z_{1}-\alpha z_{2})(z_{2}-\alpha z_{1}),\ \text{where}\ \alpha=e^{\frac{2\pi i}{3}}.\]
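Both of the preceding determinants, including their overall constants \(4\) and \(-16\alpha^{-1}\), can be verified symbolically; the sketch below is our addition (not part of the text), with \(\alpha=e^{2\pi i/3}\) entered in explicit radical form.

```python
# Symbolic check (our addition) of the determinants of Examples 5.3 and 5.4.
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
alpha = sp.Rational(-1, 2) + sp.sqrt(3)*sp.I/2        # exp(2*pi*I/3)
alpha_inv = sp.Rational(-1, 2) - sp.sqrt(3)*sp.I/2    # alpha^{-1} = conj(alpha)

def det2(P, h):
    return sp.Matrix([[sp.diff(P, z1), sp.diff(P, z2)],
                      [sp.diff(h, z1), sp.diff(h, z2)]]).det()

# Example 5.3: overall constant 4
d3 = det2(z1 + 2*z2**2, 1/z1 + 2/z2**2)
c3 = 4*(z1 + z2**2)*(z2**2 - z1)/(z1**2*z2**3)
assert sp.expand(d3 - c3) == 0

# Example 5.4: overall constant -16*alpha^{-1}
d4 = det2(2*(z1**2 + z1*z2 + z2**2), 2*(1/z1**2 + 1/z2**2 + 1/(z1*z2)))
c4 = (-16*alpha_inv*(z1 + z2)*(z2 - z1)*(z1 - alpha*z2)*(z2 - alpha*z1)
      /(z1**3*z2**3))
assert sp.expand(d4 - c4) == 0
print("both factorizations verified")
```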
We define \(q_{1}:=z_{1}+z_{2},\ q_{2}:=z_{2}-z_{1},\ q_{3}:=z_{1}-\alpha z_{2},\ q_{4}:=z_{2}-\alpha z_{1},\) and \(Z_{j}=\{z\in\mathbb{C}^{2}:q_{j}(z)=0\},\ j=1,2,3,4.\) Therefore,
\[\Sigma =\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2}):\triangle(z)=0\right\}\] \[=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^{2})\right\}\cap[\cup_{j=1}^{4}Z_{j}],\]
and
\[X =\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^ {2}):\overline{(P\circ\phi)(z)}=h(z)\right\}\] \[=\left\{z\in\overline{\mathbb{D}}^{2}\setminus(L\cup\mathbb{T}^ {2}):\overline{2(z_{1}^{2}+z_{1}z_{2}+z_{2}^{2})}=2\left(\frac{1}{z_{1}^{2}}+ \frac{1}{z_{2}^{2}}+\frac{1}{z_{1}z_{2}}\right)\right\}.\]
Here \(Q_{j}=Z_{j}\cap\mathbb{T}^{2}.\) Clearly,
\[\widehat{Q_{1}} =\{z\in\mathbb{C}^{2}:z_{1}+z_{2}=0,|z_{1}|\leq 1,|z_{2}|\leq 1 \}\neq Q_{1};\] \[\widehat{Q_{2}} =\{z\in\mathbb{C}^{2}:z_{2}-z_{1}=0,|z_{1}|\leq 1,|z_{2}|\leq 1\}\neq Q _{2};\] \[\widehat{Q_{3}} =\{z\in\mathbb{C}^{2}:z_{1}-\alpha z_{2}=0,|z_{1}|\leq 1,|z_{2}|\leq 1 \}\neq Q_{3};\] \[\widehat{Q_{4}} =\{z\in\mathbb{C}^{2}:z_{2}-\alpha z_{1}=0,|z_{1}|\leq 1,|z_{2}|\leq 1 \}\neq Q_{4}.\]
Again \(\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\nsubseteq X\) for \(j=1,2,\) and \(\widehat{Q_{j}}\setminus(\mathbb{T}^{2}\cup L)\subset X\) for \(j=3,4.\) Therefore, by Result 2.11, we get that
\[\widehat{\mathsf{Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}=\mathsf{Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})\cup\mathsf{Gr}_{\overline{P\circ\phi}}(\widehat{Q_{3}})\cup\mathsf{Gr}_{\overline{P\circ\phi}}(\widehat{Q_{4}}).\]
Hence
\[\widehat{\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})}=\Psi\left(\widehat{\mathsf{Gr}_{\overline{P\circ\phi}}(\mathbb{T}^{2})}\right)=\mathsf{Gr}_{\overline{P}}(\Gamma_{\Omega})\cup\Psi\left(\mathsf{Gr}_{\overline{P\circ\phi}}(\widehat{Q_{3}})\right)\cup\Psi\left(\mathsf{Gr}_{\overline{P\circ\phi}}(\widehat{Q_{4}})\right).\]
**Acknowledgements.** We would like to express our sincere gratitude to Professor Franc Forstneric for pointing out Result 2.3 in [32] and showing us the proof of Lemma 2.4. The first named author was partially supported by a Matrics Research Grant (MTR/2017/000974) of SERB, Dept. of Science and Technology, Govt. of India, for the beginning of this work and is supported by a Core Research Grant (CRG/2022/003560) of SERB, Dept. of Science and Technology, Govt. of India, for the later part of the work. The second named author's work received partial support from an INSPIRE Fellowship (IF 160487) provided by the Dept. of Science and Technology, Govt. of India, during the early stage of this work. Presently, this research is supported by a research grant from SERB (Grant No. CRG/2021/005884), Dept. of Science and Technology, Govt. of India.
|
2303.00125 | Strong-coupling magnetophononics: Self-blocking, phonon-bitriplons, and
spin-band engineering | Magnetophononics, the modulation of magnetic interactions by driving
infrared-active lattice excitations, is emerging as a key mechanism for the
ultrafast dynamical control of both semiclassical and quantum spin systems by
coherent light. We demonstrate that, in a quantum magnet with strong
spin-phonon coupling, resonances between the driven phonon and the spin
excitation frequencies exhibit an intrinsic self-blocking effect, whereby only
a fraction of the available laser power is absorbed by the phonon. Using the
quantum master equations governing the nonequilibrium steady states of the
coupled spin-lattice system, we show how self-blocking arises from the
self-consistent alteration of the resonance frequencies. We link this to the
appearance of mutually repelling collective spin-phonon states, which in the
regime of strong hybridization become composites of a phonon and two triplons.
We then identify the mechanism and optimal phonon frequencies by which to
control a global nonequilibrium renormalization of the lattice-driven spin
excitation spectrum and demonstrate that this effect should be observable in
ultrafast THz experiments on a number of known quantum magnetic materials. | M. Yarmohammadi, M. Krebs, G. S. Uhrig, B. Normand | 2023-02-28T23:14:48Z | http://arxiv.org/abs/2303.00125v1 | # Strong-coupling magnetophononics: Self-blocking, phonon-bitriplons, and spin-band engineering
###### Abstract
Magnetophononics, the modulation of magnetic interactions by driving infrared-active lattice excitations, is emerging as a key mechanism for the ultrafast dynamical control of both semiclassical and quantum spin systems by coherent light. We demonstrate that, in a quantum magnet with strong spin-phonon coupling, resonances between the driven phonon and the spin excitation frequencies exhibit an intrinsic self-blocking effect, whereby only a fraction of the available laser power is absorbed by the phonon. Using the quantum master equations governing the nonequilibrium steady states of the coupled spin-lattice system, we show how self-blocking arises from the self-consistent alteration of the resonance frequencies. We link this to the appearance of mutually repelling collective spin-phonon states, which in the regime of strong hybridization become composites of a phonon and two triplons. We then identify the mechanism and optimal phonon frequencies by which to control a global nonequilibrium renormalization of the lattice-driven spin excitation spectrum and demonstrate that this effect should be observable in ultrafast THz experiments on a number of known quantum magnetic materials.
## I Introduction
Rapid advances in laser technology [1] have made it possible not only to probe but also to pump quantum materials in a controlled manner on ultrafast timescales and at all the frequencies relevant to excitations in condensed matter [2; 3; 4]. This has led to phenomena ranging from Floquet engineering of electronic band structures [5] to enhanced superconductivity [6] and switching of the metal-insulator transition [7]. A wide range of experimental and theoretical efforts is now under way to extend such ultrafast control to every aspect of strongly correlated materials beyond the charge, including lattice, orbital, spin, nematic, and chiral degrees of freedom [8].
Among these, spin systems offer perhaps the ultimate quantum many-body states due to their intrinsically high entanglement and relatively low energy scales, which lead to rather clean experimental realizations. Ultrafast switching, modulation, transport, and destruction of semiclassical ordered magnetism have been achieved using light of different frequencies [9; 10; 11]. However, a direct coupling to a magnetic order parameter is often not appropriate for the dynamical control of quantum magnetic materials, and increasing attention is focused on using the lattice as an intermediary [12; 13; 14; 15; 16]. While "nonlinear phononics" [17; 18; 19] exploits the anharmonic lattice potential, to date for low-frequency magnetic control [20], "magnetophononics" [21] uses harmonic phonons to effect the highly nonlinear modulation of exchange-type interactions [22].
The magnetophononic mechanism is ideally suited to the task at hand, namely studying how driving by coherent light can influence the magnetic properties of an insulating low-dimensional quantum spin system. Unless the magnetic interactions are highly anisotropic, the direct coupling of electromagnetic waves to spins is very weak. Using the lattice to mediate this coupling means choosing an infrared (IR) phonon to excite by THz laser radiation so that a coherent lattice oscillation is triggered. Intense irradiation results in a phonon occupation sufficiently high that the (super)exchange couplings between the localized spins undergo a significant alteration [23; 22; 15], leading to readily detectable changes in the properties of the magnetic subsystem. While the THz laser can be used to select any IR-active phonon in the spectrum of available lattice excitations, this phonon introduces a frequency that _a priori_ has no direct connection to the intrinsic excitation frequencies of the spin system. Driving a very fast phonon mode (\(\omega_{0}\)) would put the spin system in the true Floquet regime, where one might seek spin-excitation bands shifted by \(\pm n\omega_{0}\) (for small integer \(n\)). Finding a very slow phonon mode would allow the spin correlations and excitations to be modulated over the course of a single phonon period. Between these two limits, strong excitation of the collective spin modes at their intrinsic frequencies would go beyond these modifications of the existing magnetic states by opening the possibility of creating fundamentally different types of composite collective state, including hybrid spin-spin and spin-phonon composites.
Here we analyze the physics of the magnetophononic mechanism at strong spin-phonon coupling by considering the nonequilibrium steady states (NESS) of a minimal model consisting of an alternating quantum spin
chain coupled to a bulk optical phonon mode. When the phonon frequency matches the spectrum of magnetic excitations, we find strong feedback effects between the spin and lattice sectors that produce a number of unconventional phenomena. We demonstrate an intrinsic self-blocking effect, by which a driven phonon in resonance with the peak density of magnetic excitations absorbs little of the driving laser power. We compute the driving-induced mutual renormalization of the lattice and spin excitations, and link the self-blocking to the distinctive phonon-bitriplon hybrid excitations that emerge for phonon frequencies near the spin-band edges. We then demonstrate how all possible phonon frequencies that lie within the spin excitation band can act with varying efficiency to cause a driving-induced global reshaping of the spin spectrum. We discuss the consequences of self-blocking and of this dynamical spectral renormalization for pump-probe experiments on some quantum magnetic materials known to have strong spin-phonon coupling.
The framework for our study is one we have discussed in detail in Ref. [24], where we set out to establish and analyze the equation-of-motion approach to a magnetophononically driven system. In this work we introduced the dimerized chain as a generic model for a gapped quantum spin system, a bulk optical phonon as the most straightforward implementation of the driving mechanism, and the remainder of the phonon spectrum as the dissipative bath. We applied the Lindblad formulation [25] to derive the quantum master equations [26] and used these to perform a detailed study of the NESS of the coupled system in the regime where a weak spin-phonon coupling restricted its response largely to linear orders. This analysis revealed the ingredients and parameters of the minimal magnetophononic model, characterized both phonon and spin NESS by their frequency-dependence and wave-vector content, computed the energy flow throughout the driven and dissipative system, related this to the system temperature, and identified the onset of nonlinear feedback effects.
The present study extends the weak-coupling analysis in three key directions. The first is to strong spin-phonon coupling, to identify and investigate the phenomenology of the driven system when the mutual feedback between the spin and lattice sectors becomes strong. Because one fundamental consequence of strong coupling is strong shifts in the characteristic mode energies, the second direction is to perform systematic scans of the driving frequency. Here we comment that the reality of current experiment is somewhat removed from the NESS protocol, using ultrashort and ultra-intense pulses that both contain a broad range of frequencies and produce high-order response processes; as a result, both of these directions constitute essential steps towards an accurate description of experiment. The third direction is that, if the strong drive is used to establish a NESS whose properties differ significantly from those of the equilibrium system, an independent "probe" laser is required to read these properties, preferably by a range of methods sensitive to the magnetic as well as to the lattice sector, and we introduce such a probe.
Our considerations are directly relevant to at least two quantum magnetic materials, CuGeO\({}_{3}\) and (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\), which were found over 20 years ago to be quasi-one-dimensional alternating spin chains with extremely strong spin-phonon coupling. In CuGeO\({}_{3}\), this coupling is strong enough to drive a spin-Peierls transition into the dimerized phase below \(T_{\rm sp}=14\) K [27], whereas (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\) is intrinsically dimerized and was found by raising the temperature to show strong renormalization of the phonons by the spin sector [28]. While the spectra of spin and lattice excitations were studied in detail in both materials [28; 29; 30; 31; 32; 33; 34; 35; 36], they have yet to be considered from the standpoint of matching drivable (IR) phonons to particular frequency regimes within their spin spectra.
The structure of this article is as follows. In Sec. II we introduce the two phonon-coupled alternating spin-chain models we study and the equation-of-motion method by which we compute their driven dynamics. In Sec. III we analyze the phenomenon of self-blocking. The properties of the phonon-bitriplon hybrid states are presented in Sec. IV. Section V studies the dynamical modifications of the spin excitation band and thus demonstrates the potential for spin-band engineering in quantum magnetic materials. The relevance of these findings to two very strongly spin-phonon-coupled quantum spin-chain materials, CuGeO\({}_{3}\) and (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\), is discussed in Sec. VI and Sec. VII contains a brief summary and conclusion.
## II Models and Methods
Following the logic of Ref. [24], we consider a model for a magnetophononically driven quantum spin system that is minimally complex but nevertheless contains all of the components essential for capturing the physics of real materials. We do not focus on long-ranged magnetic order, because this is not generally stable in a truly low-dimensional magnet. Thus we consider an alternating spin chain, i.e. a one-dimensional quantum magnet with gapped spin excitations. Without loss of generality, and in order to make progress with a straightforward perturbative treatment of the spin system, we use a chain with a substantial dimerization, setting the interaction on the weaker bonds (\(J^{\prime}\)) to half the size of the stronger bonds (\(J\)). The IR phonon acting as the intermediary between the driving laser and the spin system can couple to the magnetic interactions in a wide variety of different ways. Here we restrict our considerations to the leading (linear) term in a Taylor expansion and analyze the two distinct coupling geometries shown in Fig. 1, where (i) the phonon modulates the strong (intradimer) bond [Fig. 1(a)], to which we refer henceforth as the "\(J\)-model," and (ii) the phonon modulates the weak (interdimer) bond [Fig. 1(b)], which we call the "\(J^{\prime}\)-model" [37]. We will show that the two models yield very similar results for certain magnetophononic phenomena but are
quite different for other phenomena.
### Hamiltonian dynamics
The Hamiltonian of the spin system takes the form
\[H_{\mathrm{s}}=\sum_{i=1}^{N}\left(J\vec{S}_{1,i}\cdot\vec{S}_{2,i}+\lambda J \vec{S}_{2,i}\cdot\vec{S}_{1,i+1}\right), \tag{1}\]
where \(\lambda=J^{\prime}/J\), \(i\) labels the dimer, 1 and 2 denote the two spins within each dimer, and periodic boundary conditions are assumed. The full phonon Hamiltonian is
\[H_{\mathrm{p,BZ}}=\sum_{q\in\mathrm{BZ}}\omega_{\mathrm{ph}}(q)b_{q}^{\dagger}b _{q}, \tag{2}\]
where we have omitted an additional quantum number for the different phonon branches. The acoustic phonons make the largest contributions to the damping, both of optical phonons and of spin excitations, and as a result their effects are included in the phenomenological damping coefficients to be introduced below. For the purposes of magnetophononic driving, we use the frequency of the incoming THz laser radiation to select a single, IR-active optical phonon, which without loss of generality can be dispersionless. The only relevant phonon momentum is \(q=0\), because of the dispersion relation (extremely high speed) of the incident light, and hence any phonon dispersion plays no role. The only phonon term in the Hamiltonian of the driven system is then
\[H_{\mathrm{p}}=\omega_{0}b_{0}^{\dagger}b_{0}, \tag{3}\]
where we use \(b_{0}\) and \(\omega_{0}\) as shorthand for \(b_{q=0}\) and \(\omega_{\mathrm{ph}}(q=0)\). A further Hamiltonian term is required to describe the driving of this phonon by the electric field, \(E_{0}(t)\), of the laser,
\[H_{\mathrm{l}}=\sum_{i=1}^{N}E_{0}(t)(b_{i}^{\dagger}+b_{i}) = NE_{0}(t)\hat{d}, \tag{4}\]
where \(\hat{d}=(b_{0}^{\dagger}+b_{0})/\sqrt{N}\) is the operator specifying the local atomic displacement. The linear dependence on \(N\) indicates clearly that any finite driving induces a finite value of the phonon displacement observable, \(q(t)=\langle\hat{d}\rangle\), and hence a macroscopic occupation (\(n_{0}\propto N\)) of the \(q=0\) boson.
To complete the magnetophononic Hamiltonian we specify the two types of spin-phonon coupling shown in Fig. 1. For the \(J\)-model,
\[H_{\mathrm{sp},J}=\sum_{i=1}^{N}g(b_{i}+b_{i}^{\dagger})[\vec{S}_{1,i}\cdot \vec{S}_{2,i}-\langle\vec{S}_{1,i}\cdot\vec{S}_{2,i}\rangle_{\mathrm{eq}}], \tag{5}\]
where the second term denotes the equilibrium value of the spin interaction on the strong bonds of the chain, and its presence ensures that the dimerization, \(\lambda\), does not change in absence of the driving term. For the \(J^{\prime}\)-model,
\[H_{\mathrm{sp},J^{\prime}}=\sum_{i=1}^{N}g^{\prime}(b_{i}+b_{i}^{\dagger})[ \vec{S}_{2,i}\cdot\vec{S}_{1,i+1}-\langle\vec{S}_{2,i}\cdot\vec{S}_{1,i+1} \rangle_{\mathrm{eq}}]. \tag{6}\]
The spin-phonon coupling coefficients have units of energy, and for convenience we will normalize \(g\) to \(J\) and \(g^{\prime}\) to \(J^{\prime}\). While the two coupling types are dichotomous, in almost any real material one may expect the atomic displacements associated with any phonon mode to include components that alter all of the magnetic interaction terms in the system.
We proceed by diagonalizing the spin system, for which we introduce bond operators expressing the creation and annihilation of singlet and triplet states on each dimer, \(i\). In the relevant limit, where small numbers of triplets form the elementary excitations (henceforth "triplons") above a sea of singlets, the exact identity [38; 39]
\[S_{1(2),i}^{\alpha}=\tfrac{1}{2}[\pm(s_{i}^{\dagger}t_{\alpha,i}+t_{\alpha,i}^ {\dagger}s_{i})-i\epsilon_{\alpha\beta\gamma}t_{\beta,i}^{\dagger}t_{\gamma,i}], \tag{7}\]
can be reduced to
\[S_{1(2),i}^{\alpha}=\pm\tfrac{1}{2}(t_{\alpha,i}+t_{\alpha,i}^{\dagger})+ \mathcal{O}(t^{\dagger}t), \tag{8}\]
Figure 1: **Magnetophononically driven alternating spin chain.** (a) Schematic representation of a spin chain with interaction parameters \(J\) and \(J^{\prime}\), spin damping \(\gamma_{\mathrm{s}}\), and spin-phonon coupling \(g\) to the strong bond (\(J\)); we refer to this system as the \(J\)-model. (b) Analogous model with spin-phonon coupling \(g^{\prime}\) only to the weak bond (\(J^{\prime}\)) of the alternating spin chain, to which we refer as the \(J^{\prime}\)-model. In both panels, blue ellipses denote dimer singlets and the red ellipse a triplon excitation. The phonon frequency is \(\omega_{0}\) and its damping is \(\gamma\), the pump laser drives the system at any frequency, \(\omega\), with electric field \(E_{0}\), and a weaker probe beam addresses it at frequency \(\Omega\) with field \(E_{1}\ll E_{0}\).
where \(\alpha\in\{x,y,z\}\) denotes both the spin component and the triplon flavor. The full expression [Eq. (7)] shows explicitly how triplon creation is accompanied by singlet annihilation, and vice versa, ensuring the hard-core property of the triplons. It is also the basis of a systematic perturbative approach that could be used to perform accurate calculations for alternating chains with much weaker dimerization (\(\lambda\to 1\)) [40; 41]. However, for the moderate values of \(\lambda\) that we consider (\(\lambda\lesssim 1/2\)), a reliable description of the elementary magnetic excitations is obtained by using only the first term of Eq. (8) (i.e. restricting the Hamiltonian to bilinear terms) and neglecting the hard-core property of the triplons [24]. A Fourier transformation and a Bogoliubov transformation, the latter using the basis of diagonal triplons \(\tilde{t}_{\alpha,k}\) [24], bring the spin Hamiltonian to the form
\[H_{\rm s}=\sum_{k,\alpha}\omega_{k}\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{ \alpha,k} \tag{9}\]
with dispersion relation
\[\omega_{k}=J\sqrt{1-\lambda\cos k}. \tag{10}\]
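As a quick numerical illustration (our addition, not taken from the paper), evaluating Eq. (10) at the dimerization \(\lambda=0.5\) used throughout recovers the two-triplon band edges \(2\omega_{\rm min}=1.414J\) and \(2\omega_{\rm max}=2.449J\) quoted below.

```python
# Minimal sketch (our addition): the triplon dispersion of Eq. (10) and the
# two-triplon band edges for the dimerization lambda = 0.5 used in the paper.
import numpy as np

J, lam = 1.0, 0.5
k = np.linspace(-np.pi, np.pi, 2001)
omega_k = J*np.sqrt(1 - lam*np.cos(k))            # Eq. (10)
print(f"2*omega_min = {2*omega_k.min():.3f} J")   # 1.414 J, at k = 0
print(f"2*omega_max = {2*omega_k.max():.3f} J")   # 2.449 J, at k = +/- pi
```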
To apply these transformations to the Hamiltonian describing the spin-phonon coupling, we introduce the wave-vector-dependent coefficients
\[y_{k} =J[1-\lambda\cos k/2]/\omega_{k} \tag{11a}\] \[y^{\prime}_{k} =J^{\prime}\cos k/(2\omega_{k}). \tag{11b}\]
With these we express the \(J\)-model in the form
\[H_{\rm sp,J}=g\hat{d}\sum_{k,\alpha}\big{[}y_{k}\tilde{t}_{\alpha,k}^{ \dagger}\tilde{t}_{\alpha,k}+\tfrac{1}{2}y^{\prime}_{k}\big{(}\tilde{t}_{ \alpha,k}^{\dagger}\tilde{t}_{\alpha,-k}^{\dagger}+\text{H.c.}\big{)}\big{]} \tag{12}\]
and for the \(J^{\prime}\)-model we obtain
\[H_{\rm sp,J^{\prime}}=-\frac{g^{\prime}\hat{d}}{2\lambda}\sum_{k,\alpha}y^{ \prime}_{k}\big{[}2\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,k}+\tilde{ t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,-k}^{\dagger}+\text{H.c.}\big{]}. \tag{13}\]
These two equations allow us to observe the leading differences and similarities of the two models. The most striking difference is that the prefactor \(gy_{k}\) of \(\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,k}\) in the \(J\)-model is changed to \(-g^{\prime}y^{\prime}_{k}/\lambda\) in the \(J^{\prime}\)-model, which amounts to a sizable decrease because \(|y^{\prime}_{k}|\ll y_{k}\) for most \(k\) values. An intriguing similarity arises in the prefactor of the pair-creation and -annihilation terms, where \(gy^{\prime}_{k}\) is changed to \(-g^{\prime}y^{\prime}_{k}/\lambda\). The sign of the prefactor matters little, because it can be changed by the unitary transformation \(\tilde{t}_{\alpha,k}\to i\tilde{t}_{\alpha,k}\), and thus we anticipate that similar results are to be expected if one compares \(J\)- and \(J^{\prime}\)-models with the property \(g/J=g^{\prime}/J^{\prime}\) (i.e. \(g=g^{\prime}/\lambda\)). We will illustrate this situation in Secs. III and IV.
The spin-phonon coupling contains trilinear bosonic terms incorporating two triplon operators and the displacement operator, \(\hat{d}\), of the driving IR phonon. We treat these trilinear terms by a dynamical mean-field approach. For the "spin part" of this term, the mean-field procedure consists of replacing \(\hat{d}\) by its expectation value, \(\langle\hat{d}\rangle=q(t)\), and keeping the action of the triplon operators. For the "phonon part" of this term, we replace the spin part by its expectation value to obtain for the \(J\)-model
\[H_{\rm sp,p,J}=g\hat{d}\sum_{k,\alpha}\big{[}y_{k}\langle\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,k}\rangle+\tfrac{1}{2}y^{\prime}_{k}\langle\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,-k}^{\dagger}+\text{H.c.}\rangle\big{]} \tag{14a}\] \[=gN\hat{d}\,(\mathcal{U}_{J}+\mathcal{V}_{J}), \tag{14b}\] where we used \[\mathcal{U}_{J}=\frac{1}{N}\sum_{k,\alpha}y_{k}\langle\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,k}\rangle \tag{15a}\] \[\mathcal{V}_{J}=\frac{1}{N}\sum_{k,\alpha}y^{\prime}_{k}\,\text{Re}\,\langle\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,-k}^{\dagger}\rangle. \tag{15b}\]
For the \(J^{\prime}\)-model we obtain the analogous form
\[H_{\rm sp,p,J^{\prime}} =-\frac{g^{\prime}\hat{d}}{2\lambda}\sum_{k,\alpha}y^{\prime}_{k} \big{\langle}2\tilde{t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,k}+\tilde{t}_{ \alpha,k}^{\dagger}\tilde{t}_{\alpha,-k}^{\dagger}+\text{H.c.}\big{\rangle} \tag{16a}\] \[=-(g^{\prime}/\lambda)N\hat{d}\,(\mathcal{U}_{J^{\prime}}+ \mathcal{V}_{J^{\prime}}), \tag{16b}\]
where \(\mathcal{V}_{J^{\prime}}=\mathcal{V}_{J}\) and
\[\mathcal{U}_{J^{\prime}}=\frac{1}{N}\sum_{k,\alpha}y^{\prime}_{k}\langle\tilde{ t}_{\alpha,k}^{\dagger}\tilde{t}_{\alpha,k}\rangle; \tag{17}\]
we draw attention to the replacement \(y_{k}\to y^{\prime}_{k}\) relative to the \(J\)-model. We stress in addition that the phonon oscillation and the expectation value of the total spin system are both extensive quantities, as a result of which their relative quantum fluctuations tend to zero in the thermodynamic limit (\(N\to\infty\)), which provides an excellent justification for the mean-field decoupling we employ.
### Quantum master equations
If a real quantum mechanical system is driven continuously, the absorbed energy will cause heating, which will on some timescale push the system beyond its quantum regime, if not also to very high temperatures (with modern laser intensities one may even surpass the melting point). A systematic treatment of the energy flow requires the consideration of an open quantum system, where relaxation and dissipation channels are included in addition to the external drive. For the spin chain pumped by an IR optical phonon (Fig. 1), we showed in Ref. [24] that the dissipation should be included on two levels, specifically the damping of the driven IR phonon and a direct damping of the triplon modes. Both are assumed to have their microscopic origin in the ensemble of phonon modes, particularly the acoustic ones, and
both are treated by means of the adjoint Lindblad master equation [26]
\[\frac{d}{dt}\langle O\rangle(t)=i \langle[H,O(t)]\rangle \tag{18}\] \[+\frac{1}{2}\sum_{j}\gamma_{j}\langle[L_{j}^{\dagger},O(t)]L_{j}+L _{j}^{\dagger}[O(t),L_{j}]\rangle,\]
where \(H\) is the Hamiltonian of the isolated system (the spin sector and the driven phonon), \(O(t)\) is an operator in the Heisenberg picture, \(\{L_{j}\}\) are Lindblad operators (operators of the isolated system that link it to its environment, the "bath"), and \(\{\gamma_{j}\}\) are the corresponding damping coefficients (decay rates). The Lindblad framework requires that the coefficients \(\{\gamma_{j}\}\) be relatively weak, but places no constraint on the terms within the isolated system, meaning that it can be applied for all values of \(g\) and \(g^{\prime}\) [Eqs. (5) and (6)].
To describe the dissipation of the phonon, we follow the conventional choice [26]\(L_{1}=b_{0}^{\dagger}\), \(L_{2}=b_{0}\), and parameterize the decay rates using
\[\gamma_{1}=\gamma n(\omega_{0}),\qquad\gamma_{2}=\gamma[1+n(\omega_{0})], \tag{19}\]
where \(n(\varpi)\) is the bosonic occupation number at energy \(\hbar\varpi\). The dynamics of the phonon are then that of a driven and damped harmonic oscillator,
\[\frac{d}{dt}q(t) =\omega_{0}p(t)-\tfrac{1}{2}\gamma q(t) \tag{20a}\] \[\frac{d}{dt}p(t) =-\omega_{0}q(t)-2\widetilde{E}(t)-\tfrac{1}{2}\gamma p(t)\] (20b) \[\frac{d}{dt}n_{\text{ph}}(t) =-\widetilde{E}(t)p(t)-\gamma n_{\text{ph}}(t), \tag{20c}\]
where \(p(t)=i\langle b_{0}^{\dagger}-b_{0}\rangle/\sqrt{N}\) is the momentum conjugate to \(q(t)\),
\[n_{\text{ph}}(t)=\frac{1}{N}\langle b_{0}^{\dagger}b_{0}\rangle \tag{21}\]
is the number of phonons per dimer, and \(\widetilde{E}(t)\), which denotes the effective electric field acting on the phonon in the presence of its coupling to the spin system, is defined for the \(J\)-model by
\[\widetilde{E}(t)=E_{0}(t)+g[\mathcal{U}_{J}(t)+\mathcal{V}_{J}(t)] \tag{22}\]
and for the \(J^{\prime}\)-model by
\[\widetilde{E}(t)=E_{0}(t)-(g^{\prime}/\lambda)[\mathcal{U}_{J^{\prime}}(t)+ \mathcal{V}_{J^{\prime}}(t)]. \tag{23}\]
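Before adding the spin sector, it is instructive to integrate Eqs. (20a) and (20b) for the bare phonon (\(g=0\)); the sketch below is our construction, assuming a harmonic drive \(E_{0}(t)=E_{0}\cos\omega t\) (the drive waveform is not specified in this excerpt). Near resonance the NESS amplitude of \(q(t)\) should follow the Lorentzian \(E_{0}/\sqrt{(\omega-\omega_{0})^{2}+\gamma^{2}/4}\).

```python
# Sketch (our construction) of the bare driven, damped phonon of Eqs. (20a)
# and (20b) with g = 0, assuming a harmonic drive E0(t) = E0*cos(w*t).
import numpy as np
from scipy.integrate import solve_ivp

w0 = 1.0                       # phonon frequency (units of J)
gamma = 0.02*w0                # phonon damping, as in the text
E0 = 0.2*gamma                 # field strength, as in Sec. III A

def rhs(t, y, w):
    q, p = y
    E = E0*np.cos(w*t)
    return [w0*p - 0.5*gamma*q, -w0*q - 2*E - 0.5*gamma*p]

for w in (0.9, 1.0, 1.1):
    # integrate well past the relaxation time 2/gamma to reach the NESS
    sol = solve_ivp(rhs, (0, 1500), [0.0, 0.0], args=(w,),
                    max_step=0.1, rtol=1e-8)
    q_amp = np.abs(sol.y[0][sol.t > 1000]).max()
    lorentz = E0/np.sqrt((w - w0)**2 + 0.25*gamma**2)
    print(f"w = {w:.1f}: |q|_max = {q_amp:.4f}, Lorentzian = {lorentz:.4f}")
```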
To describe the dissipation of the triplons, we proceed in a similar way by choosing the Lindblad operators for each triplon \(k\)-mode to be \(L_{1}=\tilde{t}^{\dagger}_{\alpha,k}\) and \(L_{2}=\tilde{t}_{\alpha,k}\), with the corresponding decay rates given by
\[\gamma_{1}=\gamma_{\text{s}}n(\omega_{k}),\qquad\gamma_{2}=\gamma_{\text{s}} [1+n(\omega_{k})]. \tag{24}\]
Taking \(\gamma_{1}\) and \(\gamma_{2}\) to be independent of \(\alpha\) is a consequence of the isotropy of the spin system, but taking them to be independent of the momentum, \(\hbar k\), is a simplifying approximation that we make to avoid overburdening the model with a multitude of parameters. This assumption can be justified by the fact that we consider a one-dimensional spin system whose energy is dissipated into a bath of three-dimensional phonons. In this geometry, for any given wave vector, \(k\), in the chain direction there remain two perpendicular directions over which one has to sum in order to capture the full phonon bath, whence one does not expect a strong dependence on \(k\) within this continuum of dissipation channels.
Here we remind the reader that the Lindblad operators \(\tilde{t}^{\dagger}_{\alpha,k}\) and \(\tilde{t}_{\alpha,k}\) correspond to the creation and annihilation of a quasiparticle with spin. As a consequence, these assumed bath processes do not conserve the total spin and thus are not in fact consistent with the form of the spin-phonon coupling assumed in Eqs. (5) and (6), where the isotropy of the spin part means that a phononic damping process cannot change the spin quantum number. As explained in detail in Ref. [24], this inconsistency would be repaired for a spin-isotropic material by assuming a more complex and spin-conserving form for the bath operators (for example \(C_{kq}=\tilde{t}^{\dagger}_{\alpha,k}\tilde{t}_{\alpha,q}\)) and for a material with anisotropic spin interactions, usually a consequence of finite spin-orbit couplings, by adopting a more complex form for the spin-phonon coupling. However, to make progress in elucidating the phenomenology of strong-coupling magnetophononics in the most transparent way possible, we proceed with the present minimalist formulation of the problem captured by the spin-chain model of Sec. IIA.
The equations of motion for the spin sector of the \(J\)-model that result from the Lindblad master equation [Eq. (18)] take the form
\[\frac{d}{dt}u_{k}(t) =2gq(t)y^{\prime}_{k}w_{k}(t)-\gamma_{\text{s}}u_{k}(t) \tag{25a}\] \[\frac{d}{dt}v_{k}(t) =-2[\omega_{k}+gq(t)y_{k}]w_{k}(t)-\gamma_{\text{s}}v_{k}(t)\] (25b) \[\frac{d}{dt}w_{k}(t) =2[\omega_{k}+gq(t)y_{k}]v_{k}(t)-\gamma_{\text{s}}w_{k}(t)\] (25c) \[\qquad+2gq(t)y^{\prime}_{k}[u_{k}(t)+\tfrac{3}{2}],\]
where
\[u_{k}(t) =\sum_{\alpha}\langle\tilde{t}^{\dagger}_{\alpha,k}\tilde{t}_{\alpha,k}\rangle,\qquad z_{k}(t) =\sum_{\alpha}\langle\tilde{t}^{\dagger}_{\alpha,k}\tilde{t}^{\dagger}_{\alpha,-k}\rangle, \tag{26a}\] \[v_{k}(t) =\operatorname{Re}z_{k}(t),\qquad w_{k}(t) =\operatorname{Im}z_{k}(t). \tag{26b}\]
Analogously, for the \(J^{\prime}\)-model we obtain
\[\frac{d}{dt}u_{k}(t) =-2(g^{\prime}/\lambda)q(t)y^{\prime}_{k}w_{k}(t)-\gamma_{\text{s} }u_{k}(t) \tag{27a}\] \[\frac{d}{dt}v_{k}(t) =-2[\omega_{k}-(g^{\prime}/\lambda)q(t)y^{\prime}_{k}]w_{k}(t)- \gamma_{\text{s}}v_{k}(t)\] (27b) \[\frac{d}{dt}w_{k}(t) =2[\omega_{k}-(g^{\prime}/\lambda)q(t)y^{\prime}_{k}]v_{k}(t)- \gamma_{\text{s}}w_{k}(t)\] (27c) \[\qquad-2(g^{\prime}/\lambda)q(t)y^{\prime}_{k}[u_{k}(t)+\tfrac{3}{2}].\]
The full system of equations of motion then contains three equations for the driving phonon and \(3N\) for the triplons (for a system consisting of \(N\) dimers). Because every triplon \(k\)-mode is coupled to the phonon variables, and the latter to sums over all the triplons, the system cannot be split into \(N\) separate sets of differential equations. The inversion symmetry of the chain ensures that \(y_{k}=y_{-k}\), \(y^{\prime}_{k}=y^{\prime}_{-k}\), \(u_{k}=u_{-k}\), \(z_{k}=z_{-k}\), and \(\omega_{k}=\omega_{-k}\), which reduces the \(3N\) triplon equations to \(3(N+1)/2\) for odd \(N\). We solve these coupled differential equations numerically with an adaptive Runge-Kutta solver, which allows long times, \(t\), to be accessed reliably. The chain lengths we consider vary between \(N=1001\) and \(4001\) in order to ensure that finite-size effects are well controlled in all regimes.
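To make the procedure concrete, the following self-contained sketch (our reconstruction, not the authors' code) integrates the \(J\)-model system, i.e. Eqs. (20) and (25) coupled through Eq. (22). The harmonic waveform \(E_{0}(t)=E_{0}\cos\omega t\) and the spin damping \(\gamma_{\rm s}=0.01J\) are illustrative assumptions not fixed by the text, and a short chain and run time are used for speed; the \(J^{\prime}\)-model follows from the replacements noted around Eq. (27).

```python
# Self-contained sketch (our reconstruction) of the coupled equations of
# motion for the J-model: phonon Eqs. (20), triplons Eqs. (25), feedback via
# Eq. (22). Assumed, not from the paper: E0(t) = E0*cos(w*t), gamma_s = 0.01J,
# and the small chain N = 101 (the paper uses N = 1001-4001).
import numpy as np
from scipy.integrate import solve_ivp

J, lam, N = 1.0, 0.5, 101
k = 2*np.pi*np.arange(N)/N
wk = J*np.sqrt(1 - lam*np.cos(k))                   # Eq. (10)
yk = J*(1 - 0.5*lam*np.cos(k))/wk                   # Eq. (11a)
ypk = lam*J*np.cos(k)/(2*wk)                        # Eq. (11b)

g = 0.3*J                                           # spin-phonon coupling
w0 = 0.707*J                                        # driven phonon frequency
w = w0                                              # drive at omega = omega_0
gamma, gamma_s = 0.02*w0, 0.01*J                    # dampings
E0 = 0.2*gamma                                      # standard field strength

def rhs(t, y):
    q, p, nph = y[:3]
    u, v, wv = y[3:3+N], y[3+N:3+2*N], y[3+2*N:]
    UJ = np.mean(yk*u)                              # Eq. (15a)
    VJ = np.mean(ypk*v)                             # Eq. (15b)
    Eeff = E0*np.cos(w*t) + g*(UJ + VJ)             # Eq. (22)
    Wk = wk + g*q*yk                                # shifted triplon frequency
    dq = w0*p - 0.5*gamma*q                         # Eq. (20a)
    dp = -w0*q - 2*Eeff - 0.5*gamma*p               # Eq. (20b)
    dnph = -Eeff*p - gamma*nph                      # Eq. (20c)
    du = 2*g*q*ypk*wv - gamma_s*u                   # Eq. (25a)
    dv = -2*Wk*wv - gamma_s*v                       # Eq. (25b)
    dw = 2*Wk*v - gamma_s*wv + 2*g*q*ypk*(u + 1.5)  # Eq. (25c)
    return np.concatenate(([dq, dp, dnph], du, dv, dw))

sol = solve_ivp(rhs, (0, 600), np.zeros(3 + 3*N), max_step=0.1, rtol=1e-7)
n_ph = sol.y[2]                                     # phonons per dimer
n_x = sol.y[3:3+N].mean(axis=0)                     # triplons per dimer, Eq. (28)
late = sol.t > 500
print("late-time <n_ph> =", n_ph[late].mean(), " <n_x> =", n_x[late].mean())
```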
In the analyses to follow, we will characterize the system by introducing a number of measures. For the spin system it is convenient to use the number of excited triplons per dimer, which is given by
\[n_{\rm x}(t)=\frac{1}{N}\sum_{k}u_{k}(t). \tag{28}\]
If a system is driven at frequency \(\omega\), for a time sufficiently long that it has reached a NESS, no other frequency will appear in the expectation values of the observables, regardless of the available fundamental frequencies in the system (notably \(\omega_{0}\) when this differs from \(\omega\)). Only higher harmonics at integer multiples of \(\omega\) will appear, and these are expected in any coupled system [24]. In order to focus on the important average values of the time-dependent quantities, for any expectation value \(X(t)\) we define
\[X_{0}=\frac{1}{T}\int_{t}^{t+T}X(t)dt, \tag{29}\]
which represents the average of \(X\) over one period, \(T=2\pi/\omega\). Below we will consider quantities including \(n_{\rm ph0}\), \(n_{\rm x0}\), and \(u_{k,0}\). Only if one considers the transients appearing directly after switching on the drive do the average values, \(X_{0}\), acquire a time-dependence, \(X_{0}(t)\)[24].
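For completeness, a minimal helper (our addition) implementing the period average of Eq. (29), e.g. for extracting \(n_{\rm ph0}\) or \(n_{\rm x0}\) from time traces such as those generated by the sketch above:

```python
# Helper sketch (our addition) for the period average X_0 of Eq. (29).
import numpy as np

def period_average(t, X, w):
    """Average the sampled signal X(t) over the final drive period T = 2*pi/w.

    Assumes t is sorted and sampled densely enough to resolve one period.
    """
    T = 2*np.pi/w
    mask = t >= t[-1] - T
    return np.trapz(X[mask], t[mask])/T
```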
The focus of our calculations is on the NESS of the driven, dissipative system. We consider a representative alternating chain with moderate dimerization, \(\lambda=0.5\) in Eq. (10), which places the edges of the two-triplon excitation band at \(2\omega_{\rm min}=1.414J\) and \(2\omega_{\rm max}=2.449J\)[24]. When the spin-phonon coupling is weak, NESS formation requires a timescale of approximately \(5/\gamma_{\rm s}\), i.e. five time constants of the spin system [24]. However, at large \(g\) values one may anticipate strong feedback processes between the phonon and spin sectors, making it necessary to examine the situation in detail, and potentially to wait for significantly longer times to ensure NESS formation. An example of complementary phonon and spin NESS, each characterized by their number operators, is shown in the time domain for a nonresonant phonon frequency in Figs. 2(a) and 2(b) at weak, strong, and very strong values of \(g\). Here we have set the driving frequency to the phonon frequency (\(\omega=\omega_{0}\)), but neither lies within the two-triplon band. It is clear that both time traces contain increasingly complex combinations of harmonics as \(g\) is increased, with second-harmonic contributions dominating at the chosen frequency, and also that the amplitude of the oscillatory part of the phonon occupation
Figure 2: **Magnetophononic driving in the time domain.** Examples of phonon (a) and spin NESS (b), characterized respectively by \(n_{\rm ph}(t)\) and \(n_{\rm x}(t)\), shown for a \(J\)-model with typical driving and damping parameters. The driving frequency is set to the phonon frequency, \(\omega=\omega_{0}=0.707J\), which is set to half the lower two-triplon band-edge frequency of the isolated spin system. The development of the phonon (c) and spin responses (d) from \(t=0\) illustrates how feedback from the spin sector arrests the growth of the phonon occupation at a very early stage, although the slow, damped oscillations in the average values mean that the NESS is reached at approximately the same time in all cases. Switching off the drive at \(t=1500J^{-1}\) demonstrates that relaxation is governed entirely by the relevant damping coefficients, \(\gamma\) (c) and \(\gamma_{\rm s}\) (d).
behaves non-monotonically as a function of \(g\), becoming suppressed at very strong \(g\). The oscillatory part of the triplon occupation rises very strongly with \(g\) on exiting the weak-coupling regime, but is also suppressed at very strong coupling. The corresponding static parts of both occupations are considered in Sec. III below. Concerning the timescale for NESS formation, Fig. 2(c) shows how the initial rise of \(n_{\rm ph}\) is truncated by the rise of \(n_{\rm x}\) [Fig. 2(d)]. In this nonresonant regime, at \(g=0.3J\) there remains a significant time lag between the driving phonon and the following triplon occupations, where the latter limit the former and convergence requires one slow oscillation cycle, whose length is determined by the feedback process. At \(g=0.5J\), the lag in response is much shorter and several slow oscillation cycles are required.
We close our technical presentation with a number of comments. The equations of motion are valid at all times from the onset of driving (\(t=0\)) to infinity and for all applied electric fields, as well as for all phonon occupations up to the Lindemann melting criterion (\(n_{\rm ph}\approx 3\)). With the present simplified treatment of the spin sector, they are valid up to a triplon occupation of order \(n_{\rm x}\approx 0.2\), beyond which a more sophisticated numerical treatment should be used to account for the hard-core occupation constraint. Because the equations of motion are based on a mean-field decoupling of the spin and lattice sectors, the treatment we present becomes more approximate at low phonon frequencies, specifically those below \(\omega_{0}=0.2\)-\(0.3J\)[24]. Nevertheless, one may verify by considering the energy flow through the strongly spin-phonon-coupled system that the mean-field approximation remains very accurate at all phonon frequencies close to resonance with the spin system.
Finally, one may question the stability of the alternating chain in the presence of phononic driving, particularly when this is very strong or very slow. In fact the sharp fall in the driven phonon occupation at very small \(\omega_{0}\) in Figs. 3(a) and 3(d) below is related to a ground-state instability of the chain, where a stimulated distortion can occur (the average phonon displacement, \(q_{0}\), becomes finite) in the presence of sufficiently slow phonons. One may show that stability requires \(\omega_{0}>\omega_{0}^{c}=F(\lambda)g^{2}\lambda^{2}J\), and that for a \(J\)-model with \(\lambda=0.5\) and \(g/J=0.5\) this critical value is \(\omega_{0}^{c}\simeq 0.07J\), while in a \(J^{\prime}\)-model with \(\lambda=0.5\) and \(g^{\prime}/J^{\prime}=0.5\) it is \(\omega_{0}^{c}\simeq 0.14J\).
## III Self-Blocking
### NESS protocol
We consider first the NESS established by steady laser driving at the frequency of the target IR phonon, i.e. \(\omega=\omega_{0}\). In Fig. 3(a) we show \(n_{\rm ph0}\) at this frequency choice, which we denote \(\overline{n}_{\rm ph0}\), as \(\omega_{0}\) is varied across the full frequency range for the \(J\)-model. We use a laser electric-field strength (\(E_{0}=0.2\gamma\), expressed in energy units with \(\hbar=1\)) and phonon damping (\(\gamma=0.02\omega_{0}\)) that we maintain constant for the remainder of the analysis, and refer to these henceforth as standard driving and damping conditions. At small \(g\), \(\overline{n}_{\rm ph0}\) is effectively constant for all \(\omega_{0}\), but as \(g\) is increased we observe an increasing suppression of \(\overline{n}_{\rm ph0}\) that sets in precisely where the density of two-triplon excitations is highest, i.e. at \(2\omega_{\rm min}=1.414J\) and \(2\omega_{\rm max}=2.449J\). This resonant effect becomes gigantic at strong \(g\), suppressing the phonon occupation by nearly three orders of magnitude at \(2\omega_{\rm min}\).
We have named this effect "self-blocking" because the magnetic system acts to block its own energy uptake by suppressing the driven phonon. This behavior is surprising if one expects stronger energy absorption when more spin excitations coincide with the driving laser frequency. Its explanation lies in the fact [24] that in magnetophononic driving the spin system is not coupled to the light, but only to the driven phonon. In the NESS protocol, the light frequency is fixed but the effective phonon frequency is altered by its hybridization with the spin system, whose dependence on \(g\) we analyze in detail below, and thus the laser driving becomes increasingly non-resonant. Analytically, the prefactor of the phonon momentum, \(p(t)\), in the master equation for \(n_{\rm ph}(t)\) [Eq. (20c)] is not the driving electric field, \(E_{0}(t)\), but the quantity \(\widetilde{E}(t)\) specified in Eqs. (22) and (23). This effective feedback from the spin system is both strongly nonlinear in \(g\) and strongly negative, acting to cancel \(E_{0}(t)\) almost completely when \(\omega_{0}\) is at resonance with the band edges [Fig. 3(a)]. Despite the approximate symmetry of the two-triplon band, self-blocking is weaker by a factor of 10 at \(2\omega_{\rm max}\) due to matrix elements within the feedback process.
Turning to the response of the spin system, Fig. 3(b) shows the corresponding average triplon occupancy, \(\overline{n}_{\rm x0}\). The most striking feature is the strong rounding of the in-band response as \(g\) is increased. The band-edge peaks are entirely blunted by the strong suppression of \(\overline{n}_{\rm ph0}\) [Fig. 3(a)]. We stress that the effective limiting value \(\overline{n}_{\rm x0}\approx 0.1\) visible in Fig. 3(b) is purely a consequence of the giant self-blocking, and is not connected with the hard-core nature of the triplon excitations, which has not been included in the formalism of Sec. II. This rounding indicates an increasing localization of the spin response, by which the band character of the triplons becomes less relevant under strong driving by the entirely local phonon. Figure 3(b) also displays a somewhat counter-intuitive non-monotonic behavior for frequencies close to the band edges, where increasing \(g\) leads to a lower number of excited triplons due to the lower number of driving phonons caused by the self-blocking. Normalizing \(\overline{n}_{\rm x0}\) to \(\overline{n}_{\rm ph0}\), as shown in Fig. 3(c), reveals a set of near-identical response curves sorted in ascending order of \(g\), and hence that larger values of the spin-phonon coupling do indeed lead to larger numbers of excited triplons per excited phonon.
Still one might suspect that self-blocking is a special feature of a strongly dimerized \(J\)-model, in the sense that a \(k=0\) phonon strongly coupled to the intradimer bonds
could push the system into a completely local limit of isolated spin dimers. However, to show that self-blocking is a general feature of a magnetophononic model, that occurs also when the geometry does not allow the phonon to cut the spin system into local subsystems, we consider the properties of the \(J^{\prime}\)-model, illustrated in Fig. 3(d). It is clear on the qualitative level that the self-blocking phenomenon is identical, with strong suppression of the absorbed laser energy, and hence of the driven phonon occupation, setting in at the band edges and rising dramatically with \(g^{\prime}\). On the quantitative level, if one compares models with the same values of \(g/J\) and \(g^{\prime}/J^{\prime}\) the results for \(\overline{n}_{\rm ph0}\) and \(\overline{n}_{\rm x0}\) [Fig. 3(e)], and hence for the normalized quantity \(\overline{n}_{\rm x0}/\overline{n}_{\rm ph0}\) [Fig. 3(f)], are similar to the degree that they cannot be distinguished on logarithmic intensity scales. This is a consequence of the close similarities between the spin-phonon coupling terms, and hence between the equations of motion, discussed in Sec. II. We comment also that in both models the dominant self-blocking effects are concentrated around the edges of the two-triplon spectrum of the isolated spin system, indicating that, despite any tendency towards localization favored by the strong phonon coupling, the band character of the spin system is largely preserved.
Away from the two-triplon band, in Fig. 3(a) for the \(J\)-model and Fig. 3(d) for the \(J^{\prime}\)-model we also observe a significant suppression of phonon energy entering the system at any frequency \(\omega_{0}<2\omega_{\rm min}\). This nonresonant self-blocking is also nonlinear in \(g\), exceeding one order of magnitude at \(g=0.5J\) and \(g^{\prime}=0.5J^{\prime}\). Its appearance only in the low-\(\omega_{0}\) regime, but not at \(\omega_{0}>2\omega_{\rm max}\), points to an origin in multiple harmonic processes (\(2\omega_{\rm min}\leq n\omega_{0}\leq 2\omega_{\rm max}\)) [24]. Although only the two-phonon harmonic (\(n=2\)) at \(\omega_{\rm min}\) is visible directly, stronger \(g\) distributes the response of the system to a given \(n\omega_{0}\) across a broader range of frequencies. By contrast, a driving phonon at the band center (\(\omega_{0}=2J\)) has vanishing matrix elements (\(y^{\prime}_{\pi/2}=0\)) with the resonant spin modes, and hence \(\overline{n}_{\rm ph0}\) recovers almost to its \(g=0\) or \(g^{\prime}=0\) values for all \(g\) or \(g^{\prime}\).
To understand the context of these results, we stress again that our results are obtained for an idealized NESS experiment, where, as stated in Sec. II, in the long-time limit there are no frequencies in the system other than \(\omega\), which has been selected equal to \(\omega_{0}\), and multiples thereof. In this sense the panels of Fig. 3 must be interpreted "vertically," because there is no possibility of spectral-weight transfer between different frequencies. Although the characteristic resonant frequencies of the spin-phonon-coupled system are shifted as \(g\) is increased, and thus the system is simply off-resonance for energy uptake, in the \(\omega=\omega_{0}\) NESS protocol there is no other option. Hence the self-blocking observed in Fig. 3 can be sought in experiments constructed to achieve a NESS, and we have found in this limit that it is a truly giant phenomenon.
### Pulsed protocol
For a broader view, however, one does wish to understand the full response of the driven system beyond the
Figure 4: **Strongly hybridized excitations.** (a) Phonon occupation, \(n_{\rm ph0}\), shown as a function of driving frequency, \(\omega\), for \(J\)-models with a phonon frequency of \(\omega_{0}=1.35J\) (vertical dashed line) and a range of different \(g\) values. The standard driving and damping parameters of Fig. 3 are used. (b) As in panel (a) for \(\omega_{0}=1.45J\). (c) \(n_{\rm x0}(\omega)\) for \(\omega_{0}=1.35J\), corresponding to panel (a). (d) \(n_{\rm x0}(\omega)\) for \(\omega_{0}=1.45J\), corresponding to panel (b). (e) Peak-pair frequencies, labelled \(\omega_{1}^{\rm hyb}\) and \(\omega_{2}^{\rm hyb}\), taken from panels (a), (b), (c), and (d), shown for all calculated \(g\) values; black dashed lines indicate a \(g^{2}\) form. (f) \(n_{\rm ph0}(\omega)\) shown for \(J\)-models with a phonon frequency of \(\omega_{0}=2.4J\) at the same set of \(g\) values and with the same standard driving and damping. (g) As in panel (f) for \(\omega_{0}=2.5J\). (h) \(n_{\rm x0}(\omega)\) for \(\omega_{0}=2.4J\), corresponding to panel (f). (i) \(n_{\rm x0}(\omega)\) for \(\omega_{0}=2.5J\), corresponding to panel (g). (j) Peak-pair frequencies taken from panels (f), (g), (h), and (i); black dashed lines indicate a \(g^{2}\) form.
NESS protocol. It is well known even in the absence of driving that strong spin-phonon coupling leads to hybridization and anticrossing of the spin and phonon excitations, such that the bare spin and phonon dispersion relations are no longer the characteristic resonant frequencies. Before presenting our results for systems with fixed phonon frequencies, \(\omega_{0}\), driven at a range of different frequencies, \(\omega\), we note that a conventional ultrafast experiment already introduces a spectrum of driving frequencies within the envelope of each ultrashort pulse, and hence allows automatically for the "horizontal" transfer of spectral weight (i.e. between frequencies). For this reason we use the terminology "pulsed protocol," although in the remainder of the present work we will compare the NESS obtained in systems driven continuously at variable \(\omega\) (with constant laser intensity), leaving the accurate modelling of pulsed driving to a future study.
We focus primarily on the \(J\)-model and, because self-blocking is strongest at the band edges, in Fig. 4 we consider phonon frequencies, marked by the vertical dashed lines, just below and above each of the band edges. Figure 4(a) makes clear that the phononic response of a system driven at a frequency just below the lower band edge (here \(\omega_{0}=1.35J\)) has a conventional Lorentzian resonance centered at the bare phonon frequency for small \(g\), but is weakened and pushed away from the band edge at stronger \(g\). Figure 4(b) shows the analogous result when \(\omega_{0}\) lies just inside the spin band, where the phonon peak is damped very strongly with increasing \(g\), and also moves away from the band edge by the same level-repulsion effect. Here it is accompanied by the development of a second feature, appearing at \(2\omega_{\rm min}\) at \(g=0.1J\), which is repelled below the band edge as \(g\) increases. The accompanying spin response [Figs. 4(c) and 4(d)], which we analyze in Sec. IV below, shows that the appearance of two mutually repelling peaks is generic, and that the characteristic excitation frequencies are shifted quite significantly away from \(\omega_{0}\) and \(2\omega_{\rm min}\) at large \(g\). The situation for phonon frequencies (\(\omega_{0}=2.4J\) and \(2.5J\)) bracketing the upper band edge is exactly analogous, although with a slightly weaker mutual repulsion [Figs. 4(f) to 4(i)].
In the context of self-blocking, these results show how the phononic response is shifted "horizontally," losing its overlap with the bare response curve, as \(g\) increases. This shift is the physical reason underlying the rapid drop in the quantity \(\overline{n}_{\rm ph0}\) (i.e. for driving at \(\omega=\omega_{0}\)). Nevertheless, we stress again that driving at frequency \(\omega\) can only produce a response at frequencies \(n\omega\), with no spectral-weight transfer to neighboring frequencies: Fig. 4 was prepared by driving at every individual frequency \(\omega\) represented on the \(x\)-axes and by solving Eqs. (22) and (23) to obtain the NESS at each \(\omega\). This does mean that Fig. 4 can be used to pose the question of whether any self-blocking is present if the laser frequency is adjusted to resonance with the peak of the phononic response at each value of \(g\). In this situation one observes that phonons lying outside the spin excitation band undergo only a small reduction with increasing \(g\) (factors of 2 or less up to \(g=0.5J\)), whereas phonons lying inside the band are suppressed by 1-2 orders of magnitude due to their hybridization with many two-triplon scattering states.
## IV Hybrid Phonon-Bitriplon States
As the discussion of Sec. IIIB made clear at the qualitative level, the hybridization of phononic and magnetic excitations is fundamental to the properties of any spin-phonon-coupled system, and thus to the phenomenology of magnetophononic driving. To elucidate the nature of the states created by the coupling of the driving phonon to triplon pairs, we proceed to a quantitative analysis of the frequency shifts and consider the extent of hybridization within these composite collective entities.
Returning to the features of Fig. 4, we have already remarked on the appearance of two mutually repelling excitation features when the system is driven at a phonon frequency close to the band edge. Because the second set of excitations is not evident in Figs. 4(a) and 4(g), where the in-band phononic response to an out-of-band \(\omega_{0}\) is very weak, in Figs. 5(a) and 5(b) we show the same data on a logarithmic intensity scale, confirming the presence of a second peak that grows with \(g\). In Figs. 5(c) and 5(d) we show the analogous data obtained for \(J^{\prime}\)-models with the same range of \(g^{\prime}/J^{\prime}\) values. Clearly, as in Sec. IIIA, the physics of hybrid states in the \(J^{\prime}\)-model is identical to the \(J\)-model up to minor quantitative details (that can be traced to the matrix elements of the triplon pair-creation
Figure 5: **Phononic response to band-edge driving.** (a) Phonon occupation, \(n_{\rm ph0}(\omega)\), shown on a logarithmic axis at \(\omega_{0}=1.35J\) for \(J\)-models with selected values of \(g\). The standard driving and damping parameters of Fig. 3 are used. (b) As in panel (a) at \(\omega_{0}=2.5J\). [The same data are shown on a linear scale in Figs. 4(a) and 4(g).] (c) \(n_{\rm ph0}(\omega)\) shown on a logarithmic axis at \(\omega_{0}=1.35J\) for \(J^{\prime}\)-models with selected values of \(g^{\prime}\). (d) As in panel (c) for \(\omega_{0}=2.5J\).
and -annihilation terms in Eqs. (12) and (13)), and thus we do not discuss the \(J^{\prime}\)-model further in this section.
A key additional observation concerning the second set of excitation features is that they appear for small \(g\) at the band-edge frequencies, \(2\omega_{\rm min}\) in Figs. 4(a) to 4(d), 5(a), and 5(c) and \(2\omega_{\rm max}\) in Figs. 4(f) to 4(i), 5(b), and 5(d), before being repelled further from the bare phonon frequency as \(g\) is increased. Thus one obtains a picture of "magnetic" hybrid states being induced within the spin sector by the influence of the driven "phononic" state, before stronger \(g\) values cause a strong admixture of lattice and spin character.
One way to confirm the hybrid nature of these states is to begin in the regime where weak hybridization is guaranteed. In Fig. 6(a) we show the phonon and triplon occupations obtained when the driving frequency, \(\omega_{0}=0.6J\), lies far below the two-triplon band. While \(n_{\rm ph0}(\omega)\) undergoes only minor changes, indicating that this is a rather well localized phononic mode, its hybridization is clearly strong enough to shift its frequency out of the regime covered by Fig. 3(a). \(n_{\rm x0}(\omega)\) indicates that a spin response does emerge with \(g\) despite the nonresonant self-blocking, and in Fig. 6(b) we show that this magnetic dressing of the phononic mode contains all the \(k\)-components of \(n_{\rm x0}(\omega)\). In Figs. 6(c) and 6(d) we show the analogous results for a driving frequency, \(\omega_{0}=3.0J\), lying well above the two-triplon band, where the hybridization effects are qualitatively similar but quantitatively are much weaker. Here we do not attempt to discern the weak effects of these off-resonant driving processes at the band edges.
Returning to the regime of strong hybridization controlled by \(g\), driving frequencies near both band edges are shown in Fig. 4. For a complete analysis of the mutual repulsion, we gather the characteristic frequencies of these phonon and spin spectra in Figs. 4(e) and 4(j), which display clearly the \(g^{2}\) evolution in frequency shift expected of hybrid excitations. To quantify the admixture of lattice and spin character, at the lower band edge one may define a hybridization parameter \(s=g/|\omega_{0}-2\omega_{\rm min}|\), and similarly with \(2\omega_{\rm max}\) for the upper band edge. A language of "phononic" and "magnetic" hybrids is useful at \(\omega_{0}=0.6J\) and \(3.0J\) (Fig. 6), where \(s<1\) for all \(g\). However, when \(s\approx 10\) both hybrids are strongly magnetic and phononic, and indeed the 50:50 weight distribution evident at larger \(g\) in Figs. 4(c), 4(d), 4(h), and 4(i) suggests states that are maximally hybridized. For the hybrids repelled outside the band, the coinciding peaks in \(n_{\rm ph0}(\omega)\) and \(n_{\rm x0}(\omega)\) identify them as a strongly triplon-dressed version of the "phononic" hybrid shown in Fig. 6. The in-band hybrids lie in a continuum of propagating triplon-pair states, and thus manifest themselves as broader peaks, whose maxima lie at slightly different energies in \(n_{\rm ph0}(\omega)\) [Figs. 4(b) and 4(f)] and in \(n_{\rm x0}(\omega)\) [Figs. 4(d) and 4(h)].
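As a quick numerical illustration of this parameter, the snippet below evaluates \(s\) for the far-detuned phonons of Fig. 6, taking the band-edge values \(2\omega_{\rm min}\approx 1.35J\) and \(2\omega_{\rm max}\approx 2.5J\) read off from Figs. 4 and 5 as assumptions of the sketch.

```python
# Hybridization parameter quantifying phonon-bitriplon mixing:
# s = g/|w0 - 2*w_min| at the lower edge, and the analogue with 2*w_max
# at the upper edge. Band-edge values (units of J) are assumptions read
# off from Figs. 4 and 5.
two_w_min, two_w_max = 1.35, 2.5

def s_lower(g, w0):
    return g / abs(w0 - two_w_min)

def s_upper(g, w0):
    return g / abs(w0 - two_w_max)

for g in (0.1, 0.3, 0.5):
    print(f"g={g}J: s(w0=0.6J)={s_lower(g, 0.6):.2f}, "
          f"s(w0=3.0J)={s_upper(g, 3.0):.2f}")
# e.g. g=0.3J gives s = 0.40 and 0.60, the weakly hybridized regime.
```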
For the driving and damping of our system, all of the strongly hybridized states are to a good approximation "phonon-bitriplons," in which each phonon hybridizes with one triplon pair (\(\vec{t}_{k}^{\dagger}\vec{t}_{-k}\)) of zero net momentum. For specificity we reserve the term "phonon-bitriplon" for the composite collective entity formed at \(s\geq 1\). However, it is clear that all of the hybrid states forming in a system with a spin-phonon coupling of the form described by Eqs. (5) and (6) involve the dressing of phonons by bitriplons, and conversely. We comment that the formation of composites from a boson pair and a single boson of a different kind is not common in condensed matter physics. A more common scenario is composite formation involving a boson and a pair of fermions, for example where a photon and an exciton form a polariton. Scattering processes of one boson into two different bosons are somewhat more familiar, with phonon-bimagnon processes being discussed both theoretically [42; 43] and experimentally [44; 45] in the optical spectra of cuprate quantum magnets. Phonon-bimagnon processes at GHz frequencies are now applied increasingly in the field of magnonics [46], while photon-bimagnon processes are engineered in cavity spintronics [47]. Similar physics could also be realized using ultracold atoms [48], where the optical lattice blurs the distinction between photon and phonon, although we are not aware of such an experiment. On a very different energy scale, in particle physics the virtual decay of the Higgs boson into pairs of W or Z bosons [49; 50] is an off-shell process with intermediate \(s\), where the level repulsion of Figs. 4(e) and 4(j) is known as a "Higgs mass renormalization."
Figure 6: **Weakly hybridized excitations.** (a) Phonon occupation, \(n_{\rm ph0}(\omega)\) (dotted lines), and triplon occupation, \(n_{\rm x0}(\omega)\) (solid lines), shown for \(J\)-models with a phonon frequency of \(\omega_{0}=0.6J\) and the same set of \(g\) values as in Fig. 4. The standard driving and damping parameters of Fig. 3 are used. (b) \(k\)-resolved components of the average \(u_{k}(\omega)\) at \(\omega_{0}=0.6J\) for \(g=0.5J\). (c) \(n_{\rm ph0}(\omega)\) (dotted lines) and \(n_{\rm x0}(\omega)\) (solid lines) shown for a phonon frequency of \(\omega_{0}=3.0J\). (d) \(k\)-resolved components of the average \(u_{k}(\omega)\) at \(\omega_{0}=3.0J\).
## V Spin-band engineering
In Sec. IV we have shown how the driven phonon creates an additional hybrid state in the spectrum of the system, but we have not yet shown whether phononic driving can alter the excitation spectrum (i.e. the effective bulk properties) of the spin system. At first sight one might think that spin-band modification is not possible while the driven phonon is completely harmonic, and that this requires the anharmonic terms considered in nonlinear phononics. However, we will show that the coupling to the magnetic subsystem introduces an intrinsic nonlinearity, and thus that the spin spectrum can indeed be altered to a significant extent by a harmonically driven optical phonon.
### First-order band engineering
The leading order of a Magnus expansion [51] for time-dependent Hamiltonians consists of the time-averaged Hamiltonian. If one considers the \(J\)-model, Eqs. (25b) and (25c) describe oscillations of \(v_{k}\) and \(w_{k}\) at an average frequency \(\omega_{k}+gq_{0}y_{k}\) that differs for each wave vector (neglecting higher-order corrections in \(g\)), and hence the average value of \(v_{k}\) vanishes. Averaging Eq. (20b) implies that
\[0=-\omega_{0}q_{0}-2\widetilde{E}_{0}, \tag{30}\]
and hence that
\[q_{0}=-2g\mathcal{U}_{J0}/\omega_{0} \tag{31}\]
in the \(J\)-model, meaning that the driving causes the equilibrium (stationary) position of the phonon to be displaced by a finite amount, \(q_{0}\). By similar considerations for the \(J^{\prime}\)-model we obtain
\[q_{0}^{\prime}=2g^{\prime}\mathcal{U}_{J^{\prime}0}/(\lambda\omega_{0}). \tag{32}\]
In Fig. 7(a) we compute \(q_{0}\) in the \(J\)-model, using a laser frequency resonant with the driving phonon (\(\omega=\omega_{0}\), for which we use the notation \(\overline{q}_{0}\)), and in Fig. 7(b) we show our results for \(\overline{q}_{0}^{\prime}\) in the \(J^{\prime}\)-model.
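These closed-form expressions are trivial to evaluate once the driven expectation values are known. The sketch below implements Eqs. (31) and (32) directly; the values passed for \(\mathcal{U}_{J0}\), \(\mathcal{U}_{J^{\prime}0}\), and \(\lambda\) are purely illustrative placeholders, since in practice these quantities come from the NESS solution of the equations of motion.

```python
# Stationary phonon displacements of Eqs. (31) and (32). The driven
# expectation values U_J0 and U_Jp0 come from the NESS solution of the
# equations of motion; the numbers passed below are purely illustrative.
def q0_J(g, U_J0, w0):
    return -2.0 * g * U_J0 / w0              # Eq. (31), J-model

def q0_Jprime(gp, U_Jp0, w0, lam):
    return 2.0 * gp * U_Jp0 / (lam * w0)     # Eq. (32), J'-model

print(q0_J(g=0.3, U_J0=0.05, w0=1.7))                  # < 0, cf. Fig. 7(a)
print(q0_Jprime(gp=0.3, U_Jp0=0.005, w0=1.7, lam=0.6))
```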
In contrast to the results of Secs. III and IV, the stationary displacements differ dramatically between the two models. The values of \(\overline{q}_{0}\) are always negative and reach \(4\%\) of the lattice dimension at their maximum extent, which is one order of magnitude larger than the values of \(\overline{q}_{0}^{\prime}\). The phonon frequencies most effective in controlling \(\overline{q}_{0}\) are neither those at the band edges, where giant self-blocking suppresses the displacement almost completely, nor those at the band center, where the driving terms decouple, but the "quarter-band" ones around \(k=\pi/4\) and \(3\pi/4\). A broader range of mid-band frequencies is effective in modulating \(\overline{q}_{0}^{\prime}\), where the essential qualitative feature is a change in the sign of the displacement as the driving frequency is moved through the band center. The origin of these differences can be found by a closer inspection of the differential equations. In \(\mathcal{U}_{J}\) the matrix element is \(y_{k}\), while in \(\mathcal{U}_{J^{\prime}}\) it is \(y^{\prime}_{k}\), which changes sign at \(k=\pi/2\). The evolution of both quantities, shown by the dashed white lines in Fig. 7, indicates that their dominant contributions result from the resonant wave vector, \(k_{\rm res}\), selected by the driving phonon according to \(\omega_{0}=2\omega_{k_{\rm res}}\). The excellent agreement between the two sets of curves in Fig. 7 illustrates clearly the validity of Eqs. (31) and (32).

Figure 7: **Driven stationary phonon displacement.** (a) Average phonon displacement, \(\overline{q}_{0}\) (solid lines), shown as a function of the phonon frequency for \(J\)-models with three different \(g\) values and with standard driving and damping. (b) \(\overline{q}_{0}^{\prime}\) (solid lines) shown as a function of the phonon frequency for \(J^{\prime}\)-models with three different \(g^{\prime}\) values. Note the different \(y\)-axis scales on the two panels. The dashed white lines show the quantities computed on the right-hand sides of Eqs. (31) and (32).

Figure 8: **Spin-band engineering.** Renormalized upper (a,b) and lower (c,d) edges of the two-triplon excitation band of the \(J\)-model (a,c) and the \(J^{\prime}\)-model (b,d), shown as functions of the spin-phonon coupling, \(g\), for standard driving by phonons of frequencies \(\omega_{0}=1.7J\) and \(2.2J\). Note the differing frequency ranges relevant for the two models.
These stationary phonon displacements have a direct effect on the spin bands by modifying the magnetic interactions. Here we consider only the linear-order terms, which in the \(J\)-model cause the change \(J\to\tilde{J}=J(1+gq_{0})\), and hence renormalize the triplon dispersion [Eq. (10)] to
\[\tilde{\omega}_{k}=J(1+gq_{0})\sqrt{1-\lambda\cos k/(1+gq_{0})} \tag{33}\]
while in the \(J^{\prime}\)-model \(J^{\prime}\to\tilde{J}^{\prime}=J^{\prime}(1+g^{\prime}q_{0})\) yields the renormalized dispersion
\[\tilde{\omega}_{k}=J\sqrt{1-\lambda(1+g^{\prime}q_{0})\cos k}. \tag{34}\]
At this linear level, the triplons retain a cosinusoidal dispersion and the band-engineering effects of the phonon separate cleanly. A phonon mode coupling strictly to the intradimer bond of an alternating spin chain will renormalize the band center downwards without changing the band width, and a phonon mode coupling strictly to the interdimer bond will renormalize the band width, upwards or downwards, without changing the band center.
With a view to later detection (Sec. V.3), we define the lower and upper edges of the renormalized bands as
\[\tilde{\omega}_{\rm min} =\tilde{\omega}_{k=0}, \tag{35a}\] \[\tilde{\omega}_{\rm max} =\tilde{\omega}_{k=\pi}, \tag{35b}\]
and in Fig. 8 we show the evolution of the two-triplon band extrema with \(g\) and \(g^{\prime}\) for two phonon frequencies chosen near the lower and upper maxima of \(|\overline{q}_{0}|\) [Fig. 7(a)]. Qualitatively, in the \(J\)-model we observe the downwards renormalization of the band center by both phonons contrasting with a weak band-narrowing (for \(\omega_{0}=2.2J\)) or band-broadening (for \(\omega_{0}=1.7J\)) in the \(J^{\prime}\)-model. Quantitatively, the \(\omega_{0}=2.2J\) phonon is more effective in the \(J\)-model and the \(\omega_{0}=1.7J\) phonon in the \(J^{\prime}\)-model, but by far the most important observation is that our standard driving leads to percent effects on the band center [\(J\)-model, Figs. 8(a) and 8(c)] but only hundredths of a percent on the band width [\(J^{\prime}\)-model, Figs. 8(b) and 8(d)].
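A minimal numerical sketch of these two band-engineering channels, implementing the renormalized dispersions of Eqs. (33) and (34) and the band edges of Eqs. (35), is given below; the values assumed for \(\lambda\) and for the stationary displacements are illustrative only.

```python
import numpy as np

# Linear-order renormalized triplon dispersions, Eqs. (33) and (34), and
# band edges, Eqs. (35); energies in units of J. The values of lam and of
# the stationary displacements are assumptions for illustration.
def w_tilde_J(k, g, q0, lam):
    r = 1.0 + g * q0
    return r * np.sqrt(1.0 - lam * np.cos(k) / r)             # Eq. (33)

def w_tilde_Jp(k, gp, q0p, lam):
    return np.sqrt(1.0 - lam * (1.0 + gp * q0p) * np.cos(k))  # Eq. (34)

lam = 0.6
for model, disp, cpl, q0 in [("J ", w_tilde_J, 0.3, -0.02),
                             ("J'", w_tilde_Jp, 0.3, -0.002)]:
    w_min = disp(0.0, cpl, q0, lam)       # Eq. (35a)
    w_max = disp(np.pi, cpl, q0, lam)     # Eq. (35b)
    print(model, "center:", 0.5 * (w_min + w_max),
          "width:", w_max - w_min)
# The J-model shifts the band center; the J'-model changes the width.
```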
In Fig. 9 we investigate the scope for increasing band-engineering effects by increasing the laser electric field (mindful of the increased thermal management this mandates). For the chosen values of \(g\) and \(g^{\prime}\), we find that the two-triplon band extrema display a quadratic evolution only as far as our standard driving value, \(E_{0}=0.2\gamma\), before entering an extended regime of largely linear dependence. Only the band center for driving by a phonon with \(\omega_{0}=1.7J\) displays a more complex behavior, due presumably to extreme feedback effects setting in at very strong fields. We remark that such strong fields, used as ultrashort pulses to control the system temperature, can yield 10% effects on the engineered band center and 1% effects on the band width.

Figure 9: **Spin-band engineering by laser intensity.** Renormalized upper (a,b) and lower (c,d) edges of the two-triplon excitation band of a \(J\)-model with \(g/J=0.3\) (a,c) and a \(J^{\prime}\)-model with \(g^{\prime}/J^{\prime}=0.3\) (b,d), shown as functions of the driving laser electric field, \(E_{0}\), for driving by phonons of frequencies \(\omega_{0}=1.7J\) and \(2.2J\) and with standard damping. For reference, the standard driving field shown in all other figures is \(E_{0}=0.2\gamma\).

Figure 10: **Stationary phonon displacement and spin damping.** (a) Average phonon displacement, \(\overline{q}_{0}\), shown as a function of the phonon frequency for \(J\)-models with \(g=0.3J\) and with standard driving and phonon damping, but with three different values of the spin damping, \(\gamma_{\rm s}\). (b) \(\overline{q}^{\prime}_{0}\) shown as a function of the phonon frequency for \(J^{\prime}\)-models with \(g^{\prime}=0.3J^{\prime}\) for three different values of \(\gamma_{\rm s}\).
We conclude our survey of linear-order spin-band engineering by examining its dependence on the spin damping coefficient, \(\gamma_{\mathrm{s}}\). To do this, in Fig. 10 we show not the band extrema but the stationary phonon displacements \(\overline{q}_{0}\) and \(\overline{q}_{0}^{\prime}\). While the qualitative behavior of the driven stationary displacement is not changed by \(\gamma_{\mathrm{s}}\), it is clear that the quantitative extent follows an approximate \(1/\gamma_{\mathrm{s}}\) trend, and hence that the observation of spin-band engineering will require systems in which the intrinsic damping of the spin modes is weak. This result also underlines the fact that the finite driven values of \(\overline{q}_{0}\) and \(\overline{q}_{0}^{\prime}\) result from nonlinearities introduced by the coupling to the spin system.
### Second-order band engineering
For completeness we analyze the second-order term in the Magnus expansion. While every order of the expansion can be computed systematically [51], the convergence is rapid in the parameter regime of our present considerations and a full understanding of our calculated results can be obtained from the first- and second-order terms only. We consider the minimal Hamiltonian
\[H^{(k)}=\omega_{k}(t_{+}^{\dagger}t_{+}+t_{-}^{\dagger}t_{-})+\mu(t)(t_{+}^{ \dagger}t_{-}^{\dagger}+t_{+}t_{-}), \tag{36}\]
where \(t_{+}\) creates a triplon of flavor \(\alpha\) at \(k\neq 0\) and \(t_{-}\) a triplon of flavor \(\alpha\) at \(-k\). The oscillating amplitude of the cross-term is \(\mu(t)=gy_{k}^{\prime}q(t)\) in the \(J\)-model and \(\mu(t)=-g^{\prime}y_{k}^{\prime}q(t)/\lambda\) in the \(J^{\prime}\)-model; we assume that \(\mu(t)=\mu_{0}\cos(\omega t)\) for a suitably chosen time offset, which in view of the dominant cosinusoidal behavior of \(q(t)\) [24] is well justified. In the interaction picture, the dynamics of the diagonal terms is contained in the operators and the time-dependent Hamiltonian takes the form
\[H_{\mathrm{I}}^{(k)}=\mu(t)(e^{2i\omega_{k}t}t_{+}^{\dagger}t_{-}^{\dagger}+e ^{-2i\omega_{k}t}t_{+}t_{-}). \tag{37}\]
The time-averaged action can be computed by the Magnus expansion as an asymptotic series in \(\mu_{0}\), where as in Subsec. V.1 the first order is given by the time average over one period (\(T=2\pi/\omega\)). However, away from resonance, meaning that the system is not driven near \(\omega=2\omega_{k}\), there is no first-order term, and this situation arises if one considers the changes to the band edges (\(\omega_{k}=\omega_{\mathrm{min}}\) or \(\omega_{k}=\omega_{\mathrm{max}}\)) caused by driving the system with a phonon whose frequency lies well inside or outside the band.
The general form of the second-order term is
\[H_{\mathrm{M,2}}=\frac{-i}{2t}\int_{0}^{t}dt_{1}\int_{0}^{t_{1}}dt_{2}[H(t_{1 }),H(t_{2})]. \tag{38}\]
Here we insert
\[[H_{\mathrm{I}}^{(k)}(t_{1}),H_{\mathrm{I}}^{(k)}(t_{2})]=\mu_{0}^{2}\cos(\omega t_{1})\cos(\omega t_{2})\,e^{2i\omega_{k}(t_{1}-t_{2})}\,[t_{+}^{\dagger}t_{-}^{\dagger},t_{+}t_{-}]-\mathrm{H.c.} \tag{39}\]
and perform the inner integration over \(t_{2}\in[0,t_{1}]\) to obtain
\[\int_{0}^{t_{1}}[H_{\mathrm{I}}^{(k)}(t_{1}),H_{\mathrm{I}}^{(k)} (t_{2})]dt_{2}=\] \[\frac{4i\omega_{k}\mu_{0}^{2}\cos(\omega t_{1})[\cos(2\omega_{k} t_{1})-\cos(\omega t_{1})]}{4\omega_{k}^{2}-\omega^{2}}\widehat{B}, \tag{40}\]
where \(\widehat{B}\) denotes \([t_{+}t_{-},t_{+}^{\dagger}t_{-}^{\dagger}]=t_{+}^{\dagger}t_{+}+t_{-}^{ \dagger}t_{-}+1\). The \(t_{1}\)-integration [Eq. (38)] effects a time average in which only the \(\cos^{2}(\omega t_{1})\) term contributes \(1/2\), whereas all the other combinations vanish, and thus we derive the second-order correction
\[H_{\mathrm{M,2}}=\frac{\omega_{k}\mu_{0}^{2}}{\omega^{2}-4\omega_{k}^{2}} \widehat{B}. \tag{41}\]
The corresponding energy shift of the two-triplon excitation band is
\[\delta\omega_{k}^{(2)}=\frac{\omega_{k}\mu_{0}^{2}}{\omega^{2}-4\omega_{k}^{2}}, \tag{42}\]
which by continuity will extend also to the band edges at \(k=0\) and \(k=\pi\).
For driving in resonance with an in-band phonon, \(2\omega_{\mathrm{min}}<\omega=\omega_{0}<2\omega_{\mathrm{max}}\), we deduce from Eq. (42) that \(\delta\omega_{\mathrm{min}}^{(2)}\) is positive whereas \(\delta\omega_{\mathrm{max}}^{(2)}\) is negative, and thus that the second-order contribution is a band-narrowing.
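The sign structure of this band-narrowing is easy to check numerically. The snippet below evaluates Eq. (42) at both band edges for an in-band driving frequency; the band-edge frequencies and the value of \(\mu_{0}\) are illustrative assumptions.

```python
# Second-order Magnus correction to the two-triplon band, Eq. (42).
# mu0 is model-dependent (it contains the oscillation amplitude of q);
# the numbers used here are illustrative assumptions, energies in J.
def dw2(w_k, mu0, w):
    return w_k * mu0**2 / (w**2 - 4.0 * w_k**2)   # Eq. (42)

w_min, w_max, w_drive, mu0 = 0.675, 1.25, 2.2, 0.05
print("lower edge:", dw2(w_min, mu0, w_drive))  # > 0: edge pushed up
print("upper edge:", dw2(w_max, mu0, w_drive))  # < 0: edge pushed down
# The two signs together amount to the band-narrowing noted in the text.
```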
Figure 11: **Second-order correction.** Frequency shift of the upper (a,b) and lower (c,d) edges of the two-triplon excitation band obtained at second order for a \(J\)-model with \(g/J=0.3\) (a,c) and a \(J^{\prime}\)-model with \(g^{\prime}/J^{\prime}=0.3\) (b,d), shown as functions of the driving laser electric field, \(E_{0}\), for driving by phonons of frequencies \(\omega_{0}=1.7J\) and \(2.2J\) and with standard damping.
In Fig. 11 we show this correction as a function of the driving laser electric field at a fixed value of \(g=0.3=g^{\prime}\). The highly nonlinear, and even nonmonotonic, behavior is a consequence of the fact that the oscillation amplitude, \(q(t)\), is contained within the quantity \(\mu_{0}\). Nevertheless, these second-order effects are very small when compared with the first-order ones (Subsec. V.1), and for this reason we do not compute any higher orders. However, we will show next that the second-order corrections are indeed detectable in our calculations.
### Detection of tailored spin bands
We turn now to the question of how to measure driving-induced spin-band renormalization in the magnetophononic protocol. Because the pump electric field, at frequency \(\omega\), is required to excite the target phonon(s) creating the desired NESS, a full characterization of the NESS properties will require the introduction of an additional frequency. Thus we introduce a further field component,
\[E(t)=E_{0}\cos(\omega t)+E_{1}\cos(\Omega t), \tag{43}\]
where as before \(E_{0}\) is the strong pump drive and now \(E_{1}\) is a significantly weaker "probe" drive, represented by the small green waves in Fig. 1. Because we are investigating NESS, both field components are continuous and the time delay used in true pump-probe studies is absent.
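A sketch of this continuous two-tone drive, Eq. (43), is given below; the amplitude ratio follows the standard parameters quoted in the text (\(E_{1}=0.2E_{0}\)), while the time grid and the scanned probe frequencies are arbitrary illustrative choices.

```python
import numpy as np

# Two-tone continuous field of Eq. (43). E0 and E1 = 0.2*E0 follow the
# text; the time grid and scan range are illustrative assumptions
# (frequencies in units of J, E0 in units of the phonon damping gamma).
def E_field(t, E0, w, E1, Omega):
    return E0 * np.cos(w * t) + E1 * np.cos(Omega * t)

E0, w = 0.2, 2.2                    # strong pump, resonant with the phonon
E1 = 0.2 * E0                       # weak continuous probe
t = np.linspace(0.0, 500.0, 50001)
probe_scan = np.linspace(1.30, 1.40, 5)      # Omega around the lower edge
drives = [E_field(t, E0, w, E1, Om) for Om in probe_scan]
# Each element of `drives` would enter the equations of motion as the
# driving term, yielding one NESS per probe frequency.
```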
In the remainder of this subsection we consider a single driving phonon whose frequency is located within the two-triplon band (\(2\omega_{\rm min}<\omega_{0}<2\omega_{\rm max}\)) and pump it resonantly (\(\omega=\omega_{0}\)), while scanning the detection frequency, \(\Omega\), around the lower and upper band edges where the driving-induced changes are expected to be clearest. As the most sensitive diagnostic of the edges of the modified two-triplon band, we take the liberty of showing not \(n_{\rm x0}(\omega_{0},\Omega)\) but the respective components \(u_{k=0,0}\) to characterize the lower band edge and \(u_{k=\pi,0}\) for the upper. These quantities are readily computed from the equations of motion of Sec. II and we comment below on current experimental developments in the use of coherent light as a direct probe of magnetic phenomena.
Figure 12 shows the value of \(u_{k=0,0}\) measured on scanning \(\Omega\) through the lower band edge for different values of the spin-phonon coupling in both the \(J\)- and the \(J^{\prime}\)-model. In contrast to Figs. 4(c), 4(d), 4(h), and 4(i), which showed two band-edge peaks in \(n_{\rm x0}(\omega)\) when changing the driving frequency in the presence of a near-resonant phonon mode, here the system is driven at \(\omega=\omega_{0}=1.7J\) and \(2.2J\), where \(s<1\), and we observe a single band-edge peak in the magnetic response. The shift of this peak away from the equilibrium band edge (shown by the red shading) increases strongly with \(g\) in the \(J\)-model [Figs. 12(a) and 12(c)], where in accordance with Fig. 8 it is downward for both phonons (and, given their relative separation from the lower band edge, surprisingly similar in magnitude). By contrast, the shift increases only weakly with \(g^{\prime}\) in the \(J^{\prime}\)-model, to the extent that its expected change of sign is not discernible between Figs. 12(b) and 12(d) because the shift is so weak in the latter case.

Figure 12: **Detection of spin-band engineering.** Average value of \(u_{k=0}\), shown as a function of the probe frequency, \(\Omega\), around the lower band edge of (a,c) \(J\)-models with selected values of \(g/J\) and (b,d) \(J^{\prime}\)-models with selected values of \(g^{\prime}/J^{\prime}\). A standard driving field, \(E_{0}\), is applied at \(\omega=\omega_{0}=1.7J\) in panels (a) and (b) and at \(2.2J\) in panels (c) and (d). The probe field is set to \(E_{1}=0.2E_{0}\) and standard damping is used.

Figure 13: **Dependence of pump-probe protocol on pump amplitude.** Average value of \(u_{k=0}\), shown as a function of the probe frequency, \(\Omega\), around the lower band edge of (a,c) a \(J\)-model and (b,d) a \(J^{\prime}\)-model with \(g/J=0.5=g^{\prime}/J^{\prime}\) for different pump amplitudes, \(E_{0}\). The pump field is applied at \(\omega=\omega_{0}=1.7J\) in panels (a) and (b) and at \(2.2J\) in panels (c) and (d). The probe field is set to \(E_{1}=0.2E_{0}\) and standard damping is used.
For a fully quantitative interpretation of the probe spectrum, we proceed to perform a systematic variation of all relevant parameters. In Fig. 13 we vary the pump and probe amplitudes (maintaining \(E_{1}/E_{0}=0.2\)) at fixed \(g/J=0.5\) and \(g^{\prime}/J^{\prime}=0.5\). While the area under the peaks depends quadratically on \(E_{0}\) in both \(J\)- and \(J^{\prime}\)-models, an approximately quadratic shift of the peak position is clear only in the \(J\)-model, where this effect is very strong [Figs. 13(a) and 13(c)]. In the \(J^{\prime}\)-model, it may be possible to discern a very weak quadratic variation of the peak positions with \(E_{0}\), but there are clearly other contributions to their shift from the band edge.
In Fig. 14 we vary the spin damping, finding a minor broadening of the resonance peaks in line with general expectations. The reduction of the peak shifts with increasing \(\gamma_{\rm s}\) is at first sight less intuitive, but results from the fact that the total response of the spin system is reduced by stronger damping, as already observed in the stationary phonon displacements, \(\overline{q}_{0}\) and \(\overline{q}^{\prime}_{0}\), shown in Fig. 10.
Finally, in Fig. 15 we vary the probe amplitude. Again we consider both \(J\)- and \(J^{\prime}\)-models, but only the \(\omega_{0}=2.2J\) phonon, in order to compare the situation with no pump field, \(E_{0}=0\), to standard driving. Applied alone [Figs. 15(a) and 15(b)], the weak probe field changes neither the position nor the width of the resonance peak, and only its height depends quadratically on \(E_{1}\). However, the peak appears with an additional shift, which we denote simply by \(\delta\omega_{\rm s}\) in each panel of Fig. 15, that results from the weak hybridization of the band-edge states with a phonon whose frequency lies well inside the band. Quite generally, such a shift appears at both ends of the spectrum of the spin-phonon system at equilibrium, as the hybridization assures the addition of a single mode, such that both band edges are shifted outwards. When the phonon frequency is far from the band edges, the band-edge states can be regarded as \(s<1\) analogs of the phonon-bitriplon of Sec. IV, but as \(\omega_{0}\) is moved systematically towards one or other band edge then a true phonon-bitriplon would form at this edge. The probe field is added to provide the weak driving at band-edge frequencies that is required for a detailed characterization of the equilibrium and nonequilibrium response (meaning without and with \(E_{0}\)). We comment that the hybridization effect leading to \(\delta\omega_{\rm s}\) is very similar in the \(J\)- and \(J^{\prime}\)-models, as may be expected given the relevant matrix elements, which are \(gy^{\prime}_{k}\) and \(-g^{\prime}y^{\prime}_{k}/\lambda\). This weak frequency shift is readily accounted for in order to quantify accurately the band-engineering effects produced by applying the pump field [Figs. 15(c) and 15(d)].

Figure 14: **Dependence of pump-probe protocol on spin damping.** Average value of \(u_{k=0}\), shown as a function of the probe frequency, \(\Omega\), around the lower band edge of (a,c) a \(J\)-model and (b,d) a \(J^{\prime}\)-model with \(g/J=0.5=g^{\prime}/J^{\prime}\) for different values of the spin damping, \(\gamma_{\rm s}\). Standard driving is applied at \(\omega=\omega_{0}=1.7J\) in panels (a) and (b) and at \(2.2J\) in panels (c) and (d). The probe field is set to \(E_{1}=0.2E_{0}\) and standard phonon damping is used.

Figure 15: **Dependence of pump-probe protocol on probe amplitude.** Average value of \(u_{k=0}\), shown as a function of the probe frequency, \(\Omega\), around the lower band edge of (a,c) a \(J\)-model and (b,d) a \(J^{\prime}\)-model with \(g/J=0.5=g^{\prime}/J^{\prime}\) for different probe amplitudes, \(E_{1}\). A phonon of frequency \(\omega_{0}=2.2J\) is present. The pump field is set to zero in panels (a) and (b) and to standard driving at \(\omega=2.2J\) in panels (c) and (d). Standard damping is used. The red shading represents the frequency range of the equilibrium two-triplon band (i.e. in the absence of driving), the blue shading the driving-renormalized band, and the purple color their superposition. \(\delta\omega_{\rm s}\) denotes a constant (\(E_{1}\)-independent) shift of the observed peak from the equilibrium (a,b) and nonequilibrium (c,d) band edges, which arises due to weak hybridization (\(s<1\)) of the band-edge two-triplon modes with the in-band phonon, and thus is directed away from the band center.
Figure 16(a) illustrates, using the example of the upper band edge in a \(J\)-model, the three effects contributing to the positions of the resonance peaks detected in the pump-probe protocol. It is clear that the first-order shift, which we denote \(\delta\omega_{\rm max}^{(1)}\), is largest in absolute value, and is negative (Subsec. V.1). The next effect by magnitude is the hybridization shift, \(\delta\omega_{\rm s}\), which is positive at the upper band edge. However, analyzing the computed peak positions reveals that these two contributions are not sufficient for an accurate description, whereas the small discrepancy is well accounted for by \(\delta\omega_{\rm max}^{(2)}\).
Figures 16(b) and 16(c) represent the relative signs and sizes of these three contributions at both band edges in the \(J\)-model, comparing them with an equivalent \(J^{\prime}\)-model for two different phonon frequencies. Here we introduce additional, self-explanatory notation to distinguish the hybridization shifts of both models at both band edges. This format makes clear that \(\delta\omega^{(1)}\) is the largest shift and is always downwards, whereas \(\delta\omega^{\prime(1)}\) is smaller and can change sign with the location of the driving phonon in the lower or upper half of the band. By contrast, \(\delta\omega_{\rm s}\) and \(\delta\omega^{(2)}\) are always similar to \(\delta\omega_{\rm s}^{\prime}\) and \(\delta\omega^{\prime(2)}\), and have the same (opposing) signs in all cases. We remark again that the quantitative results are similar for the upper and lower band edges, which is why we have analyzed \(u_{k=\pi,0}\) and \(u_{k=0,0}\) interchangeably, instead of doubling the length of our discussion. We remark once more that only \(\delta\omega^{(1)}\) and \(\delta\omega^{(2)}\) are consequences of the magnetophononic driving, whereas \(\delta\omega_{\rm s}\) is an equilibrium effect of spin-phonon hybridization.
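For bookkeeping purposes, the observed peak position is simply the sum of the equilibrium band edge and the three shifts; the sketch below assembles this sum at the upper band edge with purely illustrative numbers whose signs follow Fig. 16(b).

```python
# Bookkeeping of the three contributions to the observed peak position
# (Fig. 16): Omega_max = w_max + dw1 + dw2 + dw_s. The numerical values
# below are illustrative assumptions, not fitted results (units of J).
w_max = 1.25        # equilibrium upper band edge
dw1 = -0.030        # first-order (Magnus) shift, Subsec. V.1 -- largest
dw2 = -0.004        # second-order shift, Eq. (42), negative at the top edge
dw_s = +0.010       # equilibrium hybridization shift, away from band center
Omega_max = w_max + dw1 + dw2 + dw_s
print(f"expected peak position: {Omega_max:.3f} J")
```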
For complementary insight into our pump-probe results, in Fig. 17(a) we show the phonon spectrum, \(n_{\rm ph0}(\omega_{0},\Omega)\), matching the spin response of Fig. 15(c), i.e. for probe frequencies around the lower band edge, and in Fig. 17(b) the phononic response for probing around the upper band edge. In the absence of a probe beam, the phonon occupation is essentially flat around the band edges, with no discernible features forming in these regions when the only driving is resonant with the available in-band phonon at \(\omega_{0}=2.2J\). However, increasing the probe intensity reveals that the formation of the predominantly magnetic spectral features at the band edge in Figs. 12, 13, 14, and 15 is accompanied by a small dip in \(n_{\rm ph0}(\omega_{0},\Omega)\). This weak response indicates the weak phononic character of these hybrid states; the fact that it is negative is a consequence of the removal of phonon energy required in the excitation of the band-edge hybrid.
We conclude our analysis of spin-band engineering by stressing that our focus has been to illustrate all of the qualitative phenomena present when pumping with a light-driven phonon and probing with a separate laser in the same frequency range. For this purpose we have not considered the effects of the phonon coordinate on the magnetic interactions beyond linear order. We have also not dwelled on maximizing the energy shifts we induce, and thus we have shown mostly percent effects. However, we comment that the band shifts exceeding 10% that are found at stronger pump fields (Fig. 13) can for a one-dimensional system mean that well over 50% of the spin spectral weight is shifted completely out of its previous energy range. In the present study we have also used a rather generic spin band, whose gap is approximately equal to its band width, whereas focusing on a particularly narrow-band system would lead to much stronger relative shifts of spectral weight.

Figure 16: **Three frequency shifts in driven spin chains.** Representation of the frequency shifts contributing to the positions of resonance peaks measured by scanning the probe frequency, \(\Omega\). (a) Frequency shifts at the upper band edge, shown by considering the occupation, \(u_{k=\pi,0}\), of \(k=\pi\) triplon modes as a function of \(\Omega\) for a \(J\)-model with \(g=0.5J\), \(E_{0}/\gamma=0.2\), \(E_{1}=0.2E_{0}\), and \(\omega_{0}=2.2J\). The horizontal bars and arrows quantify the contributions \(\delta\omega_{\rm max}^{(1)}\), \(\delta\omega_{\rm max}^{(2)}\), and \(\delta\omega_{\rm s}\). \(\Omega_{\rm max}\) indicates the probe frequency at which the resonance peak is observed. (b) Schematic vertical representation of the three frequency shifts at both band edges for a \(J\)-model (dark red) and a \(J^{\prime}\)-model (light red) driven by a phonon whose frequency lies in the lower half of the two-triplon band. (c) Equivalent representation of frequency shifts for \(J\)- and \(J^{\prime}\)-models driven by a phonon whose frequency lies in the upper half of the band. Panels (b) and (c) provide a qualitative comparison of the signs and sizes of the three shifts in each case, but are not drawn precisely to scale.

Figure 17: **Phononic response in the pump-probe protocol.** Phononic occupation, \(n_{\rm ph0}\), shown as a function of the probe frequency, \(\Omega\), around (a) the lower and (b) the upper band edge of a \(J\)-model with \(g/J=0.5\) for different probe amplitudes, \(E_{1}\), when standard driving at \(\omega=\omega_{0}=2.2J\) and standard damping are applied. We draw attention to the scales of the two \(y\)-axes.
## VI Strong spin-phonon coupling in quantum magnetic materials
### CuGeO\({}_{3}\)
The strong spin-phonon coupling in CuGeO\({}_{3}\) was revealed by the fact that it drives a spin-Peierls transition at \(T_{\rm sp}=14\) K [27]. While it is clear that the leading physics of this system is a dimerization of the spin chain formed by the strongest (Cu-O-Cu) superexchange bonds, details of the magnon dispersion and of other thermodynamic measurements led to the introduction of both a significant next-neighbor coupling, \(J_{2}\), and a non-negligible interchain coupling. A recent ultrafast investigation that used soft x-ray frequencies to probe the low-lying electronic states [23] also suggested that the observed damping was a consequence of coherently excited phonon modes coupling strongly to short-ranged magnetic correlations.
The crystal structure of CuGeO\({}_{3}\) has space group P\(mma\), which is reduced to C\(mca\) when the spin-Peierls transition enlarges the unit cell (while maintaining the orthorhombic symmetry). IR-active phonons are available over a wide range of energies in the high-temperature structure [29], and the rather large unit cell of the spin-Peierls phase makes their number significant, although for laser-driving purposes we note that all of the \(A_{u}\) modes are silent. Inelastic neutron scattering (INS) has been used to characterize all the phonon modes of the high-temperature phase [32; 34], finding the strongest response at frequencies of 3.2 and 6.8 THz, and suggesting that the spin-Peierls transition is of a type occurring without an accompanying soft mode [52; 53]. Here we comment that ultrafast methods appear to offer a qualitatively different approach to the investigation of low-lying phonons around and below the spin-Peierls temperature [54]. In Fig. 18 we illustrate three phonon modes of the high-temperature structure that have been identified by comparing electronic structure calculations [54] with experiment [29]. One of these [Fig. 18(a)] is IR-silent whereas the other two are expected to be promising candidates for coherent laser driving. We note also that most of these phonon modes tend to involve motions of all the atoms in the system, and hence they will have both \(J\)- and \(J^{\prime}\)-model character in the language of our two simplifying models (Fig. 1).
Figure 18: **Selected phonon modes in CuGeO\({}_{3}\).** Representation of the atomic displacements (yellow arrows) in three phonon excitations of the spin-Peierls phase of CuGeO\({}_{3}\). Cu ions are shown in blue, Ge in gray, and O in red. The alternating (CuO\({}_{2}\)) spin chains are oriented along the \(\hat{c}\) axis and the phonon symmetries and frequencies correspond to those identified in Refs. [29] and [34]. While the \(A_{u}\) mode (a) is silent to IR excitation, the \(B_{1u}\) (b) and \(B_{3u}\) modes (c) are expected to be readily driven by strong electric fields.

The magnetic excitation spectrum, also measured by INS [30], shows relatively broad triplon bands, by which is meant that their gap is considerably smaller than their band width. In Figs. 19(a) and 19(b) we show the one-triplon dispersion along and across the chain direction, from which the early two-dimensional fit of Ref. [31] deduced the illustrative superexchange parameters \(J=10.7\) meV, which sets the one-triplon band center, \(J^{\prime}=8.3\) meV (i.e. \(\lambda=0.78\)), \(J_{2}=0\), and \(J_{a}=1.5\) meV (interchain); we remark here that later studies provided a more refined global parameter set [55]. Figure 19(c) represents the full energy range of the two-triplon excitation spectrum for these parameters and also shows the in-band locations of four IR-symmetric phonon modes of the high-temperature structure, as measured by Ref. [29]. As noted above, the \(A_{u}\) mode [Fig. 18(a)] is IR-silent, but the \(B_{1u}\), \(B_{2u}\), and \(B_{3u}\) modes all have similar oscillator strengths [29]. We therefore propose the \(B_{1u}\) mode shown in Fig. 18(b) as a good candidate for driving in the lower half of the two-triplon band and the \(B_{3u}\) mode of Fig. 18(c) as a good candidate for driving in the upper half of the band, while we do not show the atomic motions in the \(B_{2u}\) mode because it is rather close to the two-triplon band center. We stress that, because ultrafast driving experiments are also performed with ultra-intense electric fields, the choice of drivable phonons is by no means restricted to the modes depicted in Fig. 18, and our qualitative message is rather that CuGeO\({}_{3}\) remains an excellent candidate material for observing the phenomena we analyze (Secs. III, IV, and V). However, we note also that the relevant experiments do need to be performed at low temperatures, \(T\ll T_{\rm sp}=14\) K, in order to preserve the dimerized state of the system.
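As a quick consistency check on these numbers, the snippet below estimates the two-triplon band edges from the purely one-dimensional dispersion of Eq. (10) with the fit values \(J=10.7\) meV and \(\lambda=0.78\); neglecting the interchain terms is an approximation of this sketch, so the edges differ slightly from the full fit of Fig. 19(c).

```python
import numpy as np

# Two-triplon band edges of CuGeO3 estimated from the 1D chain
# dispersion, Eq. (10), with J = 10.7 meV and lambda = 0.78 [31];
# interchain couplings are neglected in this sketch.
J_meV, lam = 10.7, 0.78
w = lambda k: J_meV * np.sqrt(1.0 - lam * np.cos(k))
two_w_min, two_w_max = 2 * w(0.0), 2 * w(np.pi)
meV_to_THz = 0.2418
print(f"2w_min = {two_w_min:.1f} meV ({two_w_min * meV_to_THz:.2f} THz)")
print(f"2w_max = {two_w_max:.1f} meV ({two_w_max * meV_to_THz:.2f} THz)")
# The strong INS phonons at 3.2 and 6.8 THz (roughly 13.2 and 28.1 meV)
# then lie inside this band, consistent with Fig. 19(c).
```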
### (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\)
As a low-dimensional \(S=1/2\) quantum spin system with an excitation gap, (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\) first attracted attention as a candidate to realize the two-leg ladder geometry. However, INS measurements of the triplon dispersion [35] soon revealed its nature as a quasi-one-dimensional alternating spin chain, and nuclear magnetic resonance (NMR) revealed a large and complex structural unit cell with alternation in all three lattice directions [56]. Further theoretical analysis then deduced the presence of frustrated interchain coupling, leading to a magnetic model containing two species of dimerized spin chain, lying on alternating planes and with an effective coupling that is weak as a consequence of interchain frustration [36]. For experimental purposes, the intrinsic dimerization of the dominant spin chains avoids the need for temperatures as low as those required in CuGeO\({}_{3}\).
The crystal structure of (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\) also has orthorhombic symmetry, with space group P\(ca2_{1}\), and a very large unit cell (104 atoms). This structure lacks inversion symmetry, and hence all of the phonons may have either IR or Raman character, depending on the polarization of the light. Thus an extremely large number of IR-active phonons is available over the full range of energies. Initial studies of the phonon spectrum by Raman scattering [28] found, upon raising the temperature, that a very strong renormalization of the phonons by the spin sector takes place. Theoretical fits [36] suggested that the spin-phonon coupling should be very strong, \(g\simeq 0.5J\), which was the basis for our extending the analyses of Secs. III, IV, and V to this value of \(g\) and \(g^{\prime}\).

Figure 19: **One- and two-triplon spectra of CuGeO\({}_{3}\).** (a) Triplon dispersion along the chain direction (\(\hat{c}\)) in the spin-Peierls phase of CuGeO\({}_{3}\). (b) Interchain (\(\hat{b}\)) triplon dispersion. Data in both panels were taken from Ref. [30] and fits from Ref. [31]. (c) Extrema, \(2\omega_{\rm min}\) and \(2\omega_{\rm max}\), of the two-triplon spectrum of CuGeO\({}_{3}\) at low temperatures, showing the locations of the four phonon modes, including the three depicted in Fig. 18.

Figure 20: **Selected phonon modes in (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\).** Representation of the atomic displacements (yellow arrows) in three phonon excitations of the (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\) lattice. V ions are shown in purple, P in olive green, and O in red. The dimerized spin chains are oriented along the \(\hat{b}\) axis and the frustrated interchain bonds lie in the \(ab\) plane. The normal modes illustrated are three examples with larger oscillator strengths found in a lattice-dynamics calculation performed using phonopy and based on density-functional-theory calculations performed with Quantum Espresso [57].
Figure 20 shows the atomic displacements in three phonon excitations with relatively large oscillator strengths, which were found in exploratory electronic structure calculations of (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\)[59]. These calculations used Quantum Espresso [57] to obtain a stable and insulating structural solution by assuming antiferromagnetic order, but were neither optimized for correlation effects on the V ions nor compared to any experiments. Nonetheless, they do illustrate the key features of (i) a very large number of phonon modes and (ii) a general increase in the average oscillator strength of these phonons with frequency, arising because modes with larger dipole moments are stiffer. Despite the significant quantitative mismatch in frequencies, it is not unrealistic to suggest that the 121 cm\({}^{-1}\) (3.63 THz) \(B_{1}\) mode shown in Fig. 20(a) and the 167 cm\({}^{-1}\) (5.01 THz) \(B_{1}\) mode shown in Fig. 20(b), which has a weaker \(A_{1}\) partner at 162 cm\({}^{-1}\) (4.85 THz) in the calculation, are candidates for the 70 and 123 cm\({}^{-1}\) modes [2.10 and 3.69 THz, the latter with a partner at 118 cm\({}^{-1}\) (3.54 THz)] that appear most strongly in experiment [28]. One may anticipate that these two modes are the most suitable low-frequency candidates for coherent laser driving, but we stress again that many suitable modes are present at higher frequencies, including the 212 cm\({}^{-1}\) (6.35 THz) \(A_{1}\) mode shown in Fig. 20(c).
Following the fit of Ref. [36], Figs. 21(a) and 21(b) show the one-triplon dispersions along and across the direction of the alternating chains in the two inequivalent planes (which we denote \(A\) and \(B\)). Again both triplon bands are broad, with gaps rather smaller than their band widths [35], and with fitting parameters \(J_{A}=12.3\) meV, \(J_{A}^{\prime}=8.1\) meV (i.e. \(\lambda_{A}=0.66\)), \(J_{aA}=1.0\) meV, \(J_{bA}=1.4\) meV (the latter pair mutually frustrating interchain interactions) and \(J_{B}=10.4\) meV, \(J_{B}^{\prime}=8.0\) meV (i.e. \(\lambda_{B}=0.77\)), \(J_{aB}=1.1\) meV, \(J_{bB}=1.6\) meV. Figure 21(c) represents the full energy range of the two-triplon excitation spectra, assuming only processes involving pairs of A and pairs of B triplons, and again shows the locations of the three phonons whose normal modes are displayed in Fig. 20. We conclude that (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\) should offer an excellent materials platform for realizing all three of the magnetophononic phenomena revealed in our study.
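The extremal energies quoted in the caption of Fig. 21(c) follow from these fit parameters; the sketch below reproduces them from the one-dimensional alternating-chain dispersion, with the frustrated interchain couplings neglected as an approximation.

```python
import numpy as np

# Extremal two-triplon energies of (VO)2P2O7 from the intrachain fit
# parameters of Ref. [36], using the 1D alternating-chain dispersion as
# an approximation (frustrated interchain couplings neglected).
def band_edges(J, lam):
    w = lambda k: J * np.sqrt(1.0 - lam * np.cos(k))
    return 2 * w(0.0), 2 * w(np.pi)

lo_A, hi_A = band_edges(J=12.3, lam=0.66)   # A-plane chains
lo_B, hi_B = band_edges(J=10.4, lam=0.77)   # B-plane chains
print(f"2w_min,B = {min(lo_A, lo_B):.1f} meV")   # lower edge set by B
print(f"2w_max,A = {max(hi_A, hi_B):.1f} meV")   # upper edge set by A
```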
### Ultrafast band-engineering experiments
Experiments designed to follow the theoretical protocol of creating true spin NESS, at frequencies resonant with the spin spectrum and in bulk-driven quantum magnets, require a thin-film geometry and very efficient thermal transfer in order to maintain a low sample temperature [24]. In principle, self-blocking allows some relaxation of the constraints on pump intensity, driving time, and sample thickness, although in practice strong electromagnetic driving can induce heating by a variety of channels. An ultrafast pulsed protocol avoids extreme heating problems through the very short driving time, but usually involves strong electric fields and samples of \(\mu\)m up to mm thicknesses, and thus the pulse repetition time should remain long. In the context of modifying the spin excitation spectrum, we comment that \(\tilde{J}(q_{0})\) and \(\tilde{J}^{\prime}(q_{0})\) are in general highly nonlinear functions of \(q_{0}\), and where conventional experimental probes usually require only a low-order expansion, coherent laser driving can produce very large \(q_{0}\) values [22].
A final issue concerns the optimal type of experiment to perform. Coherent light remains a rather insensitive direct probe of magnetism, and experiments performed to date, such as absorption, reflection, and polarization rotation (birefringence), probe only some effects of the lattice that reflect the spin-phonon coupling. To identify other probes of novel magnetic states applicable to a complete analysis of a driven CuGeO\({}_{3}\) or (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\) system, we will continue to review the rapidly evolving technological developments in the measurement of quantities such as magnetic circular dichroism, the magneto-optic Kerr effect, or second-harmonic polarimetry as time-resolved variants become available across an increasing spectrum of probing frequencies (up to and including x-rays). The formalism of Sec. II remains fully applicable to the expectation values measured by these more direct probes of magnetic order, correlations, and excitations.

Figure 21: **One- and two-triplon spectra of (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\).** (a) Dispersions of the two triplon branches along the chain direction (\(\hat{b}\)) in (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\). (b) Interchain (\(\hat{a}\)) triplon dispersions. Data in both panels were taken from Ref. [35], other than the zone-boundary points in panel (a), which were taken from Ref. [58]. The fits in both panels were taken from Ref. [36]. (c) Extrema, \(2\omega_{\min,B}\) and \(2\omega_{\max,A}\), of the two-triplon spectrum of (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\), showing the locations of the three phonon modes depicted in Fig. 20.
## VII Discussion and Conclusion
Ultrafast laser technology has enabled qualitative advances in the study of nonequilibrium phenomena in correlated condensed matter. Extending the reach of ultrafast driving methods to the rich variety of complex many-body states available in quantum magnetic materials requires overcoming the fact that the direct coupling of light to spin is generally rather weak, and thus inefficient. For this purpose we investigate the magnetophononic channel, in which the driving laser couples to an infrared-active optical phonon and the associated lattice displacements modulate the magnetic interactions. This approach offers highly frequency-specific driving possibilities by exploiting the resonances both between laser and phonon and between the driven phonon and the excitations of the quantum spin system. Intense driving electric fields and strong spin-phonon coupling then allow one to probe the properties of a correlated quantum magnet driven far from its equilibrium state.
The characteristic energy scales in quantum magnetic materials are typically rather low, making their quantum many-body states very sensitive to heating, as a result of which no serious analysis can avoid taking the energy flow into account. To model the problem of a quantum magnet with both driving and dissipation through the lattice phonons, we adopt the Lindblad treatment of open quantum systems and analyze the nonequilibrium stationary state (NESS) of a system subjected to continuous laser driving. Within this framework we consider a very straightforward example of a gapped spin system, the dimerized spin chain, whose elementary excitations are triplons that can be treated as conventional (rather than hard-core) bosons for sufficiently strong dimerization. We adopt a similarly minimal model for the driven phonon, namely an infrared-active optical mode coupled only to the strong bonds (\(J\)-model) or only to the weak bonds (\(J^{\prime}\)-model) of the dimerized chain.
Having previously used this minimal magnetophononic model to establish the framework for the weak-coupling, or linear-response, regime of the \(J\)-model [24], the primary focus of our present study is the regime of strong feedback, or back-action, of the driven spin system on the driving phonon. Particularly when the phonon frequency is chosen close to the band edges of the two-spin excitation spectrum, a strong spin-phonon coupling causes strong hybridization into composite collective states whose characteristic frequencies differ significantly from those of their constituents. In the model we study, these collective states are phonon-bitriplons, a somewhat rare example of a composite formed from three bosons of two different types.
In a NESS experiment, the shift of characteristic frequencies causes a dramatic "self-blocking" effect, by which the spin system acts as a strong negative feedback on the driven phonon, pushing its resonance off the driving frequency and thus drastically suppressing the effective driving field. Only in an experiment with a range of driving frequencies, such as those present in an ultrashort pulse, would one observe the shifts of spectral weight associated with the level-repulsion caused by the spin-phonon hybridization. Even in this situation, however, the self-blocking caused by strong mixing with off-resonant spin levels remains significant. We comment here that our present study retained the NESS protocol for all possible driving frequencies, and did not include the intense and instantaneous nature of an ultrashort pulse.
While driving phonons resonant with the two-spin band edges is an excellent way to create composite collective hybrid states, an important consequence of self-blocking is that it is not a very efficient way to engineer the bulk properties of a quantum magnet. Optical control is the only technique available to switch these properties on the characteristic timescales of the spin system, and our analysis reveals the important insight that the frequencies most efficient for this purpose lie in specific regions within the two-spin excitation spectrum. The dominant band-engineering effects arise at linear order as a consequence of a stationary displacement of the driven phonon, which results from the steady population of excited triplons created by its action on the spin system. While the second-order contribution is weak, it is also detectable for the typical parameters of a phonon-driven quantum magnet, and thus is required for a quantitative description of experiment. To detect spin-band engineering, we introduce a weak "probe" beam, which in the NESS protocol is actually a further continuous driving term, applied in addition to the pump but at a completely independent frequency. (We remind the reader that the focus of our present study is not on conventional pump-probe physics, which uses a time delay between pump and probe pulses to investigate transient phenomena at switch-on.) This technique yields clear additional signals in the phonon and spin response at the band-edge frequencies of the renormalized, or optically engineered, band. Applying a band-edge probe electric field to a system with a driven mid-band phonon reveals an extra intrinsic frequency shift in the detected signal, caused by off-resonant spin-phonon hybridization, that should also be included in any quantitative analysis.
Our minimal quantum magnetic model contains only two types of bond, within (\(J\)) and between (\(J^{\prime}\)) the spin dimers, and coupling the driven phonon to each bond type separately yields some valuable insight. Certain
aspects of the response, which with strong spin-phonon coupling is dominated by the feedback between the two sectors, are very similar, in particular the self-blocking and the formation of composite collective states at the band edges. This can be traced to the fact that the matrix elements for the driven phonon to create spin excitations are the same, up to a sign, when the relative couplings (\(g/J\) and \(g^{\prime}/J^{\prime}\)) are the same. By contrast, the band-engineering effect of the driven phonon is completely different between the two situations, which is a consequence of how the back-action from the spin system modifies the phonon. Only in the \(J\)-model does the driving produce a rather large stationary shift of the equilibrium atomic displacement, whereas in the \(J^{\prime}\)-model this effect is at least an order of magnitude weaker; once again this behavior can be traced to the matrix elements in the relevant equations of motion. Although it is easy to conclude that phonons coupling to the strong bonds are better suited for spin-band engineering, in fact the difference is also qualitative, in that modulating the intradimer bond changes the band center whereas modulating the bonds between dimers can be used to alter the band width. Although the effect on the band center is always a reduction, the band width can be renormalized either upwards or downwards by choosing the frequency of the driving phonon.
In the present work we have focused on a spin chain as a representative quantum magnet. However, the generic features of the phenomenology we unveil are not restricted to spin chains, and could also be found in spin ladders and in valence-bond states in two and three dimensions. One important ingredient of our model is the sharp peaks in the density of states at both band edges, which concentrates the strongest response of the spin system to two narrow ranges of frequency, and this property can be found in many quantum magnets with a spin gap or with narrow bands, such as those induced by frustrated couplings. Even for magnetically ordered states in three dimensions, there is a large jump in the density of states at the upper band edge [14]. Band engineering, which relies on mid-band rather than band-edge phonons, is less dependent on the structure of the density of states, and is a more direct consequence of strong spin-phonon coupling at the strongest bonds of the system. The concept of phononic driving is of course applicable throughout condensed matter, and future work in quantum magnetism can be expected to investigate its manifestations in itinerant as well as in localized systems, in ordered as well as in non-ordered magnets, and in systems with the wealth of complex forms of order found only in magnetic materials, including \(3Q\) textures, quadrupolar order, chiral order, nematic order, and still others.
Here we have restricted our considerations to linear magnetophononics, in that we consider only a single driven phonon to explore the leading nonequilibrium phenomena. At this level, the dominant effects are produced by the \(q_{0}\)-linear correction to the magnetic interactions, and we do not consider higher-order terms in \(J(q_{0})\) (the second-order correction we discuss appears in \(q(t)\), generated by the equations of motion). Nonlinear magnetophononics [22] considers the simultaneous effects of two or more driving phonons, and in this situation the quadratic correction to \(J(q_{a},q_{b})\) contains terms modulating the magnetic interactions at frequencies of \(2\omega_{a}\), \(2\omega_{b}\), \(\omega_{a}+\omega_{b}\), and \(\omega_{a}-\omega_{b}\). The sum and difference frequencies enlarge very considerably the range of possibilities available for matching the driving frequency to the characteristic energies of the spin system. In particular, phonon difference frequencies are the key to magnetophononic driving in systems where the spin energy scale is very small, which is the situation relevant to a high percentage of quantum magnetic materials. Observing the signals of such nonlinear driving requires one or both of strong spin-phonon coupling and ultra-intense electric fields.
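The origin of these frequency components is an elementary identity: with \(q_{a}(t)\propto\cos(\omega_{a}t)\) and \(q_{b}(t)\propto\cos(\omega_{b}t)\), the quadratic cross-term oscillates as

\[q_{a}(t)\,q_{b}(t)\propto\cos(\omega_{a}t)\cos(\omega_{b}t)=\tfrac{1}{2}\big[\cos((\omega_{a}-\omega_{b})t)+\cos((\omega_{a}+\omega_{b})t)\big],\]

while the diagonal terms supply the \(2\omega_{a}\) and \(2\omega_{b}\) components through \(\cos^{2}(\omega_{a}t)=\tfrac{1}{2}[1+\cos(2\omega_{a}t)]\).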
Finally, we have discussed two materials matching our models, to show that the phenomena we identify should be detectable in CuGeO\({}_{3}\) and especially in (VO)\({}_{2}\)P\({}_{2}\)O\({}_{7}\). However, a primary task of theoretical physics is to build models matching the materials on which experiments are performed. For this task, the minimal magnetophononic model we have constructed is extremely versatile, in that more complex spin systems (for example higher-dimensional, ordered, or with anisotropic interactions), more complex phonons (for example dispersive, multiple, or coupling to both \(J\) and \(J^{\prime}\) bonds), and also more complex dissipative processes (in particular spin-conserving ones) are readily accommodated within the Lindblad formulation to generate qualitatively similar equations of motion. As noted above, although we have considered only the expectation values of the most fundamental phonon and triplon operators, all more complex observables are formed from these and hence our framework is easily adapted to compute the quantities probed by experiment, and in particular those obtained from direct optical probes of magnetic correlations.
###### Acknowledgements.
We are indebted to N. Colonna, S. Das, and L. Spitz for performing DFT and phonon spectrum calculations, and for their assistance in producing Figs. 18 and 20. We thank S. Behrensmeier for assistance with Figs. 19 and 21, and F. B. Anders, D. Bossini, K. Deltenre, F. Giorgianni, Ch. Rüegg, L. Spitz, and R. Valentí for helpful discussions. We are grateful to the German Research Foundation (DFG) for financial support through projects UH 90-13/1 and B8 of ICRC 160, as well as to the Mercator Research Center Ruhr for support through the Mercur Cooperation Ko-2021-0027. We acknowledge the National Science Foundation for financial support through award numbers DMR-1945529, PHY-1607611, and PHY-1748958 and the Welch Foundation for support through award number AT-2036-20200401. This project was partially funded by The University of Texas at Dallas Office of Research and Innovation through the SPIRe program. |
2309.06268 | ssVERDICT: Self-Supervised VERDICT-MRI for Enhanced Prostate Tumour
Characterisation | Purpose: Demonstrating and assessing self-supervised machine learning fitting
of the VERDICT (Vascular, Extracellular and Restricted DIffusion for Cytometry
in Tumours) model for prostate. Methods: We derive a self-supervised neural
network for fitting VERDICT (ssVERDICT) that estimates parameter maps without
training data. We compare the performance of ssVERDICT to two established
baseline methods for fitting diffusion MRI models: conventional nonlinear least
squares (NLLS) and supervised deep learning. We do this quantitatively on
simulated data, by comparing the Pearson's correlation coefficient,
mean-squared error (MSE), bias, and variance with respect to the simulated
ground truth. We also calculate in vivo parameter maps on a cohort of 20
prostate cancer patients and compare the methods' performance in discriminating
benign from cancerous tissue via Wilcoxon's signed-rank test. Results: In
simulations, ssVERDICT outperforms the baseline methods (NLLS and supervised
DL) in estimating all the parameters from the VERDICT prostate model in terms
of Pearson's correlation coefficient, bias, and MSE. In vivo, ssVERDICT shows
stronger lesion conspicuity across all parameter maps, and improves
discrimination between benign and cancerous tissue over the baseline methods.
Conclusion: ssVERDICT significantly outperforms state-of-the-art methods for
VERDICT model fitting, and shows for the first time, fitting of a complex
three-compartment biophysical model with machine learning without the
requirement of explicit training labels. | Snigdha Sen, Saurabh Singh, Hayley Pye, Caroline M. Moore, Hayley Whitaker, Shonit Punwani, David Atkinson, Eleftheria Panagiotaki, Paddy J. Slator | 2023-09-12T14:31:33Z | http://arxiv.org/abs/2309.06268v2 | # ssVERDICT: Self-Supervised VERDICT-MRI for Enhanced Prostate Tumour Characterisation
###### Abstract
**Purpose**: Demonstrating and assessing self-supervised machine learning fitting of the VERDICT (Vascular, Extracellular and Restricted Diffusion for Cytometry in Tumours) model for prostate.
**Methods**: We derive a self-supervised neural network for fitting VERDICT (ssVERDICT) that estimates parameter maps without training data. We compare the performance of ssVERDICT to two established baseline methods for fitting diffusion MRI models: conventional nonlinear least squares (NLLS) and supervised deep learning. We do this quantitatively on simulated data, by comparing the Pearson's correlation coefficient, mean-squared error (MSE), bias, and variance with respect to the simulated ground-truth. We also calculate _in vivo_ parameter maps on a cohort of
20 prostate cancer patients and compare the methods' performance in discriminating benign from cancerous tissue via Wilcoxon's signed-rank test.
**Results:** In simulations, ssVERDICT outperforms the baseline methods (NLLS and supervised DL) in estimating all the parameters from the VERDICT prostate model in terms of Pearson's correlation coefficient, bias, and MSE. _In vivo_, ssVERDICT shows stronger lesion conspicuity across all parameter maps, and improves discrimination between benign and cancerous tissue over the baseline methods.
**Conclusion:** ssVERDICT significantly outperforms state-of-the-art methods for VERDICT model fitting, and shows for the first time, fitting of a complex three-compartment biophysical model with machine learning without the requirement of explicit training labels.
**Keywords:** Diffusion MRI, Microstructure Estimation, Self-Supervised Learning, Prostate Cancer
## 1 Introduction
Prostate cancer (PCa) characterisation is reliant on invasive biopsy, but in recent years, multiparametric MRI (mp-MRI) has become established in the diagnostic pathway for localisation and staging of clinically-significant PCa (csPCa) [1]. Diffusion MRI (dMRI) is a powerful component of mp-MRI, measuring the motion of water molecules in biological tissues to infer information about local microstructure. Advanced multi-compartment models of the dMRI signal can estimate parameters relating to specific microstructural properties such as cell size, density and vasculature [2]. Such models enable non-invasive analysis of similar metrics to those typically only accessible by histology, and have been shown to reduce the need for invasive biopsies in breast [3] and prostate cancer [4].
Microstructural information can be extracted by designing models with parameters corresponding to these biophysically-relevant metrics, which are then estimated by fitting the models to dMRI data. These models tend to be nonlinear with many free parameters, and dense q-space sampling is required for accurate description of the
microstructure, which involves time-consuming processes. This means that parameter estimation becomes a difficult inverse problem, scaling with both voxel number and model complexity. Additionally, parameter estimation requires an optimisation-based procedure, typically relying on nonlinear least squares (NLLS) curve fitting, which is computationally expensive and prone to estimation errors [5]. These challenges when obtaining and examining dMRI data hinder the clinical translation of these methods.
Recent work has utilised deep learning (DL) techniques to solve this parameter estimation inverse problem. These algorithms learn the mapping between the q-space data and the microstructural parameters of the dMRI model. Pioneering work on q-space learning by Golkov et al. [6] estimated model parameters using a multilayer perceptron (MLP), an approach that has since been widely used for ultrafast model fitting [7-9]. Convolutional neural networks (CNNs) have also been used with supervised learning for dMRI model fitting [10], as have transformers [11]. This approach has been used to fit both simple exponential models, as well as complex biophysical models such as Neurite Orientation Dispersion and Density Imaging (NODDI) [12] and the Spherical Mean Technique (SMT) [13]. However, these methods are significantly affected by the underlying distribution of the training data, which can introduce biases in the parameter estimates [5,13].
Self-supervised learning techniques can circumvent this issue, as they do not rely on explicitly labelled training data, instead extracting labels from the input data itself. This approach has been successful for microstructural parameter estimation with the simple bi-exponential intravoxel incoherent motion (IVIM) model [5, 14-16]. However, despite these numerous IVIM examples, and in contrast to supervised model fitting, self-supervised model fitting has not been demonstrated for complex biophysical models.
Here we introduce a self-supervised approach to fit the Vascular, Extracellular and Restricted Diffusion for Cytometry in Tumours (VERDICT) model for the prostate, a complex three-compartment biophysical model [17]. We refer to our method as ssVERDICT. The VERDICT framework, currently in clinical trials for PCa, requires robust model fitting to estimate microstructural metrics such as cell size, intracellular volume fraction and diffusivity. These have previously been estimated via NLLS and
supervised DL approaches [18, 19], but the complexity of VERDICT increases its susceptibility to the aforementioned limitations of these techniques.
This is the first work demonstrating self-supervised fitting beyond simple exponential models. We show that ssVERDICT achieves higher accuracy and reduced bias when estimating microstructural parameters using ground truth simulations. On real data, ssVERDICT achieves discrimination of cancerous tissue from benign at a higher confidence level on a dataset of 20 PCa patients, highlighting the potential of the method for clinical translation.
## 2 Methods
In this section, we first introduce the VERDICT model for prostate, a three-compartment biophysical dMRI model. We then outline how the simulated data was generated, and how the patient data was acquired. We discuss the two baseline fitting methods (conventional NLLS fitting and supervised deep learning), followed by our novel self-supervised fitting method, ssVERDICT. Finally, we give details on the pre-processing steps, ROI selection and the evaluation metrics used.
### 2.1 VERDICT Model
The VERDICT prostate model is the sum of three parametric models, describing the dMRI signal as intracellular (IC), extracellular-extravascular (EES) and vascular (VASC) water [17]. The total signal is:
\[S=f_{VASC}S_{VASC}(d_{VASC},b)+f_{IC}S_{IC}(d_{IC},R,b,\Delta,\delta)+f_{EES}S_{EES}(d_{EES},b) \tag{1}\]
where \(f_{i}\) is the volume fraction and \(S_{i}\) is the signal from water molecules in population \(i\), where \(i=\) IC, VASC or EES. The vascular signal fraction, \(f_{VASC}\), is computed as \(1-f_{IC}-f_{EES}\), since \(\sum_{i=1}^{3}f_{i}=1\) and \(0\leq f_{i}\leq 1\)[17]. Here \(b\) is the b-value, \(\Delta\) is the gradient pulse separation and \(\delta\) is the gradient pulse duration.
The mathematical signal forms are as follows:
\[S_{VASC}=\frac{\sqrt{\pi}}{2}\cdot\frac{\phi\left(\sqrt{b\,d_{VASC}}\right)}{\sqrt{b\,d_{VASC}}},\qquad b=\left(\Delta-\delta/3\right)(\gamma\delta G)^{2} \tag{2}\]
\[S_{IC}=\exp\left(-\frac{2\gamma^{2}G^{2}}{d_{IC}}\sum_{m=1}^{\infty}\frac{\alpha_{m}^{-4}}{\alpha_{m}^{2}R^{2}-2}\left[2\delta-\frac{2+e^{-\alpha_{m}^{2}d_{IC}(\Delta-\delta)}-2e^{-\alpha_{m}^{2}d_{IC}\delta}-2e^{-\alpha_{m}^{2}d_{IC}\Delta}+e^{-\alpha_{m}^{2}d_{IC}(\Delta+\delta)}}{\alpha_{m}^{2}d_{IC}}\right]\right) \tag{3}\]
\[S_{EES}=e^{-bd_{EES}} \tag{4}\]
where \(\phi\) is the error function \(\phi(z)=\int_{0}^{z}\exp(-t^{2})\,dt\) and \(\alpha_{m}\) is the \(m^{\text{th}}\) root of \((\alpha R)^{-1}J_{\frac{3}{2}}(\alpha R)=J_{\frac{5}{2}}(\alpha R)\), where \(J_{n}(x)\) is a Bessel function of the first kind [2].
The spherical mean version of the VERDICT model represents the IC component as spheres of radius \(R\) (using the GPD approximation [20]) with intra-sphere diffusivity fixed at \(d_{IC}=2\,\mathrm{\SIUnitSymbolMicro m}^{2}/\mathrm{ms}\). The EES component is represented as Gaussian isotropic diffusion with effective diffusivity \(d_{EES}\), and the vascular component as spherically-averaged randomly oriented sticks with intra-stick diffusivity fixed at \(d_{VASC}=8\,\mathrm{\SIUnitSymbolMicro m}^{2}/\mathrm{ms}\)[21]. By fitting the model to dMRI data, we estimate four model parameters: \(f_{EES}\), \(f_{IC}\), \(R\) and \(d_{EES}\).
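To make the forward model concrete, the following is a minimal numpy sketch of Eqs. (1)-(4); it is our illustration, not the authors' released code. It assumes consistent units for \(b\), \(G\), \(\delta\), \(\Delta\) and the diffusivities, and takes the Bessel-equation roots \(\alpha_{m}R\) of Eq. (3) as a precomputed array `am_R`; `acq` is an assumed acquisition table of \((b,G,\delta,\Delta)\) tuples.

```python
import numpy as np
from scipy.special import erf

def s_ees(b, d_ees):
    # Eq. (4): Gaussian isotropic extracellular-extravascular diffusion
    return np.exp(-b * d_ees)

def s_vasc(b, d_vasc=8.0):
    # Eq. (2): spherically averaged randomly oriented sticks ('astrosticks')
    bd = np.maximum(b * d_vasc, 1e-12)   # guard against b = 0
    return (np.sqrt(np.pi) / 2.0) * erf(np.sqrt(bd)) / np.sqrt(bd)

def s_ic(G, delta, Delta, R, gamma, am_R, d_ic=2.0):
    # Eq. (3): GPD sphere signal; am_R holds precomputed roots alpha_m * R
    am = am_R / R
    a2d = am**2 * d_ic
    num = (2.0 + np.exp(-a2d * (Delta - delta)) - 2.0 * np.exp(-a2d * delta)
           - 2.0 * np.exp(-a2d * Delta) + np.exp(-a2d * (Delta + delta)))
    terms = am**-4.0 / (am**2 * R**2 - 2.0) * (2.0 * delta - num / a2d)
    return np.exp(-2.0 * gamma**2 * G**2 / d_ic * terms.sum())

def verdict_signal(f_ic, f_ees, R, d_ees, acq, am_R, gamma=0.2675):
    # Eq. (1): total signal, with f_vasc = 1 - f_ic - f_ees;
    # gamma is the gyromagnetic ratio in units matching G and delta (illustrative default)
    f_vasc = 1.0 - f_ic - f_ees
    return np.array([f_vasc * s_vasc(b) + f_ic * s_ic(G, de, De, R, gamma, am_R)
                     + f_ees * s_ees(b, d_ees) for (b, G, de, De) in acq])
```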
### 2.2 Patient Data
The study was performed with the approval of the local ethics committee embedded within the INNOVATE clinical trial (NCT02689271) [22], which included men suspected of having csPCa. For this study, we randomly selected 20 patients from the INNOVATE cohort with biopsy-confirmed csPCa. VERDICT-MRI was performed on a 3T MRI system (Achieva; Philips, Best, the Netherlands), using a pulsed-gradient spin echo sequence. The imaging parameters, as published in [4, 22-24], were as follows: repetition time (TR), 2482-3945 ms; field of view, \(200\times 220\,\mathrm{mm}\); voxel size, \(1.3\times 1.3\times 5\,\mathrm{mm}\); no interslice gap; acquisition matrix, \(176\times 176\). The optimised VERDICT acquisition protocol for prostate is: \(b\), 90, 500, 1500, 2000 and 3000 s/mm\({}^{2}\); \(\delta\), 3.9, 11.4, 23.9, 14.4, 18.9 ms; \(\Delta\), 23.8, 31.3, 43.8, 34.3, 38.8 ms [23]. For each of the five combinations of \(b/\delta/\Delta\) we used the minimum possible echo time (TE),
giving TEs of 50-90 ms, and we acquired a separate \(b=0\) image for each TE, resulting in 10 image volumes.
### 2.3 Simulated Data
We generated synthetic datasets for quantitative analysis using the VERDICT model with added Rician noise. We first simulated datasets with SNR levels ranging from 10-100 so we could test the robustness of the methods to noise. We then set SNR = 50 for the final simulated dataset. We created 100,000 signals from uniform VERDICT parameter distributions within biophysically realistic parameter ranges: \(f_{EES}=[0.01,0.99],\;\;f_{IC}=[0.01,0.99],\;\;R=[0.01,\!15]\mathrm{\mu m}\;\;\) and \(\;\;d_{EES}=[0.5,\!3]\mathrm{\mu m}^{2}/\mathrm{ms}\). We simulated dMRI data using the same acquisition protocol as the patient data [22]. The parameters were drawn from uniform (rather than _in vivo_) distributions to minimise bias in the resulting parameter estimates [5, 13].
### 2.4 Conventional Iterative Fitting
We fit the VERDICT model via NLLS using custom code in MATLAB (The Mathworks Inc., Natick, Massachusetts, USA), with parameter constraints as given in Sec. 2.3, using the 'lsqcurvefit' function as in [17, 19]. Prediction for the whole unmasked dMRI dataset (roughly \(5\times 10^{5}\) voxels) took ~140 s per subject (Apple M1 Pro).
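A rough Python analogue of this bounded NLLS fit (ours, not the authors' MATLAB code) could use `scipy.optimize.least_squares` together with the `verdict_signal` sketch above:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_nlls(signal, acq, am_R, x0=(0.3, 0.3, 8.0, 1.5)):
    # Parameter order: f_ic, f_ees, R (um), d_ees (um^2/ms); bounds from Sec. 2.3
    lb, ub = (0.01, 0.01, 0.01, 0.5), (0.99, 0.99, 15.0, 3.0)
    resid = lambda x: verdict_signal(*x, acq, am_R) - signal
    return least_squares(resid, x0, bounds=(lb, ub)).x
```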
### 2.5 Supervised Deep Learning
Supervised techniques approximate the function \(f\) that maps the measurement **S** to its corresponding ground truth parameters, **x**, by minimising the difference between the ground truth parameter values (training labels) and the parameter estimates (network output). The training loss is calculated as the mean squared error (MSE) between the estimated and ground truth values. We use an MLP architecture, implemented in Python 3.7.13 using the 'MLPRegressor' in scikit-learn 0.23, as in [18, 24-26].
The input of the deep neural network (DNN) is a vector of dMRI signals for each combination of \(b\), \(\Delta\), \(\delta\) (a total of 10 in this case), followed by three fully-connected hidden layers with 150 neurons [18, 24-26], and a final regression layer with four output neurons (equal to the number of parameters to be estimated). The DNN is trained on 100,000 synthetic signals (split into 80% for training and 20% for validation), with values for the model parameters randomly chosen from the ranges given in Sec. 2.3. We performed the optimisation with the ADAM method for 1000 epochs (adaptive learning rate with initial value of 0.001; one update per minibatch of 100 voxels; early stopping to mitigate overfitting; and momentum = 0.9). For the final parameter computation, we used the DNN at the epoch with minimum validation loss. The creation of the training set and training of the DNN (which was performed only once) took ~200 s. Prediction of the trained DNN for the whole unmasked dMRI dataset took ~30 s per subject.
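A condensed sketch of this supervised baseline is given below (our code, not the released implementation; `verdict_signal`, `acq` and `am_R` are from the earlier sketch, and the addition of synthetic noise is omitted for brevity):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
lo, hi = [0.01, 0.01, 0.01, 0.5], [0.99, 0.99, 15.0, 3.0]
params = rng.uniform(lo, hi, size=(100_000, 4))      # f_ic, f_ees, R, d_ees
signals = np.stack([verdict_signal(*p, acq, am_R) for p in params])

mlp = MLPRegressor(hidden_layer_sizes=(150, 150, 150), learning_rate_init=1e-3,
                   batch_size=100, max_iter=1000,
                   early_stopping=True, validation_fraction=0.2)
mlp.fit(signals, params)                 # supervised: signals -> parameter labels
estimates = mlp.predict(signals[:5])     # fast inference on unseen voxels
```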
### 2.6 Self-Supervised Deep Learning
Self-supervised methods compute \(f\) by minimising the difference between the noisy MR signals (network inputs) and noise-free signal estimates reconstructed from the estimated parameters (network outputs). The training loss is equivalent to the MSE between the predicted signal, \(\hat{S}\), and the input signal \(S\)[14]. Here, network training and inference is performed on the same dataset, mimicking the NLLS approach.
We implemented a fully-connected neural network with three hidden layers, each with 10 neurons (equal to the number of image volumes), using PyTorch 1.12.1. The output layer is fed into the VERDICT model equation to generate the predicted signal \(\hat{S}\). Crucially, this requires coding the VERDICT model in a differentiable form to enable backpropagation. For this, we formulate the intricate signal equations for VERDICT's 'sphere' and 'astrosticks' compartments (Eqs. 2,3) as PyTorch tensor functions, so that multi-dimensional tensors of batched parameter values can be inputted to yield output tensors of batched predicted signals. A schematic of ssVERDICT is given in Fig. 1.
For the final parameter estimation, we used the normalised input data, the ADAM optimiser and the DNN at the epoch with minimum validation loss. We optimised the DNN by backpropagating the MSE between \(S\) and \(\hat{S}\), where \(\hat{S}\) is reconstructed via the VERDICT model from the parameter estimates. We chose a learning rate of 0.0001 and the network was trained until 10 consecutive epochs occurred without any improvement in loss, before terminating to prevent overtraining. We used dropout (\(p=0.5\)) to prevent overfitting and constrained the parameter values to the ranges in Sec. 2.3 using the PyTorch clamp function. Training and prediction for the whole unmasked dMRI dataset took ~50 s per subject.
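The core ssVERDICT training loop can be sketched in PyTorch as follows (a minimal sketch, not the released code; `verdict_torch` stands for an assumed tensorised, differentiable implementation of Eqs. 1-4, and early stopping is omitted for brevity):

```python
import torch
import torch.nn as nn

class SSVerdictNet(nn.Module):
    # 10 signal inputs -> three hidden layers of 10 -> 5 outputs
    # (f_ic, f_ees, R, d_ees, S0), clamped to the ranges of Sec. 2.3
    def __init__(self, n_meas=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_meas, 10), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(10, 10), nn.ELU(), nn.Dropout(0.5),
            nn.Linear(10, 10), nn.ELU(),
            nn.Linear(10, 5))

    def forward(self, s):
        raw = self.net(s)
        return (raw[:, 0].clamp(0.01, 0.99),   # f_ic
                raw[:, 1].clamp(0.01, 0.99),   # f_ees
                raw[:, 2].clamp(0.01, 15.0),   # R
                raw[:, 3].clamp(0.5, 3.0),     # d_ees
                raw[:, 4].clamp(min=0.0))      # S0

def train_ssverdict(model, signals, verdict_torch, lr=1e-4, n_epochs=1000):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_epochs):
        f_ic, f_ees, R, d_ees, s0 = model(signals)
        s_hat = s0.unsqueeze(1) * verdict_torch(f_ic, f_ees, R, d_ees)
        loss = ((s_hat - signals) ** 2).mean()   # self-supervised MSE, no labels
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```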
### 2.7 Data Pre-processing
The pre-processing pipeline, as published in [4, 22-24], included denoising of the raw DW-MRI using MP-PCA [27] as implemented within MRtrix3 [28] 'dwidenoise', and then correction for Gibbs ringing [29] with custom code in MATLAB. To reduce potential artefacts caused by patient movement during scanning and eddy current distortions, we applied mutual-information rigid and affine registration using custom code in MATLAB [30]. We normalised the data by dividing the dMRI volumes by their matched \(b=0\) volumes. As we use the spherical mean version of the VERDICT model, we spherically averaged the data to produce 10 image volumes, where each volume was a 3D image consisting of 14 slices.
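The last two steps admit a simple sketch (denoising, Gibbs-ringing correction and registration are tool-specific and omitted; the array layout is our assumption):

```python
import numpy as np

def normalise_and_average(dwi, b0):
    # dwi: (n_directions, x, y, z) volumes for one b/delta/Delta combination;
    # b0:  matched (x, y, z) b=0 volume acquired at the same echo time
    norm = dwi / np.clip(b0, 1e-6, None)   # divide by the matched b=0
    return norm.mean(axis=0)               # spherical mean over directions
```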
Figure 1: Schematic of our self-supervised network. The input to the neural network is the signal extracted from 10 signal volumes, therefore there are 10 input nodes. The network has three hidden layers, each with 10 nodes. The final layer has five nodes, corresponding to the four estimated VERDICT parameters and \(S_{0}\), the signal with no diffusion weighting. To reconstruct the signal (\(\hat{S}\)), the complex VERDICT signal equations (Eqs 1-4) are written in differentiable form so that they can be incorporated as a layer of the network.
### 2.8 ROIs
Patients were biopsied based on their mp-MRI score as reported by two board-certified, experienced uroradiologists (reporting more than 2,000 prostate MR scans per year). The regions of interest (ROI) were drawn by a board-certified study radiologist (S. Singh) using a pictorial report made by the clinical uroradiologist, and confirmed as cancerous retrospectively via targeted biopsy. An additional ROI was located for each patient in an area of benign tissue to be used for comparison, after a review of the sampling biopsy result confirmed the absence of tumour on the contralateral side.
### 2.9 Evaluation Metrics
We quantitatively compared the performance of the three parameter estimation methods via a variety of evaluation metrics: (1) Pearson's correlation coefficient, (2) MSE, (3) bias, and (4) variance, all with respect to the ground truth parameter values used for the simulated data. The formulae for the metrics used are given below:
\[\text{MSE}=\tfrac{1}{N}\sum_{i=1}^{N}(O_{i}-E_{i})^{2} \tag{5}\] \[\text{Bias}=\tfrac{1}{N}\sum_{i=1}^{N}(O_{i}-E_{i}) \tag{6}\] \[\text{Variance}=\tfrac{1}{N}\sum_{i=1}^{N}(O_{i}-\bar{O})^{2} \tag{7}\]
where \(O_{i}\) is the estimated (observed) parameter value, \(E_{i}\) is the expected ground truth value, and \(N\) is the number of samples.
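These metrics (Eqs. 5-7, plus Pearson's correlation) reduce to a few lines of numpy; in this sketch `est` holds fitted values and `truth` the simulated ground truth:

```python
import numpy as np

def eval_metrics(est, truth):
    err = est - truth
    return {"mse": np.mean(err**2),                        # Eq. (5)
            "bias": np.mean(err),                          # Eq. (6)
            "variance": np.mean((est - est.mean())**2),    # Eq. (7)
            "pearson_r": np.corrcoef(est, truth)[0, 1]}
```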
We performed discrimination between tissue types _in vivo_ using Wilcoxon's signed-rank test (preceded by the Shapiro-Wilk test for normality).
## 3 Results
Figure 2 shows estimated VERDICT parameters via each fitting method plotted against randomly-generated ground truth parameter values (Sec. 2.3). The Pearson's correlation coefficients \(r\) are highest for all four VERDICT parameters when fitted via ssVERDICT. We also observe higher \(r\) values for supervised DL fitting over NLLS.
Note the horizontal lines in the NLLS \(R\) correlation plot, corresponding to the \(R\) values in the grid-search stage of the NLLS fitting.
Table 1 gives the MSE, bias and variance values for all four fitted parameters, with each of the fitting methods. We observe lower bias and MSE across all parameters via ssVERDICT, and lower variance in estimating \(f_{EES}\) and \(d_{EES}\). However, supervised DL fitting achieves lowest variance in estimating \(f_{IC}\) and \(R\).
Figure 3 shows _in vivo_ maps of the four fitted VERDICT parameters and calculated \(f_{VASC}\). ssVERDICT shows strong lesion conspicuity for the \(f_{IC}\) and \(f_{EES}\) maps, and reasonable conspicuity on the \(f_{VASC}\), \(R\) and \(d_{EES}\) maps. The supervised DL method achieves strong lesion conspicuity for \(f_{EES}\) and \(f_{VASC}\), and the NLLS method for \(f_{IC}\) and \(f_{EES}\).
Figure 4 shows boxplots of the fitted VERDICT parameters in benign and cancerous prostate tissue for a dataset of 20 patients. All three methods can discriminate between tissue types to a high level of significance with \(f_{IC}\) and \(f_{EES}\).
Figure 2: Scatterplot of simulated ground truth parameter values and predicted values via the three fitting methods. We observe higher Pearson’s correlation coefficient \(r\) when using ssVERDICT for all four estimated parameters.
ssVERDICT increases discrimination between benign and cancerous prostate tissue when compared to NLLS and supervised fitting in two ways:
1. Achieves statistical significance at \(p<0.001\) with extracellular-extravascular diffusivity (\(d_{EES}\)), which is not shown by the other techniques
2. Shows statistically significant differences at \(p<0.05\) for cell radius (\(R\)), which is not seen with supervised DL or NLLS fitting.
Figure 5 shows boxplots of the difference between the fitted VERDICT parameters and the ground truth values via the three fitting methods at varying SNR. We generally observe that across the VERDICT parameters, ssVERDICT results in estimates with a median difference closest to zero and smaller interquartile ranges. This suggests more accurate estimation via ssVERDICT across a range of SNR values. This also supports our decision to simulate data with an SNR of 50, as we show that parameter estimation remains robust across a wide range of SNRs.
## 4 Discussion
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{**MSE**} \\ \hline
**Method** & \(f_{IC}\) & \(f_{EES}\) & \(R\) & \(d_{EES}\) \\ \hline NLLS & 0.1232 & 0.1137 & 17.6976 & 0.9905 \\ \hline Supervised DL & 0.0714 & 0.0994 & 6.8860 & 0.7489 \\ \hline ssVERDICT & **0.0289** & **0.0362** & **5.5278** & **0.7160** \\ \hline \multicolumn{5}{|c|}{**Bias**} \\ \hline
**Method** & \(f_{IC}\) & \(f_{EES}\) & \(R\) & \(d_{EES}\) \\ \hline NLLS & -0.0742 & 0.0680 & -0.9442 & -0.3624 \\ \hline Supervised DL & -0.1008 & 0.1571 & -0.7152 & 0.3994 \\ \hline ssVERDICT & **-0.0070** & **-0.0162** & **0.4002** & **0.2522** \\ \hline \multicolumn{5}{|c|}{**Variance**} \\ \hline
**Method** & \(f_{IC}\) & \(f_{EES}\) & \(R\) & \(d_{EES}\) \\ \hline NLLS & 0.0958 & 0.0942 & 17.5022 & 0.8649 \\ \hline Supervised DL & **0.0459** & 0.0627 & **13.3562** & 0.6388 \\ \hline ssVERDICT & 0.0655 & **0.0542** & 16.7244 & **0.4378** \\ \hline \end{tabular}
\end{table}
Table 1: MSE, bias and variance values calculated between simulated ground truth and predictions obtained via each fitting method. We find ssVERDICT achieves the lowest MSE and bias across all four parameters, and lowest variance for \(f_{EES}\) and \(d_{EES}\) also.
PCa diagnosis can be significantly improved by the introduction of non-invasive biomarkers derived from quantitative diffusion MRI [15, 17, 31]. However, clinical adoption of such techniques requires robust model fitting to avoid misdiagnosis [13, 32]. This study presents a novel self-supervised fitting strategy (ssVERDICT) that can support biophysical multi-compartment dMRI models, demonstrated with the three-compartment VERDICT prostate model [17, 19]. Previously, self-supervised model fitting was limited only to simple exponential and biexponential models [5, 14]. This is likely due to the difficulty involved in formulating complex signal equations (typical of biophysical models) as a differentiable forward model. Our work is a key step-change for self-supervised machine learning for dMRI model fitting, moving from simple models to complex multi-compartment biophysical models.
Figure 3: Parameter maps of the four fitted VERDICT parameters and calculated \(f_{VASC}\) for two patients: dataset 1 shows a Gleason 3+3 grade tumour in the left anterior and a 3+4 grade tumour in the right posterior peripheral zone, and dataset 2 shows a Gleason 4+3 grade tumour in the right posterior peripheral zone. We observe improved lesion conspicuity overall when using ssVERDICT, whilst supervised DL only shows strong tumour conspicuity for \(f_{EES}\) and \(f_{VASC}\), and NLLS only for \(f_{IC}\) and \(f_{EES}\).
We demonstrate that ssVERDICT outperforms the two gold-standard approaches for VERDICT model fitting (conventional iterative fitting (NLLS) and supervised DL fitting) [17, 18, 19, 24, 25, 26, 33] across a range of quantitative metrics. We also use ssVERDICT on clinical _in vivo_ prostate data, showing excellent discrimination between benign and cancerous tissue. Our work is the first to investigate model fitting estimation bias in prostate imaging, achieving reduced bias in comparison to supervised DL, as well as being faster than conventional NLLS fitting. PyTorch code for the complex VERDICT prostate model, as well as instructions on how to implement self-supervised fitting of other VERDICT-based biophysical models [18, 33], is available at [https://github.com/snigdha-sen/ssVERDICT](https://github.com/snigdha-sen/ssVERDICT). The differentiable form of the compartments can also be used to enable self-supervised fitting of other complex diffusion models, e.g. the 'sphere' for NODDI [34] and 'astrosticks' for the Soma And Neurite Density Imaging (SANDI) model [35].
In simulations, ssVERDICT showed stronger correlations between estimated parameters and ground-truth values than the other two methods for all VERDICT parameters (Fig. 2). This suggests ssVERDICT can estimate the underlying microstructure more accurately than supervised DL and NLLS. We also found reduced bias and MSE across all four fitted VERDICT parameters when using ssVERDICT in
Figure 4: Boxplots of four fitted VERDICT parameter values in benign and cancerous tissue regions in a dataset of 20 PCa patients, calculated via the three fitting methods. We find that ssVERDICT maintains the high level of statistical significance achieved by the baseline methods when using \(f_{IC}\) and \(f_{EES}\) for tissue discrimination. ssVERDICT also improves the level of statistical significance with \(d_{EES}\), and achieves statistical significance with \(R\).
comparison to the other methods, as well as lower variance when estimating \(f_{EES}\) and \(d_{EES}\) (Table 1). These results are in agreement with [5, 14] who found that self-supervised fitting of the simple IVIM model resulted in more accurate estimation than NLLS and lower bias than supervised DL. Our results demonstrate that this improvement in estimation translates to a significantly more complex multi-compartment model.
Analysis of real patient data with ssVERDICT shows promising results _in vivo_, achieving the best tumour conspicuity over all VERDICT maps (e.g. Fig. 3) and enhanced tissue type discrimination. We found higher statistical significance for \(d_{EES}\) with ssVERDICT in comparison to other methods. For \(f_{IC}\) and \(f_{EES}\), ssVERDICT achieved statistical significance at \(p<0.001\), as did the baselines, and also achieved statistical significance for \(R\). These results indicate that the improved accuracy in estimation with ssVERDICT in simulations transfers to patient data, demonstrating better discrimination between tissue types. This strongly suggests the benefits of our technique will translate to clinical practice, improving non-invasive tumour characterisation and hence further reducing invasive biopsies.
Figure 5: Boxplots of difference between fitted VERDICT parameter values and simulated ground truth using the three fitting strategies. We find median differences closest to zero and smaller interquartile ranges in general across the four parameters when using ssVERDICT, suggesting more accurate fitting by our method across a range of SNR values.
This work is limited primarily by the small size of the patient dataset, and the range of prostatic disease included. We also only focus on voxelwise methods, rather than extending to architectures that learn spatial correspondences in images such as CNNs or spatial transformers. Whilst a self-supervised CNN has been demonstrated for the IVIM model [15, 16] and supervised CNN methods have been widely used for dMRI model fitting, we instead focus on voxelwise fitting methods to enable a clear comparison between ssVERDICT and the currently used VERDICT fitting techniques in a controlled environment.
Future work will aim to increase statistical significance with a larger patient cohort, and incorporate a wider range of prostatic diseases [25], to test ssVERDICT's ability to accurately characterise tissue microstructure and maximise its potential clinical impact. We will also investigate fitting more complex biophysical models such as [18, 33] via a self-supervised CNN approach similar to [15, 17], to investigate potential further gains in fitting speed and accuracy.
In conclusion, our work shows that self-supervised fitting of the VERDICT prostate model performs better in simulations and on _in vivo_ data than baseline methods. This study is the first to extend self-supervised model fitting beyond highly simple models. Our results demonstrate that ssVERDICT provides accurate characterisation of prostate tumour microstructure, contributing towards the ultimate goal of reducing the number of biopsies and improving patient care.
## Acknowledgement
This work was supported by the EPSRC-funded UCL Centre for Doctoral Training in Intelligent, Integrated Imaging in Healthcare (i4health) (EP/S021930/1) and the Department of Health's NIHR-funded Biomedical Research Centre at University College London Hospitals. This work is also funded by EPSRC, grant numbers EP/N021967/1, EP/R006032/1, EP/V034537/1; and by Prostate Cancer UK, Targeted Call 2014, Translational Research St.2, grant number PG14-018-TR2. |
2309.07176 | Optimal and Fair Encouragement Policy Evaluation and Learning | In consequential domains, it is often impossible to compel individuals to
take treatment, so that optimal policy rules are merely suggestions in the
presence of human non-adherence to treatment recommendations. In these same
domains, there may be heterogeneity both in who responds in taking-up
treatment, and heterogeneity in treatment efficacy. While optimal treatment
rules can maximize causal outcomes across the population, access parity
constraints or other fairness considerations can be relevant in the case of
encouragement. For example, in social services, a persistent puzzle is the gap
in take-up of beneficial services among those who may benefit from them the
most. When in addition the decision-maker has distributional preferences over
both access and average outcomes, the optimal decision rule changes. We study
causal identification, statistical variance-reduced estimation, and robust
estimation of optimal treatment rules, including under potential violations of
positivity. We consider fairness constraints such as demographic parity in
treatment take-up, and other constraints, via constrained optimization. Our
framework can be extended to handle algorithmic recommendations under an
often-reasonable covariate-conditional exclusion restriction, using our
robustness checks for lack of positivity in the recommendation. We develop a
two-stage algorithm for solving over parametrized policy classes under general
constraints to obtain variance-sensitive regret bounds. We illustrate the
methods in two case studies based on data from randomized encouragement to
enroll in insurance and from pretrial supervised release with electronic
monitoring. | Angela Zhou | 2023-09-12T20:45:30Z | http://arxiv.org/abs/2309.07176v2 | # Optimal and Fair Encouragement Policy Evaluation and Learning
###### Abstract
In consequential domains, it is often impossible to compel individuals to take treatment, so that optimal policy rules are merely suggestions in the presence of human non-adherence to treatment recommendations. Under heterogeneity, covariates may predict take-up of treatment and final outcome, but differently. While optimal treatment rules optimize causal outcomes across the population, access parity constraints or other fairness considerations on who receives treatment can be important. For example, in social services, a persistent puzzle is the gap in take-up of beneficial services among those who may benefit from them the most. We study causal identification and robust estimation of optimal treatment rules, including under potential violations of positivity. We consider fairness constraints such as demographic parity in treatment take-up, and other constraints, via constrained optimization. Our framework can be extended to handle algorithmic recommendations under an often-reasonable covariate-conditional exclusion restriction, using our robustness checks for lack of positivity in the recommendation. We develop a two-stage algorithm for solving over parametrized policy classes under general constraints to obtain variance-sensitive regret bounds. We illustrate the methods in two case studies based on data from randomized encouragement to enroll in insurance and from pretrial supervised release with electronic monitoring.
## 1 Introduction
The intersection of causal inference and machine learning for heterogeneous treatment effect estimation can improve public health, increase revenue, and improve outcomes by personalizing treatment decisions, such as medications, e-commerce platform interactions, and social interventions, to those who benefit from it the most [8; 39; 45; 63]. But, in many important settings, we do not have direct control over treatment, and can only optimize over _encouragements_, or _recommendations_ for treatment. For example, in e-commerce, companies can rarely _compel_ users to sign up for certain services, rather _nudge_ or _encourage_ users to sign up via promotions and offers. When we are interested in optimizing the effects of signing up - or other voluntary actions beyond a platform's control - on important final outcomes such as revenue, we therefore need to consider _optimal encouragement designs_. The human in the loop requires new methodology for optimal encouragement designs because when the human in the loop makes the final prescription, algorithmic recommendations do not have direct causal effects on outcomes; they can only change the probability of treatment assignment which does have direct causal effects.
In this paper, we focus on leveraging this insight to model sources of disparities via non-adherence in the provision of social services and algorithmic recommendations, whereas previous decision rules or recommendations are audited for disparities on the recommendation alone, rather than realized outcomes. To be sure, non-adherence itself is well-studied in causal inference, for example
via intention-to-treat analysis. We focus on implications for optimal treatment rules in settings where _fairness in machine learning_[11] is an important concern. For example, doctors prescribe treatment from recommendations [40], managers and workers combine their expertise to act based on decision support [13], and in the social sector, caseworkers assign to beneficial programs based on recommendations from risk scores that support triage [21; 26; 62]. By fairness, we refer specifically to _statistical parity_ constraints which enforce parity in performance measures, as the term is used in the algorithmic fairness literature.
One example of salient fairness concerns include disparities in _access to the intervention_. A long history of welfare rights advocacy and civil rights oversight [58] is concerned about disparities in provision of resources and services in social benefits, and recognizes that discretionary decisions of "street-level bureaucrats" [41] may lead to disparities in access. Even without external decision-makers screening individuals in and out, differential take-up by individuals can be a large driver of realized inequities in service delivery. Hassles in service delivery can compound burdens on marginalized and vulnerable populations [29]. Motivated by this context, we develop two case studies around expanding access to health insurance (in the appendix) and algorithmic recommendations to illustrate how treatment-fair optimal decision-rules can reduce access disparities at low overall utility cost. In this paper, we also develop methodology for more general fairness constraints.
Our contributions are as follows: we characterize optimal and resource fairness-constrained optimal decision rules, develop statistically improved estimators and robustness checks for the setting of algorithmic recommendations with sufficiently randomized decisions. In contrast, previous work in algorithmic accountability primarily focuses on auditing _recommendations_, but not both the access and efficacy achieved under the final decision rule. Therefore, previous methods can fall short in mitigating potential disparities. We consider two settings: one related to encouragement designs with random allocation of encouragement, another related to algorithmic recommendations (which requires either parametric or robust extrapolation). We also develop methodology for optimizing over a constrained policy class with less conservative out-of-sample fairness constraint satisfaction by a two-stage procedure, and we provide sample complexity bounds. We assess improved recommendation rules in a stylized case study of optimizing health insurance expansion using data from the Oregon Insurance study, and another stylized case study of optimizing recommendation of supervised release based on a pretrial risk-assessment tool while reducing surveillance disparities.
## 2 Related Work
In the main text, we briefly highlight the most relevant methodological and substantive work and defer additional discussion to the appendix.
**Optimal encouragement designs/policy learning with constraints.** There is extensive literature on off-policy evaluation and learning, empirical welfare maximization, and optimal treatment regimes [9; 63; 45; 39]. [51] studies an optimal individualized encouragement design, though their focus is on optimal individualized treatment regimes with instrumental variables. [34] study fairness in pricing, and some of the desiderata in that setting on revenue (here, marginal welfare) and demand (take-up) are again relevant here, but in a more general setting beyond pricing. The most closely related work in terms of problem setup is the formulation of "optimal encouragement designs" in [51]. However, they focus on knapsack resource constraints, which have a different solution structure than fairness constraints. [56] has studied uniform feasibility in constrained resource allocation, but without encouragement or fairness. [14] studies robust extrapolation in policy learning from algorithmic recommendation, but not fairness. Our later case study on supervised release appeals to a great deal of randomness in final treatment decisions for supervised release (arguably less consequential than pretrial detention, hence subject to wide discretion) so that we only require robust extrapolation over the first out of two stages.
**Fairness constraints on intention-to-treat analysis.** We focus on deriving estimators for intention-to-treat analyses in view of relevant fairness constraints. Our interest is in imposing separate desiderata on treatment realizations under non-compliance; but we don't conduct instrumental variable inference: we assume unconfoundedness holds. Our analysis essentially considers simultaneously two perspectives in the constrained optimization: 1) viewing treatment as a potential outcome of a recommendation treatment, i.e. \(T(R)\), and 2) an intention-to-treat stance in the causal effects of treatment on outcomes, i.e. \(Y(T)\), even though treatment is not controllable. Marginally, the first
perspective estimates disparities and is relevant for estimating fairness constraints, while the second is relevant for the utility objective. Importantly, the quantities we estimate are not on joint events of take-up and final outcome utility (unlike principal stratification). Rather, we assess personalized policies by their population-averaged utility and fairness measures.
## 3 Problem Setup
We briefly describe the problem setup. We work in the Neyman-Rubin potential outcomes framework for causal inference [52]. We define the following:
* recommendation flag \(R\in\{0,1\}\), where \(R=1\) means encouraged/recommended. (We will use the terms encouragement/recommendation interchangeably).
* treatment \(T(r)\in\mathcal{T}\), where \(T(r)=1\) indicates the treatment decision was \(1\) when the recommendation reported \(r\).
* outcome \(Y(t(r))\) is the potential outcome under encouragement \(r\) and treatment \(t\).
Regarding fairness, we are concerned about disparities in utility and treatment benefits (resources or burdens) across different groups, denoted \(A\in\{a,b\}\). (For notational brevity, we may generically discuss identification/estimation without additionally conditioning on the protected attribute). For example, recommendations arise from binary high-risk/low-risk labels of classifiers. In practice, in consequential domains, classifier decisions are rarely automated, rather used to inform humans in the loop who decide whether to assign treatment. For binary outcomes, we will interpret \(Y(t(r))=1\) as the positive outcome. When outcomes and treatments are binary, \(Y\in\{0,1\},T\in\mathcal{T}\), where \(\mathcal{T}=\{0,1\}\), we may further develop analogues of fair classification criteria. We let \(c(r,t,y)\colon\{0,1\}^{3}\mapsto\mathbb{R}\) denote the cost function for \(r\in\{0,1\},t\in\mathcal{T},y\in\{0,1\}\), which may sometimes be abbreviated \(c_{rt}(y)\). We discuss identification and estimation based on the following recommendation propensity \(e_{r}\), treatment propensity \(p_{t|r}\), and outcome \(\mu_{t}\) models:
\[e_{r}(X,A)\coloneqq P(R=r\mid X,A),\;\;p_{t|r}(X,A)\coloneqq P(T=t\mid R=r,X,A),\] \[\mu_{rt}(X,A)\coloneqq\mathbb{E}[c_{rt}(Y)\mid R=r,T=t,X,A]=\mathbb{E}[c_{rt}(Y)\mid T=t,X,A]\coloneqq\mu_{t}(X,A)\ \text{(by Assumption 2)}\]
We are generally instead interested in _personalized recommendation rules_, described via the policy function \(\pi(r\mid X)\coloneqq\pi_{r}(X)\) which gives the probability of assigning the recommendation \(r\) to covariates \(X\). The average encouragement effect (AEE) is the difference in average outcomes if we refer everyone vs. no one, while the encouragement policy value \(V(\pi)\) is the population expectation induced by the outcomes and treatment with recommendations following the policy distribution.
\[AEE=\mathbb{E}[Y(T(1))-Y(T(0))],\qquad V(\pi)=\mathbb{E}[c(\pi,T(\pi),Y(\pi))].\]
We use the AEE terminology instead of intention-to-treat (ITT) because the conventional first-stage intention-to-treat in ITT is actually our first-stage encouragement or recommendation. Because algorithmic decision makers may be differentially responsive to recommendation, and treatment effects may be heterogeneous, the optimal recommendation rule may differ from the (infeasible) optimal treatment rule when taking constraints into account or for simpler policy classes.
**Assumption 1** (Consistency and SUTVA ).: \(Y_{i}=Y_{i}(T_{i}(R_{i}))\)_._
**Assumption 2** (Conditional exclusion restriction).: \(Y(T(R))\perp\!\!\!\!\perp R\mid T,X,A\)_._
**Assumption 3** (Unconfoundedness).: \(Y(T(R))\perp\!\!\!\!\perp T(R)\mid X,A\)_._
**Assumption 4** (Stable responsivities under new recommendations).: \(P(T=t\mid R=r,X)\) remains fixed from the observational to the future dataset.
**Assumption 5** (Decomposable utilities).: \(c(r,t,y)=c_{r}(r)+c_{t}(t)+c_{y}(y)\).
Our key assumption beyond standard causal inference assumptions is the conditional exclusion restriction assumption 2, i.e. that conditional on observable information \(X\), the recommendation has no causal effect on the outcome beyond its effect on increasing treatment probability. This assumes that all of the covariate information that is informative of downstream outcomes is measured. Although this may not exactly hold in all applications, stating this assumption is also a starting point for sensitivity analysis under violations of it [32, 37].
Assumption 4 is a structural assumption under which our method is most appropriate for re-optimizing over small changes to existing algorithmic recommendations. For example, \(p_{0|1}(x)\) (disagreement with algorithmic recommendation) could be a baseline algorithmic aversion. Not all settings are appropriate for this assumption. We don't assume micro-foundations on how or why human decision-makers deviate from algorithmic recommendations, but take these patterns as given. Again, we can relax this assumption with sensitivity analysis. Assumption 5 is a mild assumption on modeling utility: it requires that utility is not defined on joint realizations of potential outcomes.
We first also assume overlap in recommendations and treatment. But later we give robust methods to relax this assumption, leveraging our finer-grained characterization.
**Assumption 6** (Overlap).: \(\rho_{r}\leq e_{r}(X,A)\leq 1-\rho_{r};\ \ \rho_{t}\leq p_{t|r}(X,A)\leq 1-\rho_{t}\) _for some \(\rho_{r},\rho_{t}>0\)_
## 4 Method
We consider two problem settings, which model different situations, and differ based on the strength of overlap assumptions.
**Setting 1** (Randomized encouragement).: \(R\) _is (as-if) randomized and satisfies overlap (Assumption 6)._
Then \(R\) can be interpreted as intention to treat or prescription, whereas \(T\) is the actual realization thereof. Setting 1 models non-adherence situations where decision-makers can target encouragements, but not the direct receipt of treatment itself.
**Setting 2** (Algorithmic recommendation).: \(R\) _is the output of a predictive model and does not satisfy Assumption 6._
We later on extend our methods to the second setting, where \(R\) does not satisfy overlap in recommendation, but there is sufficient randomness in human decisions to satisfy overlap in treatment.
First, we establish causal identification of the estimands via regression adjustment. Causal identification rewrites the causal estimand in terms of probability distributions estimable from data. The argument follows by applying the conditional exclusion restriction and consistency, but crucially does not rely on overlap. We also first consider a special type of fairness constraint, resource parity, and characterize optimal decisions.
**Proposition 1** (Regression adjustment identification).: \[\mathbb{E}[c(\pi,T(\pi),Y(\pi))]=\sum_{t\in\mathcal{T},r\in\{0,1\}}\mathbb{E} [\pi_{r}(X)\mu_{t}(X)p_{t|r}(X)]\]
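As an illustration, the regression-adjustment identity suggests a simple plug-in estimator given fitted nuisance models; a minimal sketch (our function names; binary \(T\)):

```python
import numpy as np

def plugin_value(X, pi_r, mu_t, p_t_given_r):
    # Sample analogue of sum_{t,r} E[pi_r(X) mu_t(X) p_{t|r}(X)];
    # pi_r, mu_t, p_t_given_r are fitted models evaluated on covariates X
    val = 0.0
    for r in (0, 1):
        for t in (0, 1):
            val += np.mean(pi_r(X, r) * mu_t(X, t) * p_t_given_r(X, t, r))
    return val
```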
**Resource-parity constrained optimal decision rules.** We consider an access/resource/burden parity fairness constraint:
\[V_{\epsilon}^{*}=\min_{\pi}\ \{\mathbb{E}[c(\pi,T(\pi),Y(\pi))]\colon\mathbb{E}[T (\pi)\mid A=a]-\mathbb{E}[T(\pi)\mid A=b]\leq\epsilon\} \tag{1}\]
Enforcing absolute values, etc. follows in the standard way. Not all values of \(\epsilon\) may be feasible; in the appendix we give an auxiliary program to compute feasible ranges of \(\epsilon\). We first characterize a threshold solution when the policy class is unconstrained.
**Proposition 2** (Threshold solutions).: Define \(L(\lambda,X,A)=\)
\[(p_{1|1}(X,A)-p_{1|0}(X,A))\left\{\tau(X,A)+\frac{\lambda}{p(A)}(\mathbb{I} \left[A=a\right]-\mathbb{I}\left[A=b\right])\right\}+\lambda(p_{1|0}(X,a)-p_{ 1|0}(X,b))\]
Then \(\lambda^{*}\in\operatorname*{arg\,min}_{\lambda}\ \mathbb{E}[L(\lambda,X,A)_{+}]\) and \(\pi^{*}(x,u)=\mathbb{I}\{L(\lambda^{*},x,u)>0\}.\) If instead the policy \(\pi(x)\) is a function of covariates \(x\) only, \(\lambda^{*}\in\operatorname*{arg\,min}_{\lambda}\ \mathbb{E}[\mathbb{E}[L(\lambda,X,A)\mid X]_{+}]\) and \(\pi^{*}(x)=\mathbb{I}\{\mathbb{E}[L(\lambda^{*},X,A)\mid X=x]>0\}.\)
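Proposition 2 suggests a simple recipe once nuisances are fitted: minimise the scalar dual objective \(\mathbb{E}[L(\lambda,X,A)_{+}]\) over \(\lambda\), then threshold \(L\) at zero. A hedged numpy/scipy sketch (our notation; `L_fn` evaluates the Lagrangian integrand above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def threshold_policy(L_fn, X, A, lam_max=100.0):
    # lambda* minimizes the empirical analogue of E[L(lambda, X, A)_+]
    obj = lambda lam: np.mean(np.maximum(L_fn(lam, X, A), 0.0))
    lam_star = minimize_scalar(obj, bounds=(-lam_max, lam_max),
                               method="bounded").x
    # the optimal rule thresholds the Lagrangian integrand at zero
    return lam_star, lambda x, a: (L_fn(lam_star, x, a) > 0).astype(float)
```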
Establishing this threshold structure (follows by duality of infinite-dimensional linear programming) allows us to provide a generalization bound argument.
**Proposition 3** (Policy value generalization).: Assume the nuisance models \(\eta=[p_{1|0},p_{1|1},\mu_{1},\mu_{0}]^{\top},\eta\in\mathcal{F}_{\eta}\) are consistent and well-specified with finite VC-dimension \(v_{\eta}\) over the product function class \(\mathcal{F}_{\eta}\). Let \(\Pi=\{\mathbb{I}\{\mathbb{E}[L(\lambda,X,A;\eta)\mid X]>0\}\colon\lambda\in\mathbb{R},\eta\in\mathcal{F}_{\eta}\}.\)
\[\sup_{\pi\in\Pi,\lambda\in\mathbb{R}}|(\mathbb{E}_{n}[\pi L(\lambda,X,A)]- \mathbb{E}[\pi L(\lambda,X,A)])|=O_{p}(n^{-\frac{1}{2}})\]
This bound is stated for known nuisance functions: verifying stability under estimated nuisance functions further requires rate conditions.
**Doubly-robust estimation.** We may improve statistical properties of estimation by developing _doubly robust_ estimators which can achieve faster statistical convergence when both the probability of recommendation assignment (when it is random) and the probability of outcome are consistently estimated, or otherwise protect against misspecification of either model. We first consider the ideal setting when algorithmic recommendations are randomized so that \(e_{r}(X)=P(R=r\mid X)\).
**Proposition 4** (Variance-reduced estimation).: \[V(\pi) =\sum_{t\in\mathcal{T},r\in\{0,1\}}\mathbb{E}\left[\pi_{r}(X)\left\{\frac{\mathbb{I}[R=r]}{e_{r}(X)}(\mathbb{I}[T=t]c_{rt}(Y)-\mu_{t}(X)p_{t|r}(X))+\mu_{t}(X)p_{t|r}(X)\right\}\right]\] \[\mathbb{E}[T(\pi)] =\sum_{r\in\{0,1\}}\mathbb{E}\left[\pi_{r}(X)\left\{\frac{\mathbb{I}[R=r]}{e_{r}(X)}(T(r)-p_{1|r}(X))+p_{1|r}(X)\right\}\right]\]
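A sample-average version of these estimators is sketched below (our function names for fitted nuisances; binary \(T\), with \(c_{rt}(Y)\) supplied as `cost`):

```python
import numpy as np

def dr_policy_value(X, R, T, Y, pi_r, e_r, mu_t, p_t_given_r, cost):
    # AIPW-style estimate of V(pi) from Proposition 4
    val = 0.0
    for r in (0, 1):
        for t in (0, 1):
            direct = mu_t(X, t) * p_t_given_r(X, t, r)
            corr = (R == r) / e_r(X, r) * ((T == t) * cost(r, t, Y) - direct)
            val += np.mean(pi_r(X, r) * (corr + direct))
    return val

def dr_takeup(X, R, T, pi_r, e_r, p_t_given_r):
    # companion doubly-robust estimate of E[T(pi)]
    val = 0.0
    for r in (0, 1):
        p1 = p_t_given_r(X, 1, r)
        val += np.mean(pi_r(X, r) * ((R == r) / e_r(X, r) * (T - p1) + p1))
    return val
```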
We retain the full expression rather than simplifying (as appears in [51]) since the doubly-robust estimation of constraints changes the Lagrangian. For example, for regression adjustment, it is clearer in Proposition 9 how constraints affect the optimal decision rule.
### Robust estimation with treatment overlap but not recommendation overlap
When recommendations are e.g. the high-risk/low-risk labels from binary classifiers, we may not satisfy the overlap assumption, since algorithmic recommendations are deterministic functions of covariates. However, note that identification in Proposition 1 requires only SUTVA and consistency, and the exclusion restriction assumption.
A naive approach based on parametric extrapolation estimates \(p_{1|1}(X)\), the treatment responsivity, on the observed data and simply uses the parametric form to extrapolate to the full dataset. (In Appendix B we describe the variance reduction that is possible in this setting.) On the other hand, parametric extrapolation is generally unsatisfactory because conclusions will be driven by model specification rather than observed data. Nonetheless, it can provide a starting point for robust extrapolation of structurally plausible treatment response probabilities.
**Robust extrapolation under violations of overlap.** We next describe methods for robust extrapolation under structural assumptions about smoothness of outcome models. Under violations of overlap, the only unknown quantity is \(p_{t|r}(X)\) in regions of no overlap in recommendation; but a plausible assumption is that the underlying function is smooth in covariates. A robust approach obtains worst-case bounds on policy value under all functions compatible with a particular smoothness assumption. On the other hand, we assume that overlap holds with respect to \(T\) given covariates \(X\), so our finer-grained approach via Assumption 2 enjoys milder penalties due to robustness, since we need only robustly extrapolate the treatment response to recommendations, \(p_{t|r}(X)\), rather than the outcome models \(\mu_{t}(X)\). Define the regions of no overlap as follows: let \(\mathcal{X}_{r}^{\text{nov}}=\{x:P(R=r\mid x)=0\}\); on this region we do not jointly observe all potential values of \((t,r,x)\); and let \(\mathcal{X}^{\text{nov}}=\bigcup_{r}\mathcal{X}_{r}^{\text{nov}}\). Correspondingly, define the overlap region as \(\mathcal{X}^{\text{ov}}=(\mathcal{X}^{\text{nov}})^{c}\). We consider uncertainty sets for ambiguous treatment recommendation probabilities. For example, one plausible structural assumption is _monotonicity_ of treatment in recommendation. We define the following uncertainty set:
\[\mathcal{U}_{q_{t|r}}\coloneqq\left\{q_{1|r}\colon q_{1|r}(x)\geq p_{1|r}(x)\ \forall x\in\mathcal{X}_{r}^{\text{nov}};\ \sum_{t\in\mathcal{T}}q_{t|r}(x)=1\ \forall x,r\right\}\]
We could assume uniform bounds on unknown probabilities, or more refined bounds, such as Lipschitz-smoothness with respect to some distance metric \(d\), or boundedness.
\[\mathcal{U}_{\text{lip}} \coloneqq\left\{q_{1|r}\colon d(q_{1|r}(x^{\prime}),p_{1|r}(x))\leq Ld(x^{\prime},x),\ (x^{\prime},x)\in(\mathcal{X}^{\text{nov}}\times\mathcal{X}^{\text{ov}})\right\}\] \[\mathcal{U}_{\text{bnd}} \coloneqq\left\{q_{1|r}\colon\underline{b}(x)\leq q_{1|r}(x)\leq\overline{b}(x)\right\}\]
Define \(V_{ov}(\pi)\coloneqq\sum_{t\in\mathcal{T},r\in\{0,1\}}\mathbb{E}[\pi_{r}(X)p_{t|r}(X)\mu_{t}(X)\mathbb{I}\{X\in\mathcal{X}^{\text{ov}}\}]\). Let \(\mathcal{U}\) denote the uncertainty set including any custom constraints, e.g. \(\mathcal{U}=\mathcal{U}_{q_{t|r}}\cap\mathcal{U}_{\text{lip}}\). Then we may obtain robust bounds by optimizing over regions of no overlap:
\[\overline{V}(\pi) \coloneqq V_{ov}(\pi)+\overline{V}_{nov}(\pi),\] \[\text{where }\overline{V}_{nov}(\pi) \coloneqq\max_{q_{t|r}\in\mathcal{U}}\left\{\sum_{t\in\mathcal{T},r\in\{0,1\}}\mathbb{E}[\pi_{r}(X)\mu_{t}(X)q_{t|r}(X)\mathbb{I}\{X\in\mathcal{X}^{\text{nov}}\}]\right\}\]
In the specialized, but practically relevant case of binary outcomes/treatments/recommendations, we obtain the following simplifications for bounds on the policy value, and the minimax robust policy that optimizes the worst-case overlap extrapolation function. In the special case of constant uniform bounds, it is equivalent (in the case of binary outcomes) to consider marginalizations:
**Lemma 1** (Binary outcomes, constant bound).: _Let \(\mathcal{U}_{\text{cbnd}}\coloneqq\left\{q_{t|r}\colon\underline{B}\leq q_{1|r}(x^{\prime})\leq\overline{B}\right\}\) and \(\mathcal{U}=\mathcal{U}_{q_{t|r}}\cap\mathcal{U}_{\text{cbnd}}\). Define \(\beta_{t|r}\coloneqq\mathbb{E}[q_{t|r}(X,A)\mid T=t]\). If \(T\in\{0,1\},\)_

\[\overline{V}_{nov}(\pi) =\sum_{t\in\mathcal{T},r\in\{0,1\}}\mathbb{E}[c_{rt}^{*}\beta_{t|r}\mathbb{E}[Y\pi_{r}(X)\mid T=t]\mathbb{I}\{X\in\mathcal{X}^{\text{nov}}\}],\] \[\text{where }c_{rt}^{*} =\begin{cases}\overline{B}\mathbb{I}\left[t=1\right]+\underline{B}\mathbb{I}\left[t=0\right]&\text{ if }\mathbb{E}[Y\pi_{r}(X)\mid T=t]\geq 0\\ \overline{B}\mathbb{I}\left[t=0\right]+\underline{B}\mathbb{I}\left[t=1\right]&\text{ if }\mathbb{E}[Y\pi_{r}(X)\mid T=t]<0\end{cases}.\]
We consider the case of continuous-valued outcomes in the example setting of the simple resource-parity constrained program of Equation (1). We first study simple uncertainty sets, like intervals, to deduce insights about the robust policy, with a more general reformulation in the appendix.
**Proposition 5** (Robust linear program).: Suppose \(r,t\in\{0,1\}\), and \(q_{r1}(\cdot,u)\in\mathcal{U}_{\text{bnd}},\forall r,u.\) Define
\[\tau(x,a) \coloneqq\mu_{1}(x,a)-\mu_{0}(x,a),\ \ \Delta B_{r}(x,u)\coloneqq( \overline{B}_{r}(x,u)-\underline{B}_{r}(x,u)),\] \[B_{r}^{\text{mid}}(x,u) \coloneqq\underline{B}_{r}(x,u)+\frac{1}{2}\Delta B_{r}(x,u),\ \ c_{1}(\pi) \coloneqq\sum_{r}\mathbb{E}[\tau\pi_{r}B^{\text{mid}}],\] \[\mathbb{E}[\Delta_{ov}T(\pi)] \coloneqq\mathbb{E}[T(\pi)\mathbb{I}\{X\in\mathcal{X}^{\text{nov }}\}\mid A=a]-\mathbb{E}[T(\pi)\mathbb{I}\{X\in\mathcal{X}^{\text{nov}}\}\mid A =b].\]
Then the robust linear program is:
\[\min V_{ov}(\pi)+\mathbb{E}[\mu_{0}]+c_{1}(\pi)-\frac{1}{2}\sum_{r}\mathbb{E}[\left|\tau\right|\pi_{r}\Delta B_{r}(X,A)\mathbb{I}\{X\in\mathcal{X}^{\text{nov}}\}]\] \[\text{s.t. }\sum_{r}\{\mathbb{E}[\pi_{r}\overline{B}_{r}(X,A)\mathbb{I}\{X\in\mathcal{X}^{\text{nov}}\}\mid A=a]-\mathbb{E}[\pi_{r}\underline{B}_{r}(X,A)\mathbb{I}\{X\in\mathcal{X}^{\text{nov}}\}\mid A=b]\}+\mathbb{E}[\Delta_{ov}T(\pi)]\leq\epsilon\]
## 5 Additional fairness constraints and policy optimization
We previously discussed policy optimization, over unrestricted decision rules, given estimates. We now introduce general methodology to handle 1) optimization over a policy class of restricted functional form and 2) more general fairness constraints. We first introduce the fair-classification algorithm of [4], then describe our extensions to obtain variance-sensitive regret bounds and less conservative policy optimization (using a regularized ERM argument given in [17]).
**Algorithm and setup.** We first describe the reductions-based approach for fair classification of [4] before describing our adaptation for constrained policy learning and localized two-stage variance reduction. They consider classification (i.e., loss minimization) under fairness constraints that can be represented generically as a linear program. In the following, to be consistent with the standard form for linear programs, note that we consider costs \(Y\), so that we can phrase the saddle point as minimization-maximization. The \(|\mathcal{K}|\) linear constraints over \(J\) groups (values of the protected attribute \(A\)) are summarized via a coefficient matrix \(M\in\mathbb{R}^{|\mathcal{K}|\times J}\), which multiplies a vector of constraint moments \(h_{j}(\pi),j\in[J]\); here \(O=(X,A,R,T,Y)\) denotes our data observations and \(d\) the constraint constant vector:
\[h_{j}(\pi)=\mathbb{E}\left[g_{j}(O,\pi(X))\mid\mathcal{E}_{j}\right]\quad \text{for }j\in[J],\qquad Mh(\pi)\leq d\]
The elements of \(h(\pi)\) are average functionals, for example the average treatment take-up in group \(j\). Importantly, the moment function \(g_{j}\) depends on \(\pi\), while the conditioning event \(\mathcal{E}_{j}\) cannot depend on \(\pi\). Many important fairness constraints can nonetheless be written in this framework, such as burden/resource parity and parity in true positive rates, but not measures such as calibration, whose conditioning event does depend on \(\pi\). (See Appendix C.2 for examples omitted for brevity.)
Our objective function is the policy value \(V(\pi)\). (Later this is linearized, as in [4] by optimizing over distributions over policies). We further consider a convexification of \(\Pi\) via randomized policies \(Q\in\Delta(\Pi)\), where \(\Delta(\Pi)\) is the set of distributions over \(\Pi\), i.e. a randomized classifier that samples a policy \(\pi\sim Q\). Therefore, our target estimand is the optimal distribution \(Q\) over policies \(\pi\) that minimizes the objective value \(V(Q)\) subject to the fairness constraints encoded in \(Mh(Q)\leq d\):
\[\min_{Q\in\Delta(\Pi)}\{V(Q)\colon\;\;Mh(Q)\leq d\}\]
Next we discuss the reduction of off-policy learning to cost-weighted classification [63], which we use to solve constrained off-policy learning via the algorithm of [3].
We use a well-known reduction of policy learning to cost-sensitive classification, described in Appendix C.2.1. The centered regret can therefore be reparametrized via the parameter \(\beta\) as \(J(\beta)=J(\operatorname{sgn}(f_{\beta}(\cdot)))=\mathbb{E}[\operatorname{sgn}(f_{\beta}(X))\,\psi]\). We can apply the standard reduction to cost-sensitive classification since \(\psi_{i}\operatorname{sgn}(f_{\beta}(X_{i}))=\left|\psi_{i}\right|(1-2\mathbb{I}\left[\operatorname{sgn}(f_{\beta}(X_{i}))\neq\operatorname{sgn}(\psi_{i})\right])\), and then use surrogate losses for the zero-one loss. Although many functional forms for \(\ell(\cdot)\) are Fisher-consistent, one such choice is the logistic (cross-entropy) loss, giving the weighted surrogate objective \(\mathbb{E}[\left|\psi\right|\ell(f_{\beta}(X),\operatorname{sgn}(\psi))]\), where \(\ell(g,s)=2\log(1+\exp(g))-(s+1)g\).
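A minimal sketch of this weighted surrogate objective; the score values `f_scores` and the per-sample weights `psi` are assumed given, and the loss form mirrors \(\ell(g,s)=2\log(1+\exp(g))-(s+1)g\) above.

```python
import numpy as np

def surrogate_risk(f_scores, psi):
    """Weighted logistic surrogate E[|psi| * l(f(X), sgn(psi))] with
    l(g, s) = 2*log(1 + exp(g)) - (s + 1)*g."""
    s = np.sign(psi)
    # logaddexp(0, g) = log(1 + exp(g)), numerically stable for large g
    loss = 2.0 * np.logaddexp(0.0, f_scores) - (s + 1.0) * f_scores
    return np.mean(np.abs(psi) * loss)
```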
**Optimization.** Ultimately, the optimization is solved using sampled and estimated moments. Define the integrand of the constrained, weighted empirical risk minimization as \(v_{(\cdot)}(O;\pi_{\beta},\eta)=\left|\psi_{(\cdot)}(O;\eta)\right|\ell(f_{ \beta}(X),\operatorname{sgn}(\psi_{(\cdot)}(O;\eta)))\). Our estimate of the objective function is therefore
\[V_{(\cdot)}(Q)=\mathbb{E}[\left|\psi_{(\cdot)}\right|\ell(f_{\beta}, \operatorname{sgn}(\psi_{(\cdot)}))]=\mathbb{E}_{\pi_{\beta}\sim Q}[v_{(\cdot )}(O;\pi_{\beta},\eta)].\]
Note that for the rest of our discussions of algorithms for constrained policy optimization, we overload notation and use \(V_{(\cdot)}(Q)\) to refer to policy _regret_, as above. The optimal policies are the same for regret vs. value. We obtain the sample estimator \(\tilde{V}_{(\cdot)}(Q)\) and sample constraint moments \(\hat{h}(Q)\)
analogously. We also add a feasibility margin \(\epsilon_{k}\), which depends on the concentration of the estimated constraints, so the sampled constraint vector is \(\hat{d}_{k}=d_{k}+\epsilon_{k}\) for all \(k\). We seek an approximate saddle point, so that solving the constrained problem is equivalent to solving for the saddle point of the Lagrangian,
\[\hat{L}(Q,\lambda)=\hat{V}(Q)+\lambda^{\top}(M\hat{h}(Q)-\hat{d}),\qquad\min_{Q \in\Delta(\Pi)}\{\hat{V}(Q)\colon M\hat{h}(Q)\leq\hat{d}\}=\min_{Q\in\Delta( \Pi)}\max_{\lambda\in\mathbb{R}_{+}^{\mathcal{K}}}\hat{L}(Q,\lambda).\]
We simultaneously solve for an approximate saddle point over the \(B\)-bounded domain of \(\lambda\):
\[\min_{Q\in\Delta}\max_{\lambda\in\mathbb{R}_{+}^{|\mathcal{K}|},\|\lambda\|_ {1}\leq B}\hat{L}(Q,\lambda),\qquad\max_{\lambda\in\mathbb{R}_{+}^{|\mathcal{ K}|},\|\lambda\|_{1}\leq B}\;\min_{Q\in\Delta}\hat{L}(Q,\lambda)\]
[4, Theorem 3] gives generalization guarantees on the policy value and constraint violation achieved by the approximate saddle point output by the algorithm. The analysis is generic under rate assumptions on uniform convergence of policy and constraint values. Such a rate \(\alpha\) follows from standard analyses in causal inference, and is used to set the constraint violation feasibility margin \(\epsilon_{k}=O(n^{-\alpha})\).
**Assumption 7** (Rate assumption on policy and constraint values).: There exist \(C,C^{\prime}\geq 0\) and \(\alpha\leq 1/2\) such that \(\sup_{Q\in\Delta(\Pi)}\{V(Q;\eta)-\hat{V}(Q;\hat{\eta})\}\leq Cn^{-\alpha}\) and \(\epsilon_{k}=C^{\prime}\sum_{j\in\mathcal{J}}|M_{k,j}|\,n_{j}^{-\alpha}\), where \(n_{j}\) is the number of data points that fall in \(\mathcal{E}_{j}\).
Next we summarize the optimization algorithm. We play a no-regret algorithm (second-order multiplicative weights [16, 55], a slight variant of Hedge/exponentiated gradient [25]) for the \(\lambda\)-player, while using best-response oracles for the \(Q\)-player. Full details are in Algorithm 1. Given \(\lambda_{t}\), \(\mathrm{BEST}_{\beta}\left(\lambda_{t}\right)\) computes a best response over \(Q\); since the worst-case distribution will place all its weight on one classifier, this step can be implemented by a reduction to cost-sensitive/weighted classification [15, 63], which we describe in further detail below. Computing the best response \(\mathrm{BEST}_{\lambda}(\hat{Q}_{t})\) selects the most violated constraint. Further details are in Appendix C.2.
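A compressed sketch of the saddle-point loop, using a plain exponentiated-gradient update for the \(\lambda\)-player rather than the second-order variant cited above; `best_q` (the cost-sensitive classification oracle) and `gamma_hat` (the estimated constraint values \(M\hat{h}(\pi)\)) are assumptions standing in for the quantities defined earlier.

```python
import numpy as np

def solve_saddle(best_q, gamma_hat, d_hat, K, B=10.0, eta=0.1, n_rounds=200):
    """Approximate saddle point of L(Q, lam) = V(Q) + lam^T (gamma(Q) - d_hat).
    Returns the iterates; Q_hat is the uniform distribution over them."""
    theta = np.zeros(K)                      # log-weights of the lambda-player
    policies = []
    for _ in range(n_rounds):
        w = np.exp(theta)
        lam = B * w / (1.0 + w.sum())        # keep lam >= 0, ||lam||_1 <= B
        policies.append(best_q(lam))         # Q-player best response
        theta += eta * (gamma_hat(policies[-1]) - d_hat)  # ascend on violations
    return policies
```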
**Two-stage variance-constrained algorithm.** We seek to improve upon this procedure so that we may obtain regret bounds on policy value and fairness-constraint violation that exhibit more favorable dependence on the maximal variance over small-variance _slices_ near the optimal policy, rather than on worst-case constants over all policies [17, 9]. Moreover, using estimated variance to set constraint feasibility slacks can achieve tighter fairness control.
These challenges motivate the two-stage procedure, described formally in Algorithm 2 and verbally here. We adapt an out-of-sample regularization scheme developed in [17], which recovers variance-sensitive regret bounds via a small modification to an empirical risk minimization procedure (and, by extension, to policy learning). We split the data into two subsets \(\mathcal{D}_{1},\mathcal{D}_{2}\), and first learn nuisance estimators \(\hat{\eta}_{1}\) from \(\mathcal{D}_{1}\) (possibly with further sample-splitting) for use in our policy value and constraint estimates. We run Algorithm 1 (\(\mathrm{REDFAIR}(\mathcal{D}_{1},h,\mathcal{E},M,d;\hat{\eta}_{1})\)) on data from \(\mathcal{D}_{1}\) to estimate the optimal policy distribution \(\hat{Q}_{1}\) and the constraint variances at \(\hat{Q}_{1}\). We identify the first-stage binding constraints via the index set \(\hat{I}_{1}\). Next, we _augment_ the constraint matrix with additional constraints requiring feasible second-stage policy distributions to achieve policy value and constraint moment values within \(\epsilon_{n}\) of \(\hat{Q}_{1}\). Since errors concentrate quickly, this can be viewed as variance regularization. In addition, we set the constraint slacks \(\hat{d}\gets d+2\sum_{j\in\mathcal{J}}|M_{k,j}|\,\hat{\sigma}_{j}^{2}n^{-\alpha}\) in the second stage using estimated variance constants from \(\hat{Q}_{1}\), which results in tighter control of the fairness constraints. The second stage solves for an approximate saddle point of the augmented system, with objective function and constraints evaluated on \(\mathcal{D}_{2}\), and returns \(\hat{Q}_{2}\).
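In code form, the two-stage procedure reads roughly as follows; every name here (`fit_nuisance`, `solve_stage`, `augment`) is a placeholder for a step described above, not an interface from the paper.

```python
def two_stage(D1, D2, fit_nuisance, solve_stage, augment, eps_n, alpha):
    """Sketch of Algorithm 2: stage 1 solves the saddle point on D1; stage 2
    re-solves on D2 with value/moment anchoring constraints and
    variance-based feasibility slacks estimated from the stage-1 solution."""
    eta1 = fit_nuisance(D1)                              # nuisances from D1
    Q1, moments1, variances1 = solve_stage(D1, eta1)     # REDFAIR on D1
    M2, d2 = augment(Q1, moments1, variances1, eps_n, alpha)
    Q2, _, _ = solve_stage(D2, eta1, constraints=(M2, d2))
    return Q2
```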
Next, we provide a generalization bound on the out-of-sample performance of the policy returned by the two-stage procedure. Importantly, because of the two-stage procedure, the regret of the policy depends on the worst-case variance of near-optimal policies (rather than of all policies). Define the function classes \(\mathcal{F}_{\Pi}=\{v_{DR}(\cdot,\pi;\eta)\colon\pi\in\Pi,\eta\in\mathcal{F}_{\eta}\}\), \(\mathcal{F}_{j}=\{g_{j}(\cdot,\pi;\eta)\colon\pi\in\Pi,\eta\in\mathcal{F}_{\eta}\}\), and the empirical entropy integral \(\kappa(r,\mathcal{F})=\inf_{\alpha\geq 0}\{4\alpha+10\int_{\alpha}^{r}\sqrt{\frac{\mathcal{H}_{2}(\epsilon,\mathcal{F},n)}{n}}\,d\epsilon\}\), where \(\mathcal{H}_{2}(\epsilon,\mathcal{F},n)\) is the \(L_{2}\) empirical entropy, i.e., the log of the \(\|\cdot\|_{2}\) \(\epsilon\)-covering number. We make a mild assumption of a learnable function class (bounded entropy integral) [59], satisfied by many standard function classes such as linear models, polynomials, kernel regression, and neural networks [60].
**Assumption 8**.: The function classes \(\mathcal{F}_{\Pi},\{\mathcal{F}_{j}\}_{j\in\mathcal{J}}\) satisfy, for any constant \(r\), \(\kappa(r,\mathcal{F})\to 0\) as \(n\rightarrow\infty\). The function classes \(\{\mathcal{F}_{j}\}_{j\in\mathcal{J}}\) consist of \(L_{j}\)-Lipschitz contractions of \(\pi\).
We will assume doubly-robust/orthogonalized estimation as in Proposition 4, and hence state results depending on the estimation error of the nuisance vector \(\eta\). The next theorem summarizes the out-of-sample performance of \(\hat{Q}_{2}\), the output of the two-stage algorithm of Algorithm 2.
**Theorem 3** (Variance-Based Oracle Policy Regret).: _Suppose that the mean-squared error of the nuisance estimates is upper bounded w.p. \(1-\delta/2\), over the randomness of the nuisance sample, by \(\chi_{n,\delta}^{2}\): \(\max_{l\in[L]}\mathbb{E}[(\hat{\eta}_{l}-\eta_{l})^{2}]\coloneqq\chi_{n,\delta}^{2}\)._
_Let \(v_{DR}^{0}(O;Q)\) denote evaluation with true nuisance functions \(\eta_{0}\); define \(r=\sup_{Q\in\mathcal{Q}}\sqrt{\mathbb{E}\left[v_{DR}^{0}(O;Q)^{2}\right]}\) and \(\epsilon_{n}=\Theta\left(\kappa\left(r,\mathcal{F}_{\Pi}\right)+r\sqrt{\frac{ \log(1/\delta)}{n}}\right)\). Moreover, denote an \(\epsilon\)-regret slice of the policy space:_
\[\mathcal{Q}_{*}(\epsilon)=\left\{Q\in\Delta[\Pi]:V(Q_{*}^{0})-V(Q)\leq \epsilon,\;\;h(Q_{*}^{0})-h(Q)\leq d+\epsilon\right\}.\]
_Let \(\tilde{\epsilon}_{n}=O(\epsilon_{n}+\chi_{n,\delta}^{2})\) and denote the variance of the difference between any two policies in an \(\epsilon_{n}\)-regret slice, evaluated at the true nuisance quantities:_
\[\bar{\sigma}_{\mathcal{D}_{2}}^{2}=\sup\,\{\mathrm{Var}\left(v_{DR}^{0}(O;Q)- v_{DR}^{0}\left(O;Q^{\prime}\right)\right):Q,Q^{\prime}\in\mathcal{Q}_{*}( \tilde{\epsilon}_{n})\}.\]
_(Define \(\bar{\sigma}_{k,\mathcal{D}_{2}}^{2}\) analogously for the variance of constraint moments.) Then, letting \(\gamma(Q)\coloneqq Mh(Q)\) denote the constraint values, the policy distribution \(\hat{Q}_{2}\) returned by the out-of-sample regularized ERM satisfies, w.p. \(1-\delta\) over the randomness of the sample:_
\[V(\hat{Q}_{2})-V(Q^{*}) =O(\kappa(\bar{\sigma}_{\mathcal{D}_{2}},\mathrm{conv}(\mathcal{F }_{\Pi}))+\bar{\sigma}_{\mathcal{D}_{2}}n^{-\frac{1}{2}}\sqrt{\log(3/\delta)} +\chi_{n,\delta}^{2})\] \[(\gamma_{k}(\hat{Q}_{2})-d_{k})-(\gamma_{k}(Q^{*})-d_{k}) =O(\kappa(\bar{\sigma}_{k,\mathcal{D}_{2}},\mathrm{conv}(\mathcal{ F}_{j}))+\bar{\sigma}_{k,\mathcal{D}_{2}}n^{-\frac{1}{2}}\sqrt{\log(3/\delta)} +\chi_{n,\delta}^{2})\]
The specific benefits of the two-stage approach are that 1) the constants improve from absolute, structure-agnostic bounds to ones depending on the variance of low-regret policies, which also reflects the improved variance from using doubly-robust estimation as in Proposition 4, and 2) out-of-sample fairness constraints are satisfied less conservatively.
## 6 Case Studies
Due to space constraints, in the main text we only present a case study based on the PSA-DMF for supervised release [49]. In the appendix we include additional experiments and robustness checks, including a case study of fully-randomized recommendations and non-adherence. We conduct a case study on a dataset of judicial decisions on _supervised_ release, based on risk-score-informed recommendations under an electronic-monitoring program [49]. The PSA-DMF (Public Safety Assessment Decision Making Framework) uses a prediction of failure to appear (for a future court date) to inform pretrial decisions, including our focus on supervised release with electronic monitoring, where judges make the final decision [1]. Despite a large literature on pretrial risk assessment, to the best of our knowledge, it is unclear what empirical evidence justifies the release recommendation matrices that have been used in practice to recommend supervised release via electronic monitoring. (We focus on supervised release with electronic monitoring, though the broad term "supervised release" encompasses substantially different programs nationwide, including access to supportive services and caseworkers, which has been touted as a factor in enabling bail reform and release more broadly [5].) There are current policy concerns about disparities in the increasing use of supervised release, given mixed evidence on outcomes [49, 28]; e.g., the Safety and Justice Challenge [53] concludes that "targeted efforts to reduce racial disparities are necessary". We focus on a publicly available dataset from Cook County with information about defendant characteristics, algorithmic recommendations for electronic monitoring, detention/release/supervised-release decisions, and failure to appear and other outcomes [50]. The data were initially used to assess bail reform [49].
We let \(Z\in\{0,1\}\) denote release (\(Z=1\)) (with or without conditions). All of our analysis occurs in the \((XZ,AZ,RZ,YZ)\) group, i.e., among the released population only; for brevity we drop \(Z\) in describing the data below. We let \(X\) denote covariates (age, top charge category, and PSA FTA/NCA score bucket and flag). The (binarized) protected attribute \(A\) is race (non-white/white) or gender (female/male). The algorithmic recommendation \(R\) is a recommendation from the PSA-DMF matrix for supervised release (at any intensity of supervision conditions). The treatment \(T\) is whether
the individual is released under supervision (at any intensity of supervision conditions). The outcome variable, \(Y\), is failure to appear (\(Y=1\)).
In this initial case study, we work with publicly available data [50]. In future work we will seek more granular data with additional robustness checks to support substantive conclusions. We discuss this in greater detail in the appendix; to summarize, unconfoundedness is likely violated (but this can be addressed with standard sensitivity analysis), and some line-level data was aggregated for privacy.
Next, in Figure 1, we provide descriptive information illustrating heterogeneity (including by protected attribute) in adherence and effectiveness. We observe wide variation in judges assigning supervised release beyond the recommendation. We use logistic regression to estimate outcome models and treatment response models. The first panel shows estimates of the causal effect for different groups, by gender (with similar heterogeneity for race). The outcome is failure to appear, so negative scores are beneficial. The second panel illustrates the difference in responsiveness: how much more likely decision-makers are to assign treatment when there is vs. isn't an algorithmic recommendation to do so. The last panel plots a logistic regression of the lift in responsiveness on the causal effect \(\tau(x,a)=\mu_{1}(x,a)-\mu_{0}(x,a)\). We observe disparities in how responsive decision-makers are conditional on the same treatment-effect efficacy. This is, importantly, not a claim of animus, because decision-makers did not have access to causal effect estimates. Nonetheless, disparities persist.
In Figure 2 we highlight results from constrained policy optimization. The first two plots in each set illustrate the objective function value and the \(A=a\) average treatment cost, respectively, for \(A\) being race (nonwhite/white) or gender (female/male). We use costs of \(100\) for \(Y=1\) (failure to appear), \(0\) for \(Y=0\), and \(20\) when \(T=1\) (set arbitrarily), and minimize costs. On the x-axis we plot the penalty \(\lambda\) that we use to assess the solutions of Proposition 9. The vertical dashed line indicates the solution achieving \(\epsilon=0\), i.e., parity in treatment take-up. Near-optimal policies that reduce treatment disparity can be of interest due to advocacy concerns that the expansion of supervised release could increase the surveillance of already surveillance-burdened marginalized populations. We see that, indeed, for race, surveillance-parity-constrained policies can substantially reduce disparities for nonwhites while increasing surveillance on whites only slightly: the red line decreases significantly with only a low increase in the blue line (and a low increase in the objective value). On the other hand, for gender, the opportunity for improvement in surveillance disparity is much smaller. |
2301.13721 | DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models | Targeting to understand the underlying explainable factors behind
observations and modeling the conditional generation process on these factors,
we connect disentangled representation learning to Diffusion Probabilistic
Models (DPMs) to take advantage of the remarkable modeling ability of DPMs. We
propose a new task, disentanglement of (DPMs): given a pre-trained DPM, without
any annotations of the factors, the task is to automatically discover the
inherent factors behind the observations and disentangle the gradient fields of
DPM into sub-gradient fields, each conditioned on the representation of each
discovered factor. With disentangled DPMs, those inherent factors can be
automatically discovered, explicitly represented, and clearly injected into the
diffusion process via the sub-gradient fields. To tackle this task, we devise
an unsupervised approach named DisDiff, achieving disentangled representation
learning in the framework of DPMs. Extensive experiments on synthetic and
real-world datasets demonstrate the effectiveness of DisDiff. | Tao Yang, Yuwang Wang, Yan Lv, Nanning Zheng | 2023-01-31T15:58:32Z | http://arxiv.org/abs/2301.13721v3 | # DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models
###### Abstract
In this paper, targeting to understand the underlying explainable factors behind observations and modeling the conditional generation process on these factors, we propose a new task, disentanglement of diffusion probabilistic models (DPMs), to take advantage of the remarkable modeling ability of DPMs. To tackle this task, we further devise an unsupervised approach named DisDiff. For the first time, we achieve disentangled representation learning in the framework of diffusion probabilistic models. Given a pre-trained DPM, DisDiff can automatically discover the inherent factors behind the image data and disentangle the gradient fields of DPM into sub-gradient fields, each conditioned on the representation of each discovered factor. We propose a novel Disentangling Loss for DisDiff to facilitate the disentanglement of the representation and sub-gradients. The extensive experiments on synthetic and real-world datasets demonstrate the effectiveness of DisDiff.
## 1 Introduction
As one of the most successful families of generative models, diffusion probabilistic models (DPMs) achieve remarkable performance in image synthesis. They use a series of probabilistic distributions to corrupt images in the forward process and train a sequence of probabilistic models converging to the image distribution to reverse the forward process. Despite the remarkable success of DPMs in tasks such as image generation (Song et al., 2020), text-to-image synthesis (Saharia et al., 2022), and image editing (Meng et al., 2021), little attention has been paid to representation learning (Zhang et al., 2022) based on DPMs. Diff-AE (Preechakul et al., 2022) and PDAE (Zhang et al., 2022) are two recently proposed methods for representation learning that reconstruct images in the DPM framework. However, the learned latent representation can only be interpreted by relying on an extra linear classifier pre-trained with predefined semantics. Although there are some degrees of freedom during implementation, in this paper we refer to DPMs exclusively as Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020).
On the other hand, disentangled representation learning (Higgins et al., 2018) aims to learn the representation of the underlying explainable factors behind the observed data and is thought to be one of the possible ways for AI to fundamentally understand the world. Different factors correspond to different kinds of image variations, respectively and independently. Most methods learn disentangled representations based on generative models, such as VAEs (Higgins et al., 2017; Chen et al., 2018; Kim and Mnih, 2018) and GANs (Lin et al., 2020). The VAE-based methods have an inherent trade-off between disentangling ability and generation quality (Higgins et al., 2017; Chen et al., 2018; Kim and Mnih, 2018). The GAN-based methods suffer from poor reconstruction due to the difficulty of GAN inversion (Wang et al., 2022). To the best of our knowledge, there is no method for learning disentangled representations using DPMs.
In this paper, we connect DPMs to disentangled representation learning for the first time and propose a new task: the disentanglement of a DPM. Given a pre-trained DPM, the goal is to learn disentangled representations for the underlying factors in an unsupervised manner and to learn the corresponding disentangled conditional sub-gradient fields, each conditioned on the representation of a discovered factor.
The benefits of disentangling a DPM are two-fold: \((i)\) It enables fully unsupervised control of image generation by automatically discovering the inherent semantic factors behind the image data. These factors extend DPM conditioning beyond human-defined information such as annotations (Zhang et al., 2022)/image-text pairs (Kawar et al., 2022), or supervised pre-trained models (Kim et al., 2022) such as CLIP (Radford et al., 2021). One can also flexibly condition sampling on a subset of the factors by superposing the corresponding sub-gradient fields, which is novel in existing DPM works. \((ii)\) DPMs have remarkable image generation quality and are naturally friendly to inverse problems, e.g., the inversion of DDIM (Song et al., 2020) or PDAE. Compared to VAEs (with their trade-off between disentangling ability and generation quality) or GANs (with the problem of GAN inversion), the DPM is a better framework for disentangled representation learning. Besides, as Locatello et al. (2019) point out, inductive biases beyond total correlation are needed; DPMs make it possible to adopt constraints from all different timesteps as a new type of inductive bias. Further, as Srivastava et al. (2020) point out, the information in data includes both factorized and non-factorized parts. DPMs have the ability to sample non-factorized (non-conditioned) information (Ho and Salimans, 2022), which naturally suits disentanglement.
To address the task of disentangling a DPM, we propose an unsupervised solution for the disentanglement of a pre-trained DPM, named DisDiff. DisDiff adopts an encoder to learn the disentangled representation for each factor and a decoder to learn the corresponding disentangled conditional sub-gradient fields. We further propose a novel Disentangling Loss to make the encoded representations satisfy the disentanglement requirement while still reconstructing the input image.
Our main contributions can be summarized as follows:
* We present a new task: disentanglement of a DPM, i.e., disentangling a DPM into several disentangled sub-gradient fields, which improves the interpretability of DPMs.
* We build an unsupervised framework for the disentanglement of DPMs, DisDiff, which learns not only a disentangled representation but also a disentangled gradient field for each factor.
* We propose a Disentangling Loss for DPMs to facilitate the disentanglement of the factor conditions and the sub-gradient fields.
## 2 Related Works
**Diffusion Probabilistic Models** DPMs have achieved image generation quality comparable or superior (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Ho et al., 2020; Song et al., 2020; Jolicoeur-Martineau et al., 2020) to GANs (Goodfellow et al., 2020). Diffusion-based image editing has drawn much attention, and there are mainly two categories of works. First, image-guided works edit an image by mixing the latent variables of the DPM and the input image (Choi et al., 2021; Lugmayr et al., 2022; Meng et al., 2021). However, using images to specify the attributes for editing may cause ambiguity, as pointed out by Kwon et al. (2022). Second, classifier-guided works (Dhariwal and Nichol, 2021; Avrahami et al., 2022; Liu et al., 2023) edit an image by utilizing the gradient of an extra classifier. These methods require calculating the gradient, which is costly, and they require annotations or models pre-trained with labeled data. In this paper, we propose DisDiff to edit images in an unsupervised way. On the other hand, little attention has been paid to representation learning in the diffusion-model literature. Two related works are Diff-AE (Preechakul et al., 2022) and PDAE (Zhang et al., 2022). Diff-AE proposes a diffusion-based auto-encoder for image reconstruction, and PDAE uses a pre-trained DPM to build an auto-encoder for image reconstruction. However, the latent representations learned by these two works do not explicitly correspond to the underlying factors of the dataset. To the best of our knowledge, DisDiff is the first diffusion-based framework for disentangled representation learning.
**Disentangled Representation Learning** Bengio et al. (2013) introduced disentangled representation learning, whose target is to discover the underlying explanatory factors of the observed data. A representation is disentangled if each of its dimensions corresponds to an independent factor. Based on this definition, some VAE-based works achieve disentanglement (Chen et al., 2018; Kim and Mnih, 2018; Higgins et al., 2017; Burgess et al., 2018) only through constraints on the probabilistic distributions of the representations. Locatello et al. (2019) point out the identifiability problem by proving that these constraints alone are not enough for disentanglement and that extra inductive bias is required. For example, Yang et al. (2021) propose to use symmetry properties modeled by group theory as an inductive bias. Most disentanglement methods are based on VAEs, but there are also works based on GANs, including those leveraging pre-trained generative models (Ren et al., 2021). DisDiff introduces constraints at all time steps of the diffusion process as a new type of inductive bias. Furthermore, a DPM is capable of sampling non-factorized (non-conditioned) information (Ho and Salimans, 2022), which naturally suits disentanglement. In this way, we shed light on disentanglement in the new framework of DPMs.
## 3 Background
### Diffusion Probabilistic Models (DPM)
We take DDPM (Ho et al., 2020) as an example. DDPM adopts a sequence of fixed variance distributions \(q(x_{t}|x_{t-1})\) as the forward process to collapse the image distribution \(p(x_{0})\) to \(\mathcal{N}(0,I)\). These distributions are
\[q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}I). \tag{1}\]
Then we can sample \(x_{t}\) directly via \(x_{t}\sim\mathcal{N}(x_{t};\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})I)\), where \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\), i.e., \(x_{t}=\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\). The reverse process is fit by another sequence of distributions parameterized by \(\theta\):
\[p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\sigma_{t}I). \tag{2}\]
where \(\mu_{\theta}(x_{t},t)\) is parameterized by a UNet \(\epsilon_{\theta}(x_{t},t)\). Training minimizes the variational upper bound on the negative log-likelihood:
\[\mathcal{L}_{\theta}=\mathop{\mathbb{E}}_{x_{0},t,\epsilon}\|\epsilon- \epsilon_{\theta}(x_{t},t)\|. \tag{3}\]
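For concreteness, a minimal PyTorch sketch of the closed-form forward sampling and the objective of Eq. 3; the noise-schedule tensor and the `eps_model` handle are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def q_sample(x0, t, alpha_bar):
    """Sample x_t ~ N(sqrt(abar_t) x0, (1 - abar_t) I) in closed form.
    alpha_bar: 1-d tensor of cumulative products abar_t = prod_{s<=t} alpha_s."""
    eps = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, 1, 1, 1)              # broadcast over (C, H, W)
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps, eps

def ddpm_loss(eps_model, x0, alpha_bar, T):
    """Eq. 3: E_{x0, t, eps} || eps - eps_theta(x_t, t) ||."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    xt, eps = q_sample(x0, t, alpha_bar)
    return F.mse_loss(eps_model(xt, t), eps)
```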
### Representation learning from DPMs
The classifier-guided method (Dhariwal and Nichol, 2021) uses the gradient of a pre-trained classifier, \(\nabla_{x_{t}}\log p(y|x_{t})\), to impose a condition on a pre-trained DPM and obtain a new conditional DPM: \(\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t)+\sigma_{t}\nabla_{x_{t}}\log p(y|x_{t}),\sigma_{t})\). Based on classifier-guided sampling, PDAE (Zhang et al., 2022) proposes a method to learn an auto-encoder for a pre-trained DPM. Specifically, PDAE introduces an encoder \(E_{\phi}\), from which the representation is derived as \(z=E_{\phi}(x_{0})\), and a gradient estimator \(G_{\psi}(x_{t},z,t)\) that approximates the gradient \(\nabla_{x_{t}}\log p(z|x_{t})\) for reconstruction.

By this means, the unconditional DPM is assembled into a new conditional DPM, \(\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t)+\sigma_{t}G_{\psi}(x_{t},z,t),\sigma_{t})\), which serves as the decoder. We can then use the following objective to train the encoder \(E_{\phi}\) and the network \(G_{\psi}\):
\[\mathcal{L}_{\psi}=\mathop{\mathbb{E}}_{x_{0},t,\epsilon}\|\epsilon-\epsilon_ {\theta}(x_{t},t)+\frac{\sqrt{\alpha_{t}}\sqrt{1-\bar{\alpha}_{t}}}{\beta_{t} }\sigma_{t}G_{\psi}(x_{t},z,t)\|. \tag{4}\]
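Eq. 4 asks the shifted noise prediction to match \(\epsilon\); a minimal sketch, assuming per-timestep schedule tensors and implementing the norm as a mean-squared error.

```python
import torch
import torch.nn.functional as F

def pdae_loss(eps_model, G_psi, encoder, x0, xt, t, eps,
              alpha, alpha_bar, beta, sigma):
    """Eq. 4: || eps - eps_theta(x_t, t)
                 + sqrt(a_t) sqrt(1 - abar_t) / beta_t * sigma_t * G_psi ||."""
    z = encoder(x0)
    coef = (alpha[t].sqrt() * (1 - alpha_bar[t]).sqrt() / beta[t]) * sigma[t]
    shift = coef.view(-1, 1, 1, 1) * G_psi(xt, z, t)
    return F.mse_loss(eps, eps_model(xt, t) - shift)
```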
## 4 Method
In this section, we first introduce the formulation of the proposed task in Section 4.1. Then we present the overview of DisDiff in Section 4.2. After that, we present the detailed implementation of the proposed Disentangling Loss in Section 4.3 and how to balance it with reconstruction loss in Section 4.4. Finally, in Section 4.5, we discuss the relation between Disentangling Loss and the total correlation, which is a necessary condition for disentanglement.
### Disentanglement of DPM
We assume that dataset \(\mathcal{D}\) is generated by \(N\) underlying ground-truth factors, indexed by \(\mathcal{C}=\{1,2,\ldots,N\}\). For example, for Shapes3D, the underlying factors include background color, floor color, object color, object shape, object scale, and pose. There is therefore a one-to-one mapping between each sample and each tuple of factor representations, \(h:x_{0}\mapsto(f^{1},\ldots,f^{N}),\forall x_{0}\in\mathcal{D}\). The data distribution \(p(x)\) can be disentangled into \(N\) independent distributions \(\{p(x|f^{k})|k=1,\ldots,N\}\), each conditioned on only one factor, shown as the curved surfaces of image space in Figure 1(a). The DPM learns a sequence of distributions \(\{p_{t}(x)|t=T,T-1,\ldots,0\}\) converging to \(p(x)\). Such convergence is achieved by learning the corresponding gradient fields \(\{\nabla_{x}\log p_{t}(x)|t=T,T-1,\ldots,0\}\) (via \(\epsilon_{\theta}\) with parameters \(\theta\)). A disentangled DPM contains \(N\) sequences of distributions, converging to the \(N\) distributions \(\{p(x|f^{c})|c=1,\ldots,N\}\), respectively. The target of disentangling a DPM is, for each factor \(c\), to learn \(G_{\psi}^{c}\) estimating \(\nabla_{x}\log p_{t}(x|f^{c})\), which corresponds to the arrows pointing to the curved surfaces in Figure 1(a). Note that in the data sample space, such as image space, the curved surfaces indicate that the space is not well-organized and the variations of the factors are entangled. Compared to the image space, the ground-truth factor space is well-organized, and the subspaces of factors are mutually orthogonal, as shown in Figure 1(b). Disentangled representation learning aims to model the ground-truth factor space using an encoder \(E_{\phi}\), which encodes the raw data into disentangled representations. The disentangled representation is ideal for representing the conditions of the conditional distributions \(\{p(x|f^{c})|c=1,\ldots,N\}\) of the disentangled DPM. Conditioned on the disentangled
Figure 1: Illustration of the disentanglement of DPMs. (a) Diagram of image space. (b) Diagram of factor space. (c) Demonstration of sampled images. Each surface indicates the conditional distribution of a single factor, \(p(x|z^{c})\); different colors correspond to different factors (here object color, background color, and floor color). Arrows are the gradient fields \(\nabla_{x_{t}}\log p(z^{c}|x_{t})\), parameterized by \(G_{\psi}^{c}(x_{t},z^{c},t)\). The learned \(G_{\psi}^{c}(x_{t},z^{c},t)\) drives noisy data toward the conditional distribution. The black points are the sampled images, shown in (c).
representation, the disentangled DPM can flexibly generate the data samples. In this paper, we propose a method named DisDiff, as a solution for the disentanglement of a DPM.
### Overview of DisDiff
The overall framework of DisDiff is shown in Figure 2. Given a DPM pre-trained unconditionally on a dataset \(\mathcal{D}\) with \(N\) factors \(\mathcal{C}=\{1,2,\ldots,N\}\), e.g., a DDPM with parameters \(\theta\), \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\sigma_{t})\), our target is to disentangle the DPM in an unsupervised manner. Specifically, given \(x_{0}\in\mathcal{D}\), for each factor \(c\in\mathcal{C}\), the goal is to learn the disentangled representation \(z^{c}\) via an encoder \(E_{\phi}\) (with learnable parameters \(\phi\)), \(E_{\phi}(x_{0})=\{E_{\phi}^{1}(x_{0}),E_{\phi}^{2}(x_{0}),\ldots,E_{\phi}^{N}(x_{0})\}=\{z^{1},z^{2},\ldots,z^{N}\}\), together with the disentangled gradient field \(\nabla_{x_{t}}\log p(z^{c}|x_{t})\). Therefore, the conditional reverse process (conditioned on factors \(\mathcal{S}\subseteq\mathcal{C}\), with \(z^{\mathcal{S}}=\{z^{c}|c\in\mathcal{S}\}\)) can be formulated as a Gaussian distribution \(p_{\theta}(x_{t-1}|x_{t},z^{\mathcal{S}})\) with a shifted mean:
\[\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t)+\sigma_{t}\sum_{c\in\mathcal{S}} \nabla_{x_{t}}\log p(z^{c}|x_{t}),\sigma_{t}). \tag{5}\]
Since \(p(z^{c}|x_{t})\) is intractable, we use \(G_{\psi}^{c}(x_{t},z^{c},t),c\in\mathcal{C}\), with learnable parameters \(\psi\), to estimate the gradient fields \(\nabla_{x_{t}}\log p(z^{c}|x_{t}),c\in\mathcal{C}\).

With different choices of \(\mathcal{S}\), one can flexibly devise the noise approximator following the score-based conditioning trick (Song et al., 2020; Song and Ermon, 2019) as follows:
\[\epsilon_{\psi}(x_{t},z^{\mathcal{S}},t)=\epsilon_{\theta}(x_{t},t)-\sum_{c\in \mathcal{S}}\sqrt{1-\bar{\alpha}_{t}}G_{\psi}^{c}(x_{t},z^{c},t). \tag{6}\]
Then one can derive the corresponding data sample using Tweedie's Formula as:
\[\hat{x}_{0}^{\mathcal{S}}=\frac{x_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{\psi }(x_{t},z^{\mathcal{S}},t)}{\sqrt{\bar{\alpha}_{t}}}. \tag{7}\]
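Equations 6–7 in code: compose the conditional noise estimate from the sub-gradient fields of a chosen subset \(\mathcal{S}\), then recover the predicted clean sample via Tweedie's formula. This is a sketch; the network handles and the per-factor code container `z` are assumptions.

```python
import torch

def eps_conditional(eps_model, G, xt, t, z, S, alpha_bar):
    """Eq. 6: eps_psi = eps_theta - sqrt(1 - abar_t) * sum_{c in S} G^c."""
    e = eps_model(xt, t)
    scale = (1 - alpha_bar[t]).sqrt().view(-1, 1, 1, 1)
    for c in S:
        e = e - scale * G(xt, z[c], t, c)  # shared decoder, factor index c
    return e

def predict_x0(xt, t, eps, alpha_bar):
    """Eq. 7 (Tweedie): x0_hat = (x_t - sqrt(1 - abar_t) eps) / sqrt(abar_t)."""
    ab = alpha_bar[t].view(-1, 1, 1, 1)
    return (xt - (1 - ab).sqrt() * eps) / ab.sqrt()
```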
For example, one can choose only one disentangled factor, i.e., set \(\mathcal{S}=\{c\}\), and use the gradient \(\nabla_{x_{t}}\log p(z^{c}|x_{t})\) (conditioned only on \(z^{c}\)) to guide the sampling of the pre-trained DPM, resulting in the predicted sample \(\hat{x}_{0}^{c}\). We let \(\hat{x}_{0}\) denote the predicted data sample obtained with the pre-trained unconditional DPM \(\epsilon_{\theta}(x_{t},t)\).
According to the _completeness_ requirement of disentanglement, the disentangled representation \(E_{\phi}(x_{0})\) should contain the full information of sample \(x_{0}\), i.e., one should be able to reconstruct \(x_{0}\) by conditioning on all the disentangled representations \(E_{\phi}(x_{0})\). Following Equation 7, we set \(\mathcal{S}=\mathcal{C}\) to include all the disentangled representations, resulting in the derived data sample \(\hat{x}_{0}^{\mathcal{C}}\). Note that this is exactly the reconstruction case in PDAE, and we adopt the same reconstruction loss, denoted as
\[\mathcal{L}_{r}=\underset{x_{0},t,\epsilon}{\mathbb{E}}\Big{\|}\epsilon- \epsilon_{\theta}(x_{t},t)+\frac{\sqrt{\alpha_{t}}\sqrt{1-\bar{\alpha}_{t}}}{ \beta_{t}}\sigma_{t}\sum_{c\in\mathcal{C}}G_{\psi}^{c}(x_{t},z^{c},t)\Big{\|}. \tag{8}\]
Besides the above _completeness_ requirement, each disentangled representation should reflect only its corresponding factor, independently of the others, i.e., the _disentanglement_ requirement. We devise a novel loss, named the Disentangling Loss, to achieve the disentanglement of the pre-trained DPM. In the following, we present the Disentangling Loss and the total loss.
### Disentangling Loss
In this section, we provide the detailed implementation of the Disentangling Loss. As discussed above, given a sample \(x_{0}\) and its disentangled representation \(E_{\phi}(x_{0})\), we randomly sample \(c\in\mathcal{C}\) and set \(\mathcal{S}=\{c\}\) to get the conditioned sample \(\hat{x}_{0}^{c}\) (conditioned only on representation \(z^{c}\)). According to the disentanglement requirement, compared to the unconditionally predicted image \(\hat{x}_{0}\) (sampled with the pre-trained unconditional DPM \(\epsilon_{\theta}(x_{t},t)\)), \(\hat{x}_{0}^{c}\) should satisfy the
Figure 3: Demonstration of the Disentangling Loss. We first sample a factor \(c\) and decode the representation \(z^{c}\) to obtain the gradient field of the corresponding factor. With this gradient field, we obtain the predicted \(\hat{x}_{0}^{c}\) for that factor. We also obtain the prediction \(\hat{x}_{0}\) of the original pre-trained DPM. We then encode the two images into representations and calculate the Disentangling Loss.
Figure 2: Illustration of DisDiff. Gray networks indicate the pre-trained UNet of the DPM, \(\epsilon_{\theta}(x_{t},t)\). Image \(x_{0}\) is first encoded into representations \(\{z^{1},z^{2},\ldots,z^{N}\}\) of different factors by the encoder \(E_{\phi}\) (\(N=3\) in the figure). We then decode the representations with the decoder \(G_{\psi}^{c}\) to obtain the gradient field of the corresponding factor, with which we can sample the image under the corresponding condition.
following two conditions: \((i)\) the _invariant condition_: for the \(k\)-th (\(k\neq c,k\in\mathcal{C}\)) disentangled representation, \(E_{\phi}^{k}(\hat{x}_{0}^{c})\) should be the same as \(E_{\phi}^{k}(\hat{x}_{0})\); \((ii)\) the _variant condition_: for the \(c\)-th disentangled representation, the conditioned one, \(E_{\phi}^{c}(\hat{x}_{0}^{c})\), should be closer to \(E_{\phi}^{c}(x_{0})\) than the unconditioned one, \(E_{\phi}^{c}(\hat{x}_{0})\). In the following, we provide the detailed implementation of these two conditions. For the \((i)\) _invariant condition_, the \(k\)-th (\(k\neq c,k\in\mathcal{C}\)) representation should remain unchanged. We encode the two samples using \(E_{\phi}\) and compute the distance between the \(k\)-th representations as:
\[d_{k}=\|E_{\phi}^{k}(\hat{x}_{0}^{c})-E_{\phi}^{k}(\hat{x}_{0})\|. \tag{9}\]
Then the distance vector can be written as \(d=[d_{1},d_{2},\dots,d_{N}]\). Finally, we use the cross-entropy loss to identify the index \(c\) while keeping the other entries unchanged:
\[\mathcal{L}_{in}=\mathop{\mathbb{E}}_{x_{0},t,\epsilon,c}[\mathrm{CrossEntropy}(d,c)]. \tag{10}\]
We denote it as invariant loss \(\mathcal{L}_{in}\).
For the \((ii)\) _variant condition_, we first calculate the distances of the \(k\)-th representation between \(\hat{x}_{0}\) (unconditioned) and \(x_{0}\), and between \(\hat{x}_{0}^{c}\) (conditioned) and \(x_{0}\), respectively:
\[\begin{array}{rl}d_{k}^{n}&=\|E_{\phi}^{k}(\hat{x}_{0})-E_{\phi}^{k}(x_{0})\| \\ d_{k}^{p}&=\|E_{\phi}^{k}(\hat{x}_{0}^{c})-E_{\phi}^{k}(x_{0})\|\end{array} \tag{11}\]
According to condition \((ii)\), for the conditioned factor \(c\), \(d_{c}^{n}-d_{c}^{p}\) should be maximized, while the others, \(d_{k}^{n}-d_{k}^{p}\) (\(k\neq c,k\in\mathcal{C}\)), should be minimized to 0. We similarly adopt a cross-entropy loss to achieve this objective:
\[\mathcal{L}_{va}=\mathop{\mathbb{E}}_{x_{0},t,\epsilon,c}[\mathrm{CrossEntropy}(d^{n}-d^{p},c)], \tag{12}\]
where \(d^{n}=[d_{1}^{n},d_{2}^{n},\dots,d_{N}^{n}]\) and \(d^{p}=[d_{1}^{p},d_{2}^{p},\dots,d_{N}^{p}]\). We denote this the variant loss \(\mathcal{L}_{va}\). Together, \(\mathcal{L}_{in}\) and \(\mathcal{L}_{va}\) constitute the Disentangling Loss.
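Both losses can be sketched as follows, assuming the encoder returns the per-factor codes stacked as a `(batch, N, dim)` tensor and that \(\hat{x}_{0}\), \(\hat{x}_{0}^{c}\) come from the Tweedie step above; this is an illustrative implementation, not the authors' code.

```python
import torch
import torch.nn.functional as F

def disentangling_losses(encoder, x0, x0_hat, x0_hat_c, c):
    """Eqs. 9-12: cross-entropy over per-factor representation distances.
    x0_hat : unconditional prediction; x0_hat_c : conditioned on factor c;
    c      : (batch,) long tensor of sampled factor indices."""
    z, z_un, z_c = encoder(x0), encoder(x0_hat), encoder(x0_hat_c)
    d = (z_c - z_un).norm(dim=-1)            # Eq. 9, shape (batch, N)
    loss_in = F.cross_entropy(d, c)          # Eq. 10
    d_n = (z_un - z).norm(dim=-1)            # Eq. 11
    d_p = (z_c - z).norm(dim=-1)
    loss_va = F.cross_entropy(d_n - d_p, c)  # Eq. 12
    return loss_in, loss_va
```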
### Total Loss
As we need to satisfy both the _completeness_ and _disentanglement_ requirements, the total loss includes the above reconstruction loss (\(\mathcal{L}_{r}\)) and the Disentangling Loss (\(\mathcal{L}_{in}\) and \(\mathcal{L}_{va}\)). However, the weight balancing the two parts must be set carefully, for the following reason. The Disentangling Loss above is conditioned on the sampled time step of the diffusion process, and the conditioning of the diffusion model varies across time steps. For example, if \(t\) is close to \(T\), \(G_{\psi}^{c}(x_{t},z^{c},t)\) mainly conditions on \(z^{c}\), whereas if \(t\) is close to \(0\), the output mainly conditions on \(x_{t}\). Therefore, different weights should be used for the Disentangling Loss at different time steps. The difference between the inputs of the encoder reflects this change in conditioning: if \(t\) is close to \(T\), the difference between \(\hat{x}_{0}\) and \(\hat{x}_{0}^{c}\) is significant, whereas if \(t\) is close to \(0\), the difference is small. Based on this discussion, the more the output conditions on \(z^{c}\), the higher the weight should be. We thus use the MSE distance between the inputs of the encoder as the weight coefficient:
\[\gamma_{d}=\lambda\|\hat{x}_{0}-\hat{x}_{0}^{c}\|^{2}, \tag{13}\]
where \(\lambda\) is a hyper-parameter. We stop the gradient of \(\hat{x}_{0}\) and \(\hat{x}_{0}^{c}\) for calculating the weight coefficient \(\gamma_{d}\).
The total loss can be calculated as:
\[\mathcal{L}_{a}=\mathcal{L}_{r}+\gamma_{d}(\mathcal{L}_{in}+\mathcal{L}_{va}). \tag{14}\]
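The weighting and total objective of Eqs. 13–14 then amount to the following (reducing the per-sample squared distance by a mean is an implementation assumption):

```python
def total_loss(loss_r, loss_in, loss_va, x0_hat, x0_hat_c, lam):
    """Eqs. 13-14: gamma_d = lam * ||x0_hat - x0_hat_c||^2 with gradients
    stopped, then L = L_r + gamma_d * (L_in + L_va)."""
    gamma_d = lam * (x0_hat.detach() - x0_hat_c.detach()).pow(2).mean()
    return loss_r + gamma_d * (loss_in + loss_va)
```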
### Relation to Total Correlation
In this section, we show that total correlation is a necessary condition for our Disentangling Loss. Total correlation was once regarded as an important constraint for representation disentanglement. However, Locatello et al. (2019) point out that inductive biases beyond total correlation should also be considered. In this paper, we introduce the Disentangling Loss for DPMs as an additional inductive bias, and we prove that total correlation is a necessary condition for it.
Specifically, if the reconstruction loss is minimized, we have
\[\nabla_{x_{t}}\log p(z^{1},\dots,z^{N}|x_{t})=\sum_{c\in\mathcal{C}}G_{\psi}^ {c}(x_{t},z^{c},t) \tag{15}\]
On the other hand, if the Disentangling Loss is minimized, we have \(G_{\psi}^{c}(x_{t},z^{c},t)=\nabla_{x_{t}}\log p(z^{c}|x_{t})\). Since \(\sum_{c\in\mathcal{C}}\nabla_{x_{t}}\log p(z^{c}|x_{t})=\nabla_{x_{t}}\log \Pi_{c\in\mathcal{C}}p(z^{c}|x_{t})\) always holds, substituting these two equations into Eq. 15 gives
\[\nabla_{x_{t}}\log p(z^{1},\dots,z^{N}|x_{t})=\nabla_{x_{t}}\log\Pi_{c\in \mathcal{C}}p(z^{c}|x_{t}) \tag{16}\]
The equation above implies that the Fisher divergence between the joint distribution \(p(z^{1},\dots,z^{N}|x_{t})\) and the product of marginal distributions \(\Pi_{c\in\mathcal{C}}p(z^{c}|x_{t})\) is 0. Therefore the total correlation condition holds for all \(x_{t}\):
\[p(z^{1},\dots,z^{N}|x_{t})=\Pi_{c\in\mathcal{C}}p(z^{c}|x_{t}) \tag{17}\]
## 5 Experiments
In this section, we conduct experiments to demonstrate the effectiveness of DisDiff on both synthetic and real-world datasets.
### Experimental Setup
**Implementation Details**. \(x_{0}\) can live in image space or in a latent space of images. For image diffusion, we take a pre-trained DDIM as the DPM (DisDiff-IM). For latent diffusion, we can take the pre-trained KL-version latent diffusion model (LDM) or the VQ-version LDM as the DPM (DisDiff-KL and DisDiff-VQ). For the details of the network \(G_{\psi}\), we follow Zhang et al. (2022) and use the extended group normalization (Dhariwal and Nichol, 2021), applying scaling & shifting twice. The difference is that we use a learnable position embedding to indicate \(c\):
\[AdaGN(h,t,z^{c})=z_{s}^{c}(t_{s}^{c}GN(h)+t_{b}^{c})+z_{b}^{c} \tag{18}\]
where \(GN\) denotes group normalization, and \([t_{s}^{c},t_{b}^{c}],[z_{s}^{c},z_{b}^{c}]\) are obtained from linear projections: \(z_{s}^{c},z_{b}^{c}=\mathrm{linearProj}(z^{c})\) and \(t_{s}^{c},t_{b}^{c}=\mathrm{linearProj}([t,p^{c}])\). Here \(p^{c}\) is the learnable positional embedding and \(h\) is the feature map of the UNet.
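A sketch of the extended AdaGN of Eq. 18; the channel counts, group number, and the way the factor embedding \(p^{c}\) is concatenated with the time embedding are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdaGN(nn.Module):
    """Eq. 18: z_s^c * (t_s^c * GN(h) + t_b^c) + z_b^c, with (t_s, t_b) from
    a projection of [t, p^c] and (z_s, z_b) from a projection of z^c."""
    def __init__(self, channels, z_dim, t_dim, n_factors, groups=32):
        super().__init__()
        self.gn = nn.GroupNorm(groups, channels)
        self.z_proj = nn.Linear(z_dim, 2 * channels)
        self.t_proj = nn.Linear(2 * t_dim, 2 * channels)
        self.pos = nn.Parameter(torch.randn(n_factors, t_dim))  # p^c

    def forward(self, h, t_emb, z_c, c):
        # h: (B, C, H, W); t_emb: (B, t_dim); z_c: (B, z_dim); c: (B,) long
        z_s, z_b = self.z_proj(z_c).chunk(2, dim=-1)
        t_s, t_b = self.t_proj(torch.cat([t_emb, self.pos[c]], -1)).chunk(2, -1)
        expand = lambda v: v[..., None, None]                   # to (B, C, 1, 1)
        return expand(z_s) * (expand(t_s) * self.gn(h) + expand(t_b)) + expand(z_b)
```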
**Datasets** For the evaluation of disentanglement, we follow Ren et al. (2021) and use the popular public datasets Shapes3D (Kim and Mnih, 2018), a dataset of 3D shapes; MPI3D (Gondal et al., 2019), a 3D dataset recorded in a controlled environment; and Cars3D (Reed et al., 2015), a dataset of CAD models generated by color renderings. All experiments are conducted at 64x64 image resolution, the same as in the literature. For real-world data, we conduct our experiments on CelebA (Liu et al., 2015).
**Baselines & Metrics** Since DisDiff is the first diffusion-based disentanglement model, we compare its performance with VAE-based and GAN-based baselines. Specifically, the VAE-based models include FactorVAE (Kim and Mnih, 2018) and \(\beta\)-TCVAE (Chen et al., 2018). The GAN-based baselines include InfoGAN-CR (Lin et al., 2020), GANspace (GS) (Harkonen et al., 2020), LatentDiscovery (LD) (Voynov and Babenko, 2020), and DisCo (Ren et al., 2021). To account for the influence of the random seed on performance, we run each method 10 times. We use two representative metrics: the FactorVAE score (Kim and Mnih, 2018) and the DCI (Eastwood
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{Cars3D} & \multicolumn{2}{c}{Shapes3D} & \multicolumn{2}{c}{MPI3D} \\ \cline{2-7} & FactorVAE score & DCI & FactorVAE score & DCI & FactorVAE score & DCI \\ \hline \multicolumn{7}{c}{_VAE-based:_} \\ \hline FactorVAE & \(0.906\pm 0.052\) & \(0.161\pm 0.019\) & \(0.840\pm 0.066\) & \(0.611\pm 0.082\) & \(0.152\pm 0.025\) & \(0.240\pm 0.051\) \\ \(\beta\)-TCVAE & \(0.855\pm 0.082\) & \(0.140\pm 0.019\) & \(0.873\pm 0.074\) & \(0.613\pm 0.114\) & \(0.179\pm 0.017\) & \(0.237\pm 0.056\) \\ \hline \multicolumn{7}{c}{_GAN-based:_} \\ \hline InfoGAN-CR & \(0.411\pm 0.013\) & \(0.020\pm 0.011\) & \(0.587\pm 0.058\) & \(0.478\pm 0.055\) & \(0.439\pm 0.061\) & \(0.241\pm 0.075\) \\ \hline \multicolumn{7}{c}{_Pre-trained GAN-based:_} \\ \hline LD & \(0.852\pm 0.039\) & \(0.216\pm 0.072\) & \(0.805\pm 0.064\) & \(0.380\pm 0.062\) & \(0.391\pm 0.039\) & \(0.196\pm 0.038\) \\ GS & \(0.932\pm 0.018\) & \(0.209\pm 0.031\) & \(0.788\pm 0.091\) & \(0.284\pm 0.034\) & \(0.465\pm 0.036\) & \(0.229\pm 0.042\) \\ DisCo & \(0.855\pm 0.074\) & \(0.271\pm 0.037\) & \(0.877\pm 0.031\) & \(0.708\pm 0.048\) & \(0.371\pm 0.030\) & \(0.292\pm 0.024\) \\ \hline \multicolumn{7}{c}{_Diffusion-based:_} \\ \hline DisDiff-VQ (Ours) & \(\mathbf{0.976\pm 0.018}\) & \(\mathbf{0.232\pm 0.019}\) & \(\mathbf{0.902\pm 0.043}\) & \(\mathbf{0.723\pm 0.013}\) & \(\mathbf{0.617\pm 0.070}\) & \(\mathbf{0.337\pm 0.057}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons of disentanglement on the FactorVAE score and DCI disentanglement metrics (mean \(\pm\) std; higher is better). DisDiff achieves state-of-the-art performance by a large margin in almost all cases compared to all baselines, especially on the MPI3D dataset.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & FactorVAE score & DCI \\ \hline DisDiff-IM & \(0.783\) & \(0.655\) \\ DisDiff-KL & \(0.837\) & \(0.660\) \\ DisDiff-VQ & \(0.902\) & \(0.723\) \\ \hline DisDiff-VQ w \(\mathcal{L}_{in}\) & \(0.782\) & \(0.538\) \\ DisDiff-VQ w \(\mathcal{L}_{va}\) & \(0.810\) & \(0.620\) \\ DisDiff-VQ w \(\mathcal{L}_{dis}\) & \(0.653\) & \(0.414\) \\ wo detach & \(0.324\) & \(0.026\) \\ \hline constant weighting & \(0.679\) & \(0.426\) \\ loss weighting & \(0.678\) & \(0.465\) \\ \hline attention condition & \(0.824\) & \(0.591\) \\ wo pos embedding & \(0.854\) & \(0.678\) \\ wo orth embedding & \(0.807\) & \(0.610\) \\ \hline latent number \(N\)= 6 & \(0.865\) & \(0.654\) \\ latent number \(N\)= 10 & \(0.902\) & \(0.723\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study of DisDiff on the DPM type, loss components, loss weighting, condition type, and latent number.
& Williams, 2018). However, since \(\{z^{c}\}\) are vector-wise representations, we follow Du et al. (2021) and perform PCA as post-processing on the representations before evaluation.
### Main Results
We conduct the following experiments to verify the disentanglement ability of the proposed DisDiff model. We regard the learned representations \(\{z^{c}\}\) as the disentangled ones and use the popular metrics from the disentangled representation literature for evaluation.
The quantitative comparisons of disentanglement under different metrics are shown in Table 1. As shown in the table, DisDiff outperforms the baselines, demonstrating its superior disentanglement ability. DisDiff outperforms the VAE-based methods because they suffer from a trade-off between generation and disentanglement (Lezama, 2018), while DisDiff does not. As for the GAN-based methods, disentanglement is learned by exploring the latent space of a pre-trained GAN, so their performance is limited by that latent space. DisDiff instead leverages the gradient field of the data space to learn disentanglement and has no such limitation. In addition, DisDiff decomposes the disentanglement problem into 1000 sub-problems at different time steps, which reduces the difficulty.
### Qualitative Results
To analyze the disentanglement of DisDiff qualitatively, we swap the representations \(\{z^{c}\}\) of two images one by one and sample images conditioned on the swapped representations. We follow LDM-VQ to sample images in \(200\) steps. For the popular disentanglement benchmarks, we take Shapes3D as an example. As shown in Figure 4, DisDiff successfully learns pure factors. Compared with the VAE-based methods, DisDiff has better image quality. For real-world datasets, since there are no ground-truth factors, we show qualitative results on CelebA. As demonstrated in Figure 5, DisDiff also achieves good disentanglement on real-world data. Note that, in contrast to DisCo (Ren et al., 2021), DisDiff has the ability to reconstruct the input, which is not available in DisCo.
### Ablation Study
To analyze the effectiveness of the proposed parts of DisDiff, we design ablation studies along five aspects: DPM type, Disentangling Loss, loss weighting, condition type, and latent number. We use Shapes3D as the dataset for these ablation studies.
**DPM type** The disentanglement of DisDiff derives from decomposing the gradient field of the diffusion model, so the diffusion space influences the performance of DisDiff. Taking Shapes3D as an example, it is hard for the model to learn shape and scale in image space, but much easier in the latent space of an auto-encoder. We therefore compare the performance of DisDiff with different diffusion types: the image diffusion model, e.g., DDIM (DisDiff-IM); the KL-version latent diffusion model (DisDiff-KL); and the VQ-version latent diffusion model, e.g., VQ-LDM (DisDiff-VQ).
Figure 4: Qualitative results on Shapes3D. The source images provide the representations of the generated images; the target image provides the representations for swapping. The other rows of images are generated by swapping the representation of the corresponding factor on Shapes3D. DisDiff learns a pure factor in each representation. The learned factors are azimuth, background color, floor color, object color, and object shape, respectively.
Figure 5: Qualitative results on CelebA. Each row of images is generated by swapping the representation of the corresponding factor on CelebA. DisDiff learns a pure factor in each representation. The learned factors are bangs, skin color, expression, and hair, respectively.
As shown in Table 2, the LDM-version DisDiff outperforms the image-version DisDiff, as expected. In addition, KL-LDM has a more complex latent space than VQ-LDM, and DisDiff-VQ outperforms DisDiff-KL.
**Disentangling Loss** The Disentangling Loss is composed of two parts: the invariant loss \(\mathcal{L}_{in}\) and the variant loss \(\mathcal{L}_{va}\). To verify the effectiveness of each part, we ablate each of them in turn (Table 2). \(\mathcal{L}_{in}\) encourages invariance of the representations that are not sampled, i.e., the sampled factor does not affect the representations of other factors (\(z^{k},k\neq c\), of the generated \(\hat{x}_{0}^{c}\)). On the other hand, \(\mathcal{L}_{va}\) encourages the representation of the sampled factor (\(z^{c}\) of the generated \(\hat{x}_{0}^{c}\)) to be close to the corresponding representation of \(x_{0}\). As shown in Table 2, \(\mathcal{L}_{in}\) mainly encourages the disentanglement, and \(\mathcal{L}_{va}\) further constrains the model and improves the performance. Note that the Disentangling Loss is optimized w.r.t. \(G_{\psi}\) but not \(E_{\phi}\). If the loss is optimized on both modules, as shown in Table 2 (wo detach), DisDiff fails to achieve disentanglement. The reason is that the Disentangling Loss influences the encoder, so DisDiff fails to reconstruct the input image.
**Loss weighting** As introduced above, since the conditioning varies across time steps, we adopt the encoder-input difference as the weight coefficient. Here we explore other options to verify its effectiveness. We consider two alternative weighting types: constant weighting and loss weighting. The first is the traditional way of weighting; the second balances the scales of the Disentangling Loss and the diffusion loss. From Table 2, both types of weighting hurt the performance, to different extents.
**Condition type** DisDiff follows PDAE (Zhang et al., 2022) and Dhariwal and Nichol (2021) in adopting AdaGN for injecting the condition. However, there is another option in the literature: cross-attention. As shown in Table 2, cross-attention hurts the performance, but not by much. We conjecture that the reason is that the condition is only a single token, which limits the ability of attention. We use a learnable orthogonal positional embedding to indicate different factors. As shown in Table 2, both removing the positional embedding (wo pos embedding) and using a traditional learnable positional embedding (wo orth embedding) hurt the performance. The reason is that orthogonal embeddings remain distinct from each other throughout training.
**Latent number** The number of latents is an important hyper-parameter set in advance. We conduct an ablation study on it. As shown in Table 2, the latent number has only a limited influence on the performance.
### Partially Conditioned Sampling
As discussed in Section 4.2, DisDiff can sample conditioned on a subset of the factors. Specifically, we can use Equation 6 to sample images conditioned on a factor set \(\mathcal{S}\). Taking Shapes3D as an example, when DisDiff samples images conditioned on the background color being red, we obtain a set of images with a red background and all other factors sampled randomly. From Figure 6, we see that DisDiff can condition on individual factors on Shapes3D. In addition, DisDiff also has this ability on a real-world dataset (CelebA), as shown in Figure 7. DisDiff samples only the information specified by the condition.
## 6 Conclusion
In this paper, we demonstrate a new task: the disentanglement of a DPM. By decomposing a DPM into several disentangled
Figure 6: Partially conditioned sampling on Shapes3D. The target image provides the representation for the partially conditioned sampling. Each row of images is generated by imposing a single gradient field of the corresponding factor on the pre-trained DPM. DisDiff samples images conditioned on only a single factor, so each sampled image has one fixed factor, e.g., the images of factor 1 have the same background color as the target. The conditioned factors are: azimuth, background color, floor color, object color and object shape, respectively.
Figure 7: Partially conditioned sampling on CelebA. The target image provides the representations for the sampling. The images of each row are conditioned on a single factor. DisDiff samples images conditioned on only a single factor; therefore, the sampled images in the same row share one fixed factor, e.g., the images of factor 1 have the same bangs as the target. The conditioned factors are bangs, skin color, expression, and hair, respectively.
gradient fields, we can improve the interpretability of DPMs. To solve the task, we build an unsupervised diffusion-based disentanglement framework named DisDiff. DisDiff learns a disentangled representation of the input image in the diffusion process. In addition, for each factor, DisDiff learns a disentangled gradient field, which brings new properties to the disentanglement literature. DisDiff adopts disentangling constraints at all timesteps, which is a new inductive bias. Beyond image editing, with the disentangled DPM we can also sample conditioned on part of the information by superposing the corresponding sub-gradient fields. For future work, applying DisDiff to more general conditional DPMs is a direction worth exploring. Besides, applying the proposed disentangling method to pre-trained conditional DPMs would make it more flexible.
|
2309.17365 | Grain boundary segregation and phase separation in ceria-zirconia from
atomistic simulation | Doping is the most common strategy employed in the development of new and
improved materials. However, predicting the effects of doping on the
atomic-scale structure of a material is often difficult or limited to high-end
experimental techniques. Doping can induce phase separation in a material,
undermining the material's stability. A further complication is that dopant
atoms can segregate to interfaces in a material such as grain boundaries (GBs),
with consequences for key macroscopic properties of the material such as its
conductivity. Here, we describe a computational methodology based on semi-grand
canonical Monte Carlo which can be used to probe these phenomena at the atomic
scale for metal oxide solid solutions. The methodology can provide precise
predictions of the thermodynamic conditions at which phase separation occurs.
It can also provide the segregation patterns exhibited by GBs at given
conditions. We apply the methodology to one of the most important catalytic
materials, ceria-zirconia. Our calculations reveal an interesting richness in
the GB segregation in this system. Most GBs we examined exhibited continuous
increases in Zr segregation upon Zr doping, with a concomitant reduction in the
formation enthalpies of the GBs. However, a few GBs exhibited no segregation at
low temperatures. We also observed evidence of first-order complexion
transitions in some GBs. | Tom L. Underwood, Susanna Vigorito, Marco Molinari, John Purton, Nigel B. Wilding, John T. S. Irvine, Stephen C. Parker | 2023-09-29T16:15:42Z | http://arxiv.org/abs/2309.17365v2 | # Grain boundary segregation and phase separation in ceria-zirconia from atomistic simulation
###### Abstract
Doping is the most common strategy employed in the development of new and improved materials. However, predicting the effects of doping on the atomic-scale structure of a material is often difficult or limited to high-end experimental techniques. Doping can induce phase separation in a material, undermining the material's stability. A further complication is that dopant atoms can segregate to interfaces in a material such as grain boundaries (GBs), with consequences for key macroscopic properties of the material such as its conductivity. Here, we describe a computational methodology based on semi-grand canonical Monte Carlo which can be used to probe these phenomena at the atomic scale for metal oxide solid solutions. The methodology can provide precise predictions of the thermodynamic conditions at which phase separation occurs. It can also provide the segregation patterns exhibited by GBs at given conditions. We apply the methodology to one of the most important catalytic materials, ceria-zirconia. Our calculations reveal an interesting richness in the GB segregation in this system. Most GBs we examined exhibited continuous increases in Zr segregation upon Zr doping, with a concomitant reduction in the formation enthalpies of the GBs. However, a few GBs exhibited no segregation at low temperatures. We also observed evidence of first-order complexion transitions in some GBs.
## I Introduction
It is well known that the atomic-scale structure of grain boundaries (GBs) in a material can strongly influence macroscopic properties of the material such as strength and conductivity [1; 2; 3]. For this reason, there is considerable interest in being able to control GB structure at the atomic scale in order to design new materials with superior properties [4; 5; 6]. One way in which GB structure can be controlled is by adding dopants to a material [5; 6]. Depending on the temperature, dopant concentration, and the particular GB type, dopant atoms may _segregate_ to GBs, bringing about changes in their structure and, ultimately, the macroscopic properties of the material [2; 7]. However, this phenomenon is nontrivial, for which reason there is interest in using computer simulation to obtain accurate maps of GB properties over a range of thermodynamic parameters [8; 9].
Analytical and field-based [2; 3; 9; 10] models have added great insight into GB properties including segregation in solid solutions. However, they cannot provide a detailed description of GB structure at the atomic scale. On the other hand, density-functional theory (DFT) [11; 12] is in principle capable of providing such detail with high accuracy [13]. However, DFT is in practice limited to small system sizes on account of its computational expense. A middle ground is found in using interatomic potentials, which strike a balance between accuracy and computational cost. Using interatomic potentials, molecular dynamics (MD)[14] and Monte Carlo (MC)[15; 16] simulation have been used to study GBs in solid solutions. While MD is a powerful method for quantifying time-dependent properties such as defect diffusion coefficients, in solid solutions it can prove intractable with MD to reach GB structures corresponding to thermodynamic equilibrium due to long timescales associated with dopant diffusion. MC can sidestep this issue by utilising _unphysical_ dynamics which enable equilibrium to be reached quickly. For instance, to study segregation in metal oxide solid solutions, dynamics which involve swapping the positions of atoms belonging to different elements has been successfully utilised [17; 18; 19; 20; 21; 22; 23; 24]. Moreover, the semi-grand canonical MC (SGCMC)[15; 25] method, which utilises dynamics where the elements of atoms are _transformed_ in-place during the simulation, has proved to be a powerful probe of equilibrium GB structures in alloys [26; 27; 28; 29; 30; 31; 32; 33; 34; 35].
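As a concrete, highly simplified illustration of the SGCMC dynamics described above, the sketch below transmutes a randomly chosen cation between Ce and Zr and accepts the move with the Metropolis probability \(\min\{1,\exp[-\beta(\Delta E-\Delta\mu\,\Delta N_{\rm Zr})]\}\). The `energy` callable stands in for an interatomic-potential evaluation, and all names are illustrative assumptions rather than code used in this work.

```python
import math
import random

def sgcmc_step(species, energy, beta, dmu):
    """One semi-grand canonical MC move on a Ce/Zr cation sublattice.

    species : list of 'Ce'/'Zr' labels, one per cation site
    energy  : callable returning the configurational energy of `species`
              (an interatomic-potential evaluation in practice; assumed here)
    beta    : 1/(kB*T)
    dmu     : chemical-potential difference mu_Zr - mu_Ce
    """
    i = random.randrange(len(species))
    old = species[i]
    new = 'Zr' if old == 'Ce' else 'Ce'
    e_old = energy(species)
    species[i] = new            # propose an in-place transmutation
    e_new = energy(species)
    dn_zr = 1 if new == 'Zr' else -1   # change in the number of Zr atoms
    # Metropolis acceptance in the semi-grand canonical ensemble
    if random.random() < math.exp(min(0.0, -beta * (e_new - e_old - dmu * dn_zr))):
        return True             # move accepted
    species[i] = old            # otherwise revert the transmutation
    return False
```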
While, as mentioned above, the SGCMC method has been used to study GBs in alloys, as far as we are aware it has yet to be applied to GBs in _metal oxide solid solutions_. Given the importance of this class of material to technology, and the success SGCMC has had in providing insight into alloy GBs, this is something we believe deserves attention. Moreover, while the use of SGCMC to study GBs is a relatively recent development, SGCMC has a long history of being used to calculate phase diagrams of mixtures, a task for which it is particularly well suited [15]. SGCMC has even been used in conjunction with _free energy methods[36; 37]_ - i.e. methods which utilise adaptive algorithms to learn free energy landscapes in the vicinity of phase transitions - in order to calculate phase diagrams to high precision [38; 39]. An |
2309.03310 | Metal-THINGS: a panchromatic analysis of the local scaling relationships
of the dwarf irregular galaxy NGC 1569 | We investigate several panchromatic scaling relations (SRs) for the dwarf
irregular galaxy NGC 1569 using IFU data from the Metal-THINGS Survey. Among
the spatially resolved properties analyzed, we explore SRs between the stellar
mass, SFR, molecular gas, total gas, baryonic mass, gas metallicity, gas
fraction, SFE and effective oxygen yields. Such multiwavelength SRs are
analyzed at a spatial resolution of 180 pc, by combining our IFU observations
with data from the surveys THINGS, CARMA, and archival data from DustPedia.
Although we recover several known relations, our slopes are different to
previously reported ones. Our star formation main sequence, Kennicutt-Schmidt
(KS) and molecular KS relations show higher SFRs, lower scatter, and higher
correlations, with steeper (1.21), and flatter slopes (0.96, 0.58)
respectively. The shape of the SRs including metallicity, stellar mass, and gas
fraction are flat, with an average value of 12+log(O/H) $\sim$ 8.12 dex. The
baryonic mass vs effective oxygen yields, and the stellar, gas and baryonic
mass vs SFE show higher dispersions and lower correlations. Since we use the
dust mass as a tracer of gas mass, we derive the Dust-to-Gas Ratio and the CO
luminosity-to-molecular gas mass conversion factors, showing differences of
0.16 and 0.95 dex for the total and molecular gas surface density,
respectively, in comparison to previously reported values. We use a self
regulated feedback model to conclude that stellar feedback plays an important
role generating outflows in NGC 1569. | L. E. Garduño, J. Zaragoza-Cardiel, M. A. Lara-López, I. Zinchenko, M. C. Zerbo, M. E. De Rossi, Jacopo Fritz, S. Dib, L. Pilyugin, M. Sánchez-Cruces, V. Heesen, S. P. O'Sullivan, O. López-Cruz, M. Valerdi, M. Rosado | 2023-09-06T18:42:53Z | http://arxiv.org/abs/2309.03310v1 | Metal-THINGS: a panchromatic analysis of the local scaling relationships of the dwarf irregular galaxy NGC 1569
###### Abstract
We investigate several panchromatic scaling relations (SRs) for the dwarf irregular galaxy NGC 1569 using IFU data from the Metal-THINGS Survey. Among the spatially resolved properties analyzed, we explore SRs between the stellar mass, SFR, molecular gas, total gas, baryonic mass, gas metallicity, gas fraction, SFE and effective oxygen yields. Such multiwavelength SRs are analyzed at a spatial resolution of 180 pc, by combining our IFU observations with data from the surveys THINGS, CARMA, and archival data from DustPedia. Although we recover several known relations, our slopes are different to previously reported ones. Our star formation main sequence, Kennicutt-Schmidt (KS) and molecular KS relations show higher SFRs, lower scatter, and higher correlations, with steeper (1.21), and flatter slopes (0.96, 0.58) respectively. The shape of the SRs including metallicity, stellar mass, and gas fraction are flat, with an average value of 12+log(O/H) \(\sim\) 8.12 dex. The baryonic mass vs effective oxygen yields, and the stellar, gas and baryonic mass vs SFE show higher dispersions and lower correlations. Since we use the dust mass as a tracer of gas mass, we derive the Dust-to-Gas Ratio and the CO luminosity-to-molecular gas mass conversion factors, showing differences of 0.16 and 0.95 dex for the total and molecular gas surface density, respectively, in comparison to previously reported values. We use a self regulated feedback model to conclude that stellar feedback plays an important role generating outflows in NGC 1569.
keywords: galaxies: abundances, galaxies: dwarf, galaxies: irregular, galaxies: starburst, galaxies: star formation, galaxies: statistics
## 1 Introduction
The evolutionary path of galaxies is driven by distinct processes, at different time scales. The history of star formation in galaxies, gas accretion, mergers or gas inflows/outflows -among others- can be studied through the current galaxy properties, such as star formation rate (SFR), gas metallicity (Z), stellar mass (M\({}_{\star}\)), gas mass (M\({}_{\rm gas}\)), baryonic mass (M\({}_{\rm bar}\)), star formation efficiency (SFE) and effective yields (\(\Upsilon_{\rm eff}\)). Since all of these properties contain information about the past and current evolution of galaxies, scaling relations (SRs)
between these properties are an important tool to understand the most important mechanisms that drive galaxy evolution.
During the last few decades, several global SRs were explored using fiber spectroscopic surveys, revealing critical physical properties of galaxies in the local universe. On one hand, using the Sloan Digital Sky Survey (SDSS, York et al., 2000) and the Galaxy and Mass Assembly Survey (GAMA, Driver et al., 2011) data, the relations between stellar mass vs. SFR (M\({}_{\star}\)-SFR, Brinchmann et al., 2004; Lara-Lopez et al., 2013b) and stellar mass vs. metallicity (M\({}_{\star}\)-Z, Tremonti et al., 2004; Lara-Lopez et al., 2013a) were established for thousands of galaxies. On the other hand, additional SRs were established, such as the baryonic mass-effective yield (M\({}_{\rm bar}\)-\(Y_{\rm eff}\), Tremonti et al., 2004; Lara-Lopez et al., 2019), the gas fraction-metallicity (\(\mu\)-Z, Pilyugin et al., 2004; Lara-Lopez et al., 2019), the gas fraction-effective yield (\(\mu\)-\(Y_{\rm eff}\), Dalcanton, 2007), and the relations of baryonic mass, stellar mass and gas mass with the star formation efficiency (M\({}_{\rm bar}\)-SFE, M\({}_{\star}\)-SFE, M\({}_{\rm gas}\)-SFE, Haynes et al., 2011; Lara-Lopez et al., 2019). All the SRs mentioned above help to understand part of the galaxy evolution process by analyzing the gas inside galaxies and how different feedback processes affect their evolutionary path.
Since stars form out of collapsing gas clouds, a correlation is expected between the surface densities of star formation and gas. Indeed, this is described by the Kennicutt-Schmidt (KS) relation, an empirical scaling relation between the gas surface density and the SFR surface density given by \(\Sigma_{\rm SFR}\) = a \(\Sigma_{\rm gas}\)\({}^{\rm n}\)(Kennicutt, 1989) which was first proposed in the pioneering work of Schmidt (1959). A classic example of the global KS relation is presented in Kennicutt (1998), who also demonstrate the importance of radio and infrared observations in order to get robust estimations of gas mass and SFRs, respectively.
With the advent of new instrumentation such as Integral Field Units (IFUs), new possibilities of study have opened up; what was first done on a global scale can now be done on a spatially resolved scale. Recently, IFU surveys have established local SRs, such as the stellar mass surface density vs. metallicity relation (\(\Sigma_{\star}-Z\), Rosales-Ortega et al., 2012; Barrera-Ballesteros et al., 2016; Baker et al., 2023), the stellar mass surface density vs. SFR surface density relation (\(\Sigma_{\star}-\Sigma_{\rm SFR}\), Sanchez et al., 2021; Pessa et al., 2022; Baker et al., 2023) and the spatially resolved gas fraction vs. metallicity relation (\(\mu-Z\), Barrera-Ballesteros et al., 2018). Although the spatially resolved oxygen yields have not been studied by many authors, Vilchez et al. (2019) explored resolved effective yield profiles for two galaxies (NGC 5457 and NGC 628). In all of the above cases, it is important to note that local SRs mimic the global ones, which suggests that processes understood at local scales could be the key to understanding global galaxy evolution processes.
The KS relation has also been analyzed locally and it preserves a similar shape to the global relation (Kennicutt et al., 2007; Blanc et al., 2009; Casasola et al., 2022; Pessa et al., 2022). Other studies have extended the relation by considering different star formation regions in a vast range of galactic environments, from the outer disks of dwarf galaxies to spiral disks, merging galaxies and individual molecular clouds (Dib, 2011; Dib et al., 2017; Shi et al., 2018; Pessa et al., 2021).
By analyzing the spatially resolved KS relation, some authors conclude that the gas mass (usually molecular gas) could play the most important role in regulating the SFR, suggesting that this could be a more fundamental relation than the \(\Sigma_{\star}-\Sigma_{\rm SFR}\) relation (Morselli et al., 2020; Ellison et al., 2021; Pessa et al., 2021, 2022). However, this is still a matter of debate. Indeed, other authors (_e.g._, Dib et al., 2017) stress the importance of the \(\Sigma_{\star}-\Sigma_{\rm SFR}\) relation, since the gravity of stars, over scales of a few hundred parsec in galactic disks, is as important as that of gas (and very often dominates). As Dib et al. (2017) (and references therein) mention, this implies that existing stars can play a fundamental role in generating large-scale gravitational instabilities which in turn lead to star formation.
The correlation between M\({}_{\star}\), Z and SFR led to the simultaneous discovery of a 3-dimensional structure (whose shape is still debated), referred to as the Fundamental Plane (Lara-Lopez et al., 2010), and the Fundamental Metallicity Relation (FMR, Mannucci et al., 2010), highlighting the importance of the already known galactic gas inflows and outflows. Once observational capabilities at radio wavelengths reached the point at which large surveys could be pursued, the consequent step was to explore the relation between the gas mass (atomic, molecular or both) and the metallicity (_e.g._, Lara-Lopez et al., 2013a). Some studies confirmed the correlation between these properties and even showed that it may be more fundamental, driving the FMR (Bothwell et al., 2013). Deeper studies established that the dependence on gas mass is probably stronger than that on SFR, and thus the underlying FMR is between stellar mass, metallicity and gas mass (Bothwell et al., 2016a). The latter is supported by Bothwell et al. (2016b), who affirm that the metallicity dependence on SFR derives from the dependence on the molecular gas content given by the KS relation. However, this is also a matter of debate. As other authors have mentioned, there is a dual dependence of the SFR/SFE on metallicity. One is due to the metal content of the star-forming gas, which governs its ability to cool, and the second is via metallicity-dependent feedback (stellar winds, Dib, 2011; Dib et al., 2011). The two effects play against each other: a lower metallicity implies less cooling and thus less molecular gas, while a lower metallicity also means weaker stellar winds, so collapsing clouds undergo less gas expulsion, which leads to more star formation. Which of these effects wins, and their relative importance, is yet to be quantified in detail.
The total gas mass is obtained from the mass in atomic gas, M\({}_{\rm HI}\), which is derived from HI observations, in combination with the molecular gas, M\({}_{\rm H_{2}}\), whose determination relies on measurements of CO emission and on the CO luminosity-to-molecular gas mass conversion factor, \(\alpha_{\rm CO}\). For simplicity, the \(\alpha_{\rm CO}\) factor was long considered constant for all types of galaxies (_e.g._, Kennicutt, 1998). However, there is evidence that \(\alpha_{\rm CO}\) can vary with gas density and metallicity (Rubio et al., 1993; Lequeux et al., 1994; Bolatto et al., 2013). In particular, low-metallicity galaxies have very faint CO emission, which makes its detection difficult and hinders a reliable calibration of the conversion factors. As also pointed out in Bolatto et al. (2013), gas-rich galaxies with active star formation and low metallicity usually have very faint CO emission. A very particular case is that of low-mass dwarf irregular galaxies, which show almost no CO emission and typically have low metallicities (Draine et al., 2007).
Both the metallicity and the dust grains have been studied extensively, since they provide important information for understanding the chemical evolution of galaxies. Indeed, it is thought that metals in the ISM are encapsulated inside dust grains. Under the proper physical processes, _e.g._, supernova shocks, the dust grains can be destroyed, carrying metals back to the ISM. As mentioned in Remy-Ruyer et al. (2014), the amount of metals locked up in dust grains can be quantified by the Dust-to-Gas Ratio (DGR). Thus, the DGR is expected to depend strongly on metallicity and to change from one galaxy to another (_e.g._, because the ISM density also changes, Clark et al., 2023), especially in the low-metallicity regime (Remy-Ruyer et al., 2014). Some studies, such as Leroy et al. (2011) and Sandstrom et al. (2013), developed methodologies to compute not only the DGR but also \(\alpha_{\rm CO}\), taking into account their mutual dependence on metallicity and obtaining reliable results.
In the local Universe, dwarf galaxies look like scaled-down versions of high-redshift massive galaxies due to their similarity in star formation, metallicity, physical size, high mass fractions or morphology (_e.g._, thickness and clumpiness) (Elmegreen et al., 2012, 2013; Moftino Flores et al., 2021). The variety of dwarf galaxies is extensive and there are different types (elliptical, spheroidal, irregular, blue compact, ultra faint, ultra compact). Dwarf irregular galaxies are thought to be the most common in the universe, regularly found in isolation, with the particularity that they are still forming stars (Ellis, 1997; Spaans and Norman, 1997; Tolstoy, 2000; Gallart et al., 2015; Simon, 2019). Nowadays, there is an effort to find a connection among dwarf galaxies, establishing that irregular dwarfs evolve into Blue Compact Dwarfs (BCDs) through several starbursts that enrich the ISM until their gas is exhausted, finally fading into gas-free dwarf spheroidals (Kong et al., 2019). Dwarf irregular galaxies are important since they reach the extreme low regime of mass and metallicity. Since they were possibly the first structures to form (Belokurov et al., 2006; Mateo, 1998), they provide unique clues for exploring early galaxy formation and evolution.
We present an analysis of the irregular dwarf galaxy NGC 1569 using local scaling relationships. This galaxy is a particularly interesting object since it is a starburst with low stellar mass and low metallicity. A summary of its relevant physical properties is given in Table 1. We use IFU spectroscopy observations from the Metal-THINGS Survey (Lara-Lopez et al., 2021, 2023) in combination with other multiwavelength data. This combination of data allows us to study, in a spatially resolved way, SRs involving both the stellar and the gas components: \(\Sigma_{*}-\Sigma_{\rm SFR}-\) Z, \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}-\) Z. Given that NGC 1569 has very limited CO emission, which results in a poor estimation of M\({}_{\rm H_{2}}\) and M\({}_{\rm gas}\), we compute the dust mass (M\({}_{\rm dust}\)) by fitting the SED to multiwavelength data, and then estimate the DGR following methods already reported in the literature to obtain a reliable M\({}_{\rm gas}\).
Since the galaxies of the Metal-THINGS survey are nearby (D \(<\) 15 Mpc), the physical spatial resolution of the survey is up to two orders of magnitude better than that of other surveys such as CALIFA (Sanchez et al., 2012) and MaNGA (Bundy et al., 2015) at the same wavelength coverage, which implies that galaxy properties are analyzed on scales of parsecs. The Field of View (FOV) of the George Mitchell Spectrograph allows us to cover almost the entire galaxy, in contrast with previous studies in which only the central structure was analyzed (Westmoquette et al., 2007, 2007, 2020). Another important point is that neither MaNGA nor CALIFA have studied the metallicity and SFR in detail in the low-mass galaxy regime, because they are biased towards higher-mass galaxies (log(M/M\({}_{\odot}\)) > 9). By analyzing galaxies such as NGC 1569, we fill the gap towards the low-mass regime.
The structure of this work is as follows. In §2, we describe our data acquisition, and in §3, the estimation of the main galaxy properties. In §4.1, we derive the spatially resolved SRs for the galaxy properties \(\Sigma_{*}\), \(\Sigma_{\rm gas}\), \(\Sigma_{\rm SFR}\) and Z. We explore the local relations for oxygen yields (Y\({}_{\rm eff}\)) and SFE in §4.2. Finally, in §§5 and 6, we present a general discussion of this work and a summary of our conclusions, respectively.
## 2 Observations and data reduction
### Metal-THINGS
In this paper, we use data from the Metal-THINGS survey (Lara-Lopez et al., 2021, 2023), which is obtaining IFU spectroscopy of 34 nearby galaxies mapped in HI with the Very Large Array (VLA) (The THINGS survey, Walter et al., 2008).
Metal-THINGS is an ongoing survey currently observing at the McDonald Observatory with the 2.7m Harlan J. Smith Telescope using the George Mitchell Spectrograph (GMS), formerly known as VIRUS-P (Hill et al., 2008). The GMS is an IFU with 246 fibers, each with a diameter of 4.2 arcsec, arranged in a square array of 100 x 102 arcsec. The planning and development of the observing program, as well as the technical details and general procedures, are described in Lara-Lopez et al. 2023 (in preparation). For NGC 1569, our observations and methodology are the same as those described in Lara-Lopez et al. (2021).
We observed two pointings of NGC 1569 (Fig. 1); the first one (left) was observed in January 2018 and the second one (right) in October 2020, both using the red setup, with a spectral coverage from 4400 to 6800 Å and a spectral resolution of 5.3 Å (low-resolution grating VP1). Since the GMS has a 1/3 filling factor, each pointing is observed in three dither positions to ensure a high surface coverage of the galaxy. The motivation for taking the second pointing is to investigate the faint tail of this galaxy (the tail structure extending towards the center can be seen in the upper left corner of pointing 2).
During the observations, we integrated 15 minutes per dither, followed by 5 minutes of sky exposure. This process is repeated three times until a total integration time of 45 minutes is reached per dither (2.25 hours of total time for the three dithers). The observing nights for NGC 1569 were clear and had an average seeing of 2 arcsecs.
As usual, every night exposures of bias, calibration lamps (Neon + Argon), and sky flats were taken, as well as a calibration
\begin{table}
\begin{tabular}{l l l}
\hline \hline
 & \multicolumn{2}{c}{NGC 1569} \\
\hline
Distance & 3.25 Mpc & Tully et al. (2013) \\
\hline
Stellar mass & 8.61 log(M\({}_{\odot}\)) & Leroy et al. (2019) \\
 & 8.6 log(M\({}_{\odot}\)) & This work \\
\hline
Dust mass & 4.88 log(M\({}_{\odot}\)) & Young et al. (1989) \\
 & 5.3 log(M\({}_{\odot}\)) & This work \\
\hline
Gas mass & 8.7 log(M\({}_{\odot}\)) & Young et al. (1989) \\
 & 8.33 log(M\({}_{\odot}\)) & This work \\
\hline
Metallicity & 8.16-8.19 & Kobulnicky \& Skillman (1997) \\
12+log(O/H) & 8.12 & This work \\
\hline
SFR & 0.95 M\({}_{\odot}\) yr\({}^{-1}\) & Leroy et al. (2019) \\
 & 0.91 M\({}_{\odot}\) yr\({}^{-1}\) & DustPedia \\
 & 0.48 M\({}_{\odot}\) yr\({}^{-1}\) & This work \\
 & 0.27 M\({}_{\odot}\) yr\({}^{-1}\) & CIGALE output \\
\hline
Scale & 0.015 kpc/arcsec & This work \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Global properties of NGC 1569. The adopted distance is based on the TRGB method. We compare values computed from our own data with values from the literature. One SFR estimate is taken from the data reported by DustPedia; another is taken from the output modules of CIGALE in our own runs. The scale is computed according to the adopted distance.
star observed with six dither positions in order to obtain full flux coverage for the flux calibration.
For the data reduction we use P3D 1 for bias subtraction, flat-frame correction, and wavelength calibration. Next, our own Python routines are used to perform the sky subtraction and dither combination. Finally, we use the common tasks of IRAF 2 (Tody, 1986) for the flux calibration. First, we use the _Standard_ task to extract the spectrum of the standard star observed on the same night and under the same observational conditions. Second, we use the _Sensfunc_ task to compute both the sensitivity and extinction functions. Finally, we use the _Calibrate_ task to apply the sensitivity curve and extinction correction to the spectra.
Footnote 1: [http://p3d.sourceforge.io](http://p3d.sourceforge.io)
Footnote 2: [http://iraf.nao.ac.jp/](http://iraf.nao.ac.jp/)
We fit the stellar continuum of all flux-calibrated spectra using STARLIGHT (Cid Fernandes et al., 2005; Mateus et al., 2006; Asari et al., 2007), using 45 simple stellar population (SSP) models from the evolutionary synthesis models of Bruzual & Charlot (2003), with ages from 1 Myr up to 13 Gyr and metallicities Z = 0.005, 0.02 and 0.05. The stellar mass is extracted as one of the STARLIGHT outputs. After subtracting the fitted stellar continuum from the spectra, the emission lines are measured using Gaussian line-profile fits, following the methodology of Zinchenko et al. (2016, 2019). This fitting allows us to analyze the spectral regions that contain important emission lines such as H\(\beta\); [OIII] \(\lambda\lambda\)4959, 5007; [OI] \(\lambda\lambda\)6300, 6364; H\(\alpha\); [NII] \(\lambda\lambda\)6548, 6583; and [SII] \(\lambda\lambda\)6716, 6731.
In Fig. 2, we show the BPT diagnostics (Baldwin et al., 1981) of our fibers using the classifications of Kauffmann et al. (2003) and Kewley et al. (2001), imposing a cut of S/N \(>\) 3 for the H\(\alpha\), H\(\beta\), [OIII] \(\lambda\)5007 and [NII] \(\lambda\)6583 emission lines. We obtain a total of 750 (98.5%) SF, 5 (0.6%) composite and 6 (0.8%) Active Galactic Nuclei (AGN)-like fibers.
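For reference, the two demarcation curves are simple analytic functions of the line ratios. A minimal per-fiber classification in Python (function and variable names are our own) could read:

```python
import numpy as np

def bpt_class(n2ha, o3hb):
    """Classify points on the [NII]-based BPT diagram.

    n2ha : log10([NII]6583 / Halpha)
    o3hb : log10([OIII]5007 / Hbeta)
    Returns an array of 'SF', 'Composite' or 'AGN' labels.
    """
    kauffmann = 0.61 / (n2ha - 0.05) + 1.30   # Kauffmann et al. (2003)
    kewley    = 0.61 / (n2ha - 0.47) + 1.19   # Kewley et al. (2001)
    sf  = (o3hb < kauffmann) & (n2ha < 0.05)  # below the empirical SF limit
    agn = (o3hb > kewley) | (n2ha >= 0.47)    # above the maximum-starburst line
    return np.where(sf, 'SF', np.where(agn, 'AGN', 'Composite'))
```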
As part of our analysis (described later), we use data at other wavelengths provided by the THINGS Survey (Walter et al., 2008), the CARMA survey (Rahman et al., 2012) and DustPedia (Davies et al., 2017). DustPedia is a collaboration that provides access to multiwavelength imagery (and photometry) of nearby galaxies, together with model physical parameters for these galaxies. We thus obtain images of NGC 1569 in different bands, from the UV (152.8, 227.1 nm, GALEX) to the mid-far infrared (3.6, 4.5, 5.8, 8, 24 \(\mu\)m, Spitzer; 70, 100, 160 \(\mu\)m, Herschel-PACS; 250 \(\mu\)m, Herschel-SPIRE).
## 3 Estimation of physical properties
We create a three-dimensional datacube by associating each fiber with its relative position on the sky and convolving with a Gaussian of FWHM = 4.2" at each wavelength. We thus obtain spaxel maps of our different emission lines. Then, we convolve and reproject these maps to a common spatial resolution (see §3.2 below). From the pixel size we obtain a pixel area, which allows us to compute surface densities of some properties. Since each fiber has an associated stellar mass, this procedure is also applied to all the individual fibers, resulting in a stellar mass surface density map (left panel of Fig. 3). We identify several stars in the field of view and use the star positions from the Two Micron All-Sky Survey (2MASS, Skrutskie et al., 2006) to obtain the astrometry.
The total number of spaxels for NGC 1569 before corrections, binning and cleaning is \(\sim\)400 (with a pixel size of \(\sim\)6", see §3.2). Then, after applying a 2x2 binning and a similar S/N cut for the previously mentioned emission lines, we obtain a total of \(\sim\)100 spaxels with a pixel size of \(\sim\)12", corresponding to \(\sim\)180 pc.
The measured emission-line spaxels are corrected for interstellar reddening using the Balmer decrement (H\(\alpha\)/H\(\beta\)), assuming the theoretical value of 2.86 and the extinction curve of Cardelli et al. (1989).
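A minimal sketch of this correction, assuming R\(_{V}\) = 3.1 values of the Cardelli et al. (1989) curve at the Balmer lines (k(H\(\alpha\)) \(\approx\) 2.53, k(H\(\beta\)) \(\approx\) 3.61; these numerical values are our assumption), could be:

```python
import numpy as np

# assumed Cardelli et al. (1989) extinction-curve values for R_V = 3.1
K_HA, K_HB = 2.53, 3.61

def deredden(flux_obs, k_lambda, f_ha, f_hb, intrinsic=2.86):
    """Correct an emission-line flux map for reddening via the Balmer decrement."""
    ebv = 2.5 / (K_HB - K_HA) * np.log10((f_ha / f_hb) / intrinsic)
    ebv = np.clip(ebv, 0.0, None)      # negative decrements -> no correction
    return flux_obs * 10 ** (0.4 * k_lambda * ebv)
```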
Figure 1: NGC 1569 image showing the field of view of the GMS in red boxes; two pointings were observed. The archival image in the B band was taken from the Palomar Observatory Sky Survey, NGS-POSS.
Figure 2: BPT diagram for fibers with S/N \(>\) 3 in the four emission lines used. The solid line corresponds to the Kauffmann et al. (2003) limit, and the dashed line to the Kewley et al. (2001) limit. Red, green and blue colors correspond to SF, composite and AGN fibers, respectively.
### Stellar mass, SFR and metallicity estimation
The SFR was computed following Kennicutt et al. (2009) using the \(\rm L_{H\alpha}\).
\[\rm SFR\ \left[M_{\odot}\ yr^{-1}\right]=5.5\times 10^{-42}\times L_{H\alpha} \tag{1}\]
The above SFR formula considers a Kroupa IMF (Kroupa, 2001), which is very similar to a Chabrier IMF (Chabrier, 2003), with which our stellar masses are computed. Our star formation rate surface density map is shown in the middle panel of Fig. 3.
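In practice, Eq. (1) amounts to a one-line conversion once the H\(\alpha\) flux has been turned into a luminosity at the adopted distance (Table 1); the following sketch (helper names are our own) makes the arithmetic explicit:

```python
import numpy as np

MPC_CM = 3.0857e24  # centimeters per megaparsec

def sfr_from_halpha(f_halpha, d_mpc=3.25):
    """Eq. (1): SFR [Msun/yr] from an extinction-corrected Halpha flux
    [erg/s/cm^2], using the adopted distance of NGC 1569 (Table 1)."""
    l_halpha = 4.0 * np.pi * (d_mpc * MPC_CM) ** 2 * f_halpha  # erg/s
    return 5.5e-42 * l_halpha
```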
The gas metallicities are estimated using the S calibration described in Pilyugin & Grebel (2016), which also uses the emission-line spectra. It is noteworthy that the computed oxygen abundances are gas-phase abundances. They are based on the strong-line ratios defined as \(\rm N_{2}=[N\,II]\,(\lambda 6548+\lambda 6584)/H\beta\), \(\rm S_{2}=[S\,II]\,(\lambda 6717+\lambda 6731)/H\beta\) and \(\rm R_{3}=[O\,III]\,(\lambda 4959+\lambda 5007)/H\beta\). A relation is then given according to the value of \(\log N_{2}\). The so-called upper branch (\(\log N_{2}\geq-0.6\)) is defined by the relation:
\[12+\log{\rm (O/H)}_{\rm S,U}=8.424+0.030\log(R_{3}/S_{2})+0.751\log N_{2}+\left(-0.349+0.182\log(R_{3}/S_{2})+0.508\log N_{2}\right)\times\log S_{2} \tag{2}\]
while the so-called lower branch (\(\log N_{2}<-0.6\)) is defined by the relation:
\[12+\log{\rm (O/H)}_{\rm S,L}=8.072+0.789\log(R_{3}/S_{2})+0.726\log N_{2}+\left(1.069-0.170\log(R_{3}/S_{2})+0.022\log N_{2}\right)\times\log S_{2} \tag{3}\]
Throughout the paper we use the term metallicity when referring to the gas-phase oxygen abundance. Our metallicity map can be seen in the right panel of Fig. 3.
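A direct transcription of Eqs. (2)-(3) into a per-spaxel routine (a sketch; the vectorized array handling is our own choice) is:

```python
import numpy as np

def oh_s_calibration(n2, s2, r3):
    """Gas-phase 12+log(O/H) from the S calibration of Pilyugin & Grebel
    (2016), Eqs. (2)-(3); n2, s2, r3 are the reddening-corrected line
    ratios relative to Hbeta defined in the text."""
    ln2, ls2, lr3s2 = np.log10(n2), np.log10(s2), np.log10(r3 / s2)
    upper = (8.424 + 0.030 * lr3s2 + 0.751 * ln2
             + (-0.349 + 0.182 * lr3s2 + 0.508 * ln2) * ls2)
    lower = (8.072 + 0.789 * lr3s2 + 0.726 * ln2
             + (1.069 - 0.170 * lr3s2 + 0.022 * ln2) * ls2)
    # branch selection on log N2 = -0.6, as in the text
    return np.where(ln2 >= -0.6, upper, lower)
```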
### Dust mass estimates
For the dust mass estimates, we follow Calzetti et al. (2018), in which the global dust properties of a galaxy are used to compute the resolved dust masses, by fitting the entire Spectral Energy Distribution (SED) (to the available bands) and using the dust models from Draine & Li (2007).
To fit the complete SED, we use the Code Investigating GALaxy Emission (CIGALE, Burgarella et al., 2005; Nersesian et al., 2019). CIGALE is a fitting code that reproduces the observed panchromatic photometry of galaxies by means of stellar and dust emission models, imposing an energy balance to simultaneously account for the UV/optical extinction and the IR dust emission. Among the most important input parameters are the Simple Stellar Population (SSP) set, the Star Formation History (SFH), and the dust properties affecting both extinction and IR emission. The input parameters that we used were taken from DustPedia (see Table 2), and as input observations we take the data of NGC 1569, also from DustPedia, in the bands mentioned at the end of §2 (from GALEX (UV) up to Herschel-SPIRE (FIR) at 250 \(\mu\)m). The optical photometry, also included in the SED, is centered at 5170, 5426, 5725 and 6080 Å (obtained from our spectroscopic data, as described in §2). CIGALE provides several output estimates (among which is the dust mass) from the best-fitting SED and a specific choice of the initial mass function. In this analysis, we use the IMF of Chabrier (2003) and the synthesis models of Bruzual & Charlot (2003).
We apply color corrections to all the Spitzer, Herschel-PACS and Herschel-SPIRE bands used, as reported in Lianou et al. (2014). The Spitzer bands are also corrected for aperture effects, as required for extended objects.
Since NGC 1569 is projected close to the Galactic plane, it suffers a high extinction, A\({}_{V}\) = 1.58. We therefore correct all the GALEX and optical bands for Galactic extinction using the extinction curve of Cardelli et al. (1989), and for the Spitzer bands we use the extinction curve of Indebetouw et al. (2005).
Finally, since all bands have different spatial resolutions, a reprojection and convolution to the lowest-resolution band (Herschel-SPIRE, 250 \(\mu\)m) is applied in order to allow a correct and fair comparison between all the image data. The resulting pixel size of the image data is \(\sim\)6". We then also apply a 2x2 binning to the images to reach a pixel size of \(\sim\)12" (\(\sim\)180 pc). The convolution is performed using the kernels from Aniano et al. (2011). Before the reprojection and convolution, we mask the stars that appear in the field of view of the galaxy to avoid flux contamination.
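A minimal sketch of this resolution-matching step, assuming the `astropy` and `reproject` packages and illustrative file names (the interpolation choice and kernel file are our assumptions, not necessarily those used here), could be:

```python
from astropy.io import fits
from astropy.convolution import convolve_fft
from reproject import reproject_interp

def match_resolution(image_file, kernel_file, target_header):
    """Convolve a map with an Aniano et al. (2011) kernel and reproject it
    onto the grid of the lowest-resolution band.

    target_header : FITS header (with WCS and NAXIS keywords) of the
                    SPIRE 250 micron reference image.
    """
    data, header = fits.getdata(image_file, header=True)
    kernel = fits.getdata(kernel_file)          # e.g. PSF -> SPIRE-250 kernel
    smoothed = convolve_fft(data, kernel, allow_huge=True)
    matched, _ = reproject_interp((smoothed, header), target_header)
    return matched
```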
We run CIGALE on each pixel to obtain a dust mass surface density map (upper left panel of Fig. 6). We estimated the dust masses on each pixel twice: first, we use data up to Herschel-PACS 160 \(\mu\)m (without any binning) in order to have enough pixels to apply the method described in §3.3 and estimate the DGR. Second, we also use data up to Herschel-SPIRE 250 \(\mu\)m to estimate the dust masses used in the following sections, with a 2x2 binning. As mentioned, the specific input parameters that we use are shown in Table 2. Dust mass estimates using data up to Herschel-SPIRE 250 \(\mu\)m are more sensitive to the bulk of the cold dust than those using data only up to Herschel-PACS 160 \(\mu\)m. For this galaxy, the dust masses are very similar when using IR data up to the 160 \(\mu\)m band compared to using data up to the 500 \(\mu\)m band (see Appendix A).
### Gas mass estimates
In order to properly take into account the low gas metallicity of this galaxy, we followed the method presented in Leroy et al. (2011) and Sandstrom et al. (2013) to compute the DGR and the CO to molecular gas mass factor. This method combines the masses of the atomic (\(\rm M_{HI}\)) and molecular (\(\rm M_{H_{2}}\)) gas, together with the dust masses, to compute the DGR and \(\rm\alpha_{CO}\), where \(\rm\alpha_{CO}\) is the factor to convert observed CO luminosity (\(\rm L_{CO}\)) to molecular gas mass, \(\rm M_{H_{2}}=\rm\alpha_{CO}\ L_{CO}\). Briefly, the method is based on the relation between these three masses:
\[\rm M_{gas}=\frac{M_{dust}}{DGR}=\mu_{gal}(M_{HI}+\rm\alpha_{CO}\ L_{CO}), \tag{4}\]
where \(\mu_{gal}\) is the mean atomic weight, such that the gas mass term includes the contribution of hydrogen and other elements, as defined in Remy-Ruyer et al. (2014). We use the Galactic mass fraction of helium, \(\rm Y_{\odot}=0.270\) (Asplund et al., 2009), and the metallicity derived from the observations presented in this work, \(\rm 12+\log(O/H)=8.12\), in order to estimate \(\mu_{gal}\) and account for the mass of the different elements. Assuming a unique solution of Eq. 4 for regions of similar metallicity, one can find the best values of the DGR and \(\alpha_{\rm CO}\) by fixing a trial value of \(\alpha_{\rm CO}\), solving Eq. 4 for the DGR in all the pixels inside a specific region, and estimating the standard deviation of the DGR over those pixels. The best value of \(\alpha_{\rm CO}\), and hence the solution, is the one that minimizes the scatter of the DGR.
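The scan over \(\alpha_{\rm CO}\) is straightforward to express in code. The following sketch (the grid limits and the value of \(\mu_{gal}\) are illustrative assumptions) returns the scatter-minimizing solution of Eq. 4:

```python
import numpy as np

def solve_dgr_alpha(m_dust, m_hi, l_co, mu_gal=1.38):
    """Scan alpha_CO and pick the value minimizing the pixel-to-pixel
    scatter of log(DGR) (Eq. 4; Leroy et al. 2011; Sandstrom et al. 2013).

    m_dust, m_hi : per-pixel dust and HI masses [Msun] (1-D arrays)
    l_co         : per-pixel CO luminosity [K km/s pc^2]
    mu_gal       : mean-atomic-weight factor (value here is an assumption)
    """
    alphas = np.logspace(-1, 3, 200)        # trial values, Msun pc^2 (K km/s)^-1
    scatter = [np.nanstd(np.log10(m_dust / (mu_gal * (m_hi + a * l_co))))
               for a in alphas]
    best = alphas[int(np.argmin(scatter))]
    log_dgr = np.nanmedian(np.log10(m_dust / (mu_gal * (m_hi + best * l_co))))
    return best, log_dgr
```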
We obtain the HI intensity from the THINGS survey (Walter et al., 2008), which we convert to HI gas mass using the equation:
\[\frac{\mathrm{M_{HI}}}{\mathrm{M_{O}}}=2.36\times 10^{5}\left(\frac{\mathrm{D}}{ \mathrm{Mpc}}\right)^{2}\times\frac{\mathrm{F_{HI}}}{\mathrm{Jy\,km/s}} \tag{5}\]
where \(\mathrm{F_{HI}}\) is the flux measured from the moment-0 map, and \(\mathrm{D}\) is the adopted distance according to Table 1.
The CO intensity is obtained using the data from the CARMA survey (Rahman et al., 2012). We derived the CO(1-0) moment-0 map from the CARMA datacube by estimating the noise in the emission-free channels and looking for peaks with a signal-to-noise ratio larger than 3. We estimate the CO luminosity using the following equation (Solomon et al., 1992):
\[\frac{\mathrm{L_{CO}}}{\mathrm{K\,km\,s^{-1}\,pc^{2}}}=3.25\times 10^{7}\left(\frac{\nu_{\mathrm{rest}}}{\mathrm{GHz}}\right)^{-2}(1+z)^{-1}\left(\frac{\mathrm{D}}{\mathrm{Mpc}}\right)^{2}\left(\frac{\mathrm{F_{CO}}}{\mathrm{Jy\,km\,s^{-1}}}\right), \tag{6}\]
where \(\nu_{\mathrm{rest}}\) is the rest frequency of the line (115.27 GHz in the case of CO(1-0)), D is the distance (see Table 1), z is the redshift of the galaxy (negligible in this case due to its proximity), and \(\mathrm{F_{CO}}\) is the velocity-integrated flux measured from the moment-0 map.
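Both conversions, Eqs. (5) and (6), are simple scalings of the measured moment-0 fluxes; a sketch (function names are our own) is:

```python
def m_hi(f_hi_jykms, d_mpc=3.25):
    """Eq. (5): HI mass [Msun] from the moment-0 flux [Jy km/s]."""
    return 2.36e5 * d_mpc ** 2 * f_hi_jykms

def l_co(f_co_jykms, d_mpc=3.25, nu_rest_ghz=115.27, z=0.0):
    """Eq. (6): CO(1-0) luminosity [K km/s pc^2] from the
    velocity-integrated flux [Jy km/s] (Solomon et al. 1992)."""
    return 3.25e7 * nu_rest_ghz ** -2 * (1 + z) ** -1 * d_mpc ** 2 * f_co_jykms
```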
Both the CO and HI flux maps were reprojected and convolved to the lowest resolution used when deriving the dust mass, which we
\begin{table}
\begin{tabular}{l l l}
\hline
\multicolumn{3}{c}{CIGALE Parameters} \\
\hline
Module & Parameter & Value \\
\hline
Delayed SFH & \(\tau_{\mathrm{main}}\) & 7000, 8000, 10000, 12000, 13000 \\
 & \(\mathrm{Age_{main}}\) & 9500, 13000 \\
 & \(\tau_{\mathrm{burst}}\) & 5.0, 10.0, 20, 50, 80, 110.0 \\
 & \(\mathrm{Age_{burst}}\) & 10, 30, 50, 70, 90, 150, 200, 250, 300, 400 \\
 & \(f_{\mathrm{burst}}\) & 0.1, 0.2, 0.3, 0.4, 0.5, 0.75 \\
 & \(\mathrm{SFR_{A}}\) & 1.0 \\
\hline
SSP, & IMF & Chabrier (2003) \\
Bruzual \& Charlot (2003) & Stellar metallicity & 0.004, 0.008 \\
\hline
Dust attenuation, & \(E(B-V)_{\mathrm{young}}\) & 0.0075, 0.011, 0.017, 0.026, 0.038, 0.058, 0.087, 0.13, 0.20, 0.29, 0.44 \\
Calzetti et al. (2000) and & \(E(B-V)_{\mathrm{old}}\) factor & 0.50 \\
Leitherer et al. (2002) & Central wavelength of the UV bump & 217.5 \\
 & Width (FWHM) of the UV bump & 35.0 \\
 & Amplitude of the UV bump & 0.0 \\
 & Slope delta of the power law & 0.0 \\
 & Filters & B\_B90 \& V\_B90 \& FUV \\
\hline
Dust emission, & \(q_{\mathrm{PAH}}\) & 0.47 \\
Draine et al. (2014) & \(U_{\mathrm{min}}\) & 35 \\
 & \(\alpha\) & 2.0 \\
 & \(\gamma\) & 0.015, 0.025, 0.035 \\
\hline
\end{tabular}
\end{table}
Table 2: Modules and physical parameter values used as input to CIGALE. \(\tau_{\mathrm{main}}\), \(\mathrm{Age_{main}}\), \(\tau_{\mathrm{burst}}\) and \(\mathrm{Age_{burst}}\) are expressed in \(10^{6}\) yr. The SSP, dust attenuation and dust emission models are described in the cited references.
Figure 3: The stellar mass surface density, \(\Sigma_{\star}\) (left), the SFR surface density, \(\Sigma_{\mathrm{SFR}}\) (middle), and the gas metallicity (right) maps. The black contours correspond to the galaxy structure in the J band from 2MASS. We show the 2 observed pointings in this map; after the S/N and BPT selection criteria, just a few spaxels were recovered from the second pointing. Each spaxel in the three maps corresponds to a scale of \(\sim\)12" (\(\sim\)180 pc).
choose to be that of PACS 160 \(\mu\)m (FWHM = 11.2") in order to have enough pixels in the CO flux map and to be able to apply the method.
We show in Fig. 4 the HI flux (left), the CO(1-0) flux (middle), and the dust mass surface density (right), \(\Sigma_{\rm dust}\). The CO map of NGC 1569 does not present much CO emission, probably due to its low metallicity. However, the CO emission is sufficient to compute the best DGR as a function of \(\alpha_{\rm CO}\) using 84 pixel values, where we assume a unique solution of Eq. 4, since the metallicity of this galaxy is fairly constant (as we show in this work).
We show the result of the method in Fig. 5, where we plot the standard deviation of the DGRs versus \(\alpha_{\rm CO}\) for the whole set of pixels. We obtain the best solution, _i.e._, where the minimum of the standard deviation of log(DGR) occurs. This solution corresponds to log(DGR) = \(-3.08\pm 0.18\) and log(\(\alpha_{\rm CO}\)) = \(1.6\pm 0.4\) M\({}_{\odot}\)pc\({}^{2}\)(K km/s)\({}^{-1}\). The change in the DGR standard deviation is very small compared with the large variation of log(\(\alpha_{\rm CO}\)) in Fig. 5; therefore, the estimated uncertainty of \(\alpha_{\rm CO}\) carries a significant margin of error of 40%. However, this does not affect our results, since we use the DGR value to compute the gas masses presented in this work.
The uncertainties were obtained following Leroy et al. (2011) and Sandstrom et al. (2013). We considered two different noise estimates: i) adding random noise to the observed properties according to their errors, and ii) bootstrapping. Each estimate is repeated 100 times independently. The errors for the two noise types were inferred from the standard deviation of the derived values and finally added in quadrature to obtain the final error.
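Reusing the `solve_dgr_alpha` sketch above, the two-part error budget can be written compactly; in this illustrative version only the dust-mass error is perturbed, and the maps are assumed to be flattened 1-D arrays:

```python
import numpy as np

rng = np.random.default_rng(0)

def dgr_uncertainty(m_dust, m_hi, l_co, err_dust, n=100):
    """Monte Carlo + bootstrap error on log(DGR), added in quadrature."""
    mc, boot = [], []
    npix = m_dust.size
    for _ in range(n):
        noisy = m_dust + rng.normal(0.0, err_dust)      # (i) random noise
        mc.append(solve_dgr_alpha(noisy, m_hi, l_co)[1])
        idx = rng.integers(0, npix, npix)               # (ii) bootstrap pixels
        boot.append(solve_dgr_alpha(m_dust[idx], m_hi[idx], l_co[idx])[1])
    return np.hypot(np.std(mc), np.std(boot))           # quadrature sum
```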
The reported DGR and log(\(\alpha_{\rm CO}\)) values are in agreement with those expected for the metallicity of NGC 1569 (Z = 0.0044) (_e.g._, Sandstrom et al. 2013; Bolatto et al. 2013, for \(\alpha_{\rm CO}\); _e.g._, Remy-Ruyer et al. 2014; De Vis et al. 2019, for the DGR).
Finally, using Eq. 4 we compute the gas mass surface density map, displayed in the upper middle panel of Fig. 6.
### The baryonic mass, gas fraction, oxygen yields, SFE and depletion time estimates
The physical properties that we obtained in previous sections (stellar mass, total gas mass and SFR) allow us to compute other galaxy properties following previous works (_e.g._, Lara-Lopez et al. 2019). The baryonic mass is given by:
\[{\rm M_{bar}=M_{\star}+M_{gas}} \tag{7}\]
The \({\rm M_{bar}}\) surface density map is shown in the upper right panel of Fig. 6.
Thus, the gas fraction is:
\[\mu=\frac{{\rm M_{gas}}}{{\rm M_{gas}+M_{\star}}} \tag{8}\]
For the effective oxygen yields (\({\rm Y_{eff}}\)), we follow the classic formalism described in Pagel & Patchett (1975); Searle & Sargent (1972). This is:
\[{\rm Y_{eff}=\frac{Z_{gas}}{\ln(1/\mu)}} \tag{9}\]
We adopt the relation given by Garnett (2002) for the value of \(Z_{\rm gas}\), in which the oxygen abundance O/H is expressed in units of the number of oxygen atoms relative to hydrogen:
\[{\rm Z_{gas}=12\times(O/H)} \tag{10}\]
Finally, we define the SFE as:
\[{\rm SFE=\frac{SFR}{M_{gas}}} \tag{11}\]
And the depletion time as:
\[{\rm t_{dpp}=\frac{M_{gas}}{SFR}} \tag{12}\]
Since our information is spatially resolved, we have maps for all of these properties. Thus, the effective oxygen yield map, the gas mass fraction map and the SFE map are shown in the left, middle and right bottom panel of Fig. 6, respectively.
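Eqs. (7)-(12) translate directly into per-spaxel arithmetic on the maps; a sketch (the function name and argument layout are our own) is:

```python
import numpy as np

def derived_maps(m_star, m_gas, sfr, oh):
    """Eqs. (7)-(12): baryonic mass, gas fraction, effective yield,
    SFE and depletion time from the spatially resolved maps.

    oh : 12+log(O/H) map, so that O/H = 10**(oh - 12)
    """
    m_bar = m_star + m_gas                    # Eq. (7)
    mu = m_gas / (m_gas + m_star)             # Eq. (8)
    z_gas = 12.0 * 10 ** (oh - 12.0)          # Eq. (10), Garnett (2002)
    y_eff = z_gas / np.log(1.0 / mu)          # Eq. (9)
    sfe = sfr / m_gas                         # Eq. (11)
    t_dep = m_gas / sfr                       # Eq. (12)
    return m_bar, mu, y_eff, sfe, t_dep
```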
## 4 Results
### The local relationships between the stellar mass, gas mass, star formation rate and metallicity
In this section, we analyze the spatially resolved maps obtained for the different galaxy properties estimated above. As mentioned in §3.2, we convolved and reprojected our multiwavelength maps to the lowest-resolution band (Herschel-SPIRE 250 \(\mu\)m). The convolution and reprojection are also applied to the emission-line maps, which are used to obtain the physical properties described in §3. We thus establish a common and equivalent scale on which to compare all the properties with each other. A 2x2 binning was also applied to avoid pixel correlations and to reach a larger spatial scale. All the maps in Figs. 3 and 6 have a pixel size of 12", corresponding to a scale of 180 pc. We also compute the global galaxy properties by summing the values of the individual spaxels in the emission-line flux maps and then applying the corresponding equations.
The spatially resolved stellar mass map, \(\Sigma_{\star}\) (left panel of Fig. 3), shows that the highest mass values are located in the center of the galaxy, where the super stellar clusters (SSC) A and B are located. Around these regions, the mass values decrease gradually towards the outer parts. We compute a global stellar mass of log(M\({}_{\star}\)/M\({}_{\odot}\)) = 8.6. The SFR surface density map (middle panel of Fig. 3) shows features similar to those of the stellar mass map. Our estimate of the global SFR is log(SFR/M\({}_{\odot}\) yr\({}^{-1}\)) = -0.32. The metallicity map does not show any clear pattern, but it highlights some spaxels in the center of the galaxy with lower metallicity than others in the outer parts. For example, a central spaxel with 12+log(O/H) \(\sim\) 8.05 has a metallicity 0.1 dex lower than some spaxels in the outer zone with 12+log(O/H) \(\sim\) 8.15. This slight change could be evidence of outflows in the outer parts and a possible inflow right at the center of the galaxy, as we mention in the discussion of this work. In general, 65.3% of the spaxels have higher and 34.7% lower metallicity than the global value (12+log(O/H) = 8.12).
The \(\Sigma_{\star}-\Sigma_{\rm SFR}\) relation is shown in panel A of Fig. 7, where we recover the well-known correlation between the two properties. The solid magenta line in the figure is the power-law fit reported for galaxies with SBc-Irr morphology in the MaNGA survey (Cano-Diaz et al. 2019). Our \(\Sigma_{\star}-\Sigma_{\rm SFR}\) relation is color-coded by the gas metallicity, which shows no clear pattern or correlation. In the same figure, the dashed red line marks the limit below which the SFR is lower than 10\({}^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\). This
is a typical value below which the SFR can become very uncertain due to poor sampling of the IMF. The use of stellar population synthesis (SSP) modeling to derive stellar population properties (among them the SFR) is intrinsically limited by the fact that the models are constructed using fully sampled IMFs. This means that, by applying such models to an observed spectrum, it is implicitly assumed that the mass distribution of the stars producing the spectrum is sufficiently complete (Haydon et al., 2020). It has been shown that this holds when the total stellar mass associated with the spectrum is at least 10\({}^{4}\) M\({}_{\odot}\) (El-Badry et al., 2017). As the SFR prescriptions are calculated by assuming a constant SFR over \(\sim\)10\({}^{7}\) yr, values below 10\({}^{4}\) M\({}_{\odot}\)/10\({}^{7}\) yr \(\sim\) 10\({}^{-3}\) M\({}_{\odot}\) yr\({}^{-1}\) can possibly suffer from incomplete IMF sampling, resulting in an uncertain SFR determination.
The \(\Sigma_{*}-\) Z relation can be seen in panel B of Fig. 7. In this case we do not recover the classic polynomial shape of the global M\({}_{\star}\)-Z relation. Instead, the metallicity spans a range of \(\sim\)8.05-8.15 dex independent of the stellar mass. This implies that spaxels show metallicity variations of up to 0.1 dex (25%). In general, the metallicity is not constant throughout the galaxy, but the variation is small. The plot is color-coded with the SFR and, as expected, the SF main sequence is seen (_i.e._, \(\Sigma_{*}\) scales with \(\Sigma_{\rm SFR}\)).
The dust and gas mass surface density maps (upper left and upper middle panels of Fig. 6, respectively) share similar features with the stellar mass map: the highest values are in the center of the galaxy and the distribution follows the structure of the galaxy. The global dust mass that we compute is log(M\({}_{\rm dust}\)/M\({}_{\odot}\)) = 5.3. Since the dust masses are multiplied by a DGR, the dust and gas surface density maps are expected to have the same spatial distribution. Our global estimate of the gas mass is log(M\({}_{\rm gas}\)/M\({}_{\odot}\)) = 8.33. We take the HI and H\({}_{2}\) surface densities and the CO/H\({}_{2}\) conversion factor reported by Kennicutt (1998) to estimate the CO luminosity. This value is multiplied by the \(\alpha_{\rm CO}\) factor obtained in §3.3 to derive our own value of H\({}_{2}\), and thus a new global gas mass surface density (log(\(\Sigma_{\rm gas}\)/M\({}_{\odot}\) pc\({}^{-2}\)) = 1.49). Compared with the gas mass surface density estimate of Kennicutt (1998), log(\(\Sigma_{\rm gas}\)/M\({}_{\odot}\) pc\({}^{-2}\)) = 1.33, the difference between the two estimates is 0.16 dex. We also derive a new value of log(\(\Sigma_{\rm H_{2}}\)/M\({}_{\odot}\) pc\({}^{-2}\)) = 1.05, in contrast with the old value log(\(\Sigma_{\rm H_{2}}\)/M\({}_{\odot}\) pc\({}^{-2}\)) = 0.10 reported by Kennicutt (1998). The large difference between these values is due to the different \(\alpha_{\rm CO}\) used: Kennicutt (1998) assumed a constant Milky Way value, \(\log(\alpha_{\rm CO})=0.65\pm 0.4\) M\({}_{\odot}\)pc\({}^{2}\)(K km/s)\({}^{-1}\), for all galaxies in the sample, while we applied our own estimate of log(\(\alpha_{\rm CO}\)) = 1.6 \(\pm\) 0.4 M\({}_{\odot}\)pc\({}^{2}\)(K km/s)\({}^{-1}\).
Figure 7 (panel C) displays the spatially resolved KS relation, color-coded by metallicity. We clearly recover the linear shape of the scaling relation, although with a different slope than in previous works. It is also important to note the high dispersion towards the low-mass range, specifically below the horizontal red line, which is due to the IMF sampling problem. When we compare our results with the previous work of Bigiel et al. (2008), the difference between the two slopes is clear. We discuss the possible origin of such differences in §5.
In all the SRs shown in this section, the dots with black contours represent the tail of NGC 1569, which can be seen in the second pointing of Fig. 1. As mentioned, we recover only a few spaxels from the second pointing due to the selection criteria. This tail is an important region of the galaxy because it is a possible sign of interaction. However, in the plots of this section, the spaxels related to this zone do not show any particular behaviour or strong feature to be taken into consideration.
The coefficients of the fits presented in this section are reported in Table 3.
Figure 4: HI intensity (left), CO (1-0) intensity (middle), and dust mass surface density, \(\Sigma_{\rm dust}\) (right), of NGC 1569. The three maps are reprojected and convolved to the lowest-resolution observation (FWHM\(=11.2"\), Herschel-PACS 160\(\mu\)m) used in the dust mass estimation (see §3.3).
Figure 5: Standard deviation of the Dust-to-Gas Ratios (DGRs), as a function of the CO Luminosity-to-H\({}_{2}\) factor, \(\alpha_{\rm CO}\). The DGRs for each value of \(\alpha_{\rm CO}\) were estimated using Eq. 4 for all the pixels where CO is detected in NGC 1569.
### The local relationships between the gas fraction, baryonic mass, oxygen yields and star formation efficiency
The baryonic mass surface density, gas fraction, \(\rm Y_{eff}\) and SFE maps are shown in the lower panels of Fig. 6. The baryonic surface density does not show big differences with respect to the former maps. We estimate a global baryonic mass of log(\(\rm M_{bar}/M_{\odot}\)) = 8.8.
In contrast with the maps in which the spaxels have high central values that decrease gradually towards the edges, the resolved \(\rm Y_{eff}\) map (lower left panel of Fig. 6) shows neither a pattern nor a structure. For this map, the highest values are found in the center of the galaxy and also along the horizontal plane of the map. Although it is common practice to use the gas-phase oxygen abundance to specify the galaxy metallicity in the construction of the different relations, the total oxygen abundance (gas + dust) must be used in the estimation of the oxygen yield. Peimbert & Peimbert (2010) estimated the oxygen dust depletion in Galactic and extragalactic HII regions, finding that the fraction of oxygen atoms embedded in dust grains is a function of the oxygen abundance, being around 0.10 dex for metal-poor HII regions; thus, the total oxygen abundance of NGC 1569 is around \(12+\log\)(O/H) = 8.22. Using the gas mass fraction \(\rm\mu=0.34\), the integrated total oxygen yield is \(\rm Y_{eff}\) = 0.00185, or \(\log\)(\(\rm Y_{eff}\)) = -2.73.
The resolved gas fraction map (lower middle panel of Fig. 6) shows a relevant fact at first glance: the values only reach up to \(\rm\mu=0.5\), which implies that all the spaxels are probably dominated by stellar mass. Our estimate of the integrated gas fraction is \(\rm\mu=0.34\), which reveals the low gas content of this galaxy. We discuss the origin of the low gas fraction values in §5.
The SFE surface density map (right lower panel of Fig. 6) does not show a clear pattern: some regions clearly show low values of SFE (\(\log\)(SFE) \(<\) -8.8 yr\({}^{-1}\)), while the high values do not correspond to any particular part of the galaxy. The integrated value for this galaxy is \(\log\)(SFE) = -8.65 yr\({}^{-1}\), which directly corresponds to a depletion time of \(\log\)(\(\rm t_{dep}\)) = 8.65 yr (\(\sim\)450 Myr).
The \(\rm\mu\)-Z relation (panel A of Fig. 8) is flat, similar to the \(\Sigma_{*}-\rm Z\) relation. As mentioned earlier, all the data are concentrated towards \(\rm\mu<0.5\), which implies a stellar-dominated regime across the whole galaxy. The points with black contours (tail of NGC 1569) do not occupy a special location in the plot with respect to the rest of the data. A high percentage of the data correspond to values of \(\log\)(\(\rm t_{dep}\)) \(>\) 8.5 yr.
For the \(\Sigma_{\rm bar}\)-\(\rm Y_{eff}\) relation (panel B of Fig. 8), we recover a correlation between the two properties. The points corresponding to the tail of the galaxy again do not occupy any particular location.
The remaining plots of Fig. 8 show the relations of \(\Sigma_{*}\) (panel C), \(\rm\mu\) (panel D) and \(\Sigma_{\rm bar}\) (panel E) with the SFE. None of the three plots shows a special dependence on metallicity; indeed, since the metallicity range spans only 0.2 dex, little dependence on metallicity is expected. For the \(\Sigma_{*}\)- SFE and \(\Sigma_{\rm bar}\)- SFE relations, we note an important feature: in general, if we do not consider the points of the tail (points with black contours), we obtain a mostly flat relation; however, when we consider only the points with black contours, our data suggest a possible anticorrelation between the properties. These two relations are the only ones in which the points corresponding to the tail clearly differ from the rest. Such points are limited to \(\log\)(\(\Sigma_{*}\)) \(<\) 2.0 and \(\log\)(\(\Sigma_{\rm bar}\)) \(<\) 2.25. Finally, we see a negative correlation between
Figure 6: Upper panels: dust mass surface density \(\Sigma_{\rm dust}\) (left), gas mass surface density \(\Sigma_{\rm gas}\) (middle) and baryonic mass surface density \(\Sigma_{\rm bar}\) (right) maps. Bottom panels: spatially resolved \(\rm Y_{eff}\) (left), spatially resolved gas fraction (middle) and spatially resolved SFE (right) maps. In these maps we show the two observed pointings; after the S/N and BPT selection criteria, just a few spaxels were recovered from the second pointing. The black contours correspond to the galaxy structure in the 2MASS J band. Each spaxel corresponds to a scale of \(\sim\)12” (\(\sim\)180 pc).
\(\mu\) and SFE. The regions of the tail are distributed throughout the whole range of \(\mu\) with no special concentration.
The coefficient values of the fittings of all the SRs mentioned in this section are shown in Table 3.
Finally, it is worth mentioning that systematic offsets can arise depending on the method used for the metallicity estimates (_e.g._, Zurita et al., 2021; Groves et al., 2023). Temperature inhomogeneities within the gas can explain the differences between strong-line methods (Mendez-Delgado et al., 2023), and thus the proper corrections can be taken into consideration (Peña-Guerrero et al., 2012a,b). Our work relies on the S calibration, which takes into account the ionization parameter and the nitrogen-to-oxygen ratio, and produces abundances compatible with those computed by the direct T\({}_{\rm e}\) method. Since our metallicities are mostly constant across the whole galaxy with a very low dispersion (\(\sigma\sim 0.05\)), we do not expect changes in the slopes of our SRs.
## 5 Discussion
### Global properties of NGC 1569
NGC 1569 is an interesting case study for its high star formation rate (Kennicutt, 1998), low metallicity (Kobulnicky & Skillman, 1997) and complex star formation history (Angeretti et al., 2005). Additionally, extended emission of warm and hot ionized gas and atomic hydrogen has been observed beyond the optical extent of the galaxy (Lianou et al., 2014). NGC 1569 is part of a small group of galaxies and experienced a possible recent interaction with its close companion (Johnson, 2013).
The position of NGC 1569 on some global SRs built with SDSS galaxies (red asterisk in the plots of Fig. 9) also reveals the importance of analyzing dwarf galaxy systems. Because NGC 1569 has a relatively high SFR but low metallicity and low stellar mass, this work becomes relevant since it can help fill the gap towards the low-mass regime. Although NGC 1569 seems to be gas rich, we propose that this galaxy should have had even more gas; below we discuss whether this extra gas was lost via outflows.
By analyzing the classical SRs together with the position of NGC 1569 on them (Fig. 9), we can test the idea that pristine gas inflows enhanced the SFR while simultaneously diluting the gas metallicity; moreover, the position of NGC 1569 on the KS relation suggests a high SFR with respect to the main sequence of spiral and starburst galaxies. However, when the \(\Sigma_{*}-\Sigma_{\rm SFR}\) relation is analyzed (see panel A of Fig. 7), the role of inflows is not completely clear. Besides, even in the flat \(\Sigma_{*}-\) Z relation (panel B of Fig. 7), a high SFR is not seen for the spaxels with lower metallicity.
SRs such as \(\mu\)-Z or \(\rm M_{bar}\)-\(\rm Y_{eff}\) help to explore the presence of inflows or outflows through the changes in the effective yields and through comparisons with models of galactic chemical evolution. In principle, NGC 1569 does not appear to be comparable with the spiral SDSS galaxies in the \(\mu\)-Z or \(\rm M_{bar}\)-\(\rm Y_{eff}\) relations (Tremonti et al., 2004; Lara-Lopez et al., 2019); indeed, NGC 1569 would clearly be an outlier in comparison with those works. However, Pilyugin et al. (2004) reported different distributions for spiral and
Figure 7: The \(\Sigma_{*}-\Sigma_{\rm SFR}\) (panel A), \(\Sigma_{*}-\) Z (panel B) and \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) (panel C) scaling relationships. The fits to the data are shown as black solid lines. The purple solid lines correspond to the fits of Cano-Díaz et al. (2019) and Bigiel et al. (2008) for their respective relations. The purple solid line in the \(\Sigma_{*}-\) Z relation is our own fitting for the MANGA galaxies (see Appendix G for details). The horizontal dashed red line is the lower limit for a valid SFR. The circles with black contours correspond to the spaxels in the tail of NGC 1569. The data are color-coded by the property shown in the colorbar of each panel.
irregular galaxies in the \(\mu\)-Z diagram. Precisely within the irregular galaxy sample of Pilyugin et al. (2004), NGC 1569 follows the distribution of such irregular galaxies. The scenario of gas outflows becomes highly relevant, since our global estimation of the total oxygen yield is Y\({}_{\rm eff}\) = 0.00185, or log(Y\({}_{\rm eff}\)) = -2.73. Empirical estimations of the true oxygen yield (Y\({}_{\rm o}\)), using oxygen abundances derived from the electron temperature in H II regions, result in Y\({}_{\rm o}\) = 0.0027 (Pilyugin et al., 2004), Y\({}_{\rm o}\) = 0.0032 (Bresolin et al., 2004) or Y\({}_{\rm o}\) = 0.0030/0.0035 (Pilyugin et al., 2007). The ratio Y\({}_{\rm eff}\)/Y\({}_{\rm o}\) provides an estimate of the fraction of the produced oxygen that is ejected from the galaxy through galactic winds (the ejected fraction being \(1-\rm Y_{eff}/Y_{o}\)). Thus, NGC 1569 would lose around 38% of the produced oxygen if Y\({}_{\rm o}\) were 0.0030, or around 47% if Y\({}_{\rm o}\) were 0.0035.
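The quoted loss fractions follow directly from the ejected fraction \(1-\rm Y_{eff}/Y_{o}\); a minimal check:

```python
# Fraction of the produced oxygen lost in winds for the empirical
# true yields quoted in the text.
Y_eff = 0.00185
for Y_o in (0.0030, 0.0035):
    lost = 1.0 - Y_eff / Y_o
    print(f"Y_o = {Y_o}: ~{100 * lost:.0f}% of the produced oxygen lost")
# -> ~38% for Y_o = 0.0030 and ~47% for Y_o = 0.0035
```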
Previous works such as Sanchez-Cruces et al. (2022), and references therein, reported not only extended ionized hydrogen emission but also a high number of supernova remnants. The latter supports the idea that NGC 1569 is experiencing gas outflows via stellar feedback, in particular high-speed galactic winds powered by supernova explosions, which is consistent with the conclusions of previous works (Tremonti et al., 2004; Pilyugin et al., 2004). As mentioned in Tremonti et al. (2004), a very high infall of pristine gas into a galaxy would i) reduce its metallicity, ii) enhance its gas fraction and iii) slightly reduce its effective yield. Sanchez Almeida et al. (2015) also analyzed how extremely low metallicities in galaxies can be attributed to infall of metal-poor gas. The presence of metal-poor inflows has also been invoked to explain the strong anticorrelation in the local SFR-Z\({}_{\rm g}\) relation
Figure 8: The \(\mu\)-Z (panel A), the \(\Sigma_{\rm bar}\)- Y\({}_{\rm eff}\) (panel B), the \(\Sigma_{\ast}\)- SFE (panel C), the \(\mu\) - SFE (panel D) and the \(\Sigma_{\rm bar}\)- SFE (panel E) SRs. The fits to the observed data are shown as black lines. The circles with black contours correspond to the spaxels in the tail of NGC 1569. The data are color-coded by the properties shown in the colorbar.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{8}{c}{Fitting coefficients} \\ \hline & \(\Sigma_{*}-\Sigma_{\rm SFR}\) & \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) & \(\Sigma_{\rm bar}\)- Y\({}_{\rm eff}\) & \(\Sigma_{*}\)- SFE & \(\mu\) - SFE & \(\Sigma_{\rm bar}\)- SFE & \(\Sigma_{\rm HI}-\Sigma_{\rm SFR}\) & \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) \\ \hline m & 1.21 & 0.96 & 0.11 & 0.08 & -0.77 & 0.03 & 1 & 0.58 \\ \hline \(y_{0}\) & -10.64 & -2.68 & -3.04 & -8.88 & -8.46 & -8.8 & -2.55 & -1.62 \\ \hline \(\sigma_{\rm SD}\) & 0.64 & 0.64 & 0.17 & 0.22 & 0.22 & 0.22 & 0.64 & 0.42 \\ \hline \(x_{\rm RMS}\) & 1.44 & 1.44 & 2.82 & 8.74 & 8.74 & 8.74 & 1.44 & 0.92 \\ \hline \(x_{\rm RMSE}\) & 0.25 & 0.22 & 0.16 & 0.22 & 0.2 & 0.22 & 0.58 & 0.13 \\ \hline \(\rho\) & 0.92 & 0.94 & 0.36 & 0.08 & -0.46 & 0 & 0.39 & 0.95 \\ \hline p-value & \(<\) 2.2\(\times\)10\({}^{-16}\) & \(<\) 2.2\(\times\)10\({}^{-16}\) & 9.9\(\times\)10\({}^{-4}\) & 0.48 & 1.7\(\times\)10\({}^{-5}\) & 0.99 & 3.3\(\times\)10\({}^{-4}\) & \(<\) 2.2\(\times\)10\({}^{-16}\) \\ \hline \end{tabular}
\end{table}
Table 3: The slopes (m), zero points (\(y_{0}\)), standard deviations (\(\sigma_{\rm SD}\)), root mean square values (\(x_{\rm RMS}\)), root mean squared errors (\(x_{\rm RMSE}\)) and Pearson correlation coefficients (\(\rho\)) of all the fittings reported in this work. The last row corresponds to the p-value of the Pearson correlation.
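The entries of Table 3 can be obtained per relation with an ordinary least-squares fit in log-log space plus the Pearson statistics. The sketch below is our own generic recipe, not the authors' exact pipeline; `x` and `y` are placeholders for the spaxel values of any pair of properties, and the use of scipy is our assumption.

```python
import numpy as np
from scipy import stats

def sr_fit(x, y):
    """Slope, zero point, RMSE and Pearson statistics for one SR."""
    m, y0 = np.polyfit(x, y, 1)          # slope m and zero point y_0
    resid = y - (m * x + y0)
    rmse = np.sqrt(np.mean(resid ** 2))  # root mean squared error
    rho, pval = stats.pearsonr(x, y)     # correlation and its p-value
    return {"m": m, "y0": y0, "RMSE": rmse, "rho": rho, "p-value": pval}
```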
Figure 9: The global M\({}_{*}\)- SFR (panel A), M\({}_{*}\)- Z (panel B) and \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) (panel C) SRs using SDSS galaxies. The red asterisk shows our global estimation for NGC 1569. The red asterisk in panel C corresponds to our estimation of the gas mass, preserving the HI mass and CO data reported by Kennicutt (1998) but using our own \(\alpha_{\rm CO}\) (the error bars are plotted following Kennicutt 1998). In this plot, the black dot and the line are the gas mass and the fitting reported by Kennicutt (1998), respectively (the dashed lines are the one-sigma curves). The blue dots are normal spiral galaxies and the gray squares are galaxies considered as starbursts. The samples are described in Kennicutt (1998).
seen in the metal-poor MANGA galaxies (Sanchez-Menguiano et al., 2019). Given all these facts, the presence of inflows is not consistent with the observations reported here. We think that outflows are more consistent in this case since, for instance, the gas fraction of NGC 1569 is low and the galaxy is dominated by stellar mass (\(\mu\) = 0.34), as we computed previously. The results of Dalcanton (2007) also support the reduction of the effective yields by outflows (via stellar feedback).
We show the position of NGC 1569 in the KS relation (panel C of Fig. 9). The plot is taken from Kennicutt (1998) and shows two distinguishable samples: spiral galaxies (blue dots) and IR galaxies considered as starbursts (squares). It is worth noting that NGC 1569 shows a high \(\Sigma_{\rm SFR}\) and \(\Sigma_{\rm gas}\) with respect to all the spirals. At least in this diagram, NGC 1569 would be in the starburst regime if it had 1 dex (maybe 1.5 dex) of extra gas mass surface density. With this, we propose that such possible extra gas may have been lost via outflows (instead of directly assuming that it was efficiently converted into stars). In the next paragraphs, we show how this possible extra gas mass could have been lost by means of a simple self-regulator model of star formation.
### Spatially resolved properties of NGC 1569
Back to our spatially resolved SRs, it is clear in the \(\Sigma_{*}-\Sigma_{\rm SFR}\) relation (panel A of Fig. 7) that the SFRs for the regions of NGC 1569 are on average 1 dex higher than those reported in other surveys (_e.g._, the MANGA survey); as mentioned previously, the MANGA fit (Cano-Diaz et al., 2019) shown in Fig. 7 (panel A) is for irregular galaxies. As also mentioned, the regime of low-mass galaxies has not been analyzed due to observational limitations: surveys such as MANGA or CALIFA do not reach low values of stellar mass and metallicity, and hence the difference becomes noticeable when comparing with their results. Note that the local \(\Sigma_{*}-Z\) relation for MANGA galaxies spans 12+log(O/H) = 8.10-8.65 (or 7.95-8.70 once converted to our metallicity scale), while NGC 1569 has a metallicity of 12+log(O/H) = 8.12. Besides, our \(\Sigma_{*}-Z\) relation does not have the classic polynomial shape of this relationship, which makes the study of low-metallicity galaxies relevant, because it could be evidence of an extremely homogeneous star formation history across the galaxy.
The spatially resolved KS relation for this galaxy (panel C of Fig. 7) shows a remarkable similarity to the \(\Sigma_{*}-\Sigma_{\rm SFR}\) relation, something not usually observed, since the KS relation is usually tighter. We compare our data with the fit reported in Bigiel et al. (2008), which is a mean of all the slopes and zero points computed in that work (see panel C of Fig. 7). In comparison with Bigiel et al. (2008), the large difference between their slope and ours becomes relevant; this behaviour could be attributed to the hypothetical extra gas mass that was lost via stellar feedback.
From a spatially resolved point of view, some works in the literature have attempted to study the \(\mu\)-Z SR (see panel A of Fig. 8). This relation is important mainly because it provides a way to compare observations with models of galactic chemical evolution. Barrera-Ballesteros et al. (2018) present a study of MANGA galaxies contrasted with two analytical models (gas regulator and leaky box); the best-fitting model is the gas regulator, which is also supported by the presence of outflows driven by stellar feedback. Barrera-Ballesteros et al. (2018) also highlight the concept of escape velocity, which is of particular relevance for low-mass galaxies; indeed, such galaxies have a weak gravitational potential that results in low escape velocities, which in turn facilitates metal loss via stellar feedback. At this point, the case of NGC 1569 is once more relevant, because Barrera-Ballesteros et al. (2018) do not have a direct estimation of the gas mass and the lower metallicity limit of their study, 12+log(O/H) = 8.10 (7.95 in our metallicity scale), is not low enough.
The work of Vilchez et al. (2019) is relevant since they study spatially resolved oxygen yields in two spiral galaxies, M101 and NGC 628. The novelty of their work is that they establish a reference range for a typical empirical estimation of the oxygen yields under a closed-box model; they then compare the oxygen yield values of these spiral galaxies as a function of galactocentric radius, finding that at large galactocentric radii the oxygen yields of M101 deviate from the initially given reference range. The authors conclude that there could be gas flows in the outer parts of this galaxy.
Another important point to take into consideration is that our analysis relies on the methodology of Calzetti et al. (2018), who reconstructed the SED of NGC 4449 to estimate the global dust mass properties using different analytical models under the pertinent assumptions. Such global properties are then used to compute the spatially resolved dust masses and, more importantly, the gas masses using the so-called dust-to-hydrogen ratio, D/H (which is similar to the DGR). This opens a new avenue of analysis when the CO data are limited or when there is a lack of observations in CO (M\({}_{\rm H_{2}}\)) or atomic hydrogen (M\({}_{\rm HI}\)), which is particularly common for dwarf galaxies. First, M\({}_{\rm H_{2}}\) implies the use of an \(\alpha_{\rm CO}\) that differs from galaxy to galaxy; the case study of NGC 1569 helps to fill in the values of \(\alpha_{\rm CO}\) or X\({}_{\rm CO}\), particularly in the low-mass, low-metallicity regime of dwarf galaxies, as mentioned in Bolatto et al. (2013). Second, since the DGR or D/H strongly depends on metallicity (especially in the low-metallicity regime; Remy-Ruyer et al., 2014), NGC 1569 also helps to reduce the large scatter in the observed DGR precisely at such metallicity. Other studies such as Relano et al. (2018) or Vilchez et al. (2019) show the variation of the DGR as a function of galaxy radius and metallicity, revealing that using a common DGR for a whole galaxy must be done carefully.
### The interplay between the atomic gas, molecular gas, and SFR surface densities
Figure 10 shows a comparison between three SRs: in panel A, the \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) (KS relation); in panel B, the \(\Sigma_{\rm HI}-\Sigma_{\rm SFR}\); and in panel C, the \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) (molecular KS relation). The \(\Sigma_{\rm HI}\) is computed directly from the THINGS data. We follow Calzetti et al. (2018) to obtain an estimation of \(\Sigma_{\rm H_{2}}\) using Eq. 4. Indeed, once we have a \(\Sigma_{\rm gas}\) estimation, and since \(\mu_{\rm gal}\) and \(\Sigma_{\rm HI}\) are known, a value of \(\Sigma_{\rm H_{2}}\) can be computed. There are fewer spaxels in this relation since we remove the negative values.
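A minimal sketch of this step is given below. The helium factor of 1.36 is our assumption about how \(\Sigma_{\rm gas}\) is defined, not a value stated by the authors; as noted above, spaxels with negative \(\Sigma_{\rm H_{2}}\) are discarded.

```python
import numpy as np

def sigma_h2(sigma_gas, sigma_HI, helium=1.36):
    """Molecular gas surface density from the dust-based total gas map."""
    h2 = sigma_gas / helium - sigma_HI   # remove the atomic contribution
    return np.where(h2 > 0, h2, np.nan)  # drop unphysical negative values
```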
Previous studies pointed out that some relations may be more fundamental than others. The origin of this is still a matter of debate, since different works support different scenarios. For example, Kennicutt (1998) found a stronger correlation for the \(\Sigma_{\rm HI}-\Sigma_{\rm SFR}\) relation than for the \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relation, while Wong & Blitz (2002) found a stronger correlation for the \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relation than for the \(\Sigma_{\rm HI}-\Sigma_{\rm SFR}\) relation. However, studies such as Schuster et al. (2007) and Crosthwaite & Turner (2007) found that the \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) relation had a tighter correlation than the \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relation.
For the particular case of NGC 1569, our \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relation (panel C of Fig. 10) is only marginally tighter (\(\rho\) = 0.95 versus \(\rho\) = 0.94) than our \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) relation (panel A of Fig. 10). For our \(\Sigma_{\rm HI}-\Sigma_{\rm SFR}\) relation (panel B of Fig. 10), we cannot conclude that a correlation exists due to the high dispersion of the data.
Leroy et al. (2008) analyzed the results of several galaxy surveys in combination with several theoretical models, called star formation laws, _i.e._, different scenarios in which star formation occurs. Their work indicates, for example, a clear difference between dwarf and spiral galaxies: dwarf galaxies form stars at their average rate, while spiral galaxies form stars at about half of their average rate. This work supports the idea that star formation could depend only on the presence of molecular hydrogen.
Bigiel et al. (2008) show that the range of possible power-law indices for the KS relation is N = 1.1-2.7, while the range for the molecular KS relation is N = 0.8-1.1. The molecular KS relation reported by Bigiel et al. (2008) is also tighter than the KS relation.
It is noteworthy that our KS and molecular KS slopes (power-law indices) differ from those of Bigiel et al. (2008) and from the molecular KS relation for 80 nearby galaxies of Sun et al. (2023), who obtained the coefficient N under different assumptions and found a linear molecular KS relation. However, Shetty et al. (2014) (and references therein) discuss the origin of a sub-linear \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relation with observational support. They explain that a possible origin of such a relation could be the presence of an important amount of diffuse molecular gas that is not forming stars. Casasola et al. (2015) also found sub-linear \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relations, although for nearby active galactic nuclei. Compared with those results, our molecular KS relation suggests a deficit of molecular gas with respect to Bigiel et al. (2008). Thus, the origin of the slope in our relation is not diffuse molecular gas but a possible gas expulsion by stellar feedback (see below).
Since the slopes of the KS relations are associated with the Star Formation Efficiency (SFE), this concept becomes relevant when interpreting the physical conditions of a galaxy. Leroy et al. (2008) showed that the SFE is more useful than \(\Sigma_{\rm SFR}\) alone to identify where the conditions favor star formation. Ellison et al. (2020) suggested that the \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relation could be primarily driven by changes in the SFE, with a secondary, weaker dependence on the gas fraction.
In conclusion, the SRs for NGC 1569 displayed in Fig. 10 are driven by \(\Sigma_{\rm H_{2}}\), supported by the idea that stars form directly from molecular clouds; it thus seems logical that \(\Sigma_{\rm H_{2}}\) and \(\Sigma_{\rm SFR}\) are more directly related than \(\Sigma_{\rm HI}\) or the total gas mass (\(\Sigma_{\rm gas}\)) is with \(\Sigma_{\rm SFR}\). This idea is also supported by Leroy et al. (2008), who mentioned that under certain SF laws, indirect evidence for abundant M\({}_{\rm H_{2}}\) in the central parts of dwarf galaxies can be inferred.
### Possible origin of inflows
In this section, we compare our data with the general trend found by Bigiel et al. (2008) and propose the possible presence of inflows in NGC 1569. Since this result is not statistically significant, we do not include it among our main results; however, the methodology will be applied in future analyses.
Our KS relation (panel A of Fig. 10 and also panel C of Fig. 7) has a very different slope with respect to Bigiel et al. (2008). Following the magenta line (the fit of Bigiel et al. 2008), we divide this plot into three zones: the first, with log(\(\Sigma_{\rm gas}\)) < 1.5, is called the zone of outflows (see next subsection); the second, with 1.5 < log(\(\Sigma_{\rm gas}\)) < 2, is called the normal zone; and the third, with log(\(\Sigma_{\rm gas}\)) > 2, is called the inflow zone.
Now, we focus on the inflow zone and particularly on the points that lie to the right of the magenta fit. We define the offset \(\Delta\Sigma_{\rm gas}\) as the difference between \(\Sigma_{\rm gas,obs}\) (the observed \(\Sigma_{\rm gas}\) value) and \(\Sigma_{\rm gas,KS~{}B08}\) (the value of \(\Sigma_{\rm gas}\) on the Bigiel et al. (2008) fit at the same \(\Sigma_{\rm SFR}\)). In other words, \(\Delta\Sigma_{\rm gas}\) is the distance between each data point and the Bigiel et al. (2008) fit along the \(\Sigma_{\rm gas}\) axis.
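A sketch of this offset is given below. The Bigiel et al. (2008) fit is written as \(\log\Sigma_{\rm SFR}=N\log\Sigma_{\rm gas}+A\); the coefficients used here are placeholders for illustration, not the published values.

```python
def delta_sigma_gas(log_sgas_obs, log_ssfr_obs, N=1.0, A=-2.1):
    """Offset (in dex) from the B08 fit, measured along the Sigma_gas axis."""
    log_sgas_ks = (log_ssfr_obs - A) / N  # fit value at the same Sigma_SFR
    return log_sgas_obs - log_sgas_ks
```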
The offset \(\Delta\Sigma_{\rm gas}\) is plotted versus metallicity in Fig. 11, where we show that the larger the gas excess, the lower the metallicity. This is probably because the points that lie to the right of the Bigiel et al. (2008) fit have a slight gas excess due to the presence of inflows, consistent with the idea that pristine gas dilutes the metallicity. These points correspond only to the central part of the galaxy.
The \(\Sigma_{\rm HI}-\Sigma_{\rm SFR}\) relation shows evidence of an excess of neutral gas, since the HI gas surface density values are larger than in other galaxies, where \(\Sigma_{\rm HI}\) does not exceed 9 M\({}_{\sun}\) pc\({}^{-2}\) (Bigiel et al., 2008). In Fig. 10 (panel B), we show that the majority of the \(\Sigma_{\rm HI}\) values are larger than 10 M\({}_{\sun}\) pc\({}^{-2}\). These large \(\Sigma_{\rm HI}\) values are still present when using higher spatial resolution elements of 700 pc (see Appendix D).
Therefore, the data suggest that the possible excess of gas seen in the KS relation for log(\(\Sigma_{\rm gas}\)) > 2 (in the central part of the galaxy) could be due to an inflow of neutral gas.
We consider this type of analysis important for examining the interplay between inflows and outflows in the same galaxy. Although the metallicity variation (0.1 dex) due to the presence of inflows is not statistically significant and is within the errors, the method allows us to test changes in metallicity related to inflows. We also tested this methodology using different metallicity calibrators (O3N2, N2; Pettini & Pagel, 2004; Marino et al., 2013) and found that the shape of the \(\Delta\Sigma_{\rm gas}\) versus metallicity relationship (see Fig. 11) is preserved for those calibrations. We will apply this idea in future analyses with a larger sample and appropriate fits for comparison.
### Possible origin of outflows: a self regulated feedback model
As mentioned previously, another interesting fact about NGC 1569 is its low gas fraction given its metallicity. The global gas fraction that we report is \(\mu\) = 0.34, which means that the total mass of the galaxy is dominated by the stellar mass. Typical gas fraction (\(\mu\)) values can be found in Pilyugin et al. (2004) and Lara-Lopez et al. (2019) for galaxies of different types. For spiral galaxies, the mean global gas fractions are in the range \(\mu\) = 0.50 - 0.85, with a mean 12+log(O/H) in the range 8.4 - 8.7. A particular case is the sample of irregular galaxies in Table 7 of Pilyugin et al. (2004), for which they report \(\mu\) = 0.20 - 0.83 with 12+log(O/H) between 7.22 and 8.35 dex. When we compare galaxies with almost the same M\({}_{\rm B}\) as NGC 1569 (M\({}_{\rm B}\) \(\sim\) -15.7), we find galaxies with \(\mu\) from 0.22 to 0.66. The hypothetical 1 - 1.5 dex of extra gas mass that we mentioned previously could drastically enhance the gas fraction.
We propose two scenarios to explain the low gas fraction of NGC 1569. The first is related to the possible interaction of NGC 1569 with another galaxy. As mentioned in Johnson (2013), NGC 1569 is in a system with three other galaxies and possibly experienced a recent interaction with the nearest companion, UGCA 92. Geha et al. (2006) mention that internal galaxy processes reduce the gas fraction of galaxies, but the presence of a massive or luminous galaxy within 0.5 Mpc could completely remove the gas from a dwarf galaxy. Precisely, one of the companion galaxies of NGC 1569 is a relatively luminous galaxy (IC 342; Johnson, 2013), which could
imply that the interaction with IC 342 is more relevant than with UGCA 92.
The removal of gas by an external massive companion should also affect the neutral gas. However, we report an excess of neutral gas, so the scenario of gas removal by a massive companion is not supported by our analysis. We propose an alternative scenario: a simple model of stellar feedback can explain the deficit of gas. While stellar feedback is due to different phenomena (_e.g.,_ stellar radiation, stellar winds, supernova explosions) occurring from stellar to galactic scales, the spatially resolved star formation self-regulator model can be used to parametrize such a complex process. This simple model has been used in previous studies (_e.g.,_ Zhu et al., 2017; Barrera-Ballesteros et al., 2018; Zaragoza-Cardiel et al., 2020) and can be described by:
\[\dot{\Sigma}_{\rm out}=\eta\cdot\Sigma_{\rm SFR}, \tag{13}\]
where \(\eta\) is the so-called mass loading factor, \(\dot{\Sigma}_{\rm out}\) is the gas outflow rate surface density due to stellar feedback and \(\Sigma_{\rm SFR}\) is the SFR surface density. Therefore, the mass loading factor \(\eta\) is the mass surface density outflow rate per unit SFR surface density driven by stellar feedback.
This expression can be rewritten in terms of the gas outflow surface density, \(\Delta\Sigma_{\rm out}\):
\[\Delta\Sigma_{\rm out}=\eta\cdot\Delta t\cdot\Sigma_{\rm SFR} \tag{14}\]
Cosmological and hydrodynamical simulations predict a relation between \(\eta\) and the total stellar mass (Muratov et al., 2015; Hayward and Hopkins, 2017) or the local gas surface density (Li et al., 2017). Observationally, the value of \(\eta\) is uncertain, but recent works have made progress in quantifying it as a function of galactocentric radius
Figure 11: The offset \(\Delta\Sigma_{\rm gas}\) vs. metallicity. The offset is defined as the difference between \(\Sigma_{\rm gas,obs}\) (the observed \(\Sigma_{\rm gas}\) value) and \(\Sigma_{\rm gas,KS~{}B08}\) (the value on the Bigiel et al. 2008 fit).
Figure 10: The \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) (panel A), the \(\Sigma_{\rm HI}-\Sigma_{\rm SFR}\) (panel B) and the \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) (panel C) SRs. The fits to the data are shown as black solid lines. The purple dashed lines correspond to the fits of Bigiel et al. (2008) for their respective relations. The horizontal dashed red line is the lower limit for a valid SFR. The circles with black contours correspond to the spaxels in the tail of NGC 1569. The data are color-coded by the property shown in the colorbar of each panel.
(Kruijssen et al., 2019), stellar mass surface density and total stellar mass (Roberts-Borsani et al., 2020; Zaragoza-Cardiel et al., 2020), or the local escape velocity (Barrera-Ballesteros et al., 2018). Since we have estimations of \(\Sigma_{*}\), we can estimate \(\eta\) using the relation between \(\Sigma_{*}\) and \(\eta\) from Zaragoza-Cardiel et al. (2020):
\[\log(\eta+1)=(-0.32\pm 0.03)\log(\Sigma_{*})+(3.2\pm 0.3), \tag{15}\]
which is in agreement with models and theory (Muratov et al., 2015; Li et al., 2017; Hayward and Hopkins, 2017).
The median value of the stellar mass surface density is \(\log(\Sigma_{*}/\rm M_{\odot}\,kpc^{-2})=7.9\), which gives \(\eta=3.7\) according to Eq. 15. We assume a value of \(\Delta t=4.3\) Myr, in concordance with the characteristic H\(\alpha\) time scale (Haydon et al., 2020), and \(\Sigma_{\rm SFR}=0.15\) M\({}_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\) (Kennicutt, 1998). We estimate a gas outflow of \(\Delta\Sigma_{\rm out}=2.4\) M\({}_{\odot}\) pc\({}^{-2}\).
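These numbers follow directly from Eqs. 14 and 15; a minimal check, using only the quantities quoted above:

```python
# Reproduce eta (Eq. 15) and Delta Sigma_out (Eq. 14).
log_sigma_star = 7.9                               # median log(Sigma_*) [Msun kpc^-2]
eta = 10 ** (-0.32 * log_sigma_star + 3.2) - 1.0   # Eq. 15 -> ~3.7

dt = 4.3e6          # characteristic H-alpha time scale [yr]
sigma_sfr = 0.15    # SFR surface density [Msun yr^-1 kpc^-2]

delta_sigma_out = eta * dt * sigma_sfr             # Eq. 14 [Msun kpc^-2]
print(f"eta = {eta:.1f}")                                          # 3.7
print(f"Delta Sigma_out = {delta_sigma_out / 1e6:.1f} Msun/pc^2")  # 2.4
```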
We define \(\Delta\Sigma_{\rm gas}\) as the difference between the global gas surface density of the global KS relation and that observed in NGC 1569 (our corrected estimation), shown as a black dashed line and as a red symbol in Fig. 9 (panel C), respectively. We find \(\Delta\Sigma_{\rm gas}=3\) M\({}_{\odot}\) pc\({}^{-2}\). Therefore, stellar feedback due to the recent star formation can explain the gas deficit that we propose.
We plot the spaxels in the zone of outflows, where log(\(\Sigma_{\rm gas}\)) < 1.5, in Fig. 12. These spaxels lie in the outer parts of the galaxy, where filaments associated with outflows are usually seen (Johnson et al., 2012).
Outflows can also be analyzed via X-ray emission and stellar and gas kinematics. For the particular case of NGC 1569, an important amount of X-ray emission is associated with a diffuse halo and metal-enriched outflows (Heckman et al., 1995; Martin et al., 2002). In some scenarios, the X-ray emission can be linked to the kinematics of a galaxy. Indeed, for NGC 1569, velocity components relative to the systemic velocity (\(v_{\rm sys}\)) reveal the presence of superbubbles or expanding shells of ionized gas (Heckman et al., 1995; Martin, 1998). In particular, HI gas kinematics has allowed the detection of an unusually high mean HI velocity dispersion, possibly due to outflows (Stil and Israel, 2002). Other studies (_e.g._, Johnson et al., 2012) have found evidence of an outflow with a potential expanding shell near the super star cluster A of NGC 1569, supported by both the gas kinematics and the high velocity dispersion of the stellar kinematics. Finally, recent studies (_e.g._, Sanchez-Cruces et al., 2022) analyzed physical phenomena in NGC 1569 (_e.g._, shocks and supernova remnants) using high spectral resolution data.
## 6 Conclusions
We compute a set of scaling relations (SRs) and analyze \(\sim\)100 spaxels of the dwarf galaxy NGC 1569 using the Metal-THINGS, THINGS and CARMA surveys in combination with DustPedia archival data. Emission-line fluxes were derived using STARLIGHT and Gaussian line-profile fittings. We estimated spatially resolved physical properties such as the star formation rate surface density (\(\Sigma_{\rm SFR}\)), the oxygen gas metallicity (Z) and the total gas mass surface density (\(\Sigma_{\rm gas}\)), the latter computed by using the broadband spectral energy distribution modelling code CIGALE to obtain the dust mass surface densities, which were converted to total gas surface densities by adopting a Dust-to-Gas Ratio (DGR). The DGR was estimated simultaneously with the CO luminosity-to-molecular gas mass conversion factor (\(\alpha_{\rm CO}\)) using the method presented in Leroy et al. (2011) and Sandstrom et al. (2013). With \(\Sigma_{\rm gas}\), it was possible to derive other properties such as the baryonic mass surface density (\(\Sigma_{\rm bar}\)), the local gas fraction (\(\mu\)), the local effective oxygen yields (Y\({}_{\rm eff}\)), the local star formation efficiency (SFE) and the local depletion time (t\({}_{\rm dep}\)).
For all the mentioned properties, we obtain spaxel maps at 12" resolution (\(\sim\)180 pc). By comparing these properties with each other, we derive different local SRs.
Our study can be divided into two main parts. In the first, we analyze the classic \(\Sigma_{*}-\Sigma_{\rm SFR}\), \(\Sigma_{*}-Z\) and \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) relations. We recover the known correlations (\(\rho>0.92\)) and scatter (up to 0.25 dex), except for the \(\Sigma_{*}-Z\).
In the second, we derive the \(\mu\) - Z, \(\Sigma_{\rm bar}\)- Y\({}_{\rm eff}\) and \(\Sigma_{*}\), \(\mu\), \(\Sigma_{\rm bar}\) - SFE relations. For all these SRs (except the \(\mu\) - Z), we find low correlations (\(\rho\leq 0.36\)) and a relatively low dispersion (up to 0.22 dex).
We discuss the global and local properties of NGC 1569: the global ones suggest the presence of inflows, while the local ones suggest the presence of outflows. Thus, we propose two methodologies to explore both scenarios. Our local estimations of the oxygen yields and gas fractions reveal a deficiency of gas mass, possibly ejected by outflows. In this respect, one of our methodologies is based on a self-regulated feedback model, which shows that stellar feedback plays a dominant role in driving outflows.
Finally, given our multiwavelength data, we compute the atomic and molecular Kennicutt-Schmidt (KS) relations in order to discuss their slopes and SFRs in comparison with our \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) relation.
Our main results are summarised as follows:
1. We recover the known classic shapes of the \(\Sigma_{*}-\Sigma_{\rm SFR}\), \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) and \(\Sigma_{\rm H_{2}}-\Sigma_{\rm SFR}\) relations, all with a low scatter of up to 0.25 dex and a high correlation of \(\rho>0.92\).
2. The SFRs in NGC 1569 are \(\sim\)1.26 dex higher than the SFRs in MANGA galaxies. The slope of our \(\Sigma_{*}-\Sigma_{\rm SFR}\) relation is \(\sim\)1.6 times steeper than the slope for MANGA galaxies.
3. Our fittings of the \(\Sigma_{\rm gas}-\Sigma_{\rm SFR}\) (m = 0.96) and the \(\Sigma_{\rm H_{2}}-\)
Figure 12: The gas mass surface density (\(\Sigma_{\rm gas}\)) map for values of log(\(\Sigma_{\rm gas}\)) < 1.5 M\({}_{\odot}\) pc\({}^{-2}\), i.e., the spaxels in the zone of outflows. The black contours show the H\(\alpha\) emission. The blue crosses correspond to supernova remnants in NGC 1569 (coordinates taken from Sánchez-Cruces et al. 2022).
\(\Sigma_{\rm SFR}\) (m = 0.58) relations are flatter than those reported in previous works (_e.g._, Bigiel et al. 2008).
4. The shape of our \(\Sigma_{*}-\) Z relation is not similar to the global one; rather, we obtain a flat relation, mainly due to the nearly constant metallicity of 12+log(O/H) \(\sim\) 8.12. This, in combination with the flat gradient, differs from what is observed in spiral galaxies. In fact, flat metallicity gradients are rather common in young low-mass galaxies that are still assembling their mass.
5. We report log(DGR) = -3.08 and log(\(\alpha_{\rm CO}\)) = 1.6 (with \(\alpha_{\rm CO}\) in M\({}_{\odot}\) pc\({}^{-2}\) (K km s\({}^{-1}\))\({}^{-1}\)) for NGC 1569, establishing new estimations of the gas mass, log(\(\Sigma_{\rm gas}\)) = 1.49 M\({}_{\odot}\) pc\({}^{-2}\), and the molecular gas, log(\(\Sigma_{\rm H_{2}}\)) = 1.05 M\({}_{\odot}\) pc\({}^{-2}\). These are 0.16 and 0.95 dex higher than the values reported in Kennicutt (1998), respectively.
6. We show in the \(\Sigma_{*}\)- SFE and \(\Sigma_{\rm bar}\)- SFE relations that the tail of NGC 1569 could play an important role in establishing the slope of the fittings.
7. Although the global SRs show possible evidence of inflows, our local relations do not confirm this scenario. On the contrary, they support the presence of outflows, because NGC 1569 has a lower oxygen yield (log(Y\({}_{\rm eff}\)) = -2.73) and a lower gas fraction (\(\mu\) = 0.34) compared with galaxies of similar masses. As pointed out in Dalcanton (2007), outflows can be the mechanism that really reduces the oxygen yield. Also, as mentioned in Tremonti et al. (2004), an inflow could produce an enhancement in the gas fraction, but it is not enough to explain low values of the oxygen yields.
8. The position of NGC 1569 in the global KS relation, the shape of the local relation, and the values of \(\mu\) and Y\({}_{\rm eff}\) could reveal a deficiency of gas mass, possibly ejected by outflows. We show how the simple spatially resolved star formation self-regulator model can explain the absence of the gas mass lost to stellar feedback. We conclude that stellar feedback plays a dominant role in driving outflows and thus explains the low gas fraction of NGC 1569.
The origin of SRs goes down to the star formation process itself and its wide range of star formation efficiencies among galaxies. However, such analyses have never been very precise, because star formation spans a wide range of scales, from cluster-forming cores to molecular clouds to the whole interstellar medium. Moreover, the combination with other physical parameters, for instance the dust distribution as a tracer of gas mass, is important for understanding the formation and evolution of stars from galaxy to galaxy, or the conversion of gas into stars, which controls not only the star formation history within galaxies but also their chemical enrichment.
We highlight the importance of dwarf galaxies in the cosmological context. Indeed, dwarf galaxies are thought to have properties similar to those of galaxies at high redshift; hence, the analysis of local ones could uncover insights into the evolution and physical processes of the first galaxies. With the advent of the James Webb Space Telescope (JWST), a new era of IFU observations of dwarf galaxies is possible, not only for local galaxies but also for high-redshift ones.
In a future study we will contrast our results with simulated low mass galaxies to explore the role of inflows, outflows and the possible changes in global SRs.
## Acknowledgments
This research was supported by the International Space Science Institute (ISSI) in Bern, through ISSI International Team project No. 505 (The Metal-THINGS Survey of Nearby Galaxies). LEG and MV gratefully acknowledge the Consejo Nacional de Ciencia y Tecnologia del Estado de Puebla (CONCYTEP) for its generous financial support of the telescope observations during this project. MALL acknowledges support from the Spanish grant PID2021-123417OB-I00, and the Ramon y Cajal program funded by the Spanish Government (RYC2020-029354-I). MR acknowledges financial support from the CONACYT project CF-86-367. LSP acknowledges support from the Research Council of Lithuania (LMTLT) grant P-LU-PAR-23-28. MEDR acknowledges support from PICT-2021-GRF-TI-00290 of ANPCyT (Argentina). SPO acknowledges support from the Comunidad de Madrid Atraccion de Talento program via grant 2022-TI/ITC-23797.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author. All the data cubes, images and spectra will be made available on the Metal-THINGS website (in preparation).
|
2309.13103 | OpportunityFinder: A Framework for Automated Causal Inference | We introduce OpportunityFinder, a code-less framework for performing a
variety of causal inference studies with panel data for non-expert users. In
its current state, OpportunityFinder only requires users to provide raw
observational data and a configuration file. A pipeline is then triggered that
inspects/processes data, chooses the suitable algorithm(s) to execute the
causal study. It returns the causal impact of the treatment on the configured
outcome, together with sensitivity and robustness results. Causal inference is
widely studied and used to estimate the downstream impact of individual's
interactions with products and features. It is common that these causal studies
are performed by scientists and/or economists periodically. Business
stakeholders are often bottle-necked on scientist or economist bandwidth to
conduct causal studies. We offer OpportunityFinder as a solution for commonly
performed causal studies with four key features: (1) easy to use for both
Business Analysts and Scientists, (2) abstraction of multiple algorithms under
a single I/O interface, (3) support for causal impact analysis under binary
treatment with panel data and (4) dynamic selection of algorithm based on scale
of data. | Huy Nguyen, Prince Grover, Devashish Khatwani | 2023-09-22T17:35:03Z | http://arxiv.org/abs/2309.13103v1 | # OpportunityFinder: A Framework for Automated Causal Inference
###### Abstract.
We introduce OpportunityFinder, a code-less framework for performing a variety of causal inference studies with panel data for non-expert users. In its current state, OpportunityFinder only requires users to provide raw observational data and a configuration file. A pipeline is then triggered that inspects/processes data, chooses the suitable algorithm(s) to execute the causal study. It returns the causal impact of the treatment on the configured outcome, together with sensitivity and robustness results. Causal inference is widely studied and used to estimate the downstream impact of individual's interactions with products and features. It is common that these causal studies are performed by scientists and/or economists periodically. Business stakeholders are often bottle-necked on scientist or economist bandwidth to conduct causal studies. We offer OpportunityFinder as a solution for commonly performed causal studies with four key features: (1) easy to use for both Business Analysts and Scientists, (2) abstraction of multiple algorithms under a single I/O interface, (3) support for causal impact analysis under binary treatment with panel data and (4) dynamic selection of algorithm based on scale of data.
causal inference, double machine learning, neural networks, panel data
Footnote †: Both authors contributed equally to this research.
|
2309.15214 | Residual Corrective Diffusion Modeling for Km-scale Atmospheric
Downscaling | The state of the art for physical hazard prediction from weather and climate
requires expensive km-scale numerical simulations driven by coarser resolution
global inputs. Here, a generative diffusion architecture is explored for
downscaling such global inputs to km-scale, as a cost-effective machine
learning alternative. The model is trained to predict 2km data from a regional
weather model over Taiwan, conditioned on a 25km global reanalysis. To address
the large resolution ratio, different physics involved at different scales and
prediction of channels beyond those in the input data, we employ a two-step
approach where a UNet predicts the mean and a corrector diffusion (CorrDiff)
model predicts the residual. CorrDiff exhibits encouraging skill in bulk MAE
and CRPS scores. The predicted spectra and distributions from CorrDiff
faithfully recover important power law relationships in the target data. Case
studies of coherent weather phenomena show that CorrDiff can help sharpen wind
and temperature gradients that co-locate with intense rainfall in cold front,
and can help intensify typhoons and synthesize rain band structures.
Calibration of model uncertainty remains challenging. The prospect of unifying
methods like CorrDiff with coarser resolution global weather models implies a
potential for global-to-regional multi-scale machine learning simulation. | Morteza Mardani, Noah Brenowitz, Yair Cohen, Jaideep Pathak, Chieh-Yu Chen, Cheng-Chin Liu, Arash Vahdat, Mohammad Amin Nabian, Tao Ge, Akshay Subramaniam, Karthik Kashinath, Jan Kautz, Mike Pritchard | 2023-09-24T19:57:22Z | http://arxiv.org/abs/2309.15214v4 | # Generative Residual Diffusion Modeling for Km-scale Atmospheric Downscaling
###### Abstract
The state of the art for physical hazard prediction from weather and climate requires expensive km-scale numerical simulations driven by coarser resolution global inputs. Here, a km-scale downscaling diffusion model is presented as a cost-effective alternative. The model is trained on data from a regional high-resolution weather model over Taiwan, and conditioned on ERA5 reanalysis data. To address the downscaling uncertainties, the large resolution ratio (25km to 2km), the different physics involved at different scales, and the need to predict channels that are not in the input data, we employ a two-step approach (_ResDiff_) where a (UNet) regression predicts the mean in the first step and a diffusion model predicts the residual in the second step. _ResDiff_ exhibits encouraging skill in bulk RMSE and CRPS scores. The predicted spectra and distributions from ResDiff faithfully recover important power law relationships regulating damaging wind and rain extremes. Case studies of coherent weather phenomena reveal appropriate multivariate relationships reminiscent of learnt physics. This includes the sharp wind and temperature variations that co-locate with intense rainfall in a cold front, and the extreme winds and rainfall bands that surround the eyewall of typhoons. Some evidence of simultaneous bias correction is found. A first attempt at downscaling directly from an operational global forecast model successfully retains many of these benefits. The implication is that a new era of fully end-to-end, global-to-regional machine learning weather prediction is likely near at hand.
## 1 Introduction
Coarse-resolution (25-km) global weather prediction is undergoing a machine learning renaissance thanks to autoregressive machine learning models that have successfully exploited readily available reanalysis data with global coverage at these spatial scales [5, 32, 9, 6, 24, 10].
However, users of weather data generally need higher-resolution predictions. Kilometer-scale atmospheric models better capture extremes, such as damaging winds and rainfall, as well as important local effects of mountains, agriculture and urban land use, which regulate the local weather and its impact at the scales of physical infrastructure [16].
It is thus natural to wonder if one can apply these ML models at km-scale resolutions. If tried globally, this poses a significant challenge. The linear increase in the resolution of training data typically results in a superlinear increase in the training cost of autoregressive ML surrogate models. Moreover, global km-scale simulations are in their infancy [49, 21], with data that tend to cover short periods of time, do not assimilate km-scale observations, and are not as heavily tuned, so they may have worse systematic biases than coarse-resolution or established regional simulations. Such massive datasets are also difficult to transfer between data centers and are frequently not produced on machines attached to significant accelerated computing resources like GPUs.
In contrast, for regional simulation, scaling ML to km-scales is attractive. High quality training data is abundant as many national weather agencies couple km-scale numerical weather models in a limited domain to coarser resolution global models [41]. Since these predictions are augmented by data assimilation from ground-based precipitation radar and other sensors, good estimates of regional km-scale atmospheric states are possible - a process called dynamical downscaling.
However, dynamical downscaling is a computationally expensive approach that limits the number of ensemble members that can be used to quantify uncertainties [31]. An alternative approach is to learn a computationally inexpensive statistical downscaling from these dynamical downscaling models and observations, thus allowing larger ensembles and better uncertainty quantification [53]. Statistical downscaling is considered less reliable, especially for extreme events. In this context, ML downscaling enters as an advanced (non linear) form of statistical downscaling that exhibits improved predictions.
Various ML methods have been previously used for atmospheric downscaling [44, 13, 50, 31, 52]. Convolutional Neural Networks (CNNs), which reduce the input dimensionality and identify crucial features, have shown promise in globally downscaling climate (100km) data to weather scales (25km) [29, 40, 2, 38]. However, their deterministic nature requires custom approaches to produce probabilistic outputs (e.g., ensemble inference [40] or predicting the parameters of an assumed distribution [2]).
The stochastic nature of atmospheric physics at km-scale [45] makes the downscaling inherently probabilistic. It is thus natural to apply generative models for downscaling at these scales. Generative Adversarial Networks (GANs) have been used in downscaling and forecasting precipitation at km-scale in various regions [26, 36, 17, 39, 15, 52], see the latter for a good review of these works. Training GANs however poses several challenges including mode collapse, training instabilities, and difficulties in capturing long tails of distributions [54, 23, 42].
Diffusion models have been recently introduced as a viable alternative to GANs due to their sample diversity and training stability [20, 11]. They are based on principles of stochastic differential equations (SDE) and use a tandem of noising and denoising processes to learn the data distribution. The former process gradually adds noise until the data turns to noise. The latter process then gradually removes noise to recover the data. They have demonstrated a remarkable ability to generate fine-scale details of samples for both unconditional and conditional image generation tasks. Diffusion models proved successful for downscaling of a single variable to km-scale. [1] produced the full rain density in the UK from vorticity as an input, thus showing channel synthesis abilities - beyond super-resolution, predicting variables that are not in the input data. [18] used a diffusion model for downscaling solar irradiance in Hawaii with a 1 day lead time, thus demonstrating the ability of this model to account for lead time biases. Moreover, diffusion models have been used for probabilistic weather forecasting and nowcasting [25, 27].
The success in ML downscaling of a single variable from global numerical weather models to km-scale motivates us to pursue an ambitious goal: Can we downscale several variables in concert - including variables that are not in the input? If so, this paves the way towards end-to-end ML downscaling systems that predict regional high-resolution weather as a postprocessing of 25-km scale ML predictions. We will show a proof-of-concept of the feasibility of such systems.
To that end, we develop a diffusion model that performs multi-variable downscaling with channel synthesis and test its downscaling abilities from a numerical global weather model at lead times of up to 3 days. We train the model on high-resolution data for the region of Taiwan. This target data is produced using a radar-assimilating configuration of the Weather Research and Forecasting (WRF) model [34] and was provided to us by the Central Weather Administration of Taiwan (CWA).
In the case of km-scale downscaling, direct application of diffusion models for learning the distribution of high-resolution states is challenging because coarse-resolution data produced from a global model (the input) and high-resolution data produced from a regional km-scale model (the target) approximate the underlying governing equations in different ways. This results in a significant distribution shift between the input and target data. Such distribution shift impedes conditioning of diffusion models, and thus necessitates large noise levels for the diffusion noising process, and consequently many steps for the backward process to remove noise, ultimately resulting in poor generation fidelity.
To address these challenges, we demonstrate a two-stage approach for downscaling (see e.g. [36]). In the first stage, a UNet-regression predicts the conditional mean of the high-resolution state based on the coarse-resolution input. In the second stage, a denoising diffusion model is trained on the residuals (difference between the truth and the regression, both at high-resolution). Since the residuals have significantly smaller
values and are nearly centered compared with the high-resolution forecasts, the denoising diffusion process is easier to learn. This approach greatly facilitates training and sampling of diffusion models, enabling more accurate representation of regional details. In this work, for the diffusion stage, we adopt Elucidated Diffusion Models (EDM) [22], for their high generation quality in unconditional generation tasks and their adaptive tuning inspired by SDEs and physics.
In the next section we outline details of the model. The key contributions of this paper are:
1. A novel physics-inspired two-step approach (ResDiff) to learn mappings between low- and high-resolution weather data with high fidelity, including for extreme weather events that occur infrequently in the training data, like typhoons.
2. ResDiff provides realistic measures of stochasticity and uncertainty, in terms of the Continuous Ranked Probability Score (CRPS) and by comparing spectra and distributions.
3. ResDiff learns the physics of coherent weather phenomena remarkably well, as measured by the corrections of the discontinuity across a frontal system and of the axisymmetric spatial structure and intensity of typhoon winds.
4. ResDiff is sample-efficient, learning effectively from just 4 years of data, which we hypothesize is due to the two-step approach to learning the mean and residuals of complex distribution shifts between the low- and high-resolution data.
5. ResDiff is at least 60 times faster and 100 times more energy efficient than the WRF configuration used by CWA, providing an attractive alternative (or complement) to dynamical downscaling techniques.
Figure 1: The workflow for training and sampling ResDiff for generative downscaling. Top: a coarse-resolution global forecast at 25km scale is used to predict the mean \(\mathbf{\mu}\) first, and is then added with the residual predicted using EDM denoising diffusion \(\mathbf{r}\) to generate the high-resolution 2km-scale regional forecast. Bottom right: diffusion model is conditioned with the coarse-resolution input to generate the residual \(\mathbf{r}\) after a few denoising steps. Bottom left: the score function for diffusion is learned based on the UNet architecture.
## 2 Generative downscaling: residual diffusion models
Consider a specific region on Earth, mapped onto a two-dimensional grid. Within this spatial representation, we can extract a low-resolution meteorological forecast using a global weather forecasting model such as FourCastNet [32] or the Global Forecast System (GFS) [30]. This low-resolution forecast, e.g., of 25km resolution, is denoted as \(\mathbf{y}\in\mathbb{R}^{c_{\mathrm{in}}\times m\times m}\). In this context, \(c_{\mathrm{in}}\) represents various global forecast variables, which include, e.g., surface wind and temperature.
The high-resolution data are denoted as \(\mathbf{x}\in\mathbb{R}^{c_{\mathrm{out}}\times n\times n}\). Note that \(\mathbf{x}\) and \(\mathbf{y}\) are solutions to different sets of PDEs, with significantly different resolutions (\(n\gg m\)), each influenced by distinct scale-dependent physics, e.g., radiation and cloud processes. This leads to a significant distribution shift between the low- and high-resolution data. Downscaling to km-scale comes with large uncertainties because the dynamics of the atmosphere at km-scale is highly stochastic and exhibits large spatial and temporal variability. Thus, the goal of probabilistic downscaling is to mimic the probability density \(p(\mathbf{x}|\mathbf{y})\).
To learn the conditional density \(p(\mathbf{x}|\mathbf{y})\) for generation, we employ diffusion models due to their excellent distribution mode coverage and stable training [11]. Diffusion models solve stochastic differential equations (SDEs) through the concept of score matching [20, 47, 22, 46, 4]. This requires a forward process and a backward process that work in tandem. In the forward process, noise is gradually added to the data until the signal becomes indistinguishable from noise. This step allows the diffusion model to explore and capture the intricate patterns and dependencies present in the data. By incrementally introducing noise, the model gains a deeper understanding of the underlying distribution. The backward process then involves denoising the samples using a dedicated neural network to eliminate the noise. Through this sequential denoising process, the model iteratively refines the samples, bringing them closer to the true data distribution. The denoising neural network plays a critical role in this convergence, providing the necessary guidance to steer the samples towards accurate representations of the original data.
Naively applying conditional diffusion models to learn \(p(\mathbf{x}|\mathbf{y})\) was unsuccessful for downscaling because of the aforementioned large distribution shift and the large uncertainty of variables such as radar reflectivity at these scales. As a result, the diffusion forward process requires substantial noise levels to ensure that the signal ultimately transforms into pure noise. Consequently, the backward process, which gradually removes noise and generates the clean signal, demands a large number of steps. This impedes learning and leads to poor sample fidelity.
To sidestep these challenges, we propose to decompose the generation into two stages: The first stage predicts the conditional mean using (UNet) regression, and the second stage learns a generative diffusion model on the residuals, inspired by the common practice in fluid dynamics to decompose a flow into its mean and perturbation [33]. Following this ansatz, we decompose the fine-resolution forecast as
\[\mathbf{x}=\underbrace{\mathbb{E}[\mathbf{x}|\mathbf{y}]}_{:=\boldsymbol{\mu }(\text{regression})}+\underbrace{(\mathbf{x}-\mathbb{E}[\mathbf{x}|\mathbf{y} ])}_{:=\mathbf{r}(\text{generation})} \tag{1}\]
In this decomposition the regression mean (\(\boldsymbol{\mu}\)) matches the first moment of the data distribution. As a result, the residual (\(\boldsymbol{r}\)) becomes approximately zero-mean, which leads to a significantly smaller variance for the residual distribution, namely \(\text{var}(\boldsymbol{r})\ll\text{var}(\boldsymbol{x})\), when the forecast has large variations (see section 7.3 for the proof). Hence, learning the distribution \(p(\boldsymbol{r}|\boldsymbol{y})\) becomes easier for the generative diffusion model. The details are described in section 5 and the outline is depicted in Fig. 1.
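To make the decomposition in Eq. (1) concrete, the sketch below shows how a downscaled ensemble could be assembled at inference time. This is our own illustrative code rather than the authors' implementation; `regression_net` and `diffusion_sampler` are hypothetical stand-ins for the trained UNet regression and the conditional diffusion sampler described in section 5.

```python
import torch

def resdiff_predict(y, regression_net, diffusion_sampler, n_ensemble=8):
    """Two-stage ResDiff inference sketch: x_hat = mu + r, as in Eq. (1).

    y: coarse-resolution conditioning input (already interpolated to the
    high-resolution grid). Both networks are hypothetical placeholders.
    """
    with torch.no_grad():
        mu = regression_net(y)                  # deterministic mean E[x|y]
        members = []
        for _ in range(n_ensemble):
            r = diffusion_sampler(condition=y)  # stochastic residual sample
            members.append(mu + r)              # x_hat = mu + r
    return torch.stack(members)                 # (n_ensemble, c_out, n, n)
```

Because the diffusion model only has to cover the residual distribution \(p(\boldsymbol{r}|\boldsymbol{y})\), the same mean \(\boldsymbol{\mu}\) is reused across all ensemble members and only the residual is resampled.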
In the next section we examine the key results of using this model for atmospheric downscaling.
## 3 Results
We compare predictions from ResDiff with the input data (ERA5, [19]), several baseline models and the target data (WRF). We validate the performance by examining deterministic and probabilistic skill scores, distributions and power spectra, as well as case studies showcasing representative forecasts of various coherent atmospheric phenomena such as typhoons and atmospheric fronts.
Figure 2: Power spectra and distributions for the interpolated ERA5 input, ResDiff, RF, Reg, and WRF. These results reflect reductions over space and time and, for _ResDiff_, over the ensemble dimension. Left: Power spectra for kinetic energy (top), 2 meter temperature (middle) and radar reflectivity (bottom). Right: distributions of windspeed (top), 2 meter temperature (middle) and radar reflectivity (bottom). Radar reflectivity is not included in the ERA5 dataset. We show the log-PDF to highlight the differences at the tails of the distributions.
### Skill
Table 1 shows the skill scores for ResDiff and several baseline models for 204 samples taken randomly from the out-of-sample year (2021). The baseline models are random forest (RF), a UNet-regression (Reg) and the bilinearly interpolated ERA5 inputs. Since all of these are deterministic models, we only show MAE scores for them, recalling that MAE equals CRPS for a deterministic model. For ResDiff MAE represents the error of the sample mean. RF is trained by selecting spatial samples from 200 randomly selected times within the training period. A separate RF is fit with scikit-learn for each of the 4 output channels with 100 trees and the default hyperparameters. While crude, this RF provides a simple (and easily tuned) baseline for the performance of Reg.
ResDiff has the highest deterministic skill (MAE) followed by the UNet regression, the RF and the interpolation of ERA5. The difference between MAE of ResDiff and that of the UNet reflects the correction of the sample mean by the diffusion model. The consistent improvement in MAE for all the target variables between the UNet and ResDiff suggests that the generative ResDiff model (step 2) can correct some of the remaining biases after the UNet-regression stage (step 1).
Finally, we compare the speed of ResDiff against the operational WRF run by CWA. The CWA-WRF is run on a Fujitsu FX-100 system, where each node is equipped with 32 SPARC64 XIfx CPU cores. One hour of a deterministic CWA-WRF forecast (excluding data assimilation) is run on 928 CPU cores (across 29 nodes with a maximum system memory of 6.9GB per node) and takes about 20min. ResDiff inference is however run on a single A100 GPU with 40GB RAM, which takes 24 sec per downscaling sample. Per frame, ResDiff is about 60 times faster than CWA-WRF, and on these hardware systems, it is about 100 times more energy efficient as well. We note that our current implementation of ResDiff is far from optimized and does not utilize GPU parallelization and batching for generating many samples independently. We believe that GPU parallelization, batched inference, as well as fp16 precision will likely lead to even more significant speed-up.
### Spectra and distributions
To produce spectra and distributions we perform a reduction of the predictions from the various models across space and time, and for ResDiff additionally across the ensemble dimension. The left column of Fig. 2 shows the ability of ResDiff to correct the power spectra of kinetic energy (KE), 2-meter temperature and radar reflectivity compared with the various models examined in Table 1 and from ERA5 (where radar reflectivity is absent). For all variables the ResDiff predictions match the target spectra closely. We observe that the performance of UNet-regression is comparable to ResDiff in predicting 2-meter temperature, but slightly less effective in predicting kinetic energy. This implies that stochastic factors are not significant for 2-meter temperature predictions, while they have some, but limited, impact on kinetic energy predictions. However, for radar reflectivity the power spectra of the UNet-regression is significantly worse than ResDiff, suggesting that stochasticity is especially important for this field (see also [52]). Small-scale temperature is mostly driven by horizontal variations in topography in this data which can be learned from the grid embeddings, which are a static input. Radar reflectivity comes from precipitation, which is linked to intrinsically stochastic physics; ResDiff produces a skillful match to the radar spectrum.
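As an illustration of how such diagnostics can be computed, the sketch below estimates an isotropic power spectrum by radially binning the squared magnitude of the 2-D FFT; it is our own simplified code, assuming a square field on a uniform grid, and not the authors' exact procedure.

```python
import numpy as np

def radial_power_spectrum(field):
    """Isotropic power spectrum of a square 2-D field (illustrative sketch)."""
    n = field.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky, kx = np.indices(power.shape) - n // 2
    k = np.hypot(kx, ky).astype(int)          # integer radial wavenumber
    sums = np.bincount(k.ravel(), weights=power.ravel())
    counts = np.bincount(k.ravel())
    return sums[: n // 2] / np.maximum(counts[: n // 2], 1)

# Kinetic energy spectrum from the two 10m wind components:
# ke_spectrum = 0.5 * (radial_power_spectrum(u10m) + radial_power_spectrum(v10m))
```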
Good skill is also found when looking at the probability distributions from these models (right panels of
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Method & Metric & u10m & Radar & v10m & t2m \\ \hline ResDiff & CRPS & **0.29** & **0.53** & **0.31** & **0.14** \\ ResDiff & MAE & **0.40** & **0.74** & **0.43** & **0.19** \\ UNet-regression & MAE & 0.45 & 0.77 & 0.50 & 0.24 \\ Random forest regression & MAE & 1.15 & 3.58 & 1.28 & 0.81 \\ ERA5 bilinear interpolation & MAE & 1.18 & - & 1.28 & 0.97 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Skill scores evaluated over 204 time samples with 192 ensemble members. The table compares ResDiff, UNet-regression, random forest, and interpolated ERA5 predictions in terms of MAE of the ensemble mean and CRPS for different atmospheric variables. For deterministic predictions, MAE and CRPS are equivalent.
Fig. 2), for which we switch focus from kinetic energy to wind speed, which is more relevant to physical hazard evaluation in risk assessment. ResDiff is able to match the target distribution, including the heavy-tailed structure, which is significantly improved from the prediction based on the UNet alone.
For windspeed, the UNet and the baseline models underestimate the probability of winds faster than 20 m/s. For 2-meter temperatures, the baseline models overestimate the probability of warm extremes and underestimate the probability of cold extremes. Radar reflectivity proves to be the most challenging distribution to reproduce. ResDiff outperforms the other models in reproducing the reflectivity. Although it overestimates the occurrence of reflectivity at lower values (i.e. weakly precipitating systems), the match of slopes at high values attests that the power-law relationship regulating the heavy-tail structure of rain extremes is captured well.
### Case studies: downscaling coherent structures
Operational meteorologists value case study analysis, since aggregate skill scores and spectra can be more easily gamed. We thus turn our attention to specific weather regimes. Fig. 3 illustrates the variability of the generated radar reflectivity field at four distinct times. When analyzing 200 samples, the standard deviation of radar reflectivity (second column from the left) is roughly 20% of the magnitude of the mean (left column), with the majority of the variance located in and around existing precipitation regions. Such a pattern is anticipated given that the timing and location of precipitation can vary markedly even within a fixed large-scale configuration since convective physics are inherently stochastic.
The ResDiff prediction for an individual sample (number 200, third column from the left) reveals a fine-scale structure akin to the target data (right column). The similarity between panels (a) and (d) highlights the role of the mean prediction in forming large-scale coherent structures, such as typhoon Chanthu (2021), top row, and frontal systems, bottom row. Specifically the typhoon rainbands (spiral bands of clouds that emanate from the typhoon center) are coherent enough to be captured by the mean, but with large variance and smooth structure. The fine-scale structure reflecting the stochastic physics is captured well by the diffusion model, refining the smooth fields of the mean prediction as seen in the third column of Fig. 3.
#### 3.3.1 Frontal system case study
Frontal systems are an example of organized atmospheric systems. A cold front is a sharp change in temperature and winds associated with a mature cyclonic storm. Fronts produce rainfall: as the front moves eastward, the cold air pushes the warm air to its east upward. This upward motion leads to cooling, condensation and ultimately rainfall. That is, these physics should manifest as multivariate relationships, with linked fine-scale structures of the two wind vector components and temperature that co-locate with radar reflectivity.
Fig. 4 shows an example of ResDiff downscaling a cold front. The position of the front is clearly visible in the southeast portion of the domain, where a strong horizontal surface temperature gradient (top) co-locates with strong across-front wind convergence (middle). The along-front components of the wind vector also change abruptly (middle row), which is consistent with the change in temperature. The super resolved gradients in the temperature and winds are encouragingly sharper than the input. The intense rainfall associated with this convergence line can be seen in the radar reflectivity ground truth for the same calendar date (which are shown in bottom row of Fig. 3 above). The generated radar reflectivity is appropriately concentrated near the frontal boundary. Both the location of the front, and the magnitude of the horizontal wind and temperature gradients (sharpening of the front) associated with it, are captured well by ResDiff although some mispositioning of the exact front location is inevitable. These are reassuring signs of learnt physics during the generation task.
#### 3.3.2 Tropical Cyclone case study
Downscaling typhoons (i.e. tropical cyclones) is especially complicated. The average radius of maximum winds of a tropical cyclone is less than 100km, and at 25km resolution of the input data tropical cyclones are only partially resolved, resulting in cyclonic structures that are too wide and too weak compared with high resolution models or observations [7]. A useful downscaling model must simultaneously correct their size and intensity in addition to generating appropriate fine-scale structure.
Figure 3: Demonstration of the stochastic prediction of radar reflectivity (in dBZ). Top to bottom: 2021-09-12 00:00, 2021-04-02 06:00, 2021-02-02 12:00 and 2022-02-13 20:00. Left to right: sample mean, sample standard deviation, sample number 200 and the target forecast.
Figure 4: Examining the downscaling of a cold front on Feb 2, 2022 at 20 UTC. Left to right: prediction of ERA5, ResDiff and Target for different fields, followed by their averaged cross section from 20 lines parallel to the thin dashed line in the contour figures. Top to bottom: 2 meter temperature (arrows are true wind vectors), along front wind (arrows are along front wind component) and across front wind (arrows are across front wind component).
We find that ResDiff is able to correct the structure of tropical cyclones accurately. Fig. 5(a)-(f) shows the structure of typhoon Chanthu (2021), the only typhoon that entered the domain in the out-of-sample year, on September 12 at 00:00:00 UTC. Compared to the target data (panel c) the poorly resolved typhoon in the ERA5 (panel a) is too wide and does not include a closed contour annulus of winds above 16 m/s surrounding an overly quiescent eye-wall. ResDiff downscaling (panel b) is able to recover much of the spatial structure of the windspeed compared with the target. The improvement in the location of the typhoon's center could be a combination of improved prediction and the increase in resolution. The skill of the ResDiff downscaling compared to interpolating ERA5 can be more clearly quantified by calculating the mean axisymmetric structure of the storms as a function of radius from eye-wall center (panel f). Notably, with downscaling the radius of maximum winds decreases from 75km to about 25km while the windspeed increases from 20 m/s to 50 m/s - both favorable improvements.
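The axisymmetric profile in panel (f) can be estimated by averaging windspeed over annuli centered on the storm, as in the sketch below. This is our own illustration; the grid spacing, bin width, and maximum radius are assumptions rather than the authors' settings.

```python
import numpy as np

def axisymmetric_profile(windspeed, center, dx_km=2.0, dr_km=5.0, r_max_km=200.0):
    """Mean windspeed versus radius from the storm center (illustrative sketch).

    windspeed: 2-D field (m/s); center: (row, col) index of the storm center;
    dx_km: grid spacing; dr_km: radial bin width. Defaults are assumptions.
    """
    rows, cols = np.indices(windspeed.shape)
    r = np.hypot(rows - center[0], cols - center[1]) * dx_km
    edges = np.arange(0.0, r_max_km + dr_km, dr_km)
    bins = np.digitize(r.ravel(), edges) - 1
    valid = (bins >= 0) & (bins < len(edges) - 1)   # drop radii beyond r_max
    sums = np.bincount(bins[valid], weights=windspeed.ravel()[valid],
                       minlength=len(edges) - 1)
    counts = np.bincount(bins[valid], minlength=len(edges) - 1)
    return edges[:-1] + dr_km / 2, sums / np.maximum(counts, 1)
```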
Probabilities of damaging typhoon winds shown in panels (d) and (e) in Fig. 5 are significantly improved. In ERA5 (red), occurrence of weak wind speeds is over-estimated and the damaging extreme hurricane winds (above 25m/s) are missing. ResDiff better predicts the chances of the strong winds most likely to impact society and infrastructure. Further analysis (see supplementary material) expands beyond this case study to examine generated wind statistics for several hundred typhoons that crossed the domain during 1980 to 2020. Although no target data is available, when comparing the maximum windspeed and radius of maximum windspeed from ResDiff predictions to a reference from the Japan Meteorological Agency best track dataset [3], ResDiff is found to correct at least 75% of the error in intensity for windspeed below 50m/s but only 50% of the error for higher windspeeds.
Figure 5: A comparison of the 10m windspeed (m/s) maps, distributions and the axisymmetric cross section from typhoon Chanthu (2021) on 2021/09/11:12:00:00UTC. Panels (a),(b),(c) show the 10m windspeed from ERA5, ResDiff downscaling of ERA5 and the target (WRF), respectively. The ResDiff panels show the first ensemble member. The solid black contour indicates the Taiwan coastline. Storm centers of ERA5, ResDiff and WRF are shown by the red ‘+’, the orange diamond, and the black dot, respectively. Panels (d) and (e) show the distribution shift (normalized PDFs) for the entire CWA domain and for the typhoon selected region in the top panels. Panel (f) shows the axisymmetric structure of the typhoon about its center. For the ResDiff curves, the line is the ensemble mean and the shading shows one standard deviation around the mean.
### Downscaling a forecast from a global model
An attractive use case for ResDiff is to replace dynamical downscaling for regional weather prediction, by instead directly postprocessing predictions from coarse-resolution global forecast models. This is a challenging use case, as it adds out-of-sample forecast bias to the input. We nonetheless attempt such "zero-shot" downscaling of a deterministic forecast from the GFS model over a two-day interval. We revisit Typhoon Chanthu (2021), since over the course of 24h such coherent structures allow a clear distinction between the trajectory error (a forecast lead-time error) and the intensity error (an error related to model resolution and physics). An expected enhancement in windspeed intensity is confirmed. At short lead times ResDiff corrects the typhoon structure well. With increased lead time an error in the position of the typhoon is unavoidable, and the coherent structure becomes increasingly degraded relative to the ground truth. This suggests that ResDiff could be used for downscaling from a global model at lead times of up to 48h. For longer lead times, fine-tuning ResDiff to anticipate the existence of global model forecast bias is a logical path.
## 4 Discussion
This study presents a diffusion based, generative, machine learning approach (ResDiff) for downscaling coarse-resolution (25-km) global weather state data (such as ERA5 or GFS) to higher resolution (2km) over a limited regional domain where high quality fine-scale state estimates exist. ResDiff consists of two steps: regression and generation. The regression step approximates the mean, while the generation step further corrects the mean but also generates the distribution, accounting for the fine-scale details stochastically. This approach is akin to the decomposition of physical variables into their mean and perturbations, common practice in fluid dynamics, e.g. [33].
Through extensive testing in the region of Taiwan, we evaluate the skill of the model for high-resolution regional downscaling. The model is shown to skillfully correct kinetic energy spectra, generate realistic probabilities of fine-scale weather extremes, and downscale coherent structures accurately, with minor caveats such as inaccuracies in the 2-meter temperature on the warm sector of a frontal system (Fig. 4) and over-correction of typhoon sizes in some out-of-sample cases for which we do not have the target (WRF) data (see SI). The model's accuracy could be further improved with a larger training dataset that contains more diverse examples of such rare coherent structures.
The two step approach in ResDiff also offers the possibility to trade off between the fast inference of the mean using the UNet-regression, and the accurate and probabilistic inference of the ResDiff. This is particularly useful given that some variables - like the 2 meter temperature - are well predicted by the UNet-regression while others like radar reflectivity depend on the diffusion step for their skill (see figure 2). Moreover, it could be possible to apply the diffusion stage to a mean prediction obtained in a different way (e.g. a numerical model if available) to generate a plausible distribution from a single prediction.
This paper focused on generation quality, and not on optimal inference speed, for which gains could be easily anticipated. Our prototype of ResDiff uses a dozen iterations thanks to the initial regression stage. However, for future work, we will push to reduce the number of iterations to only a few by using distillation methods [43, 54, 55] and pursue other performance optimization techniques [28, 51].
Several potential extensions of the proposed method are worth considering:
1. **Downscaling Coarse-Resolution Medium-Range Forecasts:** To achieve this, it is essential to further quantify and incorporate lead time-dependent forecast errors into the training dataset, enabling a comprehensive evaluation of simultaneous bias correction and downscaling.
2. **Downscaling Different Geographic Locations:** The primary obstacle here is the scarcity of reliable kilometer-scale weather data. Additionally, addressing the computational scalability of ResDiff for regions significantly larger than Taiwan is crucial.
3. **Downscaling Future Climate Predictions:** This introduces further complexities related to conditioning probabilistic predictions on various future anthropogenic emissions scenarios and assessing whether the generated weather envelope appropriately reflects climate sensitivity, particularly concerning extreme events.
Figure 6: ResDiff downscaling from forecast. Comparison of the 10 meter windspeed forecast of typhoon Chanthu (2021) initialized 2021-09-10 00:00UTC. Left to right, first column: forecast from the GFS; second column: ResDiff downscaling of GFS, third column target and fourth column: axisymmetric profile of the typhoon in three models (shading for GFS+ResDiff shows one standard deviation). Top to bottom: results at 2021-09-11 12:00UTC, 2021-09-11 18:00UTC, 2021-09-12 00:00UTC, 2021-09-12 12:00UTC and 2021-09-12 18:00UTC.
4. **Downscaling Outside the Training Region:** Transfer learning could become a viable strategy to extend results beyond the borders of existing km-scale data regions to geographically adjacent or dynamically self-similar regional weather regimes, allowing for more extensive usability.
These extensions have significant potential benefits such as accelerated regional forecasts, increased ensemble sizes, improved climate downscaling, and the provision of high-resolution regional forecasts in data-scarce regions, leveraging training data from adjacent areas.
## 5 Methods
This section elaborates on the proposed ResDiff methodology for probabilistic downscaling. It begins with a background on diffusion models to provide the machinery. It then delves into ResDiff and its associated components. We further detail our experimental setup including the CWA dataset, network architecture, and training protocols. At the end, we briefly discuss evaluation criteria.
### Background on diffusion models
Consider the data distribution represented by \(p_{\text{data}}(\mathbf{x})\). This distribution has an associated standard deviation, denoted by \(\sigma_{\text{data}}\). The forward diffusion process seeks to adjust this distribution, yielding modified distributions denoted by \(p_{\text{data}}(\mathbf{x};\sigma)\). This transformation is achieved by incorporating i.i.d. Gaussian noise with a standard deviation of \(\sigma\) into the data. When \(\sigma\) surpasses \(\sigma_{\text{data}}\) by a considerable margin, the resulting distribution approximates pure Gaussian noise.
Conversely, the backward diffusion process operates by initially sampling noise, represented as \(\mathbf{x}_{0}\), from the distribution \(\mathcal{N}(0,\sigma_{\text{max}}^{2}\mathbf{I})\). The process then focuses on denoising this sample into a series, \(\mathbf{x}_{i}\), that is characterized by a descending order of noise levels: \(\sigma_{0}=\sigma_{\text{max}}>\sigma_{1}>\ldots>\sigma_{N}=0\). Each noise level corresponds to a specific distribution of the form \(\mathbf{x}_{i}\sim p_{\text{data}}(\mathbf{x}_{i};\sigma_{i})\). The terminal sample of the backward process, \(\mathbf{x}_{N}\), is expected to approach the original data distribution.
**SDE formulation**. To present the forward and backward processes rigorously, they can be captured via stochastic differential equations (SDEs). Such SDEs ensure that the sample, \(\mathbf{x}\), aligns with the designated data distribution, \(p\), over its progression through time [48, 22]. A numerical SDE solver can be used here, where a critical component is the noise schedule, \(\sigma(t)\), which prescribes the noise level at a specific time, \(t\). A typical noise scheduler is \(\sigma(t)\propto\sqrt{t}\). Based on [22], the forward SDE is given as
\[d\mathbf{x}=\sqrt{2\dot{\sigma}(t)\sigma(t)}d\boldsymbol{\omega}(t) \tag{2}\]
while the backward SDE is
\[d\mathbf{x}=-2\dot{\sigma}(t)\sigma(t)\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma(t))dt+\sqrt{2\dot{\sigma}(t)\sigma(t)}d\hat{\boldsymbol{\omega}}(t) \tag{3}\]
The term \(\dot{\sigma}(t)\) denotes the time derivative of \(\sigma(t)\). The backward SDE comprises two terms: \((i)\) a deterministic component representing the probability flow ODE and noise degradation; \((ii)\) noise injection via a Wiener process. Note that the forward and backward SDEs are derived based on the noise schedule \(\beta(t)=\dot{\sigma}(t)/\sigma(t)\), which plays a pivotal role by dictating the refresh rate of the noise [48].
**Denoising score matching**. An examination of the SDE in Eq. (3) indicates the necessity of the score function, \(\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma)\), for sampling from diffusion models. Intriguingly, this score function remains unaffected by the normalization constant of the base distribution, regardless of its computational complexity. Given this independence, it can be estimated via denoising. Writing the score as \(\nabla_{\mathbf{x}}\log p(\mathbf{x};\sigma)=(D_{\theta}(\mathbf{x};\sigma)-\mathbf{x})/\sigma^{2}\), a neural network \(D_{\theta}\) can be trained for the denoising task using
\[\min_{\theta}\mathbb{E}_{\mathbf{x}\sim p_{\text{data}}}\mathbb{E}_{\mathbf{n}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I})}\|D_{\theta}(\mathbf{x}+\mathbf{n};\sigma)-\mathbf{x}\|^{2} \tag{4}\]
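A minimal PyTorch rendering of this objective, adapted to the conditional residual setting used later, is sketched below. This is our own illustration: the `denoiser` signature is a hypothetical stand-in, the log-normal noise sampling mirrors the choice reported in section 5.3.2, and EDM's \(\sigma\)-dependent loss weighting and preconditioning are omitted for brevity.

```python
import torch

def denoising_score_matching_loss(denoiser, x, y, p_std=1.2):
    """One training step of Eq. (4), conditioned on coarse input y (sketch).

    x: target (residual) batch of shape (B, C, H, W); y: conditioning input.
    ln(sigma) ~ N(0, p_std^2) follows section 5.3.2; EDM's sigma-dependent
    loss weighting is omitted here for brevity.
    """
    sigma = torch.exp(p_std * torch.randn(x.shape[0], 1, 1, 1, device=x.device))
    n = sigma * torch.randn_like(x)           # n ~ N(0, sigma^2 I)
    denoised = denoiser(x + n, sigma, y)      # approximates D_theta
    return ((denoised - x) ** 2).mean()
```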
### Proposed approach
As discussed in section 2, the high-resolution state \(\mathbf{x}\) in (1) can be decomposed into the mean \(\boldsymbol{\mu}\) and the residual \(\mathbf{r}\). The residual \(\mathbf{r}\) will be nearly zero mean and exhibits a small distribution shift, which facilitates
training diffusion models. It is worth noting that this two-stage method has further implications for learning physics. The UNet-regression stage can anticipate many of the physics of downscaling, some of which are deterministic. These include high-resolution topography (which to first order controls the 2-meter temperature variation due to the lapse-rate effect), and the large-scale horizontal wind, which combines leading balances in the free atmosphere with the effect of surface friction and topography. Stochastic phenomena such as convective storms that also change temperatures and winds are easier to model as deviations from the mean. Also, cloud-resolving models are explicitly formulated using deviations from a larger-scale balanced state [35]. In the next section, we discuss the regression and generation stages in detail.
#### 5.2.1 Regression on the mean
In order to predict the conditional mean \(\boldsymbol{\mu}=\mathbb{E}[\mathbf{x}|\mathbf{y}]\), we resort to UNet-regression. A UNet network is supervised with training data \(\{(\mathbf{x}_{n},\mathbf{y}_{n})\}_{n=1}^{N}\) to learn the regression. We adopt a UNet architecture that is commonly used in denoising diffusion models. This particular UNet incorporates attention layers and residual layers, allowing it to effectively capture both short and long-range dependencies in the data (see Fig. 1). Mean-Squared-Error (MSE) loss is optimized for training.
#### 5.2.2 Denoising diffusion on the residuals
Once equipped with the UNet-regression network, we can begin by predicting the conditional mean \(\hat{\boldsymbol{\mu}}\), which serves as an approximation of \(\mathbb{E}\left[\mathbf{x}|\mathbf{y}\right]\). Subsequently, we proceed to train the diffusion model directly on the residual component \(\mathbf{r}=\mathbf{x}-\hat{\boldsymbol{\mu}}\). Notably, the residual exhibits a small departure from the target data, allowing for the utilization of smaller noise levels during the training of the diffusion process.
In our approach, we adopt the Elucidated diffusion model (EDM), a continuous-time diffusion model that adheres to the principles of SDEs (in Eq. (2)-(3)) [22], to design the diffusion process and architecture. As a result it has an intuitive and physics-driven hyperparameter tuning, which makes it work across different domains. In our case, we want to generate the residual \(\mathbf{r}\) by sampling from the conditional distribution \(p(\mathbf{r}|\mathbf{y})\) following the SDEs in Eq. (2)-(3). To condition the diffusion model, we concatenate the input coarse-resolution data \(\mathbf{y}\) with the noise over different channels. We also learn the score function \(\nabla_{\mathbf{r}}\log p(\mathbf{r}|\mathbf{y})\) using the score matching loss in Eq. (4), where the denoiser is now \(D_{\theta}(\mathbf{r}+\mathbf{n};\sigma;\mathbf{y})\) with the conditioning input \(\mathbf{y}\). For the denoiser we again follow the design principles in EDM and use a UNet architecture with skip connections weighted by the noise variance. Architecture details are discussed in Section 5.3.2.
To generate samples from the distribution \(p(\mathbf{r}|\mathbf{y})\), we employ the second-order EDM stochastic sampler [22] [Algorithm 2] to solve the reverse SDE in Eq. (3). Upon sampling the residual \(\mathbf{r}\), we add it to the predicted conditional mean \(\hat{\boldsymbol{\mu}}\) from the regression to generate the sample \(\hat{\boldsymbol{\mu}}+\mathbf{r}\). This entire workflow is illustrated in Fig. 1, providing a visual representation of the steps involved in our proposed method.
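For concreteness, the deterministic core of such a second-order sampler is sketched below. This is our own simplification of [22, Algorithm 2]: the stochastic churn is omitted, and `denoiser(x, sigma, y)` is a hypothetical stand-in for the conditional denoiser, with the channel concatenation of \(\mathbf{y}\) handled inside the network.

```python
import torch

def heun_sampler(denoiser, y, shape, sigmas):
    """Second-order (Heun) EDM-style sampling of the residual (sketch).

    sigmas: decreasing noise levels ending at 0 (e.g., 18 steps as in
    section 5.3.2). The stochastic churn of the full EDM sampler is omitted.
    """
    x = sigmas[0] * torch.randn(shape, device=y.device)
    for s, s_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoiser(x, s, y)) / s              # dx/dsigma at level s
        x_euler = x + (s_next - s) * d               # first-order (Euler) step
        if s_next > 0:                               # second-order correction
            d_next = (x_euler - denoiser(x_euler, s_next, y)) / s_next
            x = x + (s_next - s) * 0.5 * (d + d_next)
        else:
            x = x_euler
    return x  # sampled residual r; the final forecast is mu_hat + r
```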
### Experimental setup
#### 5.3.1 Dataset
The dataset used in this study is a subset of the proprietary RWRF model data (Radar Data Assimilation with WRFDA 1). The RWRF model is one of the operational regional Numerical Weather Prediction (NWP) models developed by CWA, which focuses on radar Data Assimilation (DA) in the vicinity of Taiwan. Assimilating radar data is a common strategy used in regional weather prediction, which helps constrain especially stochastic convective processes such as mesoscale convective systems and short-lived thunderstorms.
Footnote 1: [https://www.mmm.ucar.edu/models/wrfda](https://www.mmm.ucar.edu/models/wrfda)
By incorporating radar data, RWRF improves the short-term prediction of high-impact weather events. The radar observations possess a high spatial resolution of approximately 1km and temporal resolutions of 5-10 minutes at a convective scale. These observations provide wind information (radial velocity) as well as hydrometeor information (radar reflectivity), with a particular emphasis on the lower atmosphere. The radar data assimilation relies on the availability of reliable and precise observations, which contributes significantly to enhancing the accuracy and performance of the applied deep learning algorithms in the context of NWP applications.
The dataset covers a duration of 52 months, specifically from January 2018 to April 2022. It has a temporal frequency of one hour and a spatial resolution of 2km. The dataset is represented by a grid of 450x450 points, projected using the Lambert conformal conical projection method around Taiwan. The geographical extent of the dataset spans from approximately 116.371°E to 125.568°E in longitude and 19.5483°N to 27.8446°N in latitude.
Initially, the data is provided in the NetCDF format, which is the output of the WRFDA assimilation process. Subsequently, the data is vertically interpolated from sigma levels to isobaric levels and then saved in a custom CWA DMS format. As part of the preprocessing steps, the data is converted to the Hadoop Distributed File System (HDFS) format. The preprocessing also includes selecting 20 fields (weather variables at specific heights or pressures) as deep learning channels based on the advice of domain experts. Additionally, any missing or corrupted data points represented by "inf" or "nan" values are eliminated from the dataset. This leads to a reduction in the number of samples from 37,944 to 33,813.
Each sample in the dataset consists of 20 channels of information. This includes four pressure levels (500 hPa, 700 hPa, 850 hPa, and 925 hPa) with four corresponding fields: temperature, the eastward and northward components of the horizontal wind vector, and geopotential height. Additionally, the dataset includes surface fields such as 2-meter temperature, the 10-meter wind vector, and radar reflectivity.
#### 5.3.2 Network architecture and training
The input data from ERA5 consists of 20 channels at a resolution of \(36\times 36\), while the output data from WRF-CWA consists of 4 channels at a higher resolution of \(448\times 448\). The output channels include radar reflectivity, eastward and northward 10 meter wind vectors, and 2 meter temperature. Notably, the radar reflectivity channel is not present in the input data and needs to be predicted based on the other channels. Thus, while the prediction of wind and temperature could be viewed as a super-resolution task, the prediction of radar is strictly generative. The radar reflectivity data also exhibits a distinct distribution compared to the other output channels, with positive values and a prominent zero-mode consistent with typical non-raining conditions. To facilitate training, we interpolate the global input data onto the curvilinear grid of CWA with bilinear interpolation. Additionally, we introduce 4 channels for sinusoidal positional embedding. To avoid over-fitting we divide the data into training and testing sets. Three years of data (2018-2020) are used for training (24,154 samples total). For testing we use the full year 2021 as well as the first four months (January to April) of 2022.
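A sketch of this input preparation is given below; the bilinear upsampling and the four positional channels follow the text, but the specific sin/cos construction is our assumption, as the exact embedding is not specified.

```python
import math
import torch
import torch.nn.functional as F

def prepare_input(y_coarse, size=448):
    """Upsample coarse input and append sinusoidal grid embeddings (sketch).

    y_coarse: (20, 36, 36) tensor of ERA5 channels. The four positional
    channels (sin/cos over each grid axis) are illustrative assumptions.
    """
    y = F.interpolate(y_coarse[None], size=(size, size),
                      mode="bilinear", align_corners=False)[0]
    coords = torch.linspace(0.0, 2.0 * math.pi, size)
    gy, gx = torch.meshgrid(coords, coords, indexing="ij")
    pos = torch.stack([gx.sin(), gx.cos(), gy.sin(), gy.cos()])
    return torch.cat([y, pos], dim=0)   # (24, 448, 448) network input
```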
To ensure compatibility and consistency, we employ the same UNet architecture used in EDM (Elucidated Diffusion Models) for both the regression and residual diffusion networks. This architecture is based on the UNet model proposed in [47]. We enhance the UNet by increasing its size to include 6 encoder layers and 6 decoder layers. The base embedding size is set to 128, and it is multiplied over channels according to the pattern [1,2,2,2,2]. The attention resolution is defined as 28. For time embedding in the diffusion process, we utilize Fourier-based position embedding. However, in the regression network, time embedding is disabled. No data augmentation techniques are employed during training. Overall, the UNet architecture comprises 80 million parameters.
During the training phase, we use the Adam optimizer with a learning rate of 2e-4, \(\beta_{1}=0.9\), and \(\beta_{2}=0.99\). Exponential moving averages (EMA) with a rate of \(\eta=0.5\) are applied, and dropout with a rate of 0.13 is utilized. The regression network solely receives the input conditioning channels. On the other hand, the diffusion training incorporates the 20 input conditioning channels from the coarse-resolution ERA5 data, which are concatenated with 4 noise channels to generate the output for each denoiser. For diffusion training, we adopt the Elucidated Diffusion Model (EDM), a continuous-time diffusion model. During training, EDM randomly selects the noise variance such that \(\ln(\sigma(t))\sim\mathcal{N}(0,1.2^{2})\) and aims to denoise the samples per mini-batch. EDM is trained for 100 million steps, while the regression network is trained for 30 million steps. The training process is distributed across 16 DGX nodes, each equipped with 8 A100 GPUs, utilizing data parallelism and a total batch size of 512. The total training time for the regression and diffusion models was 7 days, which amounts to approximately 28,224 GPU-hours.
For sampling purposes, we employ the second-order stochastic sampler provided by EDM. This sampler performs 18 steps, starting from a maximum noise variance of \(\sigma_{\max}=800\) and gradually decreasing it to a minimum noise variance of \(\sigma_{\min}=0.002\). We adopt the rest of the hyperparameters from EDM as listed in [22].
### Evaluation criterion
Probabilistic predictions aim to maximize sharpness subject to calibration [37]. Qualitatively, calibration means that the likelihood of observing the true value is the same as observing a member drawn from the ensemble. A necessary condition for calibration is that the spread-error relationship be 1-to-1 when averaged over sufficient samples [12]. Calibration also manifests as a flat rank-histogram. A simple metric used below is the root-mean-squared error of the sample mean. In the large sample limit, the sample mean becomes deterministic. So we expect this error to be comparable for generative and deterministic models.
Instead of considering both calibration and spread separately, it can be easier to use proper scoring rules like the continuous-ranked-probability score (CRPS) [14]. Let \(x\) be a scalar observation and \(F\) be the cumulative distribution of the probabilistic forecast (e.g., the empirical CDF of generated samples). Then, CRPS is defined as
\[CRPS(F,x)=\int_{-\infty}^{\infty}(F(y)-\mathbb{1}_{\left\{y\geq x\right\}})^{2 }\,dy.\]
The \(F\) which minimizes CRPS is the true cumulative distribution of \(x\). For a deterministic forecast, \(F(y)=\mathbb{1}_{\left\{y\geq x_{0}\right\}}\), where \(x_{0}\) is the forecast value, CRPS is equivalent to the mean absolute deviation.
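For a finite ensemble, CRPS can be estimated directly from the members via the equivalent energy form \(CRPS(F,x)=\mathbb{E}|X-x|-\frac{1}{2}\mathbb{E}|X-X^{\prime}|\), where \(X,X^{\prime}\sim F\) are independent. The sketch below is our own illustration of this estimator; it favors clarity over memory efficiency, since the pairwise term scales quadratically in the ensemble size.

```python
import numpy as np

def ensemble_crps(ensemble, obs):
    """Empirical CRPS from ensemble members (illustrative sketch).

    ensemble: array of shape (n_members, ...); obs: matching trailing shape.
    With a single member this reduces to the absolute error, consistent
    with MAE equaling CRPS for deterministic forecasts.
    """
    ens = np.asarray(ensemble, dtype=float)
    term1 = np.abs(ens - obs).mean(axis=0)                         # E|X - x|
    term2 = np.abs(ens[:, None] - ens[None, :]).mean(axis=(0, 1))  # E|X - X'|
    return term1 - 0.5 * term2
```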
## 6 Acknowledgements
We extend our profound appreciation to the Central Weather Administration (CWA) of Taiwan2, a premier government meteorological research and forecasting institution, for granting us access to the invaluable operational Numerical Weather Prediction (NWP) model dataset and for their expert guidance on data consultation. Our gratitude also extends to the AI-Algo team at NVIDIA, especially Kamyar Azizzadenesheli, Anima Anandkumar, Nikola Kovachki, Jean Kossaifi, and Boris Bonev for their insightful discussions. Additionally, we are indebted to David Matthew Hall for his constructive feedback on the manuscript.
Footnote 2: [https://www.cwa.gov.tw/eng/](https://www.cwa.gov.tw/eng/)
|
2309.05007 | FOLLOWUPQG: Towards Information-Seeking Follow-up Question Generation | Humans ask follow-up questions driven by curiosity, which reflects a creative
human cognitive process. We introduce the task of real-world
information-seeking follow-up question generation (FQG), which aims to generate
follow-up questions seeking a more in-depth understanding of an initial
question and answer. We construct FOLLOWUPQG, a dataset of over 3K real-world
(initial question, answer, follow-up question) tuples collected from a Reddit
forum providing layman-friendly explanations for open-ended questions. In
contrast to existing datasets, questions in FOLLOWUPQG use more diverse
pragmatic strategies to seek information, and they also show higher-order
cognitive skills (such as applying and relating). We evaluate current question
generation models on their efficacy for generating follow-up questions,
exploring how to generate specific types of follow-up questions based on
step-by-step demonstrations. Our results validate FOLLOWUPQG as a challenging
benchmark, as model-generated questions are adequate but far from human-raised
questions in terms of informativeness and complexity. | Yan Meng, Liangming Pan, Yixin Cao, Min-Yen Kan | 2023-09-10T11:58:29Z | http://arxiv.org/abs/2309.05007v2 | # FollowupQG: Towards Information-Seeking Follow-up
###### Abstract
Humans ask follow-up questions driven by curiosity, which reflects a creative human cognitive process. We introduce the task of _real-world information-seeking follow-up question generation (FQG)_, which aims to generate follow-up questions seeking a more in-depth understanding of an initial question and answer. We construct FollowupQG, a dataset1 of over 3K real-world (initial question, answer, follow-up question) tuples collected from a Reddit forum providing layman-friendly explanations for open-ended questions.
Footnote 1: Data available at [https://github.com/vivian-my/followupQG](https://github.com/vivian-my/followupQG)
In contrast to existing datasets, questions in FollowupQG use more diverse pragmatic strategies to seek information, and they also show higher-order cognitive skills (such as _applying_ and _relating_). We evaluate current question generation models on their efficacy for generating follow-up questions, exploring how to generate specific types of follow-up questions based on _step-by-step_ demonstrations. Our results validate FollowupQG as a challenging benchmark, as model-generated questions are adequate but far from human-raised questions in terms of informativeness and complexity.
## 1 Introduction
Question asking is considered a fundamental cognitive process. People typically ask concise and natural questions to seek information (Ram, 1991). _Question Generation_ (QG) has recently gained much interest, targeting the study of how intelligent systems can generate relevant questions. This can evaluate the cognitive reasoning ability of models while benefiting many downstream tasks, such as generating assessments for course materials in education (Laban et al., 2022) and enriching training data for question answering (Pan et al., 2021).
Existing works (Duan et al., 2017; Zhao et al., 2018; Pan et al., 2020; Ghanem et al., 2022) focus on generating simple factoid questions, while few works to date target complex practical questions. The task of QG is often framed as generating questions from a source text and a specific target answer from reading comprehension datasets like SQuAD (Rajpurkar et al., 2016), as exemplified by Figure 1(a). Although useful in practical applications, such generated questions are quite different from actual human questions. First, they do not reflect the information-seeking nature of human question-asking, since the model already knows the answer beforehand. Second, they also do not reflect the creative human cognitive process in question-asking such as inferences and synthesis.
To bridge this gap, we propose the task of _real-world information-seeking follow-up question generation (FQG)_, which aims to generate _follow-up questions_ that seek new information given the _initial question_ and the _human-provided answer_. For example, the follow-up question in Figure 1 extends the provided answer to a reasonable counterfactual situation. Conventional follow-up question
Figure 1: Examples of (a) _answer-aware QG_ and (b) _information-seeking QG_.
generation works focus on benefiting multi-hop reasoning QA systems (Malon and Bai, 2020) or generating multi-turn conversational questions (Reddy et al., 2019; Richardson et al., 2023). In contrast, our task is more practical and challenging, since it requires a higher level of cognition to know what one does not know (Miyake and Norman, 1979). First, it demands a deep comprehension of the teacher-provided answer, identifying the uncertainty or gaps in knowledge; and second, applying high cognitive skills such as analogy to generate a meaningful follow-up question.
In this paper, we construct a dataset, FollowupQG, containing 3,790 real-world (initial question, answer, follow-up question) tuples. We collect the data from the Reddit forum _Explain Like I'm Five_2, which contains real-life questions and self-contained answers. The layperson-friendly nature of this forum makes the questions and answers highly comprehensible, serving as a suitable context for follow-up question generation. We further ask crowd-workers to select relevant follow-up questions from the replies to each answer, as these are real curiosity-driven questions from humans. Our data analysis shows that FollowupQG captures a variety of high cognitive skills in question-asking, such as relating and causal inference.
Footnote 2: [https://www.reddit.com/r/explainlikeimfive/](https://www.reddit.com/r/explainlikeimfive/)
We establish benchmarks on this data using GPT-Neo (Black et al., 2021), BART (Lewis et al., 2020) and T5 (Raffel et al., 2020). Automatic and human evaluation reveal that the best models can generate fluent follow-up questions; however, the generated questions still fall short of human questions in terms of semantic validity, complexity, and informativeness. Also, we find that \(\sim\)30% of the generated questions do not seek new information.
We note that one limitation of fine-tuning pre-trained language models on the QG task is controlling _what to ask_ and _how to ask_. Inspired by recent prompting methods for large language models (Wei et al., 2022; Saha et al., 2023), we investigate _chain-of-thought_ prompt-based learning via the GPT family3, and observe that incorporating an intermediate reasoning chain better controls the models to ask specific types of questions, compared to standard prompting. However, there remains large room for improvement in generating specific high-level questions. These observations make FollowupQG a challenging benchmark for advancing QG.
Footnote 3: ChatGPT, GPT-3.5, GPT-4
## 2 Related Work
Question Generation (QG) aims to automatically generate questions from textual input. Existing QG studies (Du et al., 2017; Du and Cardie, 2018; Nema et al., 2019; Pan et al., 2020; Murakhovs'ka et al., 2021) are typically trained and evaluated on reading comprehension benchmarks such as SQuAD (Rajpurkar et al., 2016) and HotpotQA (Yang et al., 2018). Questions in those datasets are designed to test a machine's reading comprehension ability and fail to reflect the information-seeking nature of human question-asking. This gap has led to work on "answer-agnostic" question generation (Subramanian et al., 2018; Wang et al., 2019; Pan et al., 2019), in which the target answer is not given to the model as input. However, the data sources are still reading comprehension datasets and the generated questions are still required to be answerable by the input source. These are quite different from the follow-up questions in our work, which seek unknown information building on known knowledge.
To explore the generation of real human-like information-seeking questions, prior works have investigated generating clarification questions for forum posts in _StackExchange_(Rao and III, 2018; Kumar and Black, 2020), Amazon product reviews (Rao and III, 2019; Majumder et al., 2021), and online courses (Chen et al., 2018). However, clarification is only one of the pragmatic goals in asking follow-up questions. FollowupQG covers broader types of information-seeking behaviors beyond clarification, such as association, analogy, critical evaluation, and generalization. In addition, instead of focusing on restricted and highly technical domains like StackExchange and Amazon products, we select _Explain Like I'm Five_ as the underlying data source, which covers a broader range of real-life topics (Fan et al., 2019).
The closest prior work is InquisitiveQG (Ko et al., 2020). They asked crowd-workers to write follow-up questions for news articles and trained models for follow-up question generation. However, our analysis reveals that crowd-sourced questions in InquisitiveQG are typically shallow in reasoning and biased towards monotonous cognitive skills, in contrast with our natural follow-up questions collected from the web. In addition, our work focuses on a scenario different from InquisitiveQG's but common in real life: asking follow-up questions based on the initial question and its answer.
## 3 The FollowupQG Dataset
We construct the FollowupQG dataset as follows. The follow-up questions and the source documents are collected from _Reddit_4 (SS 3.1). Using a site-specific web crawler, we first collect around 200,000 posts, each containing a question, an answer, and replies to the answer. Then, we automatically select data samples that contain follow-up questions (SS 3.2). Afterward, the selected 10,890 data samples are further validated by online workers from Amazon Mechanical Turk (AMT) (SS 3.3). The final dataset contains 3,790 high-quality samples.
Footnote 4: License of usage: [https://www.redditinc.com/policies/data-api-terms](https://www.redditinc.com/policies/data-api-terms)
### Data Sources
To gather real-world information-seeking questions, we initially explored several websites that provide forums for asking open-ended questions, such as Quora and Khan Academy, as well as numerous Reddit forums (subreddits). After a careful comparison, we choose to focus on the subreddit _Explain Like I'm Five_ (ELI5), where users are encouraged to provide answers that are comprehensible by a five-year-old child. ELI5 is appealing because the questions are close to real life and the answers are self-contained, thus relying less on prior specialized knowledge. This high comprehensibility makes the question and answer suitable to serve as the context for follow-up question generation.
### Data Collection
A thread in the ELI5 forum (Figure 2) usually consists of: (1) a thread title, usually in question format, which we treat as the _Initial Question_; (2) a vote count that measures the quality of the thread; (3) top-level comments, most of which are detailed answers to the initial question; and (4) replies to the top-level comments, many of which ask follow-up questions about the answer.
Fan et al. (2019) have collected a large number of (question, answer) pairs from ELI5 for question answering. However, we could not reuse their corpus since they did not collect the follow-up questions. Therefore, we implement a site-specific web crawler to collect data from the ELI5 forum at scale. The crawler is built on the _Pushshift API_ and the _Reddit API_, which give access to the post ID, body, votes, and comments. We restrict the data collection size to 200,000 posts and only collect the first three levels of comments.
We then define rules based on regular expressions to automatically filter out the invalid samples in the crawled data. A thread is considered invalid if: 1) its thread title is not a question, 2) the answer is not self-contained (shorter than 30 characters5) or receives low votes, or 3) the replies to the answer do not contain any question. After applying this automatic filtering, 10,890 data samples remain.
Footnote 5: A pilot study is conducted to check the answers ranging from 10 to 50 characters, and results show that answers shorter than 30 characters are generally less informative.
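The paper does not publish its filtering code, so the following is a minimal Python sketch of the three rules above; the question-detection pattern, the vote cutoff, and all function names are illustrative assumptions rather than the authors' actual implementation.

```python
import re

MIN_ANSWER_CHARS = 30  # threshold established by the pilot study above
MIN_ANSWER_VOTES = 2   # assumed cutoff; the paper does not state the exact value

def looks_like_question(text: str) -> bool:
    """Heuristic: a string is a question if it ends with '?' or starts
    with a common interrogative word."""
    text = text.strip()
    return text.endswith("?") or bool(re.match(
        r"(?i)^(what|why|how|when|where|who|which|is|are|do|does|can|could|would)\b",
        text,
    ))

def is_valid_thread(title: str, answer: str, votes: int, replies: list) -> bool:
    """Apply the three invalidity rules from Section 3.2 in order."""
    if not looks_like_question(title):          # rule 1: title must be a question
        return False
    if len(answer) < MIN_ANSWER_CHARS or votes < MIN_ANSWER_VOTES:
        return False                            # rule 2: short or low-voted answer
    if not any(looks_like_question(r) for r in replies):
        return False                            # rule 3: no question among replies
    return True
```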
### Crowd-Sourced Data Validation
We find that the automatically filtered data samples are still noisy. In particular, some replies in question format are irrelevant to the initial question and answer, or contain toxic or offensive content. To ensure that our final corpus contains high-quality follow-up questions, we design a crowd-sourcing task for data validation. We release 10,000 HITs (Human Intelligence Tasks) on the AMT platform, evenly divided into 10 batches. Each HIT presents the crowd-worker with one data sample of _(initial question, answer, follow-up question)_. To conduct human validation, we ask workers to answer the following three questions:
\(\bullet\)**Q1:** Is the follow-up question a complete question asking for new information?
\(\bullet\)**Q2:** Does the data sample contain controversial topics, such as racism, hate speech, sexual topics, or offensive comments?
\(\bullet\)**Q3:** How related is the follow-up question to the initial question and the answer? Workers rate relatedness as "strongly related", "related", "slightly related", or "not related".
Figure 2: Sample _Explain like I’m five_ (ELI5) forum.
To select qualified workers, we restrict our task to workers who are located in five native English-speaking countries6 and who maintain an approval rating of at least 90%. To ensure the annotations fulfil our guidelines, we give ample examples in our annotation interface with detailed explanations to help workers understand the requirements. The detailed annotation guidelines are in Appendix A. Each data sample is annotated by two different workers. We find substantial agreement between annotators, with an average Cohen's Kappa of 0.78; the inter-annotator Kappa values for Q1, Q2, and Q3 are 0.80, 0.61, and 0.92, respectively.
Footnote 6: Australia, Canada, Ireland, United Kingdom, USA.
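For reference, agreement scores of this kind can be computed directly with scikit-learn; in the minimal sketch below, the two label lists are hypothetical per-sample answers to Q1 from two annotators.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical Q1 labels ("Yes"/"No") from two annotators on the same samples.
annotator_1 = ["Yes", "Yes", "No", "Yes", "No"]
annotator_2 = ["Yes", "No", "No", "Yes", "No"]

kappa = cohen_kappa_score(annotator_1, annotator_2)
print(f"Cohen's kappa: {kappa:.2f}")
```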
To evaluate the quality of annotation, we add 50 test samples to each batch of HITs. We achieve an average test accuracy of 0.73 across all 10 batches, indicating the high quality of the annotations. In the end, 112 workers participated in the task, with a 96.35% average acceptance rate. The average completion time for one HIT is around 40 seconds, and we set payment at USD 1.00/HIT. To construct the final dataset, we retain only the samples that are annotated as high-quality7 by both annotators, resulting in 3,790 instances. We randomly select 2,790 for training, 500 for validation, and 500 for testing.
Footnote 7: Choosing “Yes” answer for Q1, “No” answer for Q2, and choosing “strongly related” or “related” for Q3.
## 4 Data Analysis
The pragmatic functions of human-raised follow-up questions and their required cognitive skill levels are crucial for understanding the mechanism of human question-asking. These factors should be studied when building effective question generators. In SS 4.1, we first characterize the pragmatic functions of questions in FollowupQG in accordance with the cognitive skills defined in Bloom's Revised Taxonomy [1]. Then in SS 4.2, by comparing against existing datasets, we show that the questions in FollowupQG are of a higher cognitive level and have richer pragmatic functions.
### Categories of follow-up questions
We analyze 800 questions randomly sampled from our dataset and find that most follow-up questions fall into one of the following five categories that correspond to different cognitive levels in Bloom's Taxonomy. We show examples of each category in Table 1, where question-triggering text spans in the context are highlighted.
\(\bullet\)**Definition**: 23.6% of questions seek clarifications for the definition or meaning of entities or facts in the context. Examples are: _What is the definition of...?_ We map these to the _Remembering_ level in Bloom's taxonomy.
\(\bullet\)**Interpretation**: 38.9% of questions seek interpretations for reasons, means, goals, or background information to gain a deeper understanding of the answer. Examples are: _Could you explain the reason...?_ They correspond to the _Understanding_ level in Bloom's taxonomy.
\(\bullet\)**Counterfactual**: 18.7% of questions apply the learned knowledge in the answer to a reasonable counterfactual case. Examples are: _What will happen if...?_ These mostly correspond to the _Applying_ level in Bloom's taxonomy.
\(\bullet\)**Relating**: 6.3% of questions ask about patterns or relationships between existing examples in the context and other related cases; these belong to the _Analysis_ level in Bloom's taxonomy. Examples are: _What is the relationship between...?_
\(\bullet\)**Creative**: 11.1% of questions require the asker's creative thinking to invent new solutions or suggestions for learned facts in the context. They belong to _Creating_ level in Bloom's taxonomy. Examples are: _Could... be changed to improve...?_
\(\bullet\)**Others**: 1.3% are rhetorical questions, _e.g._, expressing surprise by asking _Oh, really?_.
\begin{table}
\begin{tabular}{l l l l} \hline
**Context** & **FollowupQG Examples** & **Category** & **Ratio** \\ \hline
**Initial question:** Is a pregnant mother’s & What does the placenta exchange? & Definition & 23\% \\ blood kept separate from her fetus’ blood? & Why is it that nutrients and oxygen can only be passed over & Interpretation & 38\% \\
**Answer:** Yes, there is a placenta blood & by diffusion? & & \\ barrier: One of the placenta’s jobs is to make & What will happen if the blood mixes? & Counterfactual & 19\% \\ sure blood from the mother and fetus never & Will the placenta still function if the woman is not pregnant? & Relating & 6\% \\ surface between the mother and the fetus. & Could someone give some suggestions on keeping the blood & Creative & 11\% \\ Nutrients and oxygen are passed over by & completely separate? & & \\ diffusion only. & & & \\ \hline \end{tabular}
\end{table}
Table 1: Question examples of different types of pragmatic functions in FollowupQG. The question-triggering text spans in the context are highlighted in different colors.
In summary, 62.5% of human-raised follow-up questions are clarification questions asking for definitions and interpretations, while 36.1% of questions require higher-level cognitive thinking. This shows that FollowupQG has a relatively high proportion of questions that promote deep reasoning, considering that asking deep questions is challenging for humans, as revealed by prior studies [1, 13].
### Comparison with existing datasets
We further compare FollowupQG with three existing QG datasets: SQuAD [11], the most widely-used dataset for answer-aware QG, and LearningQ [12] and InquisitiveQG [13], two similar datasets designed for information-seeking QG.
Table 2 compares question and document length, question categories, and the leading question words. The question category is based on the level of cognitive skill defined in Bloom's Taxonomy. We reuse the analysis results of Chen et al. (2018) for SQuAD and LearningQ. For InquisitiveQG, we analyze question categories by manually annotating 100 sampled questions.
Our findings are as follows. First, questions in FollowupQG are much longer than in other datasets. The reason is that natural follow-up questions usually contain additional context that is either a conditional clause limiting the scope of the question, or a summary of the user's understanding of the context. Such additional context is often given before the actual questioning sentence to make the whole follow-up question more complete and clear. The inclusion of additional context makes FollowupQG closer to real-world question-asking. Second, FollowupQG has a more balanced distribution of questions in terms of cognitive skills, and a high percentage of questions (\(\sim\)36%) at high cognitive levels such as _applying_ and _creating_. This makes FollowupQG significantly different from SQuAD, which is designed to test reading comprehension ability at a low cognitive skill level (_i.e._, _remembering_). Although InquisitiveQG also contains a high percentage of high-level questions, the key distinction is that its questions are written by crowd-workers instead of naturally occurring, which results in questions that are typically short and generic (_e.g._, _Is there a particular example?_). LearningQ collects real questions from an online educational platform and therefore contains a large portion of clarification questions. Compared with FollowupQG, the source contexts of LearningQ (course materials and video captions) are much noisier and considerably longer, making it hard to model and evaluate the problem of FQG.
## 5 Follow-up Question Generation
In this section, we evaluate the ability of three pre-trained language models to generate follow-up questions via fine-tuning, while Section 6 explores large language models' ability via prompting. Through comprehensive evaluation, we discover the strengths and limitations of current models for follow-up question generation and identify areas ripe for future research.
### Models
We choose three generation models that have shown state-of-the-art results on answer-aware QG: BART [10], T5 [12], and GPT-Neo [1]. We use _Huggingface_ to implement BART-large and T5-base models, and fine-tune these two models on the training set of FollowupQG by predicting the follow-up question given the concatenation of the initial question and the answer as input8. We use
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Dataset & Avg. of Words & \multicolumn{4}{c}{Distribution of Cognitive Skills} & \multicolumn{4}{c}{Most Frequent Question Types} \\ & Ques. & Doc. & Rem. & Und. & App. & Anal. & Crea. & 1st & 2nd & 3rd \\ \hline
**FollowupQG** & **43.6** & 143.5 & 23 & 38 & **19** & 6 & **11** & other & why & how \\ \hline SQuAD [11] & 9.9 & 134.8 & **100** & 0 & 0 & 0 & 0 & what & how & when \\ \hline LearningQ [12] & 16.9 & **1729.5** & 18 & **56** & 13 & **15** & 3 & why & other & what \\ \hline InquisitiveQG [13] & 7.1 & 150.4 & 46 & 49 & 5 & 0 & 0 & what & why & how \\ \hline \hline \end{tabular}
\end{table}
Table 2: Descriptive features and statistics of FollowupQG and the datasets in comparison. We follow the Bloom’s Taxonomy [1] to define the cognitive skills of questions. **Rem.**: Remembering; **Und.**: Understanding; **App.**: Applying; **Anal.**: Analyzing; **Crea.**: Creating. For question types, we follow Liu et al. (2019) to categorize questions based on the interrogative word and define 9 question types: who, where, when, why, which, what, how, boolean, other.
the _aitextgen9_ library for implementing GPT-Neo, and the input sequence for fine-tuning this model is the concatenation of the initial question, answer, and follow-up question10. At test time, only the initial question and answer are given11.
Footnote 9: [https://github.com/minimaxir/aitextgen](https://github.com/minimaxir/aitextgen)
Footnote 10: Initial Question <SEP> Answer <QU> Follow-up Question
Footnote 11: Initial Question <SEP> Answer <QU>
The batch sizes for BART, T5, and GPT-Neo are 8, 8, and 16, respectively, and we fine-tune for 10 epochs. We use Adam [10] as the optimizer, with a learning rate of 5e-5 for all models. All models are trained on a single RTX-4080 GPU. Table 4 shows the details of the models.
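A minimal fine-tuning sketch for the BART configuration described above, using Hugging Face Transformers with the stated hyperparameters (batch size 8, 10 epochs, Adam at a 5e-5 learning rate). `train_samples` and its field names are assumed placeholders for the FollowupQG training split, and padded label tokens are left unmasked for brevity.

```python
import torch
from torch.utils.data import DataLoader
from transformers import BartForConditionalGeneration, BartTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)

# train_samples: list of dicts with "question", "answer", "followup" fields.
def collate(batch):
    # Source: initial question concatenated with the answer; target: follow-up.
    sources = [f"{ex['question']} {tokenizer.sep_token} {ex['answer']}" for ex in batch]
    targets = [ex["followup"] for ex in batch]
    enc = tokenizer(sources, truncation=True, padding=True, return_tensors="pt")
    enc["labels"] = tokenizer(
        targets, truncation=True, padding=True, return_tensors="pt"
    ).input_ids
    return enc

loader = DataLoader(train_samples, batch_size=8, shuffle=True, collate_fn=collate)

model.train()
for epoch in range(10):
    for batch in loader:
        batch = {k: v.cuda() for k, v in batch.items()}
        loss = model(**batch).loss  # seq2seq cross-entropy on the follow-up
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```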
### Automatic Evaluation
We automatically evaluate the generated questions using BLEU1-4 [13], METEOR [11] and ROUGE-L [12]. Results are shown in the top rows of Table 3 (Rows 1-3). In general, all models achieve much lower scores on these automatic metrics than on answer-aware QG. For example, BART achieves a BLEU4 of 21.3 on SQuAD [12], while on FollowupQG it only achieves a BLEU4 of 2.61. Similar observations also hold for T5 and GPT-Neo. This is largely due to the open-ended nature of follow-up question generation. Compared with answer-aware QG, where the target answer is given and the questions are mostly factoid, follow-up questions are more open-ended: the model may generate other plausible questions different from the human references, leading to low performance on \(n\)-gram based evaluation metrics. This open-ended nature of follow-up questions makes the automatic evaluation less informative.
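These \(n\)-gram metrics can be reproduced with the Hugging Face `evaluate` library; in the sketch below, `predictions` and `references` are assumed to hold the model-generated and human follow-up questions for the test set, respectively.

```python
import evaluate

bleu = evaluate.load("bleu")
meteor = evaluate.load("meteor")
rouge = evaluate.load("rouge")

# predictions: list of generated questions; references: list of human questions.
scores = {
    "bleu4": bleu.compute(
        predictions=predictions,
        references=[[r] for r in references],  # one reference per prediction
        max_order=4,
    )["bleu"],
    "meteor": meteor.compute(predictions=predictions, references=references)["meteor"],
    "rougeL": rouge.compute(predictions=predictions, references=references)["rougeL"],
}
print(scores)
```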
### Human Evaluation
To better evaluate the quality of generated questions, we conduct human evaluation on 100 randomly sampled pairs in the test set of FollowupQG. We ask four workers to rate the questions raised by humans and the questions generated by different models for these samples. Workers are blinded to the identity of the models during annotation. For each question, we ask workers to give ratings on four criteria: _Relevance_, _Fluency_, _Complexity_, and _Informativeness_. The detailed criteria are shown in our designed questionnaire in Appendix B. We average the scores from the workers on each question and report the averaged performance.
We find that questions generated by BART and T5 achieve scores comparable to human questions in terms of _fluency_ and _relevance_. However, their _complexity_ and _informativeness_ scores are much lower. This indicates that pre-trained models face challenges in solving the key issue of the FQG task, namely generating deep and informative questions. Furthermore, the Pearson correlation between automatic and human evaluation results is around 0.38, indicating a weak relationship. FollowupQG thus poses a new challenge for developing more faithful question evaluation metrics.
## 6 Controllable Follow-up QG
We see that the key difficulty in follow-up question generation is due to its open-ended nature. This increases the difficulty of controlling _what to ask_ and _how to ask_ for models. We now explore large language models' ability to tackle controllable follow-up question generation via in-context learning. Instead of relying on supervised fine-tuning methods, we adopt the idea of simply "prompting" the model with a few input-output exemplars to guide models to generate similar types of follow-up questions. Inspired by recent work [11] on
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & Hidden Dimension & Layer & Head \\ \hline BART-large & 1024 & 24 & 26 \\ \hline T5-base & 768 & 12 & 12 \\ \hline GPT-Neo & 768 & 12 & 12 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Model details
\begin{table}
\begin{tabular}{l|c c c c c c|c c c} \hline \hline
**Models** & **B1.** & **B2.** & **B3.** & **B4.** & **MET.** & **ROU\({}_{L}\).** & **FLU** & **REL.** & **COM.** & **INF.** \\ \hline BART & **17.22** & **7.11** & **3.89** & **2.61** & **8.00** & **13.35** & 4.54 & **0.99** & 1.36 & **1.31** \\ T5 & 13.69 & 4.32 & 1.85 & 1.02 & 5.79 & 12.49 & **4.89** & 0.95 & **1.51** & 1.26 \\ GPT-Neo & 14.08 & 4.09 & 1.89 & 1.20 & 5.26 & 11.65 & 4.56 & 0.35 & 1.29 & 1.26 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Automatic and human evaluation performance for pre-trained language models on FollowupQG. **B1.**: BLEU1; **B2.**: BLEU2; **B3.**: BLEU3; **B4.**: BLEU4; **MET.**: METEOR; **ROU\({}_{L}\)**: ROUGE\({}_{L}\); **FLU**: Fluency (1–5); **REL.**: Relevance (0–1); **COM.**: Complexity (1–3); **INF.**: Informativeness (1–3).
utilizing chain-of-thought reasoning steps for the summarization task, we also include an intermediate reasoning step in the prompt, _(initial question, answer, chain-of-thought, follow-up question)_, to show its effectiveness for question generation.
### Experimental Setting
We create standard and chain-of-thought prompts for each type of follow-up question in SS 4.1, including _definition_, _interpretation_, _counterfactual_, _relating_, and _creative_. Figure 3 illustrates one example of a _creative_ prompt for both the standard and chain-of-thought settings12. Specifically, chain-of-thought prompts aim to enhance the ability of large language models to accurately control the patterns of follow-up questions during generation via an intermediate reasoning step. To evaluate controllability, we prompt language models to generate follow-up questions for 50 sampled (_initial question, answer_) pairs for each type of prompt. This results in 500 generated questions in total. To verify whether prompting an LLM in this way brings controllability, we manually annotate the types of the generated questions and calculate the question type accuracy, by checking whether the types of the generated questions match the prompt type.
Footnote 12: We list the complete set of exemplars in Appendix C
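The sketch below illustrates how such few-shot prompts might be assembled and sent to an OpenAI-style chat model (the pre-1.0 Python client contemporary with the paper); `creative_exemplars`, its fields, and the surrounding variable names are hypothetical, and the actual exemplar wording is the one listed in Appendix C.

```python
import openai  # legacy (<1.0) client interface

def build_prompt(exemplars, question, answer, chain_of_thought=True):
    """Assemble a few-shot prompt for one question type (e.g., 'creative')."""
    parts = []
    for ex in exemplars:
        parts.append(f"Initial question: {ex['question']}\nAnswer: {ex['answer']}")
        if chain_of_thought:
            # The intermediate reasoning step that standard prompts omit.
            parts.append(f"Reasoning: {ex['chain_of_thought']}")
        parts.append(f"Follow-up question: {ex['followup']}\n")
    parts.append(f"Initial question: {question}\nAnswer: {answer}\nFollow-up question:")
    return "\n".join(parts)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": build_prompt(creative_exemplars, initial_q, answer)}],
    temperature=0.7,
)
followup = response["choices"][0]["message"]["content"]
```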
### Result Analysis
We evaluate chain-of-thought prompting on ChatGPT, GPT-3.5 (text-davinci-003), and GPT-4. Figure 4 shows the distribution of generated question types when using standard and chain-of-thought prompts with the large language models. First, we observe that generating questions that match the prompted type is easier for low-level question types than for higher-order ones. For example, all three models have relatively high accuracy (\(\sim\)60%) in generating _definition_ and _interpretation_ questions when given the corresponding standard or chain-of-thought prompts. However, accuracies drop (\(\sim\)40%) when generating _relating_ or _creative_ questions.
Secondly, the evaluation of different language models indicates that GPT-4 outperforms the other models on controllable question generation, particularly for high-order question types. GPT-4 achieves around 66% accuracy in generating _creative_ questions, while ChatGPT and GPT-3.5 only reach around 50% when using chain-of-thought prompts.
Third, our findings indicate that incorporating a chain-of-thought reasoning step improves controllability when generating follow-up questions, particularly for higher-order question types. Notably, GPT-4 shows an approximately 16% increase in accuracy in generating _creative_ questions compared to using standard prompts alone. However, ChatGPT and GPT-3.5 still exhibit relatively low accuracy in controlling high-level questions, even with chain-of-thought prompts. These results point to controlling and improving the generation of high-level follow-up questions as a promising direction for future work on these models.
### Case Study
To give a clear understanding of the differences between follow-up questions raised by humans and by large language models, we compare several model-generated questions under different prompts with human-raised questions from our dataset FollowupQG. Table 5 shows one _relating_ human-raised follow-up question and six model-generated questions obtained with _relating_ standard and chain-of-thought prompts. More examples are shown in Appendix D. In general, we observe that all models
Figure 3: Standard and chain-of-thought _creative_ prompt examples. Chain-of-thoughts are highlighted.
are able to generate fluent and logical follow-up questions based on the given context (_initial question, answer_). However, they still have some limitations compared with human-written questions.
First, humans are capable of proposing _relating_ questions by providing new and specific relevant examples. For instance, when presented with a context regarding a higher percentage of oxygen, humans can generate additional factors not mentioned in the given context, such as "increased plant population" or "human-induced pollution," as shown in Table 5. However, both ChatGPT and GPT-4 struggle to perform this task effectively. While GPT-4 may generate associations with new terms, the generated content appears to be more generic and lacks the specific and concrete examples that humans provide.
Second, LLM-generated questions tend to follow a formal and formulaic pattern, in contrast with the layperson-friendly, conversational style of our dataset. This underscores another challenge of this prompting approach with LLMs: controlling the style of generated questions.
Figure 4: Distribution of generated question types by using different prompts on ChatGPT, GPT-3.5, GPT-4. Vertical labels represent prompts, horizontal labels represent question types.
\begin{table}
\begin{tabular}{p{343.0pt}} \hline \hline
**Initial Question:** Why are the size of insects small? Not larger as human beings? \\
**Answer:** In the simplest terms, millions of years ago there were insects that were larger and over 7-8 feet long. It had to do with the higher percentage of oxygen in the atmosphere back then. But as the levels of gases in the atmosphere stabilized and Oxygen decreased, so did the size of these insects... \\
**Human:** May I ask, why was there a higher percentage of oxygen? Was it because there were more plants and no human pollution? And why did it go down? \\ \hline
**ChatGPT (CoT):** Are there any other examples of animals being affected by changes in atmospheric gases? \\
**GPT3.5 (CoT):** What specific factors cause the decrease in oxygen levels that ultimately led to the decrease in insect size? \\
**GPT-4 (CoT):** How did higher oxygen concentrations in the past contribute to the growth and survival of giant insects like the ones in prehistoric times? \\ \hline \hline \end{tabular}
\end{table}
Table 5: Examples of model-generated follow-up questions via standard and chain-of-thought relating prompts.
## 7 Conclusion and Future Work
We explore the problem of information-seeking follow-up question generation by presenting FollowupQG, a dataset consisting of 3K _(initial question, answer, follow-up question)_ tuples that represent real-life human question-asking, including rich pragmatic functions and diverse cognitive skills. We then benchmark question generation models on this data via fine-tuning and _chain-of-thought_ prompting. Extensive evaluations demonstrate several difficult aspects of follow-up question generation, showing that FollowupQG is a challenging dataset deserving future investigation. Future work includes how to promote higher-order deep questions, how to control the generation process, how to improve evaluation metrics, and how to adapt follow-up QG to practical applications such as education.
## Acknowledgement
This research has been supported by the WING research group of the National University of Singapore. We would like to thank Professor Kan for his support, and anonymous reviewers for their valuable feedback on how to improve the paper.
## Limitations
We acknowledge several limitations in our work. First, follow-up questions are difficult to evaluate with current automatic evaluation metrics, especially when judging whether the questions seek new information. Although human evaluation is involved in this work, it is time-consuming and costly, and it is difficult to reproduce and to guarantee evaluation consistency.
Second, since the data is collected from an online question-answering forum, the pragmatic functions we found in FollowupQG may not cover all types of follow-up questions in real life. Although FollowupQG covers diverse types of follow-up questions from low to high cognitive levels, follow-up questions raised in other scenarios (_e.g._, in the classroom, in paper review, in conversation) might be different and are worthwhile to explore. For example, _criticizing_ questions rarely appear in our dataset, probably because in forum QA the questioners are often less knowledgeable than the answerer in the domain they are asking about. However, in paper reviewing, _criticizing_ questions may be more commonly seen.
Third, for the modeling part, we focus on revealing the limitations of state-of-the-art large language models in follow-up question generation. Although we design a method to improve generation via _chain-of-thought_ prompting, it is quite straightforward and contributes only modestly to generating deeper and more informative questions. More specialized model designs should be explored in the future, such as modeling the reasoning chain or discourse structure.
|
2309.04319 | Cascade of multi-electron bubble phases in monolayer graphene at high
Landau level filling | The phase diagram of an interacting two-dimensional electron system in a high
magnetic field is enriched by the varying form of the effective Coulomb
interaction, which depends strongly on the Landau level index. While the
fractional quantum Hall states that dominate in the lower energy Landau levels
have been explored experimentally in a variety of two-dimensional systems, much
less work has been done to explore electron solids owing to their subtle
transport signatures and extreme sensitivity to disorder. Here we use chemical
potential measurements to map the phase diagram of electron solid states in
$N=2$, $N=3$, and $N=4$ Landau levels in monolayer graphene. Direct comparison
between our data and theoretical calculations reveals a cascade of
density-tuned phase transitions between electron bubble phases up to two, three
or four electrons per bubble in the N=2, 3 and 4 Landau levels respectively.
Finite temperature measurements are consistent with melting of the solids for
T$\approx$1K. | Fangyuan Yang, Ruiheng Bai, Alexander A. Zibrov, Sandeep Joy, Takashi Taniguchi, Kenji Watanabe, Brian Skinner, Mark O. Goerbig, Andrea F. Young | 2023-09-08T13:33:59Z | http://arxiv.org/abs/2309.04319v1 | # Cascade of multi-electron bubble phases in monolayer graphene at high Landau level filling
###### Abstract
The phase diagram of an interacting two-dimensional electron system in a high magnetic field is enriched by the varying form of the effective Coulomb interaction, which depends strongly on the Landau level index. While the fractional quantum Hall states that dominate in the lower energy Landau levels have been explored experimentally in a variety of two-dimensional systems, much less work has been done to explore electron solids owing to their subtle transport signatures and extreme sensitivity to disorder. Here we use chemical potential measurements to map the phase diagram of electron solid states in \(N=2\), \(N=3\), and \(N=4\) Landau levels in monolayer graphene. Direct comparison between our data and theoretical calculations reveals a cascade of density-tuned phase transitions between electron bubble phases up to two, three or four electrons per bubble in the N=2, 3 and 4 Landau levels respectively. Finite temperature measurements are consistent with melting of the solids for T\(\approx\)1K.
In an electron solid, spatial translation symmetry is spontaneously broken so that the ground state charge density forms a periodic structure incommensurate with the underlying crystal lattice. One known example is obtained in high Landau levels (LLs) in two-dimensional (2D) electron systems. Theoretically, the phase diagram is expected to host a rich interplay of competing phases[1; 2; 3; 4; 5; 6; 7; 8]. A unique feature of electron solids in higher LLs is that a variable number of electrons may cluster on each site of the emergent crystal. The formation of these phases--known as "electron bubbles"--is driven by the structure of the electronic form factors in the LLs. Electron bubble phases were first identified in the GaAs 2D electron gas by the observation of the re-entrant integer quantum Hall effect (RIQHE) in transport measurements[9; 10], in which the crystallized electrons freeze and no longer contribute to the Hall conductivity. Similar phases are also expected in graphene[11; 12; 13], and recent measurements have confirmed their existence[14; 15]. While the existence of electron solids is straightforward to confirm using transport measurements, distinguishing them from each other to construct a comprehensive phase diagram is not. To this end, other experimental methods, such as microwave spectroscopy[16], surface acoustic wave transmission[17; 18], and tunnelling spectroscopy [19] have been developed to study vibrational modes related to the lattice structure of electron solids. More recently, temperature-dependent transport has shown that the same RIQH state may host more than one bubble phase, distinguished by different melting temperatures[20; 21; 22]. However, a detailed phase diagram of the electron bubble phases across different LLs, long predicted by theory, has not been conclusively established.
Measuring thermodynamic properties provides a probe of quantities directly related to the ground state energy, offering a chance to map out a complete phase diagram independent of the detailed transport phenomenology of the ground state. In this Letter, we use chemical potential measurements [23] to construct just such a phase diagram for partially filled LLs in monolayer graphene. Our data demonstrate the existence of multiple distinct electron bubble phases characterized by different bubble sizes. By directly comparing our data with mean-field-theory calculations, we establish a one-to-one correlation between the filling factor and the electron bubble morphology.
Our measurement is performed in a graphene/hBN heterostructure assembled using standard dry pickup techniques[24]. Two graphene monolayers are separated by an hBN dielectric layer of 40nm thickness, with additional hBN dielectric and graphite gates forming a four-plate capacitor geometry. The top graphene serves as a charge detector, which, combined with a feedback loop, allows us to accurately determine changes in the chemical potential of the bottom 'sample' graphene[23].
Fig. 1 presents the chemical potential \(\mu\) measured across individual LLs with orbital quantum numbers \(N=0\), 1, 2, 3, and 4. The qualitative behavior of \(\mu\) depends strongly on \(N\). For \(N=0\) and \(N=1\) (Figs. 1a-b) fractional quantum Hall states are favored, with incompressible states (manifesting here as nearly discontinuous jumps in \(\mu\)) observed at filling factors associated with two-flux and four-flux composite fermion sequences[25]. For \(\nu^{*}>-1/5\) (or \(\nu^{*}<-4/5\)) within the \(N=0\) and \(N=1\) LL, \(\mu\) changes smoothly, showing a large negative inverse compressibility \(d\mu/d\nu\)[26]. This behavior has been identified with the formation of Wigner crystal states in previous experiments in both GaAs[27; 28] and graphene[23; 29].
For \(N\geq 2\) (Figs. 1c-e), a qualitatively different behavior is observed, with \(\mu\) dominated by much weaker oscillatory features that are not associated with any particular fractional \(\nu\). As we elaborate upon below, these features are signatures of multi-electron bubble states. Bubble states are generically expected in higher LLs due to the nature of the single-particle wave functions, which feature multiple nodes. This form factor considerably modifies the Coulomb repulsion at short distances, favoring charge-density-wave-type states instead of incompressible fractional quantum Hall states. In the \(N=2\) LL, our measurement reveals a competition between the FQH states observed at \(\nu^{*}=-1/5\) and \(-4/5\) and electron bubble states, as reported previously[14]. In the \(N=3\) and \(N=4\) LLs, the electron bubble phases are favored over the entire range of filling factors, manifesting as a slow modulation of \(\mu\) and \(d\mu/d\nu\), as shown in Fig. 2a-b. The number of oscillatory features increases with \(N\). In the \(N=3\) and \(N=4\) LLs we observed three and four pairs of features, respectively, related by particle-hole symmetry about \(\nu^{*}=1/2\).
The panels of Fig. 2a-b show \(d\mu/d\nu\) measured over a range spanning several LLs each, grouped by their orbital quantum number. For the \(N=3\) orbital (Fig. 2a), the four curves depicted are acquired in filling factor ranges corresponding to each of the four symmetry broken levels spanning \(-10<\nu<-6\). Due to limitations on the range of the electrostatic gates, for the \(N=4\) LL (Fig. 2b) only \(-12<\nu<-10\) is shown. Remarkably, the repetition of the pattern of \(\mu\) oscillations across different symmetry-broken levels indicates that this physics is independent of the spin and valley order. We may conclude that the formation of the bubble phases is governed only by single-component LL physics; as a consequence, the bubbles are not expected to be accompanied by complex spin or valley textures as have been shown to play a role in lower LLs[29; 30].
The energy scale characterizing the bubble phases may be directly accessed via the temperature dependence, shown in Fig. 2c-d. Signatures of the bubble phases disappear rapidly for \(T\approx 1-2K\) in the \(N=3\) LL, and below \(1K\) in the \(N=4\) LL. This is consistent with the general scale of the chemical potential changes associated with these phases, which are on the order of a few hundred \(\mu eV\), as well as with previously reported transport
Figure 2: **Electronic compressibility and temperature dependence of electron bubble phases.** (a) \(d\mu/d\nu\) in the \(N=3\) and (b) \(N=4\) LLs. The data is obtained via numerical differentiation of \(\mu\) measured at \(13T\) and \(15mK\). Within each LL, the four symmetry breaking levels are plotted by blue, red, orange and purple curves with increasing \(|\nu|\). The curves are offset as indicated by the gray dashed lines. Stars indicate the center of the regions identified with electron bubble states. (c) Temperature dependence of electron bubble states in \(N=3\) and (d) \(N=4\) LLs, measured at \(B=13T\).
Figure 1: **FQH and electron solid states in graphene monolayer probed by chemical potential measurements.** (a) Chemical potential change as a function of effective filling factor, \(\nu^{*}\equiv\nu-\lfloor\nu\rfloor\), in the \(N=0\), (b) \(N=1\), (c) \(N=2\), (d) \(N=3\), and (e) \(N=4\) LLs. In the \(N=0\) and \(N=1\) LLs, FQH states are observed as jumps in \(\mu\) at \(\nu^{*}=p/(2p\pm 1)\) and \(\nu^{*}=p/(4p\pm 1)\) (\(p=1,2,3,...\)), a selection of which are labeled. For \(N\geq 2\), broad oscillatory features dominate, which we associate with electron solids. The \(N=2\) LL is a marginal case where fractional quantum Hall states and electron bubbles compete within a narrow range of filling factors. All data measured at \(B=13T\) and \(T=15mK\).
data[14]. The order of magnitude of this scale is consistent with a simplified Lindemann criterion[31] for crystal melting, according to which the thermal position fluctuations need to reach roughly 15% of the lattice spacing for the crystal to melt. Within the harmonic approximation for the crystal, one obtains critical temperatures in the \(\sim 1\) K range (see supplementary material). Notably, the energy scale of the bubble phases is considerably smaller than that of the fractional quantum Hall physics in the lower LLs, where gaps (at comparable magnetic fields) are typically in the \(>10K\) range.
Theoretically, the ground state of the interacting electron system in a partially filled high-N Landau level is expected to evolve through a series of multi-electron bubble phases, as illustrated in Fig. 3a for the case of N=4. These crystalline phases can be described within a mean-field approach as presented in detail in the Supplementary Material. Fig. 3a shows the cohesive energy per particle for the bubble crystals with \(M\) electrons per lattice site as a function of the effective filling factor \(\nu^{*}\). The cohesive energy is the energy per particle, from which we have already subtracted the Hartree-Fock energy of a featureless electronic liquid[8] as well as the charging energy of the parallel plate capacitor in which the sample is embedded. For a fixed value of \(M\), the energy of the triangular bubble crystals depends on the spacing \(\Lambda_{B}=\sqrt{4\pi M/\sqrt{3}\nu^{*}}l_{B}\) between the bubbles, which in turn depends on the effective filling \(\nu^{*}\). Here, \(l_{B}=\sqrt{\hbar/eB}\) is the magnetic length.
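As a quick numerical check of these expressions, the sketch below evaluates \(l_{B}\) and \(\Lambda_{B}\) for a few illustrative \((M,\nu^{*})\) combinations at \(B=13\) T; the chosen filling factors are examples for illustration, not values taken from the paper.

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
E_CHARGE = 1.602176634e-19  # C

def magnetic_length(B):
    """l_B = sqrt(hbar / (e B)), in meters."""
    return np.sqrt(HBAR / (E_CHARGE * B))

def bubble_spacing(M, nu_star, B):
    """Triangular-lattice spacing Lambda_B = sqrt(4 pi M / (sqrt(3) nu*)) l_B."""
    return np.sqrt(4 * np.pi * M / (np.sqrt(3) * nu_star)) * magnetic_length(B)

B = 13.0  # Tesla, as in the measurements above
print(f"l_B at {B:.0f} T: {magnetic_length(B) * 1e9:.2f} nm")  # ~7.1 nm
for M, nu_star in [(1, 0.25), (2, 0.5), (4, 0.8)]:  # illustrative fillings
    print(f"M={M}, nu*={nu_star}: Lambda_B = "
          f"{bubble_spacing(M, nu_star, B) * 1e9:.1f} nm")
```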
One obtains a family of curves, with minima at positions described approximately by \(\nu^{*}\sim M/N\). The \(M\)-bubble phase is realized whenever it is lowest in energy within a certain filling-factor range. Within a given Landau level, the maximum stabilized value of \(M\) equals \(N\). Theoretically, one may even stabilize a bubble phase with \(M=N+1\) in the vicinity of a half-filled, singly-degenerate Landau level (\(\nu^{*}\sim 1/2\))[7]. However, this phase is thought to compete energetically with a stripe phase; we find no evidence for it in the experimental data.
Notably, the family of minimum-energy curves shown in Fig. 3b is not convex upon variation of \(\nu^{*}\), a signature of a thermodynamic instability toward the formation of mixed phases, in which parts of the sample area are occupied by crystals with differing numbers of electrons per bubble. However, we note that for our experimental geometry, the variations in internal energy caused by the bubble phases are dwarfed by the electrostatic energy of the electron gas. Taking this into account, mixed phases are only found in a range \(\delta\nu\approx 2\times 10^{-3}\) (see supplementary information) in the vicinity of the level crossings visible in Fig. 3b. In this picture, then, we expect a succession of pure bubble phases, separated by sharp phase transitions.
To facilitate comparison between experiment and theory, in Fig. 4a-c, we plot the experimentally measured \(\mu\) scaled by the Coulomb energy, \(E_{c}=e^{2}/(\epsilon\ell_{B})\). Each panel presents \(\mu\) measured at different values of the magnetic field \(B\) for the same LL fillings, with an offset of \(0.01E_{C}\) between curves introduced for clarity. The \(\mu\) modulations observed in the curves are almost identical in these units, as expected given the Coulomb-driven nature of the electron bubble phases. Fig. 4d-f presents the calculated chemical potential of electron bubble phases in the \(N=2\), \(N=3\), and \(N=4\) LLs in the absence of disorder. The solid curves are obtained from the calculated energy per particle \(E\) of the \(M\)-bubble phases via \(\mu=\partial\left(\nu E\right)/\partial\nu\)[8]. Note that in these calculations, we restore the contribution of the featureless background charge omitted above in the calculation of the cohesive energy. Our calculations account for screening caused by both the dielectric environment as well as inter-Landau level excitations in the graphene[32; 33]. As in the N=0 and N=1 Landau levels[23], accurately accounting for screening is required for quantitative agreement between experiment and theory in graphene.
Despite the comparative simplicity of our model, it agrees quantitatively with the data in the overall scale of the chemical potential modulation across the Landau level, as well as in the locations of the various bubble phases, which we identify with positive-compressibility regions for \(M\geq 2\). However, in contrast to the theoretical model, where the phase transitions are sharp, in the experimental data the phase transitions are typically marked by broad regions of negative compressibility rather than sharp jumps. It is natural to associate these regions with a mixed phase arising from disorder potentials. To capture this physics, we convolve
Figure 3: **Cohesive energy for electron bubble states.** (a) Schematic depiction of electron bubble phases in the \(N=4\) LL. (b) Calculated cohesive energy for the \(N=4\) LL (see supplementary information for details). The ground state is obtained by tracing the lowest energy state at each filling factor, which is highlighted by colored lines. The color codes here match those in panel (a).
the disorder-free curves with a Gaussian 'inhomogeneous broadening' of width \(\Delta\nu=0.015\) at \(13T\). Given the negligible quantum capacitance in the bubble regime, this is equivalent to an energy broadening \(\Delta E=7.5meV\). The dashed curves in Fig. 4d-f show the results of this model. We use the same color code to label the regions associated with pure and mixed electron bubble phases in both the experimental and simulated data in the figure; the disordered model quantitatively reproduces the key missing feature of the experimental data, replacing the cusps of the disorder-free model with negative-compressibility regimes, as observed experimentally.
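The post-processing just described amounts to a numerical derivative followed by a Gaussian convolution; the sketch below assumes the mean-field energy per particle \(E(\nu)\) is already available on a uniform filling-factor grid (it is not computed here).

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def chemical_potential(nu, E, delta_nu=0.015):
    """mu = d(nu * E)/d nu, then convolved with a Gaussian of width delta_nu.

    nu: uniform 1D grid of filling factors; E: energy per particle on that
    grid, assumed to come from the mean-field bubble-crystal calculation.
    """
    mu_clean = np.gradient(nu * E, nu)
    sigma = delta_nu / (nu[1] - nu[0])  # broadening width in grid points
    mu_broadened = gaussian_filter1d(mu_clean, sigma)
    return mu_clean, mu_broadened
```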
We note in closing several open questions raised by our work. First, while electron solids evidently dominate the ground states for \(N>2\), it is likely that they appear in the lower LLs as well, but are difficult to detect with bulk methods, where their subtle thermodynamic or transport phenomenology may be overwhelmed by the incompressibility of the fractional quantum Hall states. Second, it is unclear whether the particular orbital wavefunctions of single- and multi-layer graphene may lead to any peculiarities in the electron solid ground states as compared to semiconductor systems. Finally, our disorder model is likely a gross oversimplification. In particular, the lack of an observed magnetic field dependence in the sharpness of the phase transitions is at odds with a model of quenched disorder, where the effective broadening \(\Delta E\) would be expected to be magnetic-field independent. These and other questions might be directly resolved via scanning tunneling microscopy measurements of the real-space structure of these phases[30; 34], as well as more detailed theoretical modeling that accounts for the interplay of disorder, finite temperature, and mesoscopic phase separation.
The authors acknowledge discussions with M. Zaletel. This work was primarily supported by Office of Naval Research under award N00014-23-1-2066. A.F.Y. acknowledges the additional support of the Gordon and Betty Moore Foundation EPIQS program under award GBMF9471. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative
Figure 4: **Quantitative comparison with theoretical model of electron bubble cascade**. (a) \(\mu(\nu)\) at several magnetic fields in the \(N=2\), (b) \(N=3\), and (c) \(N=4\) Landau level. The data at \(B=31.5T\) and \(20T\) were measured at \(300mK\), while the data at \(13T\), \(7T\), and \(5T\) were measured at \(15mK\). The chemical potential change is presented in units of the Coulomb energy \(E_{c}=e^{2}/(\epsilon\ell_{B})\approx 12.5meV\cdot\sqrt{B/\text{Tesla}}\). The red, orange, and purple curves are offset by \(-0.01E_{c}\), \(-0.02E_{c}\), and \(-0.03E_{c}\) from the blue curve, respectively. (d) Chemical potential calculated by mean-field theory (solid lines, see supplementary materials) in the \(N=2\), (e) \(N=3\), and (f) \(N=4\) Landau level. The dashed lines in these panels show the chemical potential with disorder broadening taken into account. The pink, blue, purple, and green color bars represent the domains of stability of the \(M=1\), \(M=2\), \(M=3\), and \(M=4\) electron bubble phases within the disorder-broadened model, respectively. The gray regions represent broadened phase transitions where neighboring pure electron bubble phases coexist. Panels (a)-(c) use the same color codes to label the corresponding regions identified experimentally from the sign of the compressibility.
Agreement No. DMR-1644779 and the State of Florida. This work made use of shared facilities supported by the National Science Foundation through Enabling Quantum Leap: Convergent Accelerated Discovery Foundries for Quantum Materials Science, Engineering and Information (Q-AMASE-i) award number DMR-1906325. K.W. and T.T. acknowledge support from JSPS KAKENHI (Grant Numbers 19H05790, 20H00354 and 21H05233). S.J. and B.S. were supported by the NSF under Grant No. DMR-2045742.
|
2309.08523 | Breathing New Life into 3D Assets with Generative Repainting | Diffusion-based text-to-image models ignited immense attention from the
vision community, artists, and content creators. Broad adoption of these models
is due to significant improvement in the quality of generations and efficient
conditioning on various modalities, not just text. However, lifting the rich
generative priors of these 2D models into 3D is challenging. Recent works have
proposed various pipelines powered by the entanglement of diffusion models and
neural fields. We explore the power of pretrained 2D diffusion models and
standard 3D neural radiance fields as independent, standalone tools and
demonstrate their ability to work together in a non-learned fashion. Such
modularity has the intrinsic advantage of eased partial upgrades, which became
an important property in such a fast-paced domain. Our pipeline accepts any
legacy renderable geometry, such as textured or untextured meshes, orchestrates
the interaction between 2D generative refinement and 3D consistency enforcement
tools, and outputs a painted input geometry in several formats. We conduct a
large-scale study on a wide range of objects and categories from the
ShapeNetSem dataset and demonstrate the advantages of our approach, both
qualitatively and quantitatively. Project page:
https://www.obukhov.ai/repainting_3d_assets | Tianfu Wang, Menelaos Kanakis, Konrad Schindler, Luc Van Gool, Anton Obukhov | 2023-09-15T16:34:51Z | http://arxiv.org/abs/2309.08523v2 | # Breathing New Life into 3D Assets with Generative Repainting
###### Abstract
Diffusion-based text-to-image models ignited immense attention from the vision community, artists, and content creators. Broad adoption of these models is due to significant improvement in the quality of generations and efficient conditioning on various modalities, not just text. However, lifting the rich generative priors of these 2D models into 3D is challenging. Recent works have proposed various pipelines powered by the entanglement of diffusion models and neural fields. We explore the power of pretrained 2D diffusion models and standard 3D neural radiance fields as independent, standalone tools and demonstrate their ability to work together in a non-learned fashion. Such modularity has the intrinsic advantage of eased partial upgrades, which became an important property in such a fast-paced domain. Our pipeline accepts any legacy renderable geometry, such as textured or untextured meshes, orchestrates the interaction between 2D generative refinement and 3D consistency enforcement tools, and outputs a painted input geometry in several formats. We conduct a large-scale study on a wide range of objects and categories from the ShapeNetSem dataset and demonstrate the advantages of our approach, both qualitatively and quantitatively.
## 1 Introduction
Creating high-quality 3D assets based on textual descriptions for a diverse range of objects is an endeavor with great potential for digital media and artists. Recently, there has been a rise in denoising diffusion-based (DDPM) [14] text-to-image models [30, 32] producing results of unprecedented quality. The generative power of these 2D image models prompts the question: can we use them to generate multi-view consistent 3D content? As it turns out, lifting these rich generative priors to 3D is a non-trivial task. In this work, we focus on the problem of text- and geometry-conditioned painting, a problem adjacent to text-to-3D generation.
The overview of our pipeline for generating a diverse multi-view consistent painting from a text description and input geometry is presented in Fig. 1. We bootstrap our pipeline from two crucial components: a pretrained generative text- and depth-conditioned image diffusion model [30] and neural radiance fields (NeRF) [19]. The design of our pipeline separates these components into distinct processes, which communicate using the interface of image files. This is contrary to several recent approaches that rely on gradient flow between the components, either in the form of Score Distillation [18, 26], or differentiable rendering [29]. We rely on traditional rendering techniques to enable communication between the components, including Z-buffer extraction for the rendered views. The image file interface is naturally
interpretable and better suited for building modular and partially upgradable systems. This is especially important as both DDPM and NeRF research fields advance rapidly.
Prior generative 3D works often employ the UV texture unwrapping [16], a costly operation and a potential point of failure. Since our method requires only Z-buffer queries from the input geometry, the input does not necessarily have to have a UV texture map attached or even be a valid mesh. To support this claim, we experimented with Point-E [21], thus extending our pipeline to a pure text-to-3D setting.
The output of our pipeline is a NeRF corresponding to the input geometry, painted in a multi-view consistent manner. The NeRF can be converted into the explicit input format with extra coloring information.
Our pipeline's performance depends on each component's performance, so it will keep improving as the components get faster. For example, recent progress in DDPMs [30] led to a tenfold decrease in image generation time; NeRF research has seen similar speedups [20]. Capitalizing on that, we conduct a large-scale study of painting the ShapeNetSem [6] dataset, composed of 12K objects from over 270 categories (Fig. 2). The study shows that our pipeline sets new state-of-the-art results on several generative metrics while attaining proper 3D consistency.
The summary of our contributions is as follows:
* We introduce a novel approach for giving 3D assets a new life, by painting their geometry using text inputs and pretrained generative image diffusion models.
* Our method is unique in that it combines pretrained 2D diffusion models and 3D neural radiance fields as _standalone_ pipelines. The weak coupling of tools is achieved through the interpretable interface of image files and permits partial upgrades.
* We conduct a large-scale study of painting the ShapeNetSem [6] dataset and attain new state-of-the-art results on several metrics and perceived 3D consistency.
* Our method is robust to input corruptions and produces the output assets in several formats.
## 2 Related Work
**Generative Text-to-Image Models** Until recently, generative imaging was dominated by unconditional or class-conditional models [5, 12, 35]. With advancements in natural language processing, Contrastive Language-Image Pretraining (CLIP) [27] bridged the gap between visual and text modalities. This opened an avenue for open-category and text-conditioned image generation. Currently, Denoising Diffusion Probabilistic Models (DDPM) [14, 32] dominate the niche of high-quality and affordable text-conditioned generative imaging. Stable Diffusion [30] proposed shifting the diffusion process to a low-dimensional latent space, achieving competitive performance while reducing the computation requirements. Subsequent models could further condition the process on various modalities, such as depth maps, images, and inpainting masks. These new modalities and accessible pretrained checkpoints gave rise to new applications of diffusion models, such as image un-cropping [31] and perpetual view generation [4]. Likewise, our method
Figure 2: **Texturing the ShapeNetSem [6] dataset with the proposed method**. We discard the original texture and paint objects with our method using the dataset metadata “name” field as a text prompt. We show several objects from 5 views spaced with 45-degree increments around the vertical axis. Our method produces high-quality results from the input text and geometry. More visual results in Figs. 6, 7.
relies on standalone pretrained DDPMs with their various ways of conditioning.
**Neural Radiance Fields** Neural scene representations gained popularity due to their simplicity of usage and ability to capture complex scenes efficiently. Neural Radiance Fields (NeRF) [19] have recently demonstrated their versatility as a solution for 3D reconstruction from posed images. Recently, numerous improvements and variants of NeRF have been developed [24, 7, 20]. In particular, Instant NGP [20] proposed an efficient multi-resolution hash-based grid data structure, which reduces the training time of NeRF from hours to minutes. Similarly to COLMAP [33] for structure from motion, Instant NGP has become the go-to standalone tool for images-to-NeRF conversion.
**Generative 3D Models** Research on generating high-quality 3D models and assets has gained a lot of interest recently [34, 38, 39, 11, 26]. Previous methods leveraged Generative Adversarial Networks (GANs) [12] coupled with 3D-aware learned pipelines, such as differentiable renderers [11], face convolutional neural networks (CNNs) [34], voxel grids [39], and NeRFs [40, 5]. However, most of the methods require training a separate model per category, and thus, the evaluation focuses on a handful of classes, typically "cars" and "chairs", such as seen in ShapeNet [6]. With the rise of popularity in diffusion models and accessible text conditioning, recent works focused on integrating them into 3D content generation pipelines [38, 26]. DreamFusion [26] proposed score distillation sampling to couple a pretrained text-to-image diffusion model with a NeRF module to form an end-to-end trainable pipeline. Although score distillation cleverly avoids backpropagation through the diffusion model, thus reducing computational costs, it still requires significant computations. Pipelines with surrogate 3D output [21] have also received attention. Mesh-based inpainting schemes such as Latent-Paint [18] and TEXTure [29] employ differentiable rendering to generate a texture image for the input mesh. However, these methods are susceptible to artifacts introduced during UV texture unwrapping and gradient interaction between the generative model and the texturing target. Two other relevant works appeared recently: Text2Tex [8] utilizes a mesh-based inpainting scheme similar to TEXTure [29]; TextMesh [37] combines NeRF with the SDS loss akin to DreamFusion [26]. Our method overcomes the discussed limitations by using NeRF for both scene representation and iterative consistency enforcement.
## 3 Method
The pipeline of our method is outlined in Fig. 3. It takes an input geometry and a text description and generates a NeRF model that adheres to the structure of input geometry but is enhanced with text-guided painting. It paints the geometry progressively: starting from the object facade initialization, it iteratively picks a novel view according to the camera pose selection strategy, generates a novel view, and reconciles it with the previous views using NeRF.
**Prerequisites and Assumptions** Our pipeline is object-centric; hence, we create a virtual scene with the object scaled and positioned at the origin and a camera positioned on a unit sphere, pointing at the origin. We additionally assume that the object surface is opaque, which is required to perform unambiguous queries of the renderer's Z-buffer. This constraint limits processing of models with transparency or with large sprite surfaces (e.g., for trees or flowers), sometimes seen in ShapeNetSem. As discussed in the previous sections, the input geometry is not required to have UV unwrapping or other properties attached to the geometry. Whenever normals are available, the inpainting procedure can benefit from them through an additional inpainting zone step; however, this is optional.
We require a pretrained image diffusion model with text and depth conditioning to paint novel views. From the NeRF pipeline, we expect that it can ingest view images, poses, and optional depth maps, and output a model that can be queried at arbitrary poses for color and depth.
Figure 3: **Geometry painting pipeline that takes the geometry, a text prompt, and outputs a painted NeRF of the model.** We utilize the diffusion image generation process and the 3D reconstruction process of NeRF as standalone procedures. We start by generating the facade views using only diffusion view generation. Our pipeline progressively builds the 3D model by using NeRF to generate view-consistent images and feeding them back to the diffusion process to generate a new input view according to the view selection strategy.
**Initialization** The first view generation defines and constrains the object's overall painting and style. To obtain the first painted view, we render the object's depth map and pass it, together with the text prompt, to the depth-to-image pipeline. At this point, it is possible to query the user whether the generated initialization meets expectations and make early alterations by changing the text or the pipeline seed.
**Novel View Remapping** Multi-view consistency is crucial for generating meaningful geometry painting. However, it is tricky to achieve in a pipeline with disentangled stages applied one after another, such as our design. To this end, we employ an occlusion-aware backward remapping scheme for image view reprojection from a previously-painted view to the novel one (Fig. 4).
At its core is the view transformation \(\mathbf{P}=\mathbf{KEK}^{-1}\), which transforms normalized device coordinates (NDC) of the previous view into the novel view, where \(\mathbf{K}\) is the projection from world to NDC space and \(\mathbf{E}=[\mathbf{R}|\mathbf{T}]\) is the relative transformation of camera poses in world coordinates.
As a first step, we use the inverse transform \(\mathbf{P}^{-1}\) to map the novel view NDC coordinates with \(z\)-values assigned from the Z-buffer of the novel view rendering into the previous view. This gives us an \(xy\)-map (depicted as a green-red tile) of pixels of the novel view and their source locations directly in the previous view. The transform also gives us the depth map of the source locations as seen from the previous view, which is used for the occlusion test.
Secondly, we obtain the previous view's backward remapping into the novel view using the bilinear interpolation of the previous view at the \(xy\)-map locations. Compared to the direct application of the transform \(\mathbf{P}\) to the previous view image, the backward remapping is continuous by design and guarantees the absence of seams or holes in the remapped image.
However, an additional occlusion mask is required to identify areas of the novel view that are not visible from the previous view to handle these areas properly. Thus, as a third step, we obtain this mask by comparing the previous view depth map resampled using our \(xy\)-map, with the \(z\)-values obtained from the transformation on the first step. Evidently, the positions with agreeing depth are visible in both views
Figure 4: **Novel view remapping from a previous view.** Multi-view consistency is enforced by remapping the previous view into the novel view and preparing the inpainting mask of the unseen areas. The remapping procedure consists of three steps: (1) obtaining sampling coordinates of the previous view in the novel view, (2) sampling the novel view from the previous view, and (3) obtaining the inpainting mask by analyzing occlusions. An optional inpainting zone step provides better control of inpainting for inputs with surface normals.
under the assumptions we declared in the prerequisites. The final remapped view is thus obtained by combining the outputs of the previous two steps.
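To make the three steps concrete, the following NumPy sketch implements the occlusion-aware backward remapping for a single previous view. All function and variable names are ours and hypothetical; we assume the view transform \(\mathbf{P}\) acts on homogeneous NDC coordinates, and we use nearest-neighbour sampling for brevity where the actual pipeline uses bilinear interpolation:

```python
import numpy as np

def backward_remap(prev_rgb, prev_depth, novel_depth, P_inv, eps=1e-3):
    """Occlusion-aware backward remapping of a previous view into a novel view.
    prev_rgb (H,W,3); prev_depth / novel_depth (H,W) are Z-buffers in NDC;
    P_inv (4,4) maps novel-view NDC points into the previous view."""
    H, W = novel_depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    ndc = np.stack([2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1,
                    novel_depth, np.ones_like(novel_depth)], axis=-1)
    # Step 1: the xy-map, i.e. source locations of novel-view pixels in the previous view.
    src = ndc.reshape(-1, 4) @ P_inv.T
    src = (src[:, :3] / src[:, 3:4]).reshape(H, W, 3)
    sx = np.clip((src[..., 0] + 1) / 2 * (W - 1), 0, W - 1)
    sy = np.clip((src[..., 1] + 1) / 2 * (H - 1), 0, H - 1)
    # Step 2: backward-sample the previous view at the xy-map locations.
    si, sj = np.round(sy).astype(int), np.round(sx).astype(int)
    remapped = prev_rgb[si, sj].copy()
    # Step 3: occlusion test, keeping only pixels whose transformed depth
    # agrees with the previous view's own Z-buffer; the rest must be inpainted.
    inpaint_mask = np.abs(prev_depth[si, sj] - src[..., 2]) >= eps
    remapped[inpaint_mask] = 0.0
    return remapped, inpaint_mask
```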
Additionally, the occlusion mask is stored for the future inpainting stage. Since most inpainting methods permit varying inpainting strength per pixel, we additionally compute an inpainting zones map (similar to "trimaps" in TEXTure [29]) whenever the input geometry has surface normals. Specifically, we assign a visibility score to each fragment as the dot product between the surface normal and the unit vector originating at the camera origin and pointing at the fragment. By comparing visibility scores between the previous and novel views' fragments, we classify zones into areas that are kept intact, fully inpainted, or refined. As we identify in the ablation study, inpainting zoning helps with multi-view consistent painting details; a sketch of the classification is given below.
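The zoning rule itself reduces to a few lines; in the sketch below the threshold and all names are illustrative assumptions rather than the exact constants of our implementation:

```python
import numpy as np

def inpainting_zones(points, normals, cam_prev, cam_novel, margin=0.3):
    """Classify fragments into keep / refine / inpaint zones by comparing
    visibility scores (dot products of normals with directions to the camera)
    between the previous and the novel viewpoints."""
    def score(cam):
        to_cam = cam[None, :] - points
        to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True)
        return np.sum(normals * to_cam, axis=1)  # > 0 means front-facing

    s_prev, s_novel = score(cam_prev), score(cam_novel)
    zones = np.full(len(points), "keep", dtype=object)
    zones[s_novel - s_prev > margin] = "refine"  # seen much better from the novel view
    zones[s_prev <= 0.0] = "inpaint"             # back-facing before, hence unseen
    return zones
```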
Finally, as we expand the painted area of the input, more views become available for color transfer to a novel view. The described procedure is thus easily extended to perform remapping from multiple previous views.
**Novel View Inpainting** We employ a custom text- and depth-conditioned latent diffusion inpainting pipeline to complete novel views after the remapping. The pipeline inherits from the previous works on inpainting with diffusion models [17, 9] and is largely based on the pretrained Stable Diffusion [30]. The input to the pipeline is the same as for image generation, with the addition of a mask that defines the inpainting area and the remapped image constraint (Fig. 5).
The mask \(M\) is taken from the remapping stage and downsampled to match the latent diffusion resolution. Upon availability, inpainting zoning additionally assigns an intermediate weight value for the refined areas.
At each denoising step \(t\), we take the latent representation of the remapped image \(x_{0}\) and inject noise through \(t\) forward diffusion steps to obtain \(x_{t}\). At the same time, we perform a single reverse diffusion step to obtain \(y_{t}\) from the noisier \(\tilde{y}_{t+1}\), at which point we use the depth and the text prompt as conditions. We now blend the denoised latent \(y_{t}\) with the remapped condition \(x_{t}\) using the inpainting mask \(M\): \(\tilde{y}_{t}=(1-M)y_{t}+Mx_{t}\). This process starts with \(\tilde{y}_{\max}\sim\mathcal{N}(0,1)\) and is repeated until obtaining \(\tilde{y}_{0}\), which is then decoded into the inpainting output. Notably, latent diffusion is the primary source of inconsistency between the inpainted images and their remapped constraints, which calls for a solution to enforce multi-view consistency globally.
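The whole blended denoising loop fits in a few lines. The sketch below assumes a diffusers-style scheduler and a depth-conditioned latent UNet; the exact module interfaces are assumptions on our part, not a released API:

```python
import torch

@torch.no_grad()
def blended_inpainting(x0, M, depth, prompt_emb, unet, scheduler):
    """Mask-blended denoising: keep the remapped constraint where M = 1 and
    let the depth- and text-conditioned reverse process fill in the rest."""
    y = torch.randn_like(x0)  # \tilde{y}_max ~ N(0, 1)
    for t in scheduler.timesteps:
        # forward-diffuse the remapped latents x0 to the current noise level t
        x_t = scheduler.add_noise(x0, torch.randn_like(x0), t)
        # one reverse step on y, conditioned on depth and the text prompt
        eps = unet(torch.cat([y, depth], dim=1), t,
                   encoder_hidden_states=prompt_emb).sample
        y = scheduler.step(eps, t, y).prev_sample  # y_t
        y = (1 - M) * y + M * x_t                  # \tilde{y}_t
    return y  # \tilde{y}_0, decoded into the inpainted view by the VAE
```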
**NeRF Reconstruction** Using the remapping and inpainting techniques introduced above, we can ensure the soft consistency of a subset of proximal views. However, we aim for global multi-view consistency, which requires considering all the generated views simultaneously. To this end, we employ a flavor of NeRF to resolve multi-view conflicts and reconcile painting from all viewpoints. Since the standard NeRF formulation supports different colors of the same 3D location depending on the viewpoint, we disable such view-dependent effects and fit the NeRF to predict view-invariant colors instead. Starting with a set of facade views and until there are no more unvisited poses, we submit all the generated images, their respective camera poses, and depth maps,
Figure 5: **Our text- and depth-conditioned latent diffusion inpainting pipeline for constrained novel view synthesis.** It is inspired by both the inpainting pipeline that takes an inpainting mask and applies it to the latents, and the text- and depth-conditioned generation pipeline from the Stable Diffusion distribution [30]. At each diffusion time step, the latents are composed from the forward diffusion step over the inpainting constraints (“Remap” in the figure), and the reverse diffusion step, conditioned on the input text prompt and depth.
as inputs to NeRF. Once the scene is fitted, all painted training images are replaced with renders from the fitted NeRF, so that our subsequent remapping steps always start from multi-view consistent inputs.
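One progressive-painting step can thus be written schematically as below; the callables stand for the standalone tools (the diffusion inpainter and the NeRF fitter), and none of the names denote a fixed API:

```python
def progressive_step(views, next_pose, fit_nerf, remap, inpaint):
    """Reconcile all painted views with NeRF, then paint the next pose."""
    nerf = fit_nerf(views)                               # global multi-view agreement
    views = {pose: nerf.render(pose) for pose in views}  # consistent starting point
    image, mask = remap(views, next_pose)                # occlusion-aware remapping
    views[next_pose] = inpaint(image, mask)              # diffusion completes the view
    return views, nerf
```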
## 4 Experiments
As a first step towards painting ShapeNetSem, we chose a few hyperparameters for our pipeline. To paint each model, we rely on 9 views regularly spaced around the object in the horizontal plane (\(40^{\circ}\) increment). Starting from the front view, we generate 5 facade views using just the remapping and inpainting procedures. This facade configuration maximizes the coverage of the input geometry within the range of efficiency of our remapping technique. Before generating each subsequent view, we perform NeRF reconstruction. Our pose selection strategy picks the next view from the clockwise and counter-clockwise increments in alternating steps. We remap two of the closest painted views from the left and right paths around the model each time. This technique helps minimize the content gap in the last view, where the clockwise and counter-clockwise painting paths meet.
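For reference, the painting order of azimuths under this strategy can be generated as follows (the helper name is ours; the first five entries form the facade):

```python
def painting_azimuths(n_views=9, step_deg=40):
    """Painting order: front view first, then alternating clockwise and
    counter-clockwise increments so the two paths meet behind the object."""
    order = [0]
    for k in range(1, n_views // 2 + 2):
        for sign in (1, -1):
            a = (sign * k * step_deg) % 360
            if len(order) < n_views and a not in order:
                order.append(a)
    return order

# painting_azimuths() -> [0, 40, 320, 80, 280, 120, 240, 160, 200]
```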
**Text Prompting** The base prompt is set to "_A photo of_ {_object_}". An additional "{_dir_} _view_" modifier specifies the coarse relation of the viewpoint and the object, helping with 3D consistency. Other modifiers are discussed in the Appendix.
**NeRF Setup** We chose Instant NGP [20] as a standalone NeRF backbone for its high degree of configurability and great performance. Additionally, we leverage depth supervision in NeRF training to facilitate faster convergence and obtain higher-quality reconstruction.
Our setting slightly differs from the default NeRF objective because our training images are generated from diffusion and can have soft view conflicts. As mentioned previously, the purpose of NeRF in our pipeline is to bring multi-view painting to agreement rather than to simulate light transport. We disable view-dependent effects in the NeRF configuration to align with this purpose. Additionally, we adjust the parameters for the grid encoding settings. We found that a higher number of levels (5) and encoded features (16) achieve good rendering fidelity while keeping a sufficiently smooth and continuous NeRF surface.
**ShapeNetSem Processing** We demonstrate that our method can be applied to a wide range of object categories and shapes by conducting a study of texturing a significant subset of the ShapeNet [6] dataset called ShapeNetSem, which contains 12K models in over 270 categories. We pre-process each model by orienting it using the up and front vectors from the metadata, centering, and scaling to fit the unit sphere. We take the text prompt's "_object_" part from the name field of the dataset metadata.
Each model has a list of associated categories attached to it. We compute frequencies of all categories in the entire dataset and assign each model a primary category. These primary categories are used for both qualitative and quantitative studies. We demonstrate high-quality painting results on a select set of categories, including electronics, animals, and game characters, in Fig. 2. See Figs. 6, 7 for more results.
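For reproducibility, the pre-processing and the primary-category assignment amount to a few lines of NumPy. The exact axis convention in the orientation step is an assumption on our part (the metadata only fixes the up and front vectors):

```python
import numpy as np
from collections import Counter

def normalize_model(vertices, up, front):
    """Orient with the metadata up/front vectors, center, and scale the
    model to fit the unit sphere (assumes up and front are orthonormal)."""
    right = np.cross(front, up)
    R = np.stack([right, np.asarray(up), -np.asarray(front)])  # rows: target axes
    v = vertices @ R.T
    v = v - (v.min(axis=0) + v.max(axis=0)) / 2    # center the bounding box
    return v / np.linalg.norm(v, axis=1).max()     # fit into the unit sphere

def primary_categories(category_lists):
    """Per-model primary category: the dataset-wide most frequent one."""
    freq = Counter(c for cats in category_lists for c in cats)
    return [max(cats, key=freq.__getitem__) for cats in category_lists]
```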
**Comparison with Other Methods** We compare our method quantitatively with two recent mesh texturing methods, Latent-Paint [18] and TEXTure [29] (Fig. 6). We ran both pipelines on the ShapeNetSem [6] dataset using the same 360-degree camera views and text prompts. While the
Figure 6: **Qualitative Comparisons** of our method to TEXTure [29], Latent-Paint [18], the original texturing from ShapeNetSem [6], and the “upper-bound quality” generative prior applied to each individual view without 3D consistency constraints. As can be seen, our method generates noise- and seam-free texturing with a high degree of detail.
TEXTure method handles well-defined camera trajectories, Latent-Paint requires considerably more views to perform decently; otherwise, we kept their default settings and ensured alignment of the cameras. We rendered interpolated views of the output models and compared the results of the two pipelines.
To facilitate the quantitative study, we additionally generated painting results for the evaluation views using only the Stable Diffusion [30] depth-to-image model. Although this set of images completely lacks 3D consistency, it provides a useful upper bound on the image fidelity that is attainable with the generative model.
After processing all models with the selected methods, we render their 360-degree spin views using synchronized camera setups and aggregate them in the video gallery (Fig. 7).
A closer look at the output renders in the video (also the car model in Fig. 8) reveals discernible quality differences between the geometry painting methods. We can see that the original ShapeNet [6] textures are rather primitive. Latent-Paint [18] exhibits blurred and overall coarse texturing. TEXTure [29] produces much more realistic and detailed results; however, compared to our method, its output contains spurious artifacts and texture filtering issues. This effect is prevalent in complex meshes containing many fine-grained geometric details. We observe that both prior methods have distinct artifacts that stem from the effective resolution of the UV texture maps, texture atlas patch discontinuities, and imperfect UV unwrapping. These issues are further exacerbated when differentiable rendering is employed. Our method is free of these issues; refer to the Appendix for discussion.
**Compute Requirements** Unlike the other two methods, whose memory footprint fluctuates depending on the 3D model complexity and requires at least 16GB GPU RAM, our method's resources are defined purely by NeRF configuration and are fixed across the whole dataset to 12GB RAM. Our pipeline configured as stated above takes \(\sim\)15-20 min to complete, which is on par with the competition.
**Quantitative Evaluation** We execute our pipeline, collect the output NeRF, and sample it at 8 different evaluation views at \(45^{\circ}\) increments. Using collections of these views obtained for all models in the dataset, we compared distribution metrics between each method and the reference (no 3D consistency) for the whole dataset and several primary categories. Through this evaluation, we aim to understand how close we can get to the upper bound of lifting the learned generative prior in 3D while maintaining 3D consistency by design. Frechet Inception Distance (FID) [13] and Kernel Inception Distance (KID) [2] are the standard metrics for comparing distributions of images: natural or sampled from generative models.
Following in the footsteps of [15], we report FID\({}_{\mathrm{CLIP}}\) with the CLIP feature extractor. We additionally propose two new metrics: FID\({}_{\mathrm{DINOv2}}\) and KID\({}_{\mathrm{DINOv2}}\), which utilize novel self-supervised feature extraction techniques [25]. Unlike the decade-old Inception backbone and CLIP, which focuses on named entities, DINOv2 is a powerful self-supervised feature extractor trained on natural images. All metrics are computed through the verified evaluation protocol of torch-fidelity [23]. The results of this quantitative study
Figure 8: **A Closer Look** reveals that our method produces more realistic results with invisible seams, while other methods often exhibit texture filtering issues and lower realism.
Figure 7: **Large-Scale Comparison of ShapeNetSem Texturing** with the original textures [6], Latent-Paint [18], TEXTure [29], and our method. We present spin-views of \(\sim\)12K models from over 270 categories. The models are grouped by category and sorted by group size. Categories, IDs, and model names (prompts) are specified under the corresponding video tiles. _Tip_: Use timecodes to conveniently skip to categories of interest.
Figure 9: **Exporting NeRF as Mesh.** Given the input mesh and the painted NeRF, we remesh the input almost isotropically with planarity constraints and sample vertex colors from the NeRF. This technique does not require a UV texture map for the input geometry.
are presented in Tab. 1. Evidently, our method achieves state-of-the-art fidelity to the generative prior while maintaining 3D consistency.
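A minimal invocation of this evaluation, using the torch-fidelity [23] protocol, looks like the snippet below; the directory names are placeholders, and the choice of feature extractor is version-dependent in torch-fidelity, so we note it only as a comment:

```python
# pip install torch-fidelity
import torch_fidelity

metrics = torch_fidelity.calculate_metrics(
    input1="renders/ours",       # evaluation views sampled from the painted NeRFs
    input2="renders/reference",  # 3D-inconsistent per-view depth-to-image renders
    fid=True,
    kid=True,
    # A non-default feature extractor (CLIP or DINOv2 backbones) can be
    # selected in recent torch-fidelity versions to obtain FID_CLIP / FID_DINOv2.
)
print(metrics)
```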
**Geometry Export** The output of our pipeline is contained in the final NeRF reconstruction. While NeRF as a 3D asset format gains popularity as hardware acceleration catches up, we take an extra step to transfer the generated painting back into a standard editable format. Since we do not require UV texture maps on the input and want to support use cases such as Point-E discussed below, we opt for transferring colors to the input mesh vertices. However, to ensure sufficient spatial resolution for such a scheme, vertices should be uniformly distributed on the surface of the input, which is usually not the case. To overcome this issue, we designed an algorithm for approximately-isotropic remeshing [22] that preserves the input geometry and only focuses on planar regions (Fig. 9). Using our remeshing technique helps obtain an identical mesh but with sufficient resolution for color transfer. Thanks to unambiguous color querying from our view-invariant NeRF flavor, we directly transfer color onto the remeshed input by sampling the NeRF at all vertex locations. We further note that the output asset files with per-vertex colors occupy significant space, which can be reclaimed by compression techniques such as DRACO [1].
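The color transfer itself reduces to one batched query of the fitted model; a sketch, with `query_color` standing in for the NeRF's color field:

```python
import numpy as np

def bake_vertex_colors(query_color, vertices):
    """Sample the view-invariant NeRF color field at every vertex of the
    remeshed input; the query is unambiguous because view-dependent
    effects are disabled in our NeRF flavor."""
    rgb = query_color(vertices)                       # (V, 3) floats in [0, 1]
    return (np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
```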
**Pure Text-to-3D via Point-E** We extend our pipeline with Point-E [21], a diffusion-based generative model that produces 3D point clouds from text prompts. Following [21], we convert the point cloud generated by Point-E to a signed distance field and use marching cubes with grid size \(64\) to obtain the mesh serving as an input to our method. Since the resulting geometry has surface normals of limited quality, we skip inpainting zoning in our method. Fig. 10 demonstrates an overall pipeline that takes only a text prompt as the input and outputs a mesh with improved painting. Conversely, since Point-E cannot generate detailed textures, our method can be seen as a downstream modular extension of Point-E that boosts the texture quality of the produced 3D models.
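A compact stand-in for this geometry conversion step is sketched below: it builds an unsigned distance field on the grid (a simplification of the signed field used in [21]) and extracts a level set with marching cubes; the iso-level is an illustrative assumption:

```python
import numpy as np
from scipy.spatial import cKDTree
from skimage.measure import marching_cubes

def pointcloud_to_mesh(points, grid=64, level=0.04):
    """Point cloud -> distance volume -> mesh via marching cubes (grid 64)."""
    lin = np.linspace(points.min() - 0.1, points.max() + 0.1, grid)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    queries = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    dist, _ = cKDTree(points).query(queries)         # distance to nearest point
    volume = dist.reshape(grid, grid, grid)
    verts, faces, normals, _ = marching_cubes(volume, level=level)
    return verts, faces, normals  # normals are of limited quality, as noted
```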
## 5 Discussions and Conclusion
In this work, we presented a novel pipeline combining a generative 2D diffusion prior and 3D neural radiance fields as _standalone_ modules and demonstrated their ability to paint the input geometry using a text prompt in a 3D-consistent manner. We conducted a large-scale study on the ShapeNetSem [6] dataset and demonstrated the advantages of our approach against several prior art methods on a wide range of object categories. We believe that our pipeline will reach the community of artists, content creators, and game developers and enable quick prototyping of 3D assets, particularly from existing ones, thus giving them a new life.
| FID features | Method | All (11992) | Misc. (2912) | Chair (682) | Lamp (655) | ChstDrwr. (503) | Table (416) | Couch (405) | Computer (241) | TV (229) | WallArt (220) | Bed (218) | Cable (216) |
| :-- | :-- | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: | --: |
| Inception [13, 36] | Orig. texture [6] | 30.10 | 31.82 | 40.79 | 47.61 | 113.6 | 49.18 | 81.39 | 63.02 | 60.89 | 64.92 | 67.37 | 111.9 |
| | Latent-Paint [18] | 27.73 | 30.87 | 36.65 | 38.36 | 67.60 | 28.44 | 65.98 | 68.85 | 67.85 | 90.99 | 49.04 | 73.60 |
| | TEXTure [29] | 16.10 | 18.34 | 23.44 | 30.75 | 32.65 | 34.98 | 40.40 | 46.48 | 45.85 | 61.23 | 43.04 | 38.88 |
| | **Ours** | **9.60** | **11.05** | **16.30** | **19.54** | **32.64** | **22.01** | **26.23** | **39.96** | **29.60** | **35.77** | **33.13** | **36.28** |
| CLIP [15, 27] | Orig. texture [6] | 18.86 | 18.71 | 24.89 | 27.66 | 40.15 | 25.72 | 33.57 | 20.60 | 27.29 | 18.86 | 28.79 | 37.07 |
| | Latent-Paint [18] | 15.84 | 16.42 | 17.08 | 12.29 | 29.51 | 11.34 | 22.22 | 24.50 | 22.47 | 27.30 | 19.35 | 27.83 |
| | TEXTure [29] | 6.85 | 6.85 | 9.62 | 9.37 | 11.29 | 11.00 | 9.48 | 11.28 | 11.38 | 11.37 | 11.09 | 9.79 |
| | **Ours** | **3.24** | **3.33** | **3.90** | **3.47** | **7.77** | **4.12** | **4.69** | **8.22** | **6.16** | **6.18** | **6.54** | **7.30** |
| DINOv2 [25] | Orig. texture [6] | 588.1 | 585.9 | 620.6 | 787.3 | 1640 | 883.9 | 1265.6 | 767.1 | 999.4 | 857.9 | 946.3 | 1517 |
| | Latent-Paint [18] | 332.9 | 366.1 | 285.8 | 329.0 | 696.6 | 280.6 | 556.3 | 673.4 | 773.9 | 866.5 | 533.4 | 765.1 |
| | TEXTure [29] | 175.0 | 194.6 | 181.1 | 278.9 | 321.6 | 248.0 | 282.1 | 404.4 | 501.5 | 580.3 | 276.7 | 366.2 |
| | **Ours** | **125.1** | **136.7** | **130.8** | **181.6** | **299.4** | **173.1** | **239.4** | **383.0** | **333.5** | **312.2** | **226.3** | **320.2** |

Table 1: Comparison of geometry painting with various methods on the ShapeNetSem [6] dataset, measured with the Frechet Inception Distance (FID \(\downarrow\)) [13] under various feature extractors. Lower values are better. Results with the Kernel Inception Distance [2] metric are in Tab. 2.
Figure 10: **Painting Point-E [21]. We extend our pipeline to pure text-to-3D by chaining it after Point-E. The same text prompt is used to generate the geometry and then repaint it with our method.**
## Appendix A Large-Scale Study of ShapeNetSem
Out of 12,288 models in the dataset, we processed 11,992 with all methods. The remaining 296 models either had flat geometry or could not be processed by the Latent-Paint [18] pipeline, TEXTure [29], or both. The failure cases happened most commonly because complex geometry did not fit in 16GB of GPU RAM within the respective method's pipeline, or because of failures in the xatlas texture UV unwrapping module [16]. Our method produced results consistently even on these models, but for a fair comparison, we excluded them completely.
In addition to the FID [13] evaluation from Tab. 1, we provide a quantitative evaluation of all pipelines on ShapeNetSem with the KID metric [2] in Tab. 2.
The ability of our method to handle complex geometry, its low memory footprint, and its weak dependence on the geometry format, the rendering pipeline, and potentially unknown texture coordinates: all these properties make our method a reliable go-to solution for revamping 3D assets.
## Appendix B Subjective User Study
We conducted a limited crowd-sourced perceptual comparison between Latent-Paint [18], TEXTure [29], and our method. The study was based on 50 randomly sampled models from 10 categories, cf. Tab. 1. Subjects were instructed (Fig. 11, left) to analyze and vote for higher quality and realism after observing a full 360\({}^{\circ}\) spin of models painted with a pair of methods, side by side. Each subject submitted 20 votes, plus 2 validation questions with predefined correct answers (Fig. 11, right). 35 subjects participated in our study, of which 29 (83%) passed the validation. 638 votes were collected, ensuring at least 3 votes for every pair, and aggregated into preference scores with the Crowd Bradley-Terry [3] model. The resulting scores were (log-scale, up to additive constant, 95% confidence intervals, higher is better): \(S_{\mathrm{Latent-Paint}}=0.15_{\pm 0.15},S_{\mathrm{TEXTure}}=0.30_{\pm 0.11},S_{\mathrm{ours}}=\mathbf{1.86}_{\pm 0.13}\). The scores agree with the quantitative results.
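For transparency, the score aggregation is a standard Bradley-Terry fit; the sketch below implements the plain model with a minorization-maximization iteration, omitting the crowd-reliability terms of [3], and the win counts in the usage comment are made up for illustration:

```python
import numpy as np

def bradley_terry(wins, iters=500):
    """Fit Bradley-Terry preference scores; wins[i, j] counts how often
    method i was preferred over method j in pairwise votes."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            comparisons = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                              for j in range(n) if j != i)
            p[i] = wins[i].sum() / comparisons
        p /= p.sum()        # scores are defined up to a common scale
    return np.log(p)        # log-scale scores, up to an additive constant

# scores = bradley_terry(np.array([[0, 40, 5], [55, 0, 9], [90, 86, 0]]))
```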
## Appendix C ShapeNet Rendering Settings
To facilitate a fair comparison of the different methods on the ShapeNetSem dataset [6], we choose the mesh rendering settings in all pipelines such that the output result is adequate for all methods. Notably, TEXTure [29] relies on mesh normals to determine inpainting regions. However, a subset of ShapeNetSem [6] meshes has faces with inappropriately oriented surface normals. Directly passing these meshes as input to TEXTure [29] produces corrupt texturing.
To address this issue, we utilize back-face culling to disable the rendering of mesh faces that are oriented away from the camera. We build our method on top of PyTorch3D [28], which provides a built-in implementation of back-face culling. However, since both the Latent-Paint [18] and TEXTure [29] pipelines rely on the Kaolin renderer [10], which did not implement back-face culling as of the time of writing, we implemented back-face culling in software. This allowed us to address the rendering discrepancy and level the settings for all pipelines.
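Our software culling is a small geometric test; a NumPy sketch (assuming consistent counter-clockwise winding, so the face winding determines the normal direction):

```python
import numpy as np

def cull_backfaces(vertices, faces, cam_origin):
    """Drop faces whose geometric normal points away from the camera."""
    tri = vertices[faces]                              # (F, 3, 3) triangle corners
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    to_cam = cam_origin[None, :] - tri.mean(axis=1)    # face center -> camera
    front_facing = np.einsum("ij,ij->i", n, to_cam) > 0
    return faces[front_facing]
```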
We experimented with double-face rendering as an alternative approach to resolving face orientation issues. However, the result of using double-face rendering is worse than that of using back-face culling, as seen in Fig. 12 (left). We suspect this is due to areas of the mesh having overlapping front-facing faces in the double-face rendering setting, thereby negatively affecting texture back-projection in the TEXTure [29] method. Overall, our rendering protocol is chosen to maximize the output quality of the pipelines relying on differential rendering under complex geometry.
## Appendix D Ablation: Inpainting Zoning
Inpainting zoning marks areas of the mesh that face away from the camera in one generated view so that they can be further refined in subsequent views. Fig. 12 (middle) shows that our refinement scheme brings more details to the areas of the model with challenging visibility constraints.
Figure 11: **Subjective Study.** Left: User instruction with quality and realism judgment examples; **Right**: Two validation questions “Which one is better?” shared among all subjects to ensure engagement (the right column answers were expected for a pass).
## Appendix E Ablation: Number of Input Views
We show a qualitative comparison between models painted using various numbers of input views in Fig. 12 (right). With just 4 input views, we find holes and artifacts on the object's surface. With 18 views, the shape is smooth, but the generated color lacks detail. The choice of 9 views achieves the best quality.
## Appendix F Prompt Augmentation
Our method transparently exposes the style guidance functionality of the underlying generative models. It permits prompt augmentation, enabling greater variety in the generated painting while preserving 3D consistency. Specifically, our pipeline extends the input object description prompt as follows: "_A photo of a_ {_modifier_} {_object_}, {_dir_} _view_". The "{_modifier_}" style specifier term could be the color or the material of the object. In the same vein as text guides image generation models, the texture of our 3D models changes according to the modifier, as shown in Fig. 13.
2308.16434 | Precise Error Bounds for Numerical Approximations of Fractional HJB
Equations | We prove precise rates of convergence for monotone approximation schemes of
fractional and nonlocal Hamilton-Jacobi-Bellman (HJB) equations. We consider
diffusion corrected difference-quadrature schemes from the literature and new
approximations based on powers of discrete Laplacians, approximations which are
(formally) fractional order and 2nd order methods. It is well-known in
numerical analysis that convergence rates depend on the regularity of
solutions, and here we consider cases with varying solution regularity: (i)
Strongly degenerate problems with Lipschitz solutions, and (ii) weakly
non-degenerate problems where we show that solutions have bounded fractional
derivatives of order between 1 and 2. Our main results are optimal error
estimates with convergence rates that capture precisely both the fractional
order of the schemes and the fractional regularity of the solutions. For
strongly degenerate equations, these rates improve earlier results. For weakly
non-degenerate problems of order greater than one, the results are new. Here we
show improved rates compared to the strongly degenerate case, rates that are
always better than 1/2. | Indranil Chowdhury, Espen R. Jakobsen | 2023-08-31T03:48:59Z | http://arxiv.org/abs/2308.16434v2 | # Precise error bounds for numerical approximations of fractional HJB equations
###### Abstract.
We prove precise rates of convergence for monotone approximation schemes of fractional and nonlocal Hamilton-Jacobi-Bellman (HJB) equations. We consider diffusion corrected difference-quadrature schemes from the literature and new approximations based on powers of discrete Laplacians, approximations which are (formally) fractional order and 2nd order methods. It is well-known in numerical analysis that convergence rates depend on the regularity of solutions, and here we consider cases with varying solution regularity: (i) Strongly degenerate problems with Lipschitz solutions, and (ii) weakly non-degenerate problems where we show that solutions have bounded fractional derivatives of order \(\sigma\in(1,2)\). Our main results are optimal error estimates with convergence rates that capture precisely both the fractional order of the schemes and the fractional regularity of the solutions. For strongly degenerate equations, these rates improve earlier results. For weakly non-degenerate problems of order greater than one, the results are new. Here we show improved rates compared to the strongly degenerate case, rates that are always better than \(\mathcal{O}\big{(}h^{\frac{1}{2}}\big{)}\).
Key words and phrases:Fractional and nonlocal equations, fully nonlinear equation, HJB equations, degenerate equation, weakly non-degenerate equation, stochastic control, Levy processes, error estimate, rate of convergence, viscosity solution, numerical method, monotone scheme, powers of discrete Laplacians 2020 Mathematics Subject Classification: 49L25, 35J60, 34K37, 35R11, 35J70, 45K05, 49L25, 49M25, 93E20, 65N06, 65R20, 65N15, 65N12 E.R.J. received funding from the Research Council of Norway under Grant Agreement No. 325114 "IfMod. Partial differential equations, statistics and data: An interdisciplinary approach to data-based modelling".
###### Contents
## 1. Introduction
\(\mathcal{I}^{\alpha}\) will be non-degenerate and uniformly elliptic. We refer to (**B.1**) and (**A.6**) for precise assumptions. The HJB equation (1.1) is _strongly degenerate_ if the operators \(\mathcal{I}^{\alpha}\) are degenerate for every \(\alpha\),2 and it is _weakly non-degenerate_ if there is at least one \(\alpha\) for which \(\mathcal{I}^{\alpha}\) is elliptic/non-degenerate. Obstacle problems for elliptic operators are examples of weakly non-degenerate problems (1.1), and they are known to have non-smooth solutions (at the contact set). The correct (weak) solution concept for this type of problem is viscosity solutions [46, 47, 3]. Wellposedness, regularity, asymptotics, approximations, and other properties of viscosity solutions for nonlocal PDEs have been intensely studied in recent years. Regularity in the strongly degenerate case comes from comparison-type arguments and typically gives preservation of the regularity of the data [47]. Solutions can then be at most Lipschitz continuous. In non-degenerate cases there is a regularizing effect. The regularity theory has mostly been developed for uniformly elliptic/parabolic problems, and the huge literature includes seminal works of Caffarelli and Silvestre [18, 19]. In the weakly non-degenerate case there are few results, and most relevant for us (our inspiration) is [32] for local problems. We show here that weakly non-degenerate problems of order \(\sigma\in(1,2)\) have solutions with bounded fractional derivatives of order \(\sigma\).3 Hence solutions are more smooth than in the strongly degenerate case. Independently, similar types of regularity results have been obtained in the very recent preprint [59] on nonlocal obstacle problems.
Footnote 2: E.g. there could be no diffusion in some directions, or the operator could be a 0 order operator with bounded Lévy measure. There could be different degeneracies for different \(\alpha\)’s.
Footnote 3: We assume that the data is semiconcave to achieve this.
There is a huge literature on numerical methods for local HJB equations including finite differences, semi-Lagrangian, finite elements, spectral, Monte Carlo, and many more, see e.g. [29, 53, 36, 6, 54, 17, 14, 30, 61, 16, 39]. For fractional and nonlocal problems, there is the added difficulty of discretizing the fractional and nonlocal operators in a monotone, stable, and consistent way. These operators are singular integral operators, and can be discretized by quadrature after truncating the singular part and correcting with a suitable second derivative term. This diffusion corrected approximation was introduced on the level of processes in [2] and then for linear PDEs e.g. in [28] in connection with difference-quadrature schemes, see also [48, 11, 41]. In the setting of HJB equations, it was introduced in [48, 22, 11] with further developments in e.g. [8, 26, 56, 34]. We will give new results for this approximation here, and focus on a version based on semi-Lagrangian type approximations [21, 30] of the nonlocal operators [22]. Another way of discretizing certain fractional operators is via subordination: when the operator is a fractional Laplacian, it can be discretized by a (fractional) power of the discrete (FDM) Laplacian, which can be seen as a quadrature rule with explicit weights [25]. While the diffusion corrected approximation has fractional order accuracy, the power of discrete Laplacian approximation is always of second order and faster when the order of the equation is close to 2. This last approximation has previously been used to solve linear and porous medium equations [35, 13]. In this paper, we will explain how it can be used to solve HJB equations and provide error bounds.
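To fix ideas, a one-dimensional power-of-discrete-Laplacian approximation can be formed in a few lines: below we take the fractional matrix power of the standard three-point Dirichlet Laplacian spectrally. This dense sketch is for illustration only; [25] provides the equivalent explicit Gamma-function weights, and sparse or FFT-based realizations are used in practice:

```python
import numpy as np

def discrete_fractional_laplacian(n, h, sigma):
    """(-Delta_h)^{sigma/2} as the sigma/2 power of the 1D three-point
    discrete Laplacian with Dirichlet boundary conditions."""
    L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # -Delta_h
    lam, V = np.linalg.eigh(L)              # symmetric positive definite
    return (V * lam ** (sigma / 2)) @ V.T   # spectral fractional power

# A = discrete_fractional_laplacian(200, 1e-2, sigma=1.5)
# A @ u then approximates (-Delta)^{sigma/2} u away from the boundary.
```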
The main focus of the paper is on precise error bounds for the schemes and regularity settings mentioned above, especially the weakly non-degenerate case. In numerical analysis it is well-known that such bounds must depend on both the accuracy of the method and the regularity of solutions. In our fractional setting,
both of these may be fractional, and previous results are either not optimal or lacking. While linear, local, and smooth problems can be analyzed in a rather simple and classical way [57], error analysis is more complicated in our fully nonlinear and non-smooth setting. There are two main approaches:4 (i) The 'doubling of variables' technique for fully nonlinear equations of 1st order [23, 29, 62] or fractional order less than 2 [8, 26]; and (ii) the 'shaking of coefficients' method for convex HJB equations of 2nd order [4, 5, 32, 49, 50, 51] or fractional order [10, 11, 48].
Footnote 4: In the uniformly elliptic case, there are other methods [20, 52, 63]. These results are not explicit nor optimal, but they apply also to nonconvex problems. See also [44, 15, 45].
The 'shaking of coefficients' method, originally introduced by Krylov, relies on constructing smooth subsolutions of both the equation and the scheme which can then be used to get one-sided error estimates via the comparison principle and local consistency bounds. If precise regularity results for both the scheme and the equation are known, along with sharp consistency bounds, the method produces optimal rates. We refer to [32, 43, 51] for local 2nd order problems and [10, 22, 48] for nonlocal problems. If regularity of the scheme is not known (this is difficult in general), sub-optimal rates can still be proved [4, 5, 11], and these latter bounds hold for a very large class of monotone schemes. Note that the 'shaking of coefficients' method has the advantage that it can handle arbitrary high order error equations and therefore also higher order methods, while the 'doubling of variables' method only works optimally for schemes with (at most) 2nd order truncation errors. For nonlocal HJB equations, most of the progress on optimal error bounds for monotone schemes has addressed bounded (non-singular) integral operators [10, 48]. Non-optimal bounds for problems with singular operators can then be obtained after first approximating by bounded operators. Without this approximation step, sub-optimal rates have been obtained in [11] for singular integral operators.
**Our main contributions:**
(a) A rigorous _error analysis for monotone approximations of weakly non-degenerate problems_ is developed in Section 5. This is new and based on the "method of shaking the coefficients". The proof amounts to extending the analysis of [32] to nonlocal/fractional equations and schemes. Our setting is more involved and technical. The main challenges are related to the _fractional_ approximation, regularization, and regularity results needed - both for the equation and the scheme. As opposed to previous nonlocal results, we cannot use standard mollifiers for regularization but crucially need fractional heat kernels. For the schemes, the results are discrete and contain error terms, and a very careful analysis is needed to get optimal results.
(b) _\(C^{1,\sigma-1}\)-regularity results for weakly non-degenerate HJB-equations_ of order \(\sigma\in(1,2)\) given in Theorem 2.7. These are natural extensions to nonlocal/fractional problems of the \(W^{2,\infty}\) results of [32]. They seem to be new for equations of fractional order (but see also [59]) and are of independent interest. Our proof is based on uniform estimation of approximate fractional derivatives based on semi-concavity estimates and exploitation of weak non-degeneracy followed by an application of regularity results for linear problems in [58]. We also need and prove discrete versions of such results.
(c) Precise error bounds for _diffusion corrected difference-quadrature schemes_ in Section 3. Under various assumptions, we show, roughly speaking, that if \(\sigma\) is the order of equation (1.1), \(u\) its solution, and \(u_{h}\) the solution of the scheme, then
\[\|u-u_{h}\|_{L^{\infty}}\leq\left\{\begin{array}{ll}C\,h^{\frac{1}{2}(4-\sigma) }&\quad\text{when solutions are smooth ($C_{b}^{4}$),}\\ C\,h^{\frac{\sigma}{4+\sigma}(4-\sigma)}&\quad\text{in the weakly non-degenerate case and $\sigma>1$,}\\ C\,h^{\frac{1}{4+\sigma}(4-\sigma)}&\quad\text{in the strongly degenerate case or when $\sigma\leq 1$.}\end{array}\right.\]
Here the accuracy is a decreasing function of \(\sigma\), which is reflected in decreasing rates in \(\sigma\) when the regularity is fixed (strongly degenerate and smooth cases). In the weakly non-degenerate case, regularity is increasing with \(\sigma\) and so are the rates despite decreasing accuracy. Rates are higher when solutions are more regular and maximal in the smooth case. These results are sharper than previous results [10, 11, 8] in the strongly degenerate case, and new in the weakly non-degenerate case where the rate increases from \(\frac{3}{5}\) at \(\sigma=1\) to \(\frac{2}{3}\) in the limit as \(\sigma\to 2\).
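These monotonicity claims are elementary to verify numerically; evaluating the three rate formulas above at a few values of \(\sigma\):

```python
rates = {
    "smooth":                lambda s: (4 - s) / 2,
    "weakly non-degenerate": lambda s: s * (4 - s) / (4 + s),  # valid for s > 1
    "strongly degenerate":   lambda s: (4 - s) / (4 + s),
}
for s in (1.0, 1.5, 1.99):
    print(s, {name: round(r(s), 3) for name, r in rates.items()})
# The weakly non-degenerate rate grows from 3/5 at s = 1 toward 2/3 as s -> 2,
# while the strongly degenerate rate falls from 3/5 toward 1/3.
```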
(d) New _approximations based on powers of discrete Laplacians_ are introduced in Section 4 for HJB equations with fractional Laplacians, \(\mathcal{I}^{\alpha}[\phi]=-a^{\alpha}(-\Delta)^{\frac{\sigma}{2}}\phi\). These problems are always weakly non-degenerate, and we prove precise error bounds,
\[\|u-u_{h}\|_{L^{\infty}}\leq\left\{\begin{array}{ll}Ch^{\frac{1}{2}}&\quad \text{for}\quad 0<\sigma\leq 1,\\ Ch^{\frac{\sigma}{2}}&\quad\text{for}\quad 1<\sigma<2.\end{array}\right. \tag{1.4}\]
Under our assumptions these rates are optimal, and as \(\sigma\to 2\), the error bounds approach the \(\mathcal{O}(h)\) bound in the local 2nd order case [32].5
Footnote 5: When \(\sigma\to 2\), problem (1.1) converges by [24] to the local 2nd order problem of [32].
**Outline.** The remaining part of this paper is organized as follows: In Section 2 we introduce the notation and assumptions for the strongly degenerate and weakly non-degenerate problems, and give wellposedness and regularity results for equation (1.1) in both cases. In Section 3 we consider the diffusion corrected difference-quadrature approximations of (1.1) for general nonlocal operators and state our main error bounds. In Section 4 we give the results for approximation based on powers of discrete Laplacians. The proofs of these results are given in Sections 5 and 6. In Section 7 we discuss extensions to problems with non-zero drift and more non-symmetric diffusions.
## 2. Strongly and weakly non-degenerate fractional HJB equations
In this section we present the assumptions on nonlocal HJB equations and give wellposedness and regularity results. We start by introducing some notation. By \(C,K\) etc. we mean various constants which may change from line to line, \(|\cdot|\) is the euclidean norm, and the norms \(\|u\|_{0}=\sup_{x}|u(x)|\) and \(\|u\|_{1}=\|u\|_{0}+\sup_{x\neq y}\frac{|u(x)-u(y)|}{|x-y|}\). \(C_{b}(Q)\) is the space of bounded continuous functions on \(Q\subset\mathbb{R}^{N}\), while \(C^{n}(Q)\) and \(C^{n,\gamma}(Q)\) for \(n\in\mathbb{N}\) and \(\gamma\in(0,1]\) denote the spaces of \(n\) times continuously differentiable functions on \(Q\) with finite norms
\[\|u\|_{n}=\sum_{j=0}^{n}\|D^{j}u\|_{0}\qquad\text{and}\qquad\|u\|_{n,\gamma}= \|u\|_{n}+\sup_{x\neq y}\frac{|D^{n}u(x)-D^{n}u(y)|}{|x-y|^{\gamma}},\]
where \(D^{n}u\) is the (\(n\)-form of) \(n\)-th order derivatives of \(u\).
### Assumptions and wellposedness of (1.1)
First we list assumptions needed for wellposedness and Lipschitz regularity of viscosity solutions of (1.1).
* **(A.1)**\(\mathcal{A}\) is a separable metric space, \(c^{\alpha}(x)\geq\lambda>0\), and \(c^{\alpha}(x),f^{\alpha}(x)\), and \(\eta^{\alpha}(z)\) are continuous in \(\alpha\), \(x\), and \(z\).
* **(A.2)** There is a \(K>0\) such that \[\|f^{\alpha}\|_{1}+\|c^{\alpha}\|_{1}+\|\eta^{\alpha}\|_{0}\leq K\quad\text{for}\quad\alpha\in\mathcal{A}.\]
* **(A.3)** There is a \(K>0\) such that \[|\eta^{\alpha}(z)|\leq K|z|\quad\text{for}\quad|z|<1,\quad\alpha\in\mathcal{A}.\]
* **(A.4)**\(\nu_{\alpha}\) is a nonnegative Radon measure on \(\mathbb{R}^{N}\) and there is \(K>0\) such that \[\int_{|z|\leq 1}|z|^{2}\nu_{\alpha}(dz)+\int_{|z|>1}\nu_{\alpha}(\,dz)\leq K.\]
In some results we also need symmetry assumptions on the nonlocal terms and upper bounds on the density of the Levy measure.
* **(A.5)**\(\nu_{\alpha}(dz)\,1_{|z|<1}\) is symmetric for \(\alpha\in\mathcal{A}\).
* **(A.6)**\(\nu_{\alpha}\) is absolutely continuous on \(|z|<1\), and there are \(\sigma\in(0,2)\) and \(C>0\) such that \[0\leq\frac{d\nu_{\alpha}}{dz}\leq\frac{C}{|z|^{N+\sigma}}\qquad\text{for}\qquad|z|<1,\quad\alpha\in\mathcal{A}.\]
* **(A.7)**\(\eta^{\alpha}(-z)=-\eta^{\alpha}(z)\ \text{ for }\ |z|<1\) and \(\alpha\in\mathcal{A}\).
**Remark 2.1**.: (a) Under (**A.3**) and (**A.4**), _any_ pure jump Levy process is allowed as a driver for the SDE (1.3). This includes stable processes, tempered processes, spectrally one-sided processes, compound Poisson processes, and most jump processes considered in finance [1, 27]. The generators of these processes are \(\mathcal{I}^{\alpha}\).
(b) Assumption (**A.6**) is a restriction implying that \(\mathcal{I}^{\alpha}\) (which may be degenerate) contains fractional derivatives of orders at most \(\sigma\). It can be replaced by a more general integral condition to also cover non-absolutely continuous Levy measures,
\[r^{-2+\sigma}\int_{|z|<r}|z|^{2}d\nu_{\alpha}+r^{-1+\sigma}\int_{r<|z|<1}|z|d \nu_{\alpha}+r^{\sigma}\int_{r<|z|<1}d\nu_{\alpha}\leq C\]
for some \(C>0\) independent of \(\alpha\) and \(r\in(0,1)\). This condition is satisfied e.g. by sums of one-dimensional operators (possibly of different orders) satisfying (**A.6**).
(c) By symmetry (**A.5**) and (**A.7**) it is clear that \(\int_{\delta<|z|<1}\eta^{\alpha}(z)\,\nu_{\alpha}(dz)=0\). Hence we can also define \(\mathcal{I}^{\alpha}\) in (1.2) using principal values and dropping the gradient (compensator) term.
(d) Note that (**A.3**)-(**A.7**) give no restrictions on the tails of the Levy measures and the nonsingular part of the nonlocal operators. This possibly non-symmetric part could be the generator of any compound Poisson process.
(e) The fractional Laplacian \(-(-\Delta)^{\frac{\sigma}{2}}\), where \(\eta^{\alpha}(z)=z\) and \(\nu(dz)=\frac{c_{\sigma,N}}{|z|^{N+\sigma}}dz\), is a special case satisfying all assumptions (**A.3**)-(**A.7**), see also Section 4.
A definition and general theory of viscosity solutions for nonlocal equations like (1.1) can be found e.g. in [46, 3], but we do not need this generality here. In particular, since there is no local diffusion, we could follow the simpler (comparison) arguments of [24]. Wellposedness and Lipschitz regularity for solutions of equation (1.1) are given in the next result.
**Proposition 2.2**.: _Assume (**A.1**)- (**A.4**)._
* _If_ \(u\) _and_ \(v\) _are bounded upper semicontinuous viscosity subsolution and bounded lower semicontinuous supersolution of (_1.1_), then_ \[u\leq v\quad\text{in}\quad\mathbb{R}^{N}.\]
* _There exists a unique viscosity solution_ \(u\in C_{b}(\mathbb{R}^{N})\) _of equation (_1.1_)._
* _The viscosity solution_ \(u\) _of (_1.1_) is Lipschitz continuous,_ \[\|u\|_{0}\leq\frac{1}{\lambda}\sup_{\alpha\in\mathcal{A}}\|f^{\alpha}\|_{0}, \qquad\|Du\|_{0}\leq\frac{1}{\lambda}\sup_{\alpha\in\mathcal{A}}\big{(}\|Df^{ \alpha}\|_{0}+\|Dc^{\alpha}\|_{0}\|u\|_{0}\big{)}.\]
Proof.: We refer to [24] Theorems 2.1, 2.3, and Corollary 2.3 for the proof (see also [42]) of parts (a), (b), and the first part of (c). The second estimate in (c) follows by the comparison principle in a standard way.
### Extra regularity for weakly non-degenerate equations
A weakly non-degenerate version of (1.1) is
\[\lambda u(x)+\sup_{\alpha\in\mathcal{A}}\left\{f^{\alpha}(x)-\,\mathcal{I}^{ \alpha}[u](x)\right\}=0, \tag{2.1}\]
where to simplify we have set \(c^{\alpha}(x)\equiv\lambda>0\). We assume slightly more regularity of \(f\) and weak degeneracy in the following sense:
* **(B.1) Weak non-degeneracy:** There are \(\alpha_{0}\in\mathcal{A}\), \(c_{\alpha_{0}}>0\), and \(K\geq 0\), such that \[(i)\quad\frac{d\nu_{\alpha_{0}}}{dz}\geq\frac{c_{\alpha_{0}}}{|z|^{N+\sigma}}\qquad\text{for}\quad|z|<1,\] \[(ii)\quad|\eta^{\alpha_{0}}(z)-\eta^{\alpha_{0}}(0)-z|\leq K|z|^{2}\qquad\text{for}\quad|z|<1.\]
* **(B.2)** There is \(\beta>(\sigma-1)^{+}\) and \(K>0\) such that \(\|f^{\alpha}\|_{1,\beta}\leq K\) for every \(\alpha\in\mathcal{A}\).
**Remark 2.3**.: (a) Assumption (**B.1**) is a lower bound on the order of differentiability of \(\mathcal{I}^{\alpha_{0}}\) and implies that it is elliptic/non-degenerate. As \(z\to 0\), the lower bound behaves like the kernel of the \(\frac{\sigma}{2}\)-fractional Laplacian.
(b) Weak non-degeneracy in (**B.1**) means that there is at least one \(\alpha_{0}\) such that \(\mathcal{I}^{\alpha_{0}}\) is non-degenerate. If \(\mathcal{I}^{\alpha}\) is non-degenerate for all \(\alpha\), with uniform bounds in (**B.1**), then equation (1.1) is (uniformly/strongly) non-degenerate and has classical solutions.
We prove our regularity results via an approximate problem where the Levy measure is truncated near origin:
\[\lambda u(x)+\sup_{\alpha\in\mathcal{A}}\left\{f^{\alpha}(x)-\,\mathcal{I}^{ \alpha,r}[u](x)\right\}=0\qquad\text{in}\qquad\mathbb{R}^{N}, \tag{2.2}\]
where \(\mathcal{I}^{\alpha,r}\) is defined by
\[\mathcal{I}^{\alpha,r}\phi(x):=\int_{|z|>r}\left(\phi(x+\eta^{\alpha}(z))-\phi (x)\right)\nu_{\alpha}(dz).\]
Note that \(\mathcal{I}^{\alpha,r}\) is a bounded operator, well-defined for bounded functions, and then viscosity solutions of equation (2.2) will also be pointwise/classical solutions. This problem is well-posed by Proposition 2.2, and we have the following stability and approximation results:
**Lemma 2.4**.: _Assume (**A.1**)-(**A.4**), (**A.6**), \(u_{r}\) and \(u\) are the unique bounded solutions of (2.2) and (2.1). Then there is a \(C>0\) independent of \(r\) such that_
\[\|u_{r}\|_{0,1}\leq\frac{1}{\lambda}\sup_{\alpha\in\mathcal{A}}\|f^{\alpha}\|_ {0,1}\qquad\text{and}\qquad\|u-u_{r}\|_{0}\leq C\,r^{1-\frac{\sigma}{2}}.\]
Proof.: The first part follows from Proposition 2.2 (c). By a continuous dependence result,
\[\|u-u_{r}\|_{0}\leq K\sup_{\alpha\in\mathcal{A}}\Big{(}\int_{|z|<r}|z|^{2}\, \nu_{\alpha}(dz)\Big{)}^{\frac{1}{2}}\]
for some \(K>0\) independent of \(r\). Since \(\int_{|z|<r}|z|^{2}\,\nu_{\alpha}(dz)\leq C\,r^{2-\sigma}\) by (**A.6**), the second part follows. The continuous dependence result is the stationary version of Theorem 4.1 in [46] and can be proved in a similar way. We omit the proof here.
We introduce a truncated fractional Laplacian,
\[\Delta^{\sigma,r}[\phi](x)=\int_{|z|>r}\big{(}\phi(x+z)-\phi(x) \big{)}\,\frac{dz}{|z|^{N+\sigma}}.\]
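For intuition, \(\Delta^{\sigma,r}\) can be evaluated by direct quadrature; below is a one-dimensional midpoint-rule sketch in its symmetric form, with the integrable tail truncated at a large \(R\) (an extra approximation on top of the definition above):

```python
import numpy as np

def truncated_frac_laplacian_1d(phi, x, sigma, r, R=100.0, n=100_000):
    """Quadrature for int_{r<|z|<R} (phi(x+z) - phi(x)) |z|^{-1-sigma} dz,
    using the symmetric second difference phi(x+z) + phi(x-z) - 2 phi(x)."""
    z = np.geomspace(r, R, n + 1)           # log-spaced nodes resolve the kernel
    mid = np.sqrt(z[:-1] * z[1:])           # geometric midpoints
    w = np.diff(z) / mid ** (1 + sigma)
    return np.sum((phi(x + mid) + phi(x - mid) - 2 * phi(x)) * w)

# For a C^2 test function the value stays bounded as r -> 0, mirroring the
# uniform bound (2.3):
print(truncated_frac_laplacian_1d(lambda t: np.exp(-t**2 / 2), 0.0, 1.5, r=1e-8))
```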
**Theorem 2.5**.: _Assume (**A.1**)-(**A.7**), (**B.1**)-(**B.2**), and \(u_{r}\) is the unique viscosity solution of (2.2). Then for any \(r>0\) there is a \(K>0\) independent of \(r\) such that_
\[\|\,\Delta^{\sigma,r}[u_{r}]\,\|_{0}\leq\frac{K}{c_{\alpha_{0}}}. \tag{2.3}\]
Proof.: Let us define the bounded auxiliary operator
\[\mathcal{J}^{r}[\phi](x)=\int_{|z|>r}\big{(}\phi(x+\eta^{\alpha_{0}}(z))-\phi (x)\big{)}\,\frac{c_{\alpha_{0}}dz}{|z|^{N+\sigma}}.\]
1) _A uniform bound on \(w_{r}:=-\mathcal{J}^{r}[u_{r}]\)._ Fix \(x\in\mathbb{R}^{N}\). By (2.2) and properties of suprema, for any \(\epsilon>0\) there exists \(\bar{\alpha}\in\mathcal{A}\) such that
\[\lambda\,u_{r}(x)+f^{\bar{\alpha}}(x)-\,\mathcal{I}^{\bar{\alpha},r}u_{r}(x)\,\geq-\epsilon, \tag{2.4}\]
and (trivially) for any \(y\in\mathbb{R}^{N}\),
\[\lambda\,u_{r}(x+y)+f^{\bar{\alpha}}(x+y)-\,\mathcal{I}^{\bar{ \alpha},r}u_{r}(x+y)\,\leq 0. \tag{2.5}\]
Take \(y=\eta^{\alpha_{0}}(z)\), subtract (2.5) from (2.4), multiply by \(\frac{c_{\alpha_{0}}}{|z|^{N+\sigma}}\), and integrate over \(|z|>r\). The result is then
\[\lambda\,\big{(}-\mathcal{J}^{r}[u_{r}](x)\big{)}-\mathcal{J}^{r }[f^{\bar{\alpha}}](x)-\mathcal{J}^{r}\big{[}-\mathcal{I}^{\bar{\alpha},r}[u_{ r}]\big{]}(x)\geq-\epsilon.\]
This inequality holds for \(\bar{\alpha}\) and then also holds for the supremum over all \(\alpha\in\mathcal{A}\). Since \(\epsilon>0\) and \(x\in\mathbb{R}^{N}\) are arbitrary, \(\mathcal{J}^{r}\) and \(\mathcal{I}^{\alpha,r}\) are linear operators, and by Fubini \(\mathcal{J}^{r}\big{[}\mathcal{I}^{\bar{\alpha},r}[u_{r}]\big{]}=\mathcal{I}^ {\bar{\alpha},r}\big{[}\mathcal{J}^{r}[u_{r}]\big{]}\), by the definition of \(w_{r}\) we have
\[\lambda w_{r}(x)+\sup_{\alpha\in\mathcal{A}}\big{\{}-\mathcal{J}^ {r}[f^{\alpha}](x)-\mathcal{I}^{\alpha,r}[w_{r}](x)\big{\}}\geq 0\qquad\text{in} \qquad\mathbb{R}^{N}. \tag{2.6}\]
By assumption (**B.2**), \(C:=\sup_{\alpha\in\mathcal{A}}\|\mathcal{J}^{r}[f^{\alpha}]\|_{0}<\infty\), so \(-\frac{C}{\lambda}\) is a subsolution of (2.6).6 Then by comparison, Proposition 2.2 (a),7
Footnote 6: Replace \(\geq\) by \(=\) in (2.6).
Footnote 7: Equation (2.6) (replace \(\geq\) by \(=\)) is of same form as in (1.1).
\[-\mathcal{J}^{r}[u_{r}]=w_{r}\geq-\frac{C}{\lambda}\qquad\text{in}\qquad \mathbb{R}^{N}. \tag{2.7}\]
To get a lower bound on \(\mathcal{J}^{r}[u_{r}]\), we use the upper bound and weak degeneracy: \(\tilde{\nu}_{\alpha_{0}}(z)-\frac{c_{\alpha_{0}}}{|z|^{N+\sigma}}\geq 0\) for \(|z|<1\), where \(\tilde{\nu}_{\alpha_{0}}:=\frac{d\nu_{\alpha_{0}}}{dz}\). Let \(y=\eta^{\alpha_{0}}(z)\), subtract (2.5) from (2.4), multiply by \(\big{(}\tilde{\nu}_{\alpha_{0}}(z)-\frac{c_{\alpha_{0}}}{|z|^{N+\sigma}}\big{)}\), and integrate over \(r<|z|<1\). The result is
\[\lambda\left(\,-\left(\mathcal{I}_{1}^{\alpha_{0},r}-\mathcal{J} _{1}^{r}\,\right)[u_{r}](x)\right) -(\mathcal{I}_{1}^{\alpha_{0},r}-\mathcal{J}_{1}^{r}\,)[f^{ \bar{\alpha}}](x)\] \[-\mathcal{I}^{\bar{\alpha},r}\Big{[}-(\mathcal{I}_{1}^{\alpha_{0 },r}-\mathcal{J}_{1}^{r}\,)[u_{r}]\Big{]}(x)\geq-\epsilon.\]
where \(\mathcal{J}_{1}^{r}[\phi](x)=\int_{r<|z|<1}\big{(}\phi(x+\eta^{\alpha_{0}}(z))- \phi(x)\big{)}\)\(\frac{c_{\alpha_{0}}dz}{|z|^{N+\sigma}}\). Then arguing as for the upper bound we have
\[-(\mathcal{I}_{1}^{\alpha_{0},r}-\mathcal{J}_{1}^{r}\,)[u_{r}]\geq-\frac{C}{ \lambda}\qquad\text{in}\qquad\mathbb{R}^{N}. \tag{2.8}\]
The above estimate implies \(-\mathcal{J}_{1}^{r}[u_{r}](x)\leq\frac{C}{\lambda}+\sup_{\alpha\in\mathcal{A}}\big{\{}-\mathcal{I}_{1}^{\alpha,r}[u_{r}](x)\big{\}}\), and therefore, since \(u_{r}\) solves (2.2), that
\[-\mathcal{J}_{1}^{r}[u_{r}](x)\] \[\leq\frac{C}{\lambda}+\sup_{\alpha\in\mathcal{A}}\big{\{}- \mathcal{I}^{\alpha,r}[u_{r}]+f^{\alpha}(x)\big{\}}+\lambda u_{r}(x)\] \[\quad+\sup_{\alpha\in\mathcal{A}}\|f^{\alpha}\|_{0}+\lambda\|u_{r }\|_{0}+\sup_{\alpha\in\mathcal{A}}\Big{|}\int_{|z|>1}\big{(}u_{r}(x+\eta^{ \alpha}(z))-u_{r}(x)\big{)}\nu_{\alpha}(dz)\Big{|}\] \[\leq\frac{C}{\lambda}+0+\sup_{\alpha\in\mathcal{A}}\|f^{\alpha}\| _{0}+\Big{(}\lambda+2\sup_{\alpha\in\mathcal{A}}\int_{|z|>1}\nu_{\alpha}(dz) \Big{)}\|u_{r}\|_{0}. \tag{2.9}\]
Let \(\mathcal{J}^{r}=\mathcal{J}_{1}^{r}+\mathcal{J}^{1,r}\) where \(\mathcal{J}^{1,r}=\int_{|z|>1}(\cdots)\frac{c_{\alpha_{0}}dz}{|z|^{N+\sigma}}\). By (**A.2**), (**A.4**), and Lemma 2.4, both the right hand side of (2.9) and \(\mathcal{J}^{1,r}[u_{r}]\) are bounded, and hence
\[-\mathcal{J}^{r}[u_{r}]\leq C\qquad\text{in}\qquad\mathbb{R}^{N}, \tag{2.10}\]
for some constant \(C>0\) independent of \(r\). By (2.7) and (2.10) we conclude that \(|w_{r}|=|\mathcal{J}^{r}[u_{r}]|\leq C_{1}\) for some other \(C_{1}>0\) independent of \(r\).
2) _The bound on \(\Delta^{\sigma,r}[u_{r}]\)._ Since \(c_{\alpha_{0}}>0\) by (**B.1**), from step 1) it follows that
\[I:=\Big{|}\int_{|z|>r}\big{(}u_{r}(x+\eta^{\alpha_{0}}(z))-u_{r}(x)\big{)}\, \frac{dz}{|z|^{N+\sigma}}\Big{|}\leq\frac{C_{1}}{c_{\alpha_{0}}}.\]
From this estimate, the bound \(\|u_{r}\|_{0,1}\leq K\), and (**B.1**)\((ii)\) and (**A.3**) (implying \(\eta^{\alpha}(0)=0\)), we see that
\[|\Delta^{\sigma,r}[u_{r}](x)| \leq I+\int_{|z|>r}\big{|}u_{r}(x+\eta^{\alpha_{0}}(z))-u_{r}(x+ z)\big{|}\,\frac{dz}{|z|^{N+\sigma}}\] \[\leq\frac{C_{1}}{c_{\alpha_{0}}}+\|Du_{r}\|_{0}\int_{r<|z|<1}|z|^{ 2}\,\frac{dz}{|z|^{N+\sigma}}+2\|u_{r}\|_{0}\int_{|z|>1}\frac{dz}{|z|^{N+\sigma }}.\]
The right hand side is uniformly bounded so the proof is complete.
Sending \(r\to 0\) in the above result, we get a key result for this paper.
**Corollary 2.6**.: _Assume (**A.1**)-(**A.7**), (**B.1**)-(**B.2**), and \(u\) is the unique viscosity solution of (2.1). Then \((-\Delta)^{\frac{\sigma}{2}}[u]\in L^{\infty}(\mathbb{R}^{N})\)._
Proof.: Note that since \(u\) is bounded, \((-\Delta)^{\frac{\sigma}{2}}[u]\) defines a distribution by
\[((-\Delta)^{\frac{\sigma}{2}}[u],\phi)=\int_{\mathbb{R}^{N}}u(x)\,(-\Delta)^{ \frac{\sigma}{2}}[\phi](x)\,dx\quad\text{for any}\quad\phi\in C_{c}^{\infty}( \mathbb{R}^{N}).\]
To complete the proof we must show that this distribution can be represented by a function in \(L^{\infty}(\mathbb{R}^{N})\). Let \(u_{r}\) be the bounded solution of (2.2), and note that
\[\Big{|}\int_{\mathbb{R}^{N}}u(x)\,(-\Delta)^{\frac{\sigma}{2}}[ \phi](x)\,dx-\int_{\mathbb{R}^{N}}u_{r}(x)(-\Delta^{\sigma,r}[\phi](x))\,dx \Big{|}\] \[\leq\Big{|}\int_{\mathbb{R}^{N}}(u-u_{r})(x)(-\Delta)^{\frac{ \sigma}{2}}[\phi](x)\,dx\Big{|}+\|u_{r}\|_{0}I, \tag{2.11}\]
where \((-\Delta)^{\frac{\sigma}{2}}[\phi]\in L^{1}(\mathbb{R}^{N})\)8 and by Taylor,
Footnote 8: A Taylor expansion shows that \(\|(-\Delta)^{\frac{\sigma}{2}}[\phi]\|_{L^{1}}\leq c\|\phi\|_{W^{2,1}}\), and \(\|\phi\|_{W^{2,1}}<\infty\) for \(\phi\in C_{c}^{\infty}\).
\[I = \int_{\mathbb{R}^{N}}\Big{|}\,\big{(}-\Delta^{\sigma,r}[\phi]-(- \Delta)^{\frac{\sigma}{2}}[\phi]\big{)}(x)\Big{|}\,dx\] \[= \int_{\mathbb{R}^{N}}\Big{|}\int_{|z|<r}\big{(}\phi(x+z)-\phi(x)- z\cdot\nabla\phi(x)\big{)}\frac{dz}{|z|^{N+\sigma}}\Big{|}dx\] \[\leq \|D^{2}\phi\|_{L^{1}(\mathbb{R}^{N})}\int_{|z|<r}|z|^{2}\frac{dz }{|z|^{N+\sigma}}\,\leq C\|D^{2}\phi\|_{L^{1}(\mathbb{R}^{N})}r^{2-\sigma}.\]
By Lemma 2.4, \(\|u_{r}\|_{0}\) is bounded independently of \(r\) and \(u_{r}\to u\) in \(L^{\infty}\), hence since \(\Delta^{\sigma,r}\) is self-adjoint, it follows from (2.11) that
\[\int_{\mathbb{R}^{N}}u(x)\,(-\Delta)^{\frac{\sigma}{2}}[\phi](x) \,dx =\lim_{r\to 0}\int_{\mathbb{R}^{N}}u_{r}(x)(-\Delta^{\sigma,r}[ \phi])(x)\,dx\] \[=\lim_{r\to 0}\int_{\mathbb{R}^{N}}(-\Delta^{\sigma,r}[u_{r}])(x) \,\phi(x)\,dx. \tag{2.12}\]
By Theorem 2.5, \(\|\Delta^{\sigma,r}[u_{r}]\|_{0}\leq K\) for some \(K>0\) independent of \(r\). By weak star compactness (Alaoglu/Helly) there is an \(f\in L^{\infty}(\mathbb{R}^{N})\) and a subsequence \(\{r_{n}\}_{n}\) such that \(r_{n}\to 0\) and \((-\Delta^{\sigma,r_{n}}[u_{r_{n}}])\stackrel{{*}}{{\rightharpoonup}}f\) in \(L^{\infty}\). Passing to the limit in (2.12),
\[\int_{\mathbb{R}^{N}}u(x)\,(-\Delta)^{\frac{\sigma}{2}}[\phi](x)\,dx=\lim_{n \to\infty}\int_{\mathbb{R}^{N}}(-\Delta^{\sigma,r_{n}}[u_{r_{n}}])(x)\,\phi(x )\,dx=\int_{\mathbb{R}^{N}}f(x)\,\phi(x)\,dx.\]
The proof is complete.
We immediately observe an improvement of regularity for the viscosity solution of (2.1) in the case that \(\sigma>1\) (compare with Proposition 2.2).
**Theorem 2.7**.: _Assume \(\sigma>1\), (**A.1**)-(**A.7**), (**B.1**)-(**B.2**), and \(u\) is the unique viscosity solution of (2.1). Then \(u\in C^{1,\sigma-1}(\mathbb{R}^{N})\) and_
\[\|u\|_{1,\sigma-1}\leq K\big{(}\|u\|_{0}+\|\,(-\Delta)^{\frac{\sigma}{2}}[u]\, \|_{0}\big{)}.\]
Proof.: By Corollary 2.6, \((-\Delta)^{\frac{\sigma}{2}}[u]\in L^{\infty}(\mathbb{R}^{N})\), and from the definition of viscosity solution \(u\in L^{\infty}(\mathbb{R}^{N})\). Therefore the result follows from Theorem 1.1(a) of the article [58] by Ros-Oton and Serra.
**Remark 2.8**.: When \(\sigma<1\) we get no improvement in regularity beyond Lipschitz (Proposition 2.2(c)). But in this case Lipschitz regularity is sufficient for solutions to be pointwise classical solutions of (2.1).
## 3. Diffusion corrected difference-quadrature scheme
In this section we construct monotone discretizations of equation (1.1) (and (2.1)), and give precise results on their convergence rates. There are two main steps in constructing the schemes: (i) approximate the singular part of the nonlocal operator by a local diffusion, and (ii) discretize the resulting equations using semi-Lagrangian type difference-quadrature schemes.
By symmetry (**A.5**) and (**A.7**), \(\left(\int_{\delta<|z|<1}\eta^{\alpha}(z)\,\nu_{\alpha}(dz)\right)\cdot\nabla \phi(x)=0\). For \(\delta\in(0,1)\), we then write the nonlocal operator \(\mathcal{I}^{\alpha}\) as
\[\mathcal{I}^{\alpha}[\phi](x) =\left(\int_{|z|<\delta}+\int_{|z|>\delta}\right)\left(\phi(x+\eta^{\alpha}(z))-\phi(x)-\eta^{\alpha}(z)\cdot\nabla\phi(x)\right)\nu_{\alpha}(dz)\] \[=\int_{|z|<\delta}\left(\phi(x+\eta^{\alpha}(z))-\phi(x)-\eta^{\alpha}(z)\cdot\nabla\phi(x)\right)\nu_{\alpha}(dz)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\int_{|z|>\delta}\left(\phi(x+\eta^{\alpha}(z))-\phi(x)\right)\nu_{\alpha}(dz)\] \[:=\mathcal{I}^{\alpha}_{\delta}[\phi](x)+\mathcal{I}^{\alpha,\delta}[\phi](x). \tag{3.1}\]
The \(\delta\) will be chosen later. We say that \(\mathcal{I}^{\alpha}_{\delta}\) is the singular part9 of \(\mathcal{I}^{\alpha}\), while \(\mathcal{I}^{\alpha,\delta}\) is always a bounded operator.
Footnote 9: When \(\nu\) has a singularity at the origin, this is a singular integral operator. If the singularity is strong enough, the operator will be a fractional differential operator of positive order.
### Approximation of the singular part of the nonlocal operator
The simplest (but not very accurate) discretization of \(\mathcal{I}^{\alpha}_{\delta}[\phi]\) is to replace it by \(0\). Better approximations can be obtained using local diffusion terms [27, 48]. This corresponds to approximating the small jumps in the SDE (1.3) by an appropriate Brownian motion [2]. We define
\[a^{\alpha}_{\delta}=\frac{1}{2}\int_{|z|<\delta}\eta^{\alpha}(z)\eta^{\alpha} (z)^{T}\,\nu_{\alpha}(dz)\qquad\text{and}\qquad\mathcal{L}^{\alpha}_{\delta}[ \phi](x):=tr[a^{\alpha}_{\delta}D^{2}\phi],\]
where \(a^{\alpha}_{\delta}\) is a constant non-negative matrix and \(\phi\in C^{2}_{b}(\mathbb{R}^{N})\). We approximate equation (1.1) by replacing \(\mathcal{I}^{\alpha}_{\delta}[u]\) with \(\mathcal{L}^{\alpha}_{\delta}[u](x)\):
\[\sup_{\alpha\in\mathcal{A}}\left\{f^{\alpha}(x)+c^{\alpha}(x)u(x)-\mathcal{L}^{\alpha}_{\delta}[u](x)-\mathcal{I}^{\alpha,\delta}[u](x)\right\}=0\quad\text{in}\quad\mathbb{R}^{N}. \tag{3.2}\]
**Lemma 3.1**.: _Assume (**A.1**)-(**A.7**) and \(\delta\in(0,1)\). Then there are \(C,K>0\) independent of \(\delta,\alpha,\phi\) such that_
\[(i)\quad|\mathcal{I}^{\alpha}_{\delta}[\phi]-\mathcal{L}^{\alpha}_{\delta}[ \phi]|\leq C\delta^{4-\sigma}\|D^{4}\phi\|_{0}, \tag{3.3}\]
\[(ii)\quad|a^{\alpha}_{\delta}|\leq\int_{|z|\leq\delta}|\eta^{\alpha}(z)|^{2}\, \nu_{\alpha}(dz)\leq K\delta^{2-\sigma}. \tag{3.4}\]
Proof.: By Taylor's theorem and smoothness of \(\phi\),
\[\int_{|z|<\delta}\left(\phi(x+\eta^{\alpha}(z))-\phi(x)-\eta^{\alpha}(z)\cdot\nabla\phi(x)\right)\nu_{\alpha}(dz)\] \[=\int_{|z|<\delta}\left(\frac{1}{2}\,\eta^{\alpha}(z)\cdot D^{2}\phi(x)\cdot\eta^{\alpha}(z)^{T}+\sum_{|\beta|=3}\frac{1}{\beta!}[\eta^{\alpha}(z)]^{\beta}D^{\beta}\phi(x)\right)\nu_{\alpha}(dz)+Err_{\delta},\]
where \(Err_{\delta}=\sum_{|\beta|=4}\frac{|\beta|}{\beta!}\int_{|z|<\delta}\int_{0}^{1}(1-s)^{|\beta|-1}[\eta^{\alpha}(z)]^{\beta}\,D^{\beta}\phi(x+s\eta^{\alpha}(z))\,ds\,\nu_{\alpha}(dz).\) By the assumptions (A.5) and (A.7) and then by (A.6) we have
\[\sum_{|\beta|=3}\int_{|z|\leq\delta}[\eta^{\alpha}(z)]^{\beta}D^{\beta}\phi(x )\,\nu_{\alpha}(dz)=0\quad\text{and}\quad|Err_{\delta}|\leq C\delta^{4-\sigma }\|D^{4}\phi\|_{0}.\]
That proves part \((i)\). Part \((ii)\) follows by (A.3) and (A.4).
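As a concrete illustration (our computation, not taken from the text), consider the model case \(N=1\), \(\eta^{\alpha}(z)=z\), \(\nu_{\alpha}(dz)=\frac{dz}{|z|^{1+\sigma}}\). Then the diffusion coefficient can be computed explicitly:
\[a^{\alpha}_{\delta}=\frac{1}{2}\int_{-\delta}^{\delta}z^{2}\,\frac{dz}{|z|^{1+\sigma}}=\int_{0}^{\delta}z^{1-\sigma}\,dz=\frac{\delta^{2-\sigma}}{2-\sigma},\]
which is consistent with the bound (3.4); the correction vanishes as \(\delta\to 0\), while the constant deteriorates as \(\sigma\to 2^{-}\).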
### Consistent monotone discretization of the approximate equation
We now approximate the local and nonlocal parts of equation (3.2) separately.
_(i) Discretization of the local term:_ Since \(a_{\delta}^{\alpha}\) is symmetric and nonnegative \((\xi^{T}a_{\delta}^{\alpha}\xi=\int_{|z|<\delta}(\eta^{\alpha}(z)\cdot\xi)^{2}\,\nu_{\alpha}(dz)\geq 0)\), it has a square root with columns \((\sqrt{a_{\delta}^{\alpha}})_{i}\). We then introduce the semi-Lagrangian (SL) approximation (inspired by [21, 30])
\[\mathcal{L}_{\delta}^{\alpha}[\phi] =tr[a_{\delta}^{\alpha}D^{2}\phi]\] \[\approx\sum_{i=1}^{N}\frac{\phi(x+k(\sqrt{a_{\delta}^{\alpha}})_{ i})+\phi(x-k(\sqrt{a_{\delta}^{\alpha}})_{i})-2\phi(x)}{2\,k^{2}}\equiv\mathcal{D}_{ \delta,k}^{\alpha}[\phi](x). \tag{3.5}\]
This approximation is monotone by construction, and by Taylor expansions,
\[|\mathcal{L}_{\delta}^{\alpha}[\phi]-\mathcal{D}_{\delta,k}^{\alpha}[\phi]| \leq K|a_{\delta}^{\alpha}|^{2}k^{2}\|D^{4}\phi\|_{0}\leq K\delta^{2(2-\sigma )}k^{2}\|D^{4}\phi\|_{0}. \tag{3.6}\]
Since \(x_{\mathbf{j}}\pm k(\sqrt{a_{\delta}^{\alpha}})_{i}\) may not be on the grid, we interpolate to get a full discretization. To preserve monotonicity, we use linear/multilinear interpolation \(i_{h}(\phi)(x)=\sum_{\mathbf{j}\in\mathbb{Z}^{N}}\phi(x_{j})\omega_{\mathbf{j}}(x)\) where the basis functions \(\omega_{\mathbf{j}}\geq 0\) and \(\sum_{\mathbf{j}\in\mathbb{Z}^{N}}\omega_{\mathbf{j}}=1\). Let
\[\mathcal{L}_{\delta,k,h}^{\alpha}[\phi](x)=\sum_{i=1}^{N}\frac{i_{h}\big{[} \phi(x+k(\sqrt{a_{\delta}^{\alpha}})_{i})\big{]}+i_{h}\big{[}\phi(x-k(\sqrt{a _{\delta}^{\alpha}})_{i})\big{]}-2\phi(x)}{2k^{2}}. \tag{3.7}\]
By the property of multilinear interpolation, this approximation is monotone with
\[|\mathcal{L}_{\delta,k,h}^{\alpha}[\phi]-\mathcal{D}_{\delta,k}^{\alpha}[\phi ]|\leq C\frac{h^{2}}{k^{2}}\|D^{2}\phi\|_{0}. \tag{3.8}\]
By (3.6) and (3.8) we have a truncation error bound for the approximation of the local term.
**Lemma 3.2**.: _Assume (A.3)-(A.7). Then there is \(K>0\) independent of \(h,\delta,\alpha,\phi\) such that_
\[\big{|}\mathcal{L}_{\delta,k,h}^{\alpha}[\phi](x)-\mathcal{L}_{\delta}^{\alpha }[\phi](x)\big{|}\leq K\Big{(}\delta^{2(2-\sigma)}k^{2}\|D^{4}\phi\|_{0}+ \frac{h^{2}}{k^{2}}\|D^{2}\phi\|_{0}\Big{)}. \tag{3.9}\]
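To make the construction concrete, here is a minimal sketch of the interpolated SL approximation (3.7) in dimension \(N=1\) on a uniform periodic grid; the helper name and the periodic boundary treatment are our own illustrative choices, not part of the scheme's definition.

```python
import numpy as np

def sl_local_term(u, h, k, sqrt_a):
    """Monotone semi-Lagrangian approximation (3.7) of tr[a D^2 u] in 1D on a
    uniform grid with spacing h (periodic boundary, for simplicity). The foot
    points x_i +/- k*sqrt_a are evaluated by linear interpolation, which keeps
    all off-diagonal coefficients nonnegative."""
    shift = k * sqrt_a / h          # displacement in units of grid cells
    j = int(np.floor(shift))        # lower neighbouring grid index
    theta = shift - j               # interpolation weight in [0, 1)
    # linear interpolation of u at x + k*sqrt_a and at x - k*sqrt_a
    up = (1 - theta) * np.roll(u, -j) + theta * np.roll(u, -(j + 1))
    um = (1 - theta) * np.roll(u, j) + theta * np.roll(u, j + 1)
    return (up + um - 2.0 * u) / (2.0 * k ** 2)
```

The nonnegative interpolation weights \((1-\theta,\theta)\) play the role of the basis functions \(\omega_{\mathbf{j}}\), so the approximation is monotone, as required.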
_(ii) Discretization of the nonlocal term:_ We follow [8, Section 3] and approximate \(\mathcal{I}^{\alpha,\delta}\) by the quadrature
\[\mathcal{I}_{h}^{\alpha,\delta}[\phi]=\sum_{\mathbf{j}\in\mathbb{Z}^{N}} \big{(}\phi(x+x_{\mathbf{j}})-\phi(x)\big{)}\kappa_{h,\mathbf{j}}^{\alpha, \delta};\quad\kappa_{h,\mathbf{j}}^{\alpha,\delta}=\int_{|z|>\delta}\omega_{ \mathbf{j}}(\eta^{\alpha}(z);h)\nu_{\alpha}(dz), \tag{3.10}\]
where \(\{\omega_{\mathbf{j}}\}_{\mathbf{j}}\) is the basis for multilinear interpolation defined above. Since \(\omega_{\mathbf{j}}\geq 0\), we have \(\kappa^{\alpha,\delta}_{h,\mathbf{j}}\geq 0\), and the approximation \(\mathcal{I}^{\alpha,\delta}_{h}\) is monotone. A Taylor expansion gives an estimate on the local truncation error, cf. [8]:
**Lemma 3.3**.: _Assume (A.3)-(A.4) and (A.6). Then there is \(K>0\) independent of \(h,\delta,\alpha,\phi\) such that_
\[\left|\mathcal{I}^{\alpha,\delta}[\phi](x)-\mathcal{I}^{\alpha,\delta}_{h}[ \phi](x)\right|\leq K\frac{h^{2}}{\delta^{\sigma}}\|D^{2}\phi\|_{0}. \tag{3.11}\]
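The weights (3.10) can be precomputed once the Lévy measure is fixed. Below is a hedged sketch for the model case \(N=1\), \(\eta^{\alpha}(z)=z\), \(\nu_{\alpha}(dz)=\frac{dz}{|z|^{1+\sigma}}\), with \(\omega_{\mathbf{j}}\) the hat functions of linear interpolation; the truncation at \(J\) neighbours is an illustrative simplification.

```python
import numpy as np
from scipy.integrate import quad

def quadrature_weights(h, delta, sigma, J=50):
    """Weights kappa_{h,j} of (3.10) for nu(dz) = dz/|z|^{1+sigma}, eta(z) = z,
    with omega_j the hat function supported on [(j-1)h, (j+1)h]. The kernel is
    symmetric, so it suffices to compute j >= 1 and mirror."""
    hat = lambda j, z: max(0.0, 1.0 - abs(z / h - j))
    weights = {}
    for j in range(1, J + 1):
        lo = max((j - 1) * h, delta)        # truncate the support at |z| = delta
        hi = (j + 1) * h
        if lo >= hi:
            weights[j] = weights[-j] = 0.0
            continue
        val, _ = quad(lambda z: hat(j, z) / z ** (1.0 + sigma), lo, hi,
                      points=[j * h] if lo < j * h < hi else None, limit=100)
        weights[j] = weights[-j] = val      # kappa >= 0: the quadrature is monotone
    return weights
```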
_(iii) Discretization of the nonlocal equation (1.1):_
\[\sup_{\alpha\in\mathcal{A}}\left\{f^{\alpha}(x)+c^{\alpha}(x)u(x)-\mathcal{L} ^{\alpha}_{\delta,k,h}[u](x)-\mathcal{I}^{\alpha,\delta}_{h}[u](x)\right\}=0 \quad\text{in}\quad\mathbb{R}^{N}, \tag{3.12}\]
or in weakly non-degenerate case (2.1) where \(c^{\alpha}(x)=\lambda\),
\[\lambda\,v(x)+\sup_{\alpha\in\mathcal{A}}\left\{f^{\alpha}(x)-\mathcal{L}^{ \alpha}_{\delta,k,h}[v](x)-\mathcal{I}^{\alpha,\delta}_{h}[v](x)\right\}=0 \quad\text{in}\quad\mathbb{R}^{N}. \tag{3.13}\]
### Properties and convergence analysis for the schemes
We state wellposedness, comparison, \(L^{\infty}\)-stability, and \(L^{\infty}\)-convergence results for the schemes in different settings.
**Theorem 3.4** (wellposedness, stability).: _Assume (A.1)-(A.4)._
* _There exists a unique solution_ \(u_{h}\in C_{b}(\mathbb{R}^{N})\) _of (_3.12_)._
* _If_ \(u_{h},v_{h}\in C_{b}(\mathbb{R}^{N})\) _are sub and supersolutions of (_3.12_), then_ \(u_{h}\leq v_{h}\)_._
* _If_ \(u_{h}\) _is the unique solution of (_3.12_), then_ \(|u_{h}|_{0}\leq C\sup_{\alpha\in\mathcal{A}}|f^{\alpha}|_{0}\)_._
Proof.: Part (a) can be proved using Banach fixed point arguments; we refer to [9, Lemma 3.1] for details. Part (b) is a consequence of the scheme having positive coefficients. Finally, part (c) follows from (b) by taking \(\pm\frac{1}{\lambda}\sup_{\alpha\in\mathcal{A}}|f^{\alpha}|_{0}\) as super and subsolution of the scheme (3.12) respectively.
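To illustrate the fixed point argument behind part (a), the following sketch solves a finite (periodic, truncated) analogue of (3.13). Writing the combined monotone quadrature as \(\mathcal{L}^{\alpha}_{\delta,k,h}[u]+\mathcal{I}^{\alpha,\delta}_{h}[u]=W_{\alpha}u-d_{\alpha}u\), with nonnegative weight matrices \(W_{\alpha}\) and row sums \(d_{\alpha}\), the scheme rearranges into a fixed point equation whose map is a sup-norm contraction with factor \(\max_{\alpha}\big{\|}\frac{d_{\alpha}}{\lambda+d_{\alpha}}\big{\|}_{\infty}<1\). All names below are our own illustrative choices; this is a sketch of the argument, not the authors' implementation.

```python
import numpy as np

def solve_scheme(W, f, lam, tol=1e-10, max_iter=100000):
    """Banach fixed-point (value) iteration for
        lam*u + max_a { f[a] - (W[a] @ u - d[a]*u) } = 0,
    where each W[a] is a nonnegative weight matrix (row sums d[a]) coming from
    the monotone quadrature, and f[a] is the corresponding source term.
    Solving for u at the maximizing control gives the contraction
        u = min_a (W[a] @ u - f[a]) / (lam + d[a])."""
    d = [w.sum(axis=1) for w in W]          # row sums d[a] >= 0
    u = np.zeros(W[0].shape[0])
    for _ in range(max_iter):
        u_new = np.min([(W[a] @ u - f[a]) / (lam + d[a]) for a in range(len(W))],
                       axis=0)              # the inf over controls
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return u
```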
If the solutions of (1.1) are very smooth (\(C_{b}^{4}\)), then we get the best possible convergence rate for our scheme - what some would call the accuracy of the method:
**Proposition 3.5** (Smooth solutions).: _Assume (A.4)-(A.7), \(\sigma\in(0,2)\), \(u\in C_{b}^{4}(\mathbb{R}^{N})\) solves (1.1), and \(u_{h}\) solves (3.12) with \(k=O(h^{\frac{\sigma}{4}})\) and \(\delta=O(h^{\frac{1}{2}})\). Then there is \(C>0\) such that_
\[|u-u_{h}|\leq Ch^{2-\frac{\sigma}{2}}.\]
This rate is always better than \(1\), and approaches \(1\) as \(\sigma\to 2^{-}\). We will not discuss assumptions ensuring such smooth solutions, but below we give results that hold for the solutions that exist under the assumptions of this paper.
Proof.: By equation (1.1) and the error bounds (3.3), (3.9), (3.11), for any \(\alpha\in\mathcal{A}\),
\[f^{\alpha}(x)+c^{\alpha}(x)u(x)-\mathcal{L}^{\alpha}_{\delta,k,h}[u](x)- \mathcal{I}^{\alpha,\delta}_{h}[u](x)\leq\mathcal{I}^{\alpha}[u](x)-\mathcal{ L}^{\alpha}_{\delta,k,h}[u](x)-\mathcal{I}^{\alpha,\delta}_{h}[u](x)\]
\[\leq C\Big{(}\delta^{4-\sigma}\|D^{4}u\|_{0}+\frac{h^{2}}{k^{2}}\|D^{2}u\|_{0}+ \delta^{2(2-\sigma)}k^{2}\|D^{4}u\|_{0}+\frac{h^{2}}{\delta^{\sigma}}\|D^{2}u \|_{0}\Big{)}:=B_{h,\delta}.\]
This implies \(u(x)-\frac{B_{h,\delta}}{\lambda}\) is a subsolution of (3.12), and by Theorem 3.4 (b) that
\[u-u_{h}\leq\frac{B_{h,\delta}}{\lambda}.\]
Again by (1.1), the definition of the sup, and the error bounds, for any \(x\in\mathbb{R}^{N}\) and \(\epsilon>0\), there is an \(\alpha_{\epsilon}\in\mathcal{A}\) such that
\[\begin{split}& f^{\alpha_{\epsilon}}(x)+c^{\alpha_{\epsilon}}(x)u(x)-\mathcal{L}^{\alpha_{\epsilon}}_{\delta,k,h}[u](x)-\mathcal{I}^{\alpha_{\epsilon},\delta}_{h}[u](x)\\ &\geq-\epsilon+\mathcal{I}^{\alpha_{\epsilon}}[u](x)-\mathcal{L}^{\alpha_{\epsilon}}_{\delta,k,h}[u](x)-\mathcal{I}^{\alpha_{\epsilon},\delta}_{h}[u](x)\geq-\epsilon-B_{h,\delta}.\end{split}\]
Let \(\tilde{u}=u+\frac{B_{h,\delta}}{\lambda}\), and note that
\[\sup_{\alpha\in\mathcal{A}}\Big{\{}f^{\alpha}(x)+c^{\alpha}(x)\tilde{u}(x)- \mathcal{L}^{\alpha}_{\delta,k,h}[\tilde{u}](x)-\mathcal{I}^{\alpha,\delta}_ {h}[\tilde{u}](x)\big{\}}\geq-\epsilon.\]
Since \(\epsilon\) and \(x\) are arbitrary, \(\tilde{u}\) is a supersolution of (3.12), and then \(u_{h}-u\leq\frac{B_{h,\delta}}{\lambda}\) by Theorem 3.4 (b). Since \(u\in C^{4}_{b}(\mathbb{R}^{N})\), we have shown that
\[|u-u_{h}|\leq\frac{C}{\lambda}\Big{(}\delta^{4-\sigma}+\frac{h^{2}}{k^{2}}+ \delta^{2(2-\sigma)}k^{2}+\frac{h^{2}}{\delta^{\sigma}}\Big{)}.\]
We conclude by taking the optimal choices \(k^{2}=O(\frac{h}{\delta^{2-\sigma}})\) and then \(\delta=O(h^{\frac{1}{2}})\).
The next two results, together with the result of Section 4, form the main contribution of this paper. They give very precise rates of convergence for our monotone numerical approximations in the cases of strongly degenerate and weakly non-degenerate equations, respectively. Note that in these results the solutions \(u\) of (1.1) and (2.1) will not be smooth. The proofs of these results are given in Section 5.
**Theorem 3.6** (Strongly degenerate equations).: _Assume \(\sigma\in(0,2)\), \(h\in(0,1)\), (A.1)-(A.7), \(u\) and \(u_{h}\) are solutions of (1.1) and (3.12) for \(k=O(h^{\frac{2\sigma}{4+\sigma}})\) and \(\delta=O(h^{\frac{4}{4+\sigma}})\). Then there is a \(C>0\) such that_
\[|u-u_{h}|\leq C\,h^{\frac{4-\sigma}{4+\sigma}}. \tag{3.14}\]
**Remark 3.7**.: (a) The rate \(\frac{4-\sigma}{4+\sigma}\) is decreasing in \(\sigma\). It equals \(\frac{3}{5}\) at \(\sigma=1\), approaches \(1\) as \(\sigma\to 0^{+}\), and \(\frac{1}{3}\) as \(\sigma\to 2^{-}\).
(b) The "CFL" conditions \(k=O(h^{\frac{2\sigma}{4+\sigma}})\) and \(\delta=O(h^{\frac{4}{4+\sigma}})\) imply that \(\frac{h}{k}\to 0\) and \(\frac{h}{\delta}\to 0\) as \(h\to 0\).
(c) Conditions (A.5) and (A.7) are symmetry assumptions on the singular part of \(\mathcal{I}^{\alpha}\) which lead to best possible rates. We refer to Section 7 for extensions to nonsymmetric nonlocal operators and the corresponding (slightly) lower rates.
In the weakly non-degenerate case we get an improvement in the rate due to the better regularity of solutions both for the equation and the numerical scheme:
**Theorem 3.8** (weakly non-degenerate equations).: _Assume \(\sigma\in(0,2)\), \(h\in(0,1)\), (A.1)-(A.7), (B.1)-(B.2), \(u\) and \(u_{h}\) are the solutions of (2.1) and (3.12) for \(k=O(h^{\frac{2\sigma}{4+\sigma}})\) and \(\delta=O(h^{\frac{4}{4+\sigma}})\). Then there is \(C>0\) independent of \(h\) such that_
\[|u-u_{h}|\leq\left\{\begin{array}{ll}C\,h^{\frac{4-\sigma}{4+\sigma}}&\text { for }\ \ 0<\sigma\leq 1,\\ C\,h^{\frac{\sigma(4-\sigma)}{4+\sigma}}&\text{ for }\ \ 1<\sigma<2.\end{array}\right. \tag{3.15}\]
**Remark 3.9**.: For \(\sigma\leq 1\), the results are the same as in Theorem 3.6. For \(\sigma>1\), the rate of convergence is always better than \(\mathcal{O}(h^{\frac{1}{2}})\), and it approaches \(\mathcal{O}(h^{\frac{2}{3}})\) as \(\sigma\to 2\). The "CFL" conditions are the same as in Theorem 3.6.
## 4. Powers of discrete Laplacian
In this section we consider versions of equation (1.1) where the nonlocal operator is the fractional Laplacian,
\[\lambda u(x)+\sup_{\alpha\in\mathcal{A}}\big{\{}f^{\alpha}(x)+a^{\alpha}\,(- \Delta)^{\frac{\sigma}{2}}u(x)\big{\}}=0. \tag{4.1}\]
In other words, \(\mathcal{I}^{\alpha}=-a^{\alpha}\,(-\Delta)^{\frac{\sigma}{2}}\), \(\nu_{\alpha}(dz)=a^{\alpha}\,\frac{c_{N,\sigma}}{|z|^{N+\sigma}}dz\), and \(\eta^{\alpha}(z)=z\) in (1.2). Here (**A**.3)-(**A**.7) trivially hold. We assume (**B**.1), i.e. that the equation is weakly non-degenerate (otherwise the equation is purely algebraic), which here is equivalent to
\[\text{there is }\alpha_{0}\in\mathcal{A}\text{ such that }\quad a^{\alpha_{0}}>0. \tag{4.2}\]
Under assumptions (**A**.1), (**A**.2), (**B**.1), and (**B**.2), we can use Proposition 2.2, Lemma 2.4, and Theorem 2.7 to conclude wellposedness, stability, approximation, and regularity results for (4.1). Here we introduce and analyse a discretization
\[\lambda u_{h}(x)+\sup_{\alpha\in\mathcal{A}}\{f^{\alpha}(x)+a^{\alpha}(- \Delta_{h})^{\frac{\sigma}{2}}[u_{h}](x)\}=0, \tag{4.3}\]
based on powers of the discrete Laplacian \((-\Delta_{h})^{\frac{\sigma}{2}}\), see [25, 35] and also [13]. As far as we know, this is the first time this type of discretization has been considered for HJB equations. It is a very good approximation in the sense that it is a monotone method of second order accuracy. This is better than the diffusion corrected discretization of Section 3.
Let \(\Delta_{h}\phi(x)=\sum_{k=1}^{N}\frac{1}{h^{2}}\big{(}\phi(x+he_{k})-2\phi(x)+\phi(x-he_{k})\big{)}\) be the 2nd order central finite difference approximation of the Laplacian \(\Delta\phi\), then
\[(-\Delta_{h})^{\frac{\sigma}{2}}\phi(x):=\frac{1}{\Gamma(-\frac{\sigma}{2})} \,\int_{0}^{\infty}\Big{(}e^{t\Delta_{h}}\phi(x)-\phi(x)\Big{)}\,\frac{dt}{t^{ 1+\frac{\sigma}{2}}}, \tag{4.4}\]
where \(U(t)=e^{t\Delta_{h}}\psi\) is the solution of the semi-discrete heat equation
\[\partial_{t}U(x,t) =\Delta_{h}\,U(x,t)\quad\text{for}\quad(x,t)\in\mathbb{R}^{N} \times(0,\infty),\] \[U(x,0) =\psi(x)\quad\text{for}\quad x\in\mathbb{R}^{N}.\]
An explicit formula for \(e^{t\Delta_{h}}\phi\) and details related to this approximation can be found in Section 4.5 of [35]. We can write approximation (4.4) as a quadrature,
\[-(-\Delta_{h})^{\frac{\sigma}{2}}\phi(x)=\sum_{\mathbf{j}\in\mathbb{Z}^{N} \setminus\{0\}}\Big{(}\phi(x+x_{\mathbf{j}})-\phi(x)\Big{)}\kappa_{h,\mathbf{ j}}\quad\text{with}\quad\kappa_{h,\mathbf{j}}\geq 0.\]
This is obviously a monotone approximation of the fractional Laplacian, and by Lemma 4.22 in [35], it has the following local truncation error:
**Lemma 4.1**.: _Assume \(\sigma\in(0,2)\). Then for any smooth bounded function \(\phi\),_
\[\Big{|}(-\Delta_{h})^{\frac{\sigma}{2}}\phi(x)-(-\Delta)^{\frac{\sigma}{2}} \phi(x)\Big{|}\leq Ch^{2}\Big{(}\|D^{4}\phi\|_{0}+\|\phi\|_{0}\Big{)}. \tag{4.5}\]
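In dimension \(N=1\) the semi-discrete heat kernel has the standard Bessel-function representation \(e^{t\Delta_{h}}\delta_{0}(jh)=e^{-2t/h^{2}}I_{j}(2t/h^{2})\) (an explicit formula is given in Section 4.5 of [35]), so the weights \(\kappa_{h,\mathbf{j}}\) can be obtained by integrating (4.4) numerically. The sketch below does this with SciPy; the truncation at \(J\) neighbours and the periodic boundary are illustrative simplifications.

```python
import numpy as np
from scipy.special import gamma, ive
from scipy.integrate import quad

def kappa_h(j, h, sigma):
    """Quadrature weight kappa_{h,j} of the 1D discrete fractional Laplacian
    from the semigroup formula (4.4). After the substitution tau = 2t/h^2 the
    weight is (2/h^2)^{sigma/2}/|Gamma(-sigma/2)| times an integral of the
    exponentially scaled Bessel function ive(j, tau) = exp(-tau) I_j(tau)."""
    c = (2.0 / h ** 2) ** (sigma / 2) / abs(gamma(-sigma / 2))
    integrand = lambda t: ive(abs(j), t) * t ** (-1.0 - sigma / 2)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return c * val

def discrete_frac_laplacian(u, h, sigma, J=50):
    """Apply (-Delta_h)^{sigma/2} to a grid function u, truncating the
    quadrature at J neighbours (periodic boundary, for simplicity)."""
    out = np.zeros_like(u)
    for j in range(1, J + 1):
        k = kappa_h(j, h, sigma)
        out -= k * (np.roll(u, -j) + np.roll(u, j) - 2.0 * u)
    return out
```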
We note that Theorem 3.4 (wellposedness and stability) also holds for (4.3). We now state an error bound for this scheme. The proof is given in Section 6.
**Theorem 4.2**.: _Assume \(h\in(0,1)\), (**A.1**), (**A.2**), (**B.1**), (**B.2**), \(u\) and \(u_{h}\) are solutions of equation (4.1) and the scheme (4.3). Then there is \(C>0\) such that_
\[\|u-u_{h}\|_{0}\leq\left\{\begin{array}{lll}Ch^{\frac{1}{2}}&\quad\text{for}&0 <\sigma\leq 1,\\ Ch^{\frac{\sigma}{2}}&\quad\text{for}&1<\sigma<2.\end{array}\right. \tag{4.6}\]
**Remark 4.3**.: The problem is weakly non-degenerate and the regularity of the solution can be seen in the rate for \(\sigma>1\), cf. Theorem 2.7. This \(\sigma\) dependence seems to be optimal, and is consistent as \(\sigma\to 2\) with the \(\mathcal{O}(h)\) bound obtained in the 2nd order case in [32]. For \(\sigma\in(\frac{4}{3},2)\), the rate is better than for the diffusion corrected discretization in Theorem 3.8.
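For quick reference, evaluating the exponents of Theorems 3.6, 3.8, and 4.2 at a few values of \(\sigma\) (a direct computation from the rates above) gives:

| \(\sigma\) | Thm 3.6: \(\frac{4-\sigma}{4+\sigma}\) | Thm 3.8 (weakly non-deg.) | Thm 4.2 |
| --- | --- | --- | --- |
| \(1/2\) | \(7/9\) | \(7/9\) | \(1/2\) |
| \(1\) | \(3/5\) | \(3/5\) | \(1/2\) |
| \(3/2\) | \(5/11\) | \(15/22\) | \(3/4\) |
| \(\to 2\) | \(1/3\) | \(2/3\) | \(1\) |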
## 5. Proofs of the error bounds for monotone quadrature schemes
Here we give the proofs of the convergence rates discussed in Section 3.
### Strongly-degenerate equations - the proof of Theorem 3.6
Let \(\big{(}\rho_{\epsilon}\big{)}_{\epsilon>0}\) be the standard mollifier on \(\mathbb{R}^{N}\) and define \(u_{\epsilon,h}=u_{h}*\rho_{\epsilon}\). By (3.12),
\[f^{\alpha}(x)+c^{\alpha}(x)u_{h}(x)-\mathcal{L}^{\alpha}_{\delta,k,h}\,u_{h}(x )-\sum_{\mathbf{j}\in\mathbb{Z}^{N}}\big{(}u_{h}(x+x_{\mathbf{j}})-u_{h}(x) \big{)}\kappa^{\alpha,\delta}_{h,\mathbf{j}}\leq 0\]
for any \(\alpha\in\mathcal{A}\). Let \(f^{\alpha}_{\epsilon}=f^{\alpha}*\rho_{\epsilon}\) and convolve with \(\rho_{\epsilon}\) to get
\[f^{\alpha}_{\epsilon}(x)+(c^{\alpha}u_{h})*\rho_{\epsilon}(x)-\mathcal{L}^{\alpha}_{\delta,k,h}\,u_{\epsilon,h}(x)-\sum_{\mathbf{j}\in\mathbb{Z}^{N}}\big{(}u_{\epsilon,h}(x+x_{\mathbf{j}})-u_{\epsilon,h}(x)\big{)}\kappa^{\alpha,\delta}_{h,\mathbf{j}}\leq 0.\]
Since \(\|f^{\alpha}*\rho_{\epsilon}-f^{\alpha}\|_{0}\leq K\epsilon\) and \(\|(c^{\alpha}u_{h})*\rho_{\epsilon}-c^{\alpha}\,u_{\epsilon,h}\|_{0}\leq\sup_{\alpha}\|Dc^{\alpha}\|_{0}\|u_{h}\|_{0}\,\epsilon\leq CK^{2}\,\epsilon\), we then find that
\[f^{\alpha}(x)+c^{\alpha}(x)u_{\epsilon,h}(x)-\mathcal{I}^{ \alpha}[u_{\epsilon,h}](x)\] \[\qquad\leq\big{\|}\mathcal{I}^{\alpha}[u_{\epsilon,h}]-\big{(} \mathcal{L}^{\alpha}_{\delta,k,h}\,u_{\epsilon,h}+\mathcal{I}^{\alpha,\delta}_ {h}[u_{\epsilon,h}]\big{)}\big{\|}_{0}+(CK^{2}+K)\epsilon. \tag{5.1}\]
By Lemmas 3.1, 3.2, 3.3, and \(|D^{k}u_{\epsilon,h}|_{0}\leq\frac{C\|u_{h}\|_{0,1}}{\epsilon^{k-1}}\), it follows that
\[\big{|}\mathcal{I}^{\alpha}[u_{\epsilon,h}]-\big{(}\mathcal{L}^{ \alpha}_{\delta,k,h}\,u_{\epsilon,h}+\mathcal{I}^{\alpha,\delta}_{h}[u_{ \epsilon,h}]\big{)}\big{|}_{0}\] \[\qquad\leq M_{\epsilon,\delta}:=C\Big{(}\delta^{4-\sigma}\,\frac{ 1}{\epsilon^{3}}+k^{2}\,\delta^{2(2-\sigma)}\,\frac{1}{\epsilon^{3}}+\frac{h^ {2}}{k^{2}}\frac{1}{\epsilon}+\frac{h^{2}}{\delta^{\sigma}}\,\frac{1}{ \epsilon}\Big{)}. \tag{5.2}\]
Therefore \(u_{\epsilon,h}-\frac{C}{\lambda}\tilde{M}_{\epsilon,\delta}\), for \(\tilde{M}_{\epsilon,\delta}=M_{\epsilon,\delta}+(CK^{2}+K)\epsilon\), is a classical (and hence also viscosity) subsolution of equation (1.1). By comparison for equation (1.1) (Proposition 2.2 (a)), \(u_{\epsilon,h}-\frac{C}{\lambda}\,\tilde{M}_{\epsilon,\delta}\leq u.\) Since \(\|u_{h}-u_{\epsilon,h}\|_{0}\leq\epsilon\|Du_{h}\|_{0}\), we get
\[u_{h}-u\leq K\big{(}\epsilon+M_{\epsilon,\delta}\big{)}. \tag{5.3}\]
The bound on \(u-u_{h}\) can be proved in a similar way. Let \(u_{\epsilon}=u*\rho_{\epsilon}\). Arguing as above, using Lemmas 3.1, 3.2, 3.3, and \(\|D^{k}u_{\epsilon}\|_{0}\leq\frac{C\|u\|_{0,1}}{\epsilon^{k-1}}\), we have
\[f^{\alpha}(x)+c^{\alpha}(x)\,u_{\epsilon}(x)-\mathcal{L}^{ \alpha}_{\delta,k,h}\,u_{\epsilon}(x)-\mathcal{I}^{\alpha,\delta}_{h}[u_{ \epsilon}](x)\] \[\leq\big{\|}\mathcal{I}^{\alpha}[u_{\epsilon}]-\big{(}\mathcal{L }^{\alpha}_{\delta,k,h}\,u_{\epsilon}+\mathcal{I}^{\alpha,\delta}_{h}[u_{ \epsilon}]\big{)}\big{\|}_{0}+(CK^{2}+K)\epsilon\leq M_{\epsilon,\delta}+( CK^{2}+K)\epsilon.\]
This implies \(u_{\epsilon}-\frac{C}{\lambda}\,\tilde{M}_{\epsilon,\delta}\) is a subsolution of the numerical scheme (3.12). Comparison for the scheme (3.12) (Theorem 3.4(b)) and \(\|u-u_{\epsilon}\|_{0}<\epsilon\|Du\|_{0}\) lead to
\[u_{h}-u\geq-C(\epsilon+M_{\epsilon,\delta}). \tag{5.4}\]
By (5.3) and (5.4) we get \(|u-u_{h}|\leq C(\epsilon+M_{\epsilon,\delta})\), and then we optimize with respect to \(k\), \(\delta\), and \(\epsilon\). The optimal choices \(k^{2}=O\big{(}\frac{h\epsilon}{\delta^{2-\sigma}}\big{)}\) and \(\epsilon=O\big{(}\frac{h}{\delta^{\frac{\sigma}{2}}}\big{)}\) lead to
\[|u-u_{h}|\leq C\Big{(}\delta^{4+\frac{\sigma}{2}}h^{-3}+\delta^{2}h^{-1}+\frac{h}{\delta^{\frac{\sigma}{2}}}\Big{)}, \tag{5.5}\]
and the result follows by choosing \(\delta=O\big{(}h^{\frac{4}{4+\sigma}}\big{)}\).
### Intermezzo on regularisations
In the remaining proofs we need high order estimates for two different regularisation procedures: (i) Convolution with standard mollifiers and (ii) convolution with fractional heat kernels. These estimates are proved in this section.
Let \(\rho_{\varepsilon}(x)=\frac{1}{\varepsilon^{N}}\rho\big{(}\frac{x}{\varepsilon }\big{)}\) for some \(\rho\in C_{c}^{\infty}(\mathbb{R}^{N})\) with support in \(B(0,1)\) and \(\int_{\mathbb{R}^{N}}\rho\,dx=1\). Hence \(\operatorname{supp}\rho_{\epsilon}=\overline{B(0,\epsilon)}\) and \(\int_{\mathbb{R}^{N}}\rho_{\epsilon}\,dx=1\). We define
\[v^{(\epsilon)}=v*\rho_{\epsilon} \tag{5.6}\]
for bounded continuous functions \(v\). It then easily follows that \(v^{(\epsilon)}\in C_{b}^{\infty}\).
**Lemma 5.1**.: _If \(v\in C^{1,\beta}(\mathbb{R}^{N})\) for \(\beta\in(0,1]\) and \(\rho\) is a radial function, then_
\[\|v^{(\epsilon)}-v\|_{0}\leq C\epsilon^{1+\beta}\|v\|_{1,\beta}\qquad\text{ and}\qquad\|D^{m}v^{(\epsilon)}\|_{0}\leq\frac{K}{\epsilon^{m-1-\beta}}\|v\|_{1,\beta}\]
_for any \(m\geq 2\), where \(C\) and \(K\) are independent of \(\epsilon\)._
Proof.: The first inequality follows since \(\int_{\mathbb{R}^{N}}y\rho_{\epsilon}(y)\,dy=0\) and then
\[|v^{(\epsilon)}(x)-v(x)| =\Big{|}\int_{\mathbb{R}^{N}}(v(x-y)-v(x)-y\cdot\nabla v(x))\rho_ {\epsilon}(y)\,dy\Big{|}\] \[\leq C\|v\|_{1,\beta}\int_{\mathbb{R}^{N}}|y|^{1+\beta}\rho_{ \epsilon}(y)\,dy\leq C\|v\|_{1,\beta}\epsilon^{1+\beta}.\]
Since \(\int_{\mathbb{R}^{N}}D^{m-1}\rho_{\epsilon}(y)dy=0\) by the divergence theorem, the second inequality follows since \(Dv\in C^{\beta}\) and
\[D^{m}v^{(\epsilon)}=Dv*D^{m-1}\rho_{\epsilon}=\int_{\mathbb{R}^{N}}[Dv(x-y)-Dv (x)]D^{m-1}\rho_{\epsilon}(y)dy.\]
Let \(\tilde{K}^{\sigma}(t,x):=\mathcal{F}^{-1}\big{(}e^{-t|\cdot|^{\sigma}}\big{)} (x)\) be the fractional heat kernel, the fundamental solution of the fractional heat equation \(u_{t}+(-\Delta)^{\frac{\sigma}{2}}u=0\). Convolution with \(\tilde{K}^{\sigma}\) defines a smooth approximation of a bounded continuous function \(v\),
\[v^{[\epsilon]}(x):=v(\cdot)*\tilde{K}^{\sigma}(\epsilon^{\sigma},\cdot)(x). \tag{5.7}\]
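Since \(\tilde{K}^{\sigma}\) is defined through its Fourier transform, the regularization (5.7) is easiest to realize spectrally. The following is a minimal sketch on a periodic grid (an assumption made here so the FFT applies; the analysis above is on all of \(\mathbb{R}^{N}\)):

```python
import numpy as np

def heat_kernel_smoothing(v, L, eps, sigma):
    """Regularization v -> v^[eps] of (5.7) by the fractional heat semigroup,
    computed spectrally on a periodic grid of length L: the Fourier transform
    of v is multiplied by exp(-eps^sigma * |xi|^sigma)."""
    n = len(v)
    xi = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # discrete frequencies
    multiplier = np.exp(-(eps ** sigma) * np.abs(xi) ** sigma)
    return np.fft.ifft(np.fft.fft(v) * multiplier).real
```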
Let \(K^{\sigma}(x)=\tilde{K}^{\sigma}(1,x)\). To prove estimates on \(v^{[\epsilon]}\), we need some well-known properties of \(\tilde{K}^{\sigma}\):
(i) \(\tilde{K}^{\sigma}\in C^{\infty}((0,\infty)\times\mathbb{R}^{N})\), \(\tilde{K}^{\sigma}\geq 0\), and \(\int_{\mathbb{R}^{N}}\tilde{K}^{\sigma}(t,x)\,dx=1\) for \(t>0\).
(ii) \(\tilde{K}^{\sigma}(t+s,x)=\tilde{K}^{\sigma}(t,x)*\tilde{K}^{\sigma}(s,x)\) for \(t,s\geq 0\) (convolution semigroup).
(iii) For \(t>0\) and \(x\in\mathbb{R}^{N}\), \(\tilde{K}^{\sigma}(t,x)=t^{-\frac{N}{\sigma}}K^{\sigma}\Big{(}\frac{x}{t^{\frac{1}{\sigma}}}\Big{)}\) where \[\frac{c_{1}t}{\big{(}t^{\frac{2}{\sigma}}+|x|^{2}\big{)}^{\frac{N+\sigma}{2}}}\leq\tilde{K}^{\sigma}(t,x)\leq\frac{C_{2}t}{\big{(}t^{\frac{2}{\sigma}}+|x|^{2}\big{)}^{\frac{N+\sigma}{2}}}.\]
(iv) (Theorem 1.1(c) in [38]) For any \(m>0\) and multi-index \(\beta\) with \(|\beta|=m\), \[|D^{\beta}\,K^{\sigma}(x)|\leq\frac{B_{m}}{1+|x|^{N+\sigma}}\quad\text{for}\quad x\in\mathbb{R}^{N}.\]
We refer to [12, 33, 38] for the proofs.
**Lemma 5.2**.: _Assume \(\epsilon>0\), \(\sigma>1\), \(\beta\in(\sigma-1,1)\), and \(v\in C^{1,\beta}(\mathbb{R}^{N})\). Then there is \(C>0\) independent of \(\epsilon\), such that_
\[\|v^{[\epsilon]}-v\|_{0}\leq C\epsilon^{\sigma}.\]
Proof.: Let \(S_{t}\) be the fractional heat semigroup, i.e. \(v^{[\epsilon]}=S_{\epsilon^{\sigma}}(v)\). Since \(\int_{\mathbb{R}^{N}}\tilde{K}^{\sigma}(r,y)dy=1\), by Fubini's Theorem and property (i) above,
\[|(-\Delta)^{\frac{\sigma}{2}}[S_{r}(v)](x)| =\Big{|}\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}\frac{v(x+z-y)-v(x-y)}{|z|^{N+\sigma}}\tilde{K}^{\sigma}(r,y)\,dz\,dy\Big{|}\] \[\leq\int_{\mathbb{R}^{N}}|(-\Delta)^{\frac{\sigma}{2}}[v](x-y)|\tilde{K}^{\sigma}(r,y)\,dy\leq\|(-\Delta)^{\frac{\sigma}{2}}[v]\|_{0}.\]
Since \(\|(-\Delta)^{\frac{\sigma}{2}}[v]\|_{0}\leq K\|v\|_{1,\beta}\), for any \(t\geq s>0\),
\[|S_{t}(v)-S_{s}(v)|=\Big{|}\int_{s}^{t}\partial_{r}[S_{r}(v)]dr\Big{|}=\Big{|} \int_{s}^{t}(-\Delta)^{\frac{\sigma}{2}}[S_{r}(v)]dr\Big{|}\leq K(t-s)\|v\|_{ 1,\beta}.\]
The lemma then follows by taking \(t=\epsilon^{\sigma}\) and using that \(S_{s}(v)\to v\) pointwise as \(s\to 0\).
**Lemma 5.3**.: _Assume \(\epsilon>0\), \(\sigma>1\), \(m\geq 2\), \(v\in C^{0,1}(\mathbb{R}^{N})\), and define \(\epsilon_{1}=\frac{\epsilon}{2^{\frac{1}{\sigma}}}\). Then there exists \(C>0\) independent of \(\epsilon\) such that_
\[\|D^{m}v^{[\epsilon]}\|_{0}\leq\frac{C}{\epsilon^{m-1}}\|v\|_{0,1}\quad\text{ and}\quad\|D^{m}v^{[\epsilon]}\|_{0}\leq\frac{C}{\epsilon^{m-\sigma}}\|v^{[ \epsilon_{1}]}\|_{1,\sigma-1}.\]
Proof.: The first estimate is classical and follows by differentiating \(\tilde{K}^{\sigma}\)\(m\) times and using the Lipschitz bound on \(v\) (cf. Lemma 5.1), noting that \(|x|\big{|}D_{x}^{m}\tilde{K}^{\sigma}(t,x)\big{|}\in L^{1}(\mathbb{R}^{N})\) for \(\sigma>1\) and \(t>0\) by property (iv) above.
For the second estimate, we must estimate \(\partial_{x_{i}}D^{\alpha}v^{[\epsilon]}\) for any multi-index \(\alpha\) with \(|\alpha|=m-1\). Rewriting \(v^{[\epsilon]}\) as
\[v^{[\epsilon]}=v*\tilde{K}^{\sigma}(\epsilon^{\sigma},\cdot)=v*\tilde{K}^{\sigma}\big{(}\frac{\epsilon^{\sigma}}{2},\cdot\big{)}*\tilde{K}^{\sigma}\big{(}\frac{\epsilon^{\sigma}}{2},\cdot\big{)}=v^{[\epsilon_{1}]}*\tilde{K}^{\sigma}\big{(}\frac{\epsilon^{\sigma}}{2},\cdot\big{)},\]
we find that
\[\partial_{x_{i}}D^{\alpha}v^{[\epsilon]}=\partial_{x_{i}}v^{[\epsilon_{1}]}* \,D^{\alpha}\tilde{K}^{\sigma}\big{(}\frac{\epsilon^{\sigma}}{2},\cdot\big{)}.\]
First, by the divergence theorem and the decay at infinity (property (iv) above), \(\int_{\mathbb{R}^{N}}D^{\alpha}\tilde{K}^{\sigma}\big{(}\frac{\epsilon^{\sigma }}{2},y\big{)}dy=0\). Then, by self-similarity (property (iii)) and \(y=\frac{\epsilon}{2^{\frac{1}{\sigma}}}z\),
\[(D_{y}^{\alpha}\tilde{K})\Big{(}\frac{\epsilon^{\sigma}}{2},y\Big{)}=2^{\frac {N}{\sigma}}\frac{1}{\epsilon^{N+(m-1)}}(D_{z}^{\alpha}K)(z).\]
Combining these facts with the change of variables \(y=\epsilon z\), we see that
\[|\partial_{x_{i}}D^{\alpha}v^{[\epsilon]}| =\Big{|}\int_{\mathbb{R}^{N}}\big{(}\partial_{x_{i}}v^{[\epsilon_{ \mathbf{i}}]}(x-y)-\partial_{x_{i}}v^{[\epsilon_{\mathbf{i}}]}(x)\big{)}D_{y}^{ \alpha}\tilde{K}^{\sigma}\Big{(}\frac{\epsilon^{\sigma}}{2},y\Big{)}\,dy\Big{|}\] \[\leq\frac{2^{\frac{N}{\sigma}}}{\epsilon^{m-1}}\int_{\mathbb{R}^ {N}}\big{|}\partial_{x_{i}}v^{[\epsilon_{\mathbf{i}}]}(x-\epsilon z)-\partial _{x_{i}}v^{[\epsilon_{\mathbf{i}}]}(x)\big{|}\,|D_{z}^{\alpha}K^{\sigma}(z)|\,\,dz\] \[\leq\frac{K}{\epsilon^{m-\sigma}}\|v^{[\epsilon_{\mathbf{i}}]}\|_ {1,\sigma-1}\int_{\mathbb{R}^{N}}|z|^{\sigma-1}\,|D_{z}^{\alpha}K^{\sigma}(z )|\,\,dz.\]
The proof is complete since \(|x|^{\sigma-1}\,|D^{\alpha}K^{\sigma}(x)|\in L^{1}\) by property (iv) above.
### Weakly non-degenerate equations - the proof of Theorem 3.8
We first prove a discrete version of the bound on the nonlocal operator in Theorem 2.5. Then we show that these bounds lead to regularity of the numerical solution. From regularity, approximation, and comparison arguments the error bounds follow. Regularization arguments and the results of the previous section are used throughout. For \(h,k,\epsilon>0\), and \(\delta\in(0,1)\), we define
\[\hat{\mathcal{I}}_{\delta,k,h}^{\alpha}[\phi] :=\mathcal{L}_{\delta,k,h}^{\alpha}[\phi]+\mathcal{I}_{h}^{\alpha,\delta}[\phi],\] \[\mathcal{J}_{h}^{\alpha,\delta}[\phi] :=\sum_{\mathbf{j}\in\mathbb{Z}^{N}}\big{(}\phi(x+x_{\mathbf{j}}) -\phi(x)\big{)}\int_{|z|>\delta}\omega_{\mathbf{j}}(\eta^{\alpha}(z))\frac{ dz}{|z|^{N+\sigma}},\]
where \(\mathcal{L}_{\delta,k,h}^{\alpha}\), \(\mathcal{I}_{h}^{\alpha,\delta}\), and the weight function \(\omega_{\mathbf{j}}\) are defined in Section 3.2. By definition \(\mathcal{J}_{h}^{\alpha,\delta}\) is a monotone approximation of the non-singular part of the operator
\[\mathcal{J}^{\alpha}[\phi]:=\int_{\mathbb{R}^{N}}\big{(}\phi(x+\eta^{\alpha}(z ))-\phi(x)-\nabla\phi(x)\cdot\eta^{\alpha}(z)1_{|z|<\delta}\big{)}\frac{dz}{|z |^{N+\sigma}} \tag{5.8}\]
with local truncation error (Taylor expand, see e.g. [8, Section 3])
\[|\mathcal{J}_{h}^{\alpha,\delta}[\phi](x)-\mathcal{J}^{\alpha}[\phi](x)|\leq C (\|\phi\|_{0}+\|D^{2}\phi\|_{0})\big{(}\delta^{2-\sigma}+h^{2}\delta^{-\sigma }\big{)}. \tag{5.9}\]
The discrete version of Theorem 2.5 is the following result.
**Theorem 5.4**.: _Assume (**A**.1)-(**A**.5), (**B**.1)-(**B**.2), and \(u_{h}\) solves (3.13). Then for \(\delta\in(0,1)\), \(\delta\geq h\), and \(k\geq\delta^{\frac{\sigma}{2}}\), there is a \(K>0\) independent of \(h,k,\delta\) such that_
\[\|\hat{\mathcal{I}}_{\delta,k,h}^{\alpha_{0}}[u_{h}]\,\|_{0} \leq K, \tag{5.10}\] \[\|\mathcal{J}_{h}^{\alpha_{0},\delta}[u_{h}]\,\|_{0} \leq\frac{K}{c_{\alpha_{0}}}. \tag{5.11}\]
The proof relies on the following technical lemma.
**Lemma 5.5**.: _Assume (**A**.1)-(**A**.6), (**B**.1)-(**B**.2), and \(\alpha_{0}\) is defined in (**B**.1). For \(\sigma\in(1,2)\), there is a \(K>0\) independent of \(\delta,h,k\) such that_
\[\|\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[f^{\alpha}]\|_{0}\leq K\Big{[}\frac{h^ {\sigma}}{k^{2}}+k^{\sigma-2}\delta^{\frac{\sigma(2-\sigma)}{2}}\Big{]}\|f^{ \alpha}\|_{1,\sigma-1}.\]
Proof.: Let \(f^{\alpha}_{(\gamma)}:=f^{\alpha}*\rho_{\gamma}\in C_{b}^{\infty}(\mathbb{R}^{N})\). By Lemma 5.1 and the fact that \(f^{\alpha}\in C^{1,\sigma-1}(\mathbb{R}^{N})\) by (**B**.2),
\[\|D^{m}f^{\alpha}_{(\gamma)}\|_{0}\leq\frac{C\|f^{\alpha}\|_{1,\sigma-1}}{\gamma ^{m-\sigma}}\quad\text{and}\quad\|f^{\alpha}-f^{\alpha}_{(\gamma)}\|_{0}\leq C \gamma^{\sigma}\|f^{\alpha}\|_{1,\sigma-1}. \tag{5.12}\]
Then by (3.8), (3.5), the bound on \(a_{\delta}^{\alpha}\) in (3.4), and first part of (5.12),
\[\Big{|}\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[f_{(\gamma)}^{\alpha}]\Big{|} \leq\Big{|}\mathcal{D}_{\delta,k}^{\alpha_{0}}[f_{(\gamma)}^{\alpha}]\Big{|}+\frac{Ch^{2}}{k^{2}}\|D^{2}f_{(\gamma)}^{\alpha}\|_{0}\] \[\leq K|(\sqrt{a_{\delta}^{\alpha_{0}}})_{i}|^{2}\|D^{2}f_{(\gamma)}^{\alpha}\|_{0}+C\frac{h^{2}}{k^{2}}\|D^{2}f_{(\gamma)}^{\alpha}\|_{0}\] \[\leq\frac{K}{\gamma^{2-\sigma}}\Big{(}\delta^{2-\sigma}+\frac{h^{2}}{k^{2}}\Big{)}\|f^{\alpha}\|_{1,\sigma-1}.\]
By the second part of (5.12) and the definition of \(\mathcal{L}_{\delta,k,h}^{\alpha_{0}}\) in (3.7),
\[\big{|}\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[f_{(\gamma)}^{\alpha}]-\mathcal{ L}_{\delta,k,h}^{\alpha_{0}}[f^{\alpha}]\big{|}=\big{|}\mathcal{L}_{\delta,k,h}^{ \alpha_{0}}[f_{(\gamma)}^{\alpha}-f^{\alpha}]\big{|}\leq K\frac{\gamma^{ \sigma}}{k^{2}}\|f^{\alpha}\|_{1,\sigma-1},\]
and then
\[\|\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[f^{\alpha}]\|_{0}\leq \|\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[f_{(\gamma)}^{\alpha}]\|_ {0}+\|\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[f_{(\gamma)}^{\alpha}]-\mathcal{ L}_{\delta,k,h}^{\alpha_{0}}[f^{\alpha}]\|_{0}\] \[\leq K\Big{[}\frac{1}{\gamma^{2-\sigma}}\Big{(}\frac{h^{2}}{k^{2}}+ \delta^{2-\sigma}\Big{)}+\frac{\gamma^{\sigma}}{k^{2}}\Big{]}\|f^{\alpha}\|_{ 1,\sigma-1}. \tag{5.13}\]
The result follows by taking \(\gamma=\max\{h,k\,\delta^{\frac{2-\sigma}{2}}\}\).
Proof of Theorem 5.4.: (i) Since \(u_{h}\) solves (3.13), we find as in the proof of Theorem 2.5, that \(-\mathcal{I}_{h}^{\alpha_{0},\delta}[u_{h}]\) is a supersolution of
\[\lambda\,v(x)+\sup_{\alpha\in\mathcal{A}}\Big{\{}-\mathcal{L}_{\delta,k,h}^{ \alpha}[v]-\,\mathcal{I}_{h}^{\alpha,\delta}[v]-\mathcal{I}_{h}^{\alpha_{0}, \delta}[f^{\alpha}](x)\Big{\}}=0. \tag{5.14}\]
By assumptions (B.2) and (A.3),
\[\|\mathcal{I}_{h}^{\alpha_{0},\delta}[f^{\alpha}]\|_{0}\leq C_{1}:=\|f^{ \alpha}\|_{1,\beta-1}\int_{|z|<1}|z|^{\beta}\nu_{\alpha}(dz)+2\|f^{\alpha}\|_ {0}\int_{|z|\geq 1}\nu_{\alpha}(dz),\]
where the constant \(C_{1}\geq 0\) is independent of \(\alpha\), \(\delta\), and \(h\). Since \(-\frac{C_{1}}{\lambda}\) is a subsolution of (5.14), the comparison principle yields that \(\mathcal{I}_{h}^{\alpha_{0},\delta}[u_{h}](x)\leq\frac{C_{1}}{\lambda}.\) Arguing in the same way for the operator \(\mathcal{L}_{\delta,k,h}^{\alpha_{0}}\) and using Lemma 5.5, we get that
\[\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[u_{h}]\leq\frac{K}{\lambda}\Big{[}\frac {h^{\sigma}}{k^{2}}+k^{\sigma-2}\delta^{\frac{\sigma(2-\sigma)}{2}}\Big{]}\| f^{\alpha}\|_{1,\sigma-1}.\]
Taking \(k\geq C\max\{\delta^{\frac{\sigma}{2}},h^{\frac{\sigma}{2}}\}=C\delta^{\frac{ \sigma}{2}}\) (assuming \(\delta\geq h\)) we find a constant \(C_{2}\geq 0\) independent of \(\alpha\), \(k\), \(h\), and \(\delta\) such that \(\mathcal{L}_{\delta,k,h}^{\alpha_{0}}[u_{h}]\leq\frac{C_{2}}{\lambda}.\) Combining the two estimates then gives
\[\hat{\mathcal{I}}_{\delta,k,h}^{\alpha_{0}}[u_{h}](x)\leq\frac{C_{1}+C_{2}}{ \lambda}.\]
To get the lower bound, we use the definition of \(\hat{\mathcal{I}}_{\delta,k,h}^{\alpha_{0}}[u_{h}]\) and the fact that \(u_{h}\) is a subsolution of (3.13), to see that
\[-\,\hat{\mathcal{I}}_{\delta,k,h}^{\alpha_{0}}[u_{h}](x)\leq\sup_{ \alpha\in\mathcal{A}}\{-\hat{\mathcal{I}}_{\delta,k,h}^{\alpha}[u_{h}](x)\}\] \[\leq\lambda\,u_{h}(x)+\sup_{\alpha\in\mathcal{A}}\Big{\{}- \mathcal{L}_{\delta,k,h}^{\alpha}[u_{h}]-\,\mathcal{I}_{h}^{\alpha,\delta}[u_{h }](x)+f^{\alpha}(x)\Big{\}}+\Big{(}\lambda\|u_{h}\|_{0}+\|f^{\alpha}\|_{0}\Big{)}\] \[\leq\Big{(}\lambda\|u_{h}\|_{0}+\|f^{\alpha}\|_{0}\Big{)}.\]
In view of (A.2) and Theorem 3.4 this completes the proof of (5.10).
(ii) The upper bound for \(-\mathcal{J}_{h}^{\alpha_{0},\delta}[u_{h}]\) follows from the same reasoning that led to the upper bound in part (i). To prove the lower bound, we first note that \(\int_{\delta<|z|<1}\omega_{\mathbf{j}}(\eta^{\alpha_{0}}(z))\Big{(}\frac{d\nu_{\alpha_{0}}}{dz}(z)-\frac{c_{\alpha_{0}}}{|z|^{N+\sigma}}\Big{)}\,dz\geq 0\) by (**B.1**)\((i)\) and the fact that \(\omega_{\mathbf{j}}\geq 0\). By arguments similar to those that led to estimate (2.8), we then find that
\[\sum_{\mathbf{j}\in\mathbb{Z}^{N}}\big{(}u_{h}(x+x_{\mathbf{j}})-u_{h}(x)\big{)} \int_{\delta<|z|<1}\omega_{\mathbf{j}}(\eta^{\alpha_{0}}(z))\Big{(}\frac{d\nu _{\alpha_{0}}}{dz}(z)-\frac{c_{\alpha_{0}}}{|z|^{N+\sigma}}\Big{)}\,dz\leq \frac{K}{\lambda}.\]
Then by (5.10) (this bound also holds for \(\mathcal{I}_{h}^{\alpha_{0},\delta}[u_{h}]\), see the proof), \(\sum_{\mathbf{j}\in\mathbb{Z}^{N}}\omega_{\mathbf{j}}(\eta^{\alpha_{0}}(z))=1\), and (**A.4**), we have
\[-\mathcal{J}_{h}^{\alpha_{0},\delta}[u_{h}] \leq\frac{K}{\lambda}-\mathcal{I}_{h}^{\alpha_{0},\delta}[u_{h}]\] \[\leq K+C\|u_{h}\|_{0}\Big{(}\int_{|z|>1}\nu_{\alpha_{0}}(dz)+\int _{|z|>1}\frac{c_{\alpha_{0}}dz}{|z|^{N+\sigma}}\Big{)}\leq\,K+C\|u_{h}\|_{0}.\]
This completes the proof.
By Theorem 2.7 the solution \(u\) of (2.1) and its regularization \(u^{(\epsilon)}\) satisfy the bounds of Lemma 5.1 with \(\beta=\sigma-1\). We now show similar bounds for the solution \(u_{h}\) of the scheme (3.13) and regularizations of \(u_{h}\). The results will incorporate error terms due to truncation bounds for approximate operators.
**Lemma 5.6**.: _Assume (**A.1**)-(**A.7**), (**B.1**)-(**B.2**), \(\delta\in(0,1)\), \(\delta\geq h\), \(u_{h}\) solves (3.13), and \(\tilde{u}_{h}=u_{h}*\phi\) for \(0\leq\phi\in C^{\infty}(\mathbb{R}^{N})\) with \(\int_{\mathbb{R}^{N}}\phi\,dx=1\). Then there are \(K_{1},K_{2}>0\) independent of \(\delta,k,h\) and \(\phi\) such that_
\[(i)\quad\|(-\Delta)^{\frac{\sigma}{2}}[\tilde{u}_{h}]\|_{0}\leq K_{1}\Big{(} \|u_{h}\|_{0,1}+\delta^{2-\sigma}(\|u_{h}\|_{0}+\|D^{2}\tilde{u}_{h}\|_{0}) \Big{)},\]
\[(ii)\quad\|\tilde{u}_{h}\|_{1,\sigma-1}\leq K_{2}\Big{(}1+\|u_{h}\|_{0,1}+ \delta^{4-\sigma}\|D^{4}\tilde{u}_{h}\|_{0}+\delta^{2(2-\sigma)}k^{2}\|D^{4} \tilde{u}_{h}\|_{0}\\ +\frac{h^{2}}{k^{2}}\|D^{2}\tilde{u}_{h}\|_{0}+h^{2}\delta^{- \sigma}\|D^{2}\tilde{u}_{h}\|_{0}\Big{)}.\]
Note that a bound like (ii) follows from (i) by elliptic regularity, but bound (ii) is an improvement on any bound coming from (i).
Proof.: (i) Note that \(\eta^{\alpha_{0}}(0)=0\) by (**A.3**), \(z-\eta^{\alpha_{0}}(z)=\mathcal{O}(|z|^{2})\) by (**B.1**)\((ii)\), and \(\|\tilde{u}_{h}\|_{0,1}\leq\|u_{h}\|_{0,1}\) by properties of convolutions. By the definition of \(\mathcal{J}^{\alpha_{0}}\) (5.8), assumptions (**A.3**), (**A.5**)-(**A.7**), (**B.1**), and the truncation error bound (5.9),
\[|(-\Delta)^{\frac{\sigma}{2}}[\tilde{u}_{h}](x)|\] \[\leq|\mathcal{J}^{\alpha_{0}}[\tilde{u}_{h}](x)|+\Big{|}\int_{|z| <1}(z-\eta^{\alpha_{0}}(z))\cdot\nabla\tilde{u}_{h}(x)\,\frac{dz}{|z|^{N+ \sigma}}\Big{|}\] \[\quad+\Big{|}\int_{|z|>1}\tilde{u}_{h}(x+z)-\tilde{u}_{h}(x+\eta ^{\alpha_{0}}(z))\frac{dz}{|z|^{N+\sigma}}\Big{|}\] \[\leq\|\mathcal{J}^{\alpha_{0}}[\tilde{u}_{h}]\|_{0}+\|\nabla \tilde{u}_{h}\|_{0}\int_{|z|<1}\frac{K|z|^{2}\,dz}{|z|^{N+\sigma}}+2\|\tilde{u}_ {h}\|_{0}\int_{|z|>1}\frac{dz}{|z|^{N+\sigma}}\] \[\leq\|\mathcal{J}^{\alpha_{0},\delta}_{h}[\tilde{u}_{h}]\|_{0}+C (\|u_{h}\|_{0}+\|D^{2}\tilde{u}_{h}\|_{0})\big{(}\delta^{2-\sigma}+h^{2} \delta^{-\sigma}\big{)}+c_{\sigma}\|u_{h}\|_{0,1}.\]
The proof is complete since by Theorem 5.4 and properties of convolutions,
\[\|\mathcal{J}_{h}^{\alpha_{0},\delta}[\tilde{u}_{h}]\|_{0}\leq C\|\mathcal{J}_{h}^ {\alpha_{0},\delta}[u_{h}]\|_{0}\leq K.\]
(ii) By Theorem 5.4 and properties of convolutions, \(\|\widehat{\mathcal{I}}_{\delta,k,h}^{\alpha_{0}}[\tilde{u}_{h}]\|_{0}\leq C\| \widehat{\mathcal{I}}_{\delta,k,h}^{\alpha_{0}}[u_{h}]\|_{0}\leq K.\) From the error bounds (3.3), (3.9), and (3.11), it then follows that
\[\|\mathcal{I}^{\alpha_{0}}[\tilde{u}_{h}]\|_{0}\leq K_{1}\Big{(}K +\delta^{4-\sigma}\|D^{4}\tilde{u}_{h}\|_{0}+\delta^{2(2-\sigma)}k^{2}\|D^{4} \tilde{u}_{h}\|_{0}\\ +\frac{h^{2}}{k^{2}}\|D^{2}\tilde{u}_{h}\|_{0}+\frac{h^{2}}{ \delta^{\sigma}}\|D^{2}\tilde{u}_{h}\|_{0}\Big{)}. \tag{5.15}\]
We define the operator
\[\widetilde{\mathcal{J}}[\phi](x):= \int_{|z|<1}\big{(}\phi(x+z)-\phi(x)-z\cdot\nabla\phi(x)\big{)} \nu^{\alpha_{0}}(dz)\] \[+\int_{|z|>1}\big{(}\phi(x+z)-\phi(x)-z\cdot\nabla\phi(x)\big{)} \frac{c_{\alpha_{0}}}{|z|^{N+\sigma}}.\]
Since \(z-\eta^{\alpha_{0}}(z)=\mathcal{O}(|z|^{2})\) by (B.1)\((ii)\) and \(\|\tilde{u}_{h}\|_{0,1}\leq\|u_{h}\|_{0,1}\), by (5.15) we have
\[\big{|}\widetilde{\mathcal{J}}[\tilde{u}_{h}](x)\big{|}\leq\big{|}\mathcal{I}^{\alpha_{0}}[\tilde{u}_{h}](x)\big{|}\\ +\Big{|}\int_{|z|<1}\big{(}\tilde{u}_{h}(x+z)-\tilde{u}_{h}(x+\eta^{\alpha_{0}}(z))-(z-\eta^{\alpha_{0}}(z))\cdot\nabla\tilde{u}_{h}(x)\big{)}\nu^{\alpha_{0}}(dz)\Big{|}\\ +\Big{|}\int_{|z|>1}\big{(}\tilde{u}_{h}(x+z)-\tilde{u}_{h}(x)\big{)}\frac{c_{\alpha_{0}}dz}{|z|^{N+\sigma}}-\int_{|z|>1}\big{(}\tilde{u}_{h}(x+\eta^{\alpha_{0}}(z))-\tilde{u}_{h}(x)\big{)}\nu^{\alpha_{0}}(dz)\Big{|}\\ \leq\big{|}\mathcal{I}^{\alpha_{0}}[\tilde{u}_{h}](x)\big{|}+2\|\nabla\tilde{u}_{h}\|_{0}\int_{|z|<1}|z-\eta^{\alpha_{0}}(z)|\,\nu^{\alpha_{0}}(dz)+2\|\tilde{u}_{h}\|_{0}\int_{|z|>1}\frac{(c_{\alpha_{0}}+C)\,dz}{|z|^{N+\sigma}}\\ \leq C\Big{(}K+\delta^{4-\sigma}\|D^{4}\tilde{u}_{h}\|_{0}+\delta^{2(2-\sigma)}k^{2}\|D^{4}\tilde{u}_{h}\|_{0}\\ +\frac{h^{2}}{k^{2}}\|D^{2}\tilde{u}_{h}\|_{0}+\frac{h^{2}}{\delta^{\sigma}}\|D^{2}\tilde{u}_{h}\|_{0}+\|u_{h}\|_{0,1}\Big{)}. \tag{5.16}\]
Hence \(\widetilde{\mathcal{J}}[\tilde{u}_{h}]\in L^{\infty}(\mathbb{R}^{N})\) for fixed \(\delta\) and \(h\). By (B.1)\((i)\) and (A.6), the assumptions of the regularity result [31, Theorem 3.8] are satisfied, and we conclude that
\[\|\tilde{u}_{h}\|_{1,\sigma-1}\leq K\Big{(}\|\tilde{u}_{h}\|_{0}+\|\widetilde{ \mathcal{J}}[\tilde{u}_{h}]\|_{0}\Big{)}.\]
The result then follows from (5.16).
We now give approximation and derivative bounds for mollifications of \(u_{h}\) by the fractional heat kernel. These are discrete versions of Lemmas 5.2 and 5.3.
**Lemma 5.7**.: _Assume \(\delta\in(0,1)\), \(h\leq\delta\), \(\epsilon>0\), (A.1)-(A.5), (B.1)-(B.2), \(u_{h}\) solves (3.13), and its mollification \(u_{h}^{[\varepsilon]}\) is defined in (5.7). Then for \(m\geq 2\),_
\[\|D^{m}u_{h}^{[\varepsilon]}\|_{0}\leq K\frac{\|u_{h}\|_{0,1}}{\varepsilon^{m- \sigma}}\Big{(}1+\big{(}\delta^{4-\sigma}+\delta^{2(2-\sigma)}k^{2}\big{)} \frac{1}{\varepsilon^{3}}+\big{(}h^{2}k^{-2}+h^{2}\delta^{-\sigma}\big{)} \frac{1}{\varepsilon}\Big{)}.\]
Proof.: By Lemma 5.3 and Lemma 5.6\((ii)\) with \(\phi(x)=\tilde{K}^{\sigma}(\varepsilon^{\sigma},x)\),
\[\|D^{m}u_{h}^{[\varepsilon]}\|_{0} \leq\frac{K}{\epsilon^{m-\sigma}}\|u_{h}^{[\varepsilon_{1}]}\|_{1, \sigma-1}\] \[\leq\frac{K}{\varepsilon^{m-\sigma}}\Big{(}1+\|u_{h}\|_{0,1}+ \delta^{4-\sigma}\|D^{4}u_{h}^{[\varepsilon_{1}]}\|_{0}+\delta^{2(2-\sigma)}k ^{2}\|D^{4}u_{h}^{[\varepsilon_{1}]}\|_{0}\] \[\qquad\qquad\qquad\qquad\qquad+\frac{h^{2}}{k^{2}}\|D^{2}u_{h}^{ [\varepsilon_{1}]}\|_{0}+\frac{h^{2}}{\delta^{\sigma}}\|D^{2}u_{h}^{[ \varepsilon_{1}]}\|_{0}\Big{)},\]
where \(\varepsilon_{1}=\frac{\varepsilon}{2^{\frac{1}{\sigma}}}\). The result then follows from the first part of Lemma 5.3.
**Lemma 5.8**.: _Assume \(0<h\leq\delta\leq\epsilon\), \(\delta\in(0,1)\), (**A.1**)-(**A.5**), (**B.1**)-(**B.2**), \(u_{h}\) solves (3.13), and its mollification \(u_{h}^{[\varepsilon]}\) is defined in (5.7). Then_
\[\|u_{h}^{[\varepsilon]}-u_{h}\|_{0}\leq C\big{(}\delta+\varepsilon^{\sigma}+ \delta^{2-\sigma}\varepsilon^{2(\sigma-1)}+k^{2}\delta^{1-\sigma}+\frac{h^{2} }{k^{2}}\delta^{\sigma-1}\big{)}.\]
Proof.: Let \(S_{t}\) be the fractional heat semigroup (cf. the proof of Lemma 5.2) so that \(u_{h}^{[\varepsilon]}=S_{\varepsilon^{\sigma}}(u_{h})\). By properties of \(S_{t}\) and Lemmas 5.6\((i)\) and 5.7, we have
\[|S_{t}[u_{h}]-S_{s}[u_{h}]|=\Big{|}\int_{s}^{t}(-\Delta)^{\frac{\sigma}{2}}\big{[}S_{r}[u_{h}]\big{]}dr\Big{|}\leq C\int_{s}^{t}\big{(}1+\delta^{2-\sigma}(1+\|D^{2}u_{h}^{[r^{\frac{1}{\sigma}}]}\|_{0})\big{)}dr\] \[\leq C\int_{s}^{t}\Big{(}1+\frac{\delta^{2-\sigma}}{r^{\frac{2-\sigma}{\sigma}}}\Big{(}1+\frac{\delta^{4-\sigma}+\delta^{2(2-\sigma)}k^{2}}{r^{\frac{3}{\sigma}}}+\frac{h^{2}k^{-2}+h^{2}\delta^{-\sigma}}{r^{\frac{1}{\sigma}}}\Big{)}\Big{)}dr\] \[\leq C\Big{(}t+\delta^{2-\sigma}t^{\frac{2(\sigma-1)}{\sigma}}+\big{(}\delta^{6-2\sigma}+\delta^{6-3\sigma}k^{2}\big{)}s^{\frac{2\sigma-5}{\sigma}}+\big{(}h^{2}\delta^{2-2\sigma}+h^{2}\delta^{2-\sigma}k^{-2}\big{)}s^{\frac{2\sigma-3}{\sigma}}\Big{)}.\]
Since \(\|S_{s}[u_{h}]-u_{h}\|_{0}\leq Cs^{\frac{1}{\sigma}}\|u_{h}\|_{0,1}\), we then find that
\[\|S_{t}[u_{h}]-u_{h}\|_{0}\leq C\Big{(}s^{\frac{1}{\sigma}}+t+\delta^{2-\sigma}t^{\frac{2(\sigma-1)}{\sigma}}+\big{(}\delta^{6-2\sigma}+\delta^{6-3\sigma}k^{2}\big{)}s^{\frac{2\sigma-5}{\sigma}}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\big{(}h^{2}\delta^{2-2\sigma}+h^{2}\delta^{2-\sigma}k^{-2}\big{)}s^{\frac{2\sigma-3}{\sigma}}\Big{)}.\]
This estimate holds for any \(s\in(0,t)\). Note that since \(h\leq\delta\), the term \(h^{2}\delta^{-1}\) appearing below is bounded by \(\delta\). Take \(t=\varepsilon^{\sigma}\) and \(s=\delta^{\sigma}\) to find that
\[\|u_{h}^{[\varepsilon]}-u_{h}\|_{0} \leq C\Big{(}\delta+\varepsilon^{\sigma}+\delta^{2-\sigma} \varepsilon^{2(\sigma-1)}+\big{(}\delta^{6-2\sigma}+\delta^{6-3\sigma}k^{2} \big{)}\delta^{2\sigma-5}\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\big{(}h^{2} \delta^{2-2\sigma}+h^{2}\delta^{2-\sigma}k^{-2}\big{)}\delta^{2\sigma-3}\Big{)}.\] \[\leq C(\delta+\varepsilon^{\sigma}+\delta^{2-\sigma}\varepsilon^{2( \sigma-1)}+\delta+k^{2}\delta^{1-\sigma}+\delta+\frac{h^{2}}{k^{2}}\delta^{ \sigma-1}).\]
This completes the proof.
In the last proof the dependence on the parameters is only partially optimized, but the result is still good enough for our purposes - the optimal error bound that we prove next.
Proof of Theorem 3.8.: The proof is similar to the proof of Theorem 3.6, and only the case \(\sigma>1\) is new. Let \(\big{(}\rho_{\epsilon}\big{)}_{\epsilon>0}\) be the standard mollifier on \(\mathbb{R}^{N}\) and define \(u^{(\epsilon)}=u*\rho_{\epsilon}\). Since \(u\) is the viscosity solution of (2.1) and the equation is convex in \(u\), \(u^{(\epsilon)}\) is a smooth subsolution:
\[\lambda\,u^{(\epsilon)}+\sup_{\alpha\in\mathcal{A}}\Big{\{}(f^{\alpha})^{( \epsilon)}(x)-\mathcal{I}^{\alpha}[u^{(\epsilon)}]\Big{\}}\leq 0.\]
By Theorem 2.7, \(u\in C^{1,\sigma-1}(\mathbb{R}^{N})\), and by (B.2) and Lemma 5.1, \(\|f^{\alpha}-(f^{\alpha})^{(\varepsilon)}\|_{0}\leq K\varepsilon^{\sigma}\). Therefore, from the truncation error bounds (3.3), (3.9) and (3.11) we get
\[\lambda\,u^{(\epsilon)}+ \sup_{\alpha\in\mathcal{A}}\Big{\{}f^{\alpha}(x)-\mathcal{I}_{h} ^{\alpha}u^{(\epsilon)}\Big{\}}\leq\sup_{\alpha\in\mathcal{A}}\Big{[}\|f^{ \alpha}-(f^{\alpha})^{(\epsilon)}\|_{0}+\|\mathcal{I}_{h}^{\alpha}[u^{( \epsilon)}]-\mathcal{I}^{\alpha}[u^{(\epsilon)}]\|_{0}\Big{]}\] \[\leq C\epsilon^{\sigma}+C\Big{(}\delta^{4-\sigma}\|D^{4}u^{( \varepsilon)}\|_{0}+\delta^{2(2-\sigma)}k^{2}\|D^{4}u^{(\varepsilon)}\|_{0}\] \[\qquad\qquad\qquad\qquad+\frac{h^{2}}{k^{2}}\|D^{2}u^{(\varepsilon )}\|_{0}+\frac{h^{2}}{\delta^{\sigma}}\|D^{2}u^{(\varepsilon)}\|_{0}\Big{)}\] \[\leq C\Big{(}\epsilon^{\sigma}+\delta^{4-\sigma}\frac{1}{ \varepsilon^{4-\sigma}}+\delta^{2(2-\sigma)}k^{2}\frac{1}{\varepsilon^{4- \sigma}}+\frac{h^{2}}{k^{2}}\frac{1}{\varepsilon^{2-\sigma}}+\frac{h^{2}}{ \delta^{\sigma}}\frac{1}{\varepsilon^{2-\sigma}}\Big{)}:=A_{\varepsilon}.\]
Hence \(u^{(\epsilon)}-\frac{C}{\lambda}A_{\varepsilon}\) is a subsolution of the equation (3.13), and the comparison principle for (3.13) then implies that \(u^{(\epsilon)}-\frac{C}{\lambda}A_{\varepsilon}\leq u_{h}\). By Theorem 2.7 and Lemma 5.1, \(\|u^{(\epsilon)}-u\|\leq K\epsilon^{\sigma}\), and we conclude that
\[u(x)-u_{h}(x)\leq C\epsilon^{\sigma}+\frac{C}{\lambda}A_{\varepsilon}.\]
Minimizing by taking \(k^{2}=O\big{(}\frac{h\varepsilon}{\delta^{2-\sigma}}\big{)}\), \(\delta=O\big{(}h^{\frac{1}{2}}\varepsilon^{\frac{1}{2}}\big{)}\), and \(\varepsilon=O\big{(}h^{\frac{4-\sigma}{4+\sigma}}\big{)}\), leads to
\[u(x)-u_{h}(x)\leq Kh^{\frac{\sigma(4-\sigma)}{4+\sigma}}.\]
The lower bound on \(u-u_{h}\) follows from a similar argument based on the solution \(u_{h}\) of the scheme (3.13). For technical reasons, we need to work with a different regularisation \(u_{h}^{[\epsilon]}\) based on the fractional heat kernel, see the definition in (5.7). Since \(u_{h}\) solves (3.13), we have
\[\lambda\,u_{h}^{[\epsilon]}+\sup_{\alpha\in\mathcal{A}}\Big{\{}(f^{\alpha})^{ [\epsilon]}(x)-\mathcal{I}_{h}^{\alpha}[u_{h}^{[\epsilon]}]\Big{\}}\leq 0.\]
By (B.2) and Lemma 5.2, \(\|f^{\alpha}-(f^{\alpha})^{[\epsilon]}\|\leq C\epsilon^{\sigma}\), and then by Lemmas 3.1, 3.2 and 3.3,
\[\lambda\,u_{h}^{[\epsilon]}+\sup_{\alpha\in\mathcal{A}}\Big{\{}f ^{\alpha}(x)-\mathcal{I}^{\alpha}[u_{h}^{[\epsilon]}]\Big{\}}\leq K\epsilon^ {\sigma}+\|\mathcal{I}^{\alpha}[u_{h}^{[\epsilon]}]-\mathcal{I}_{h}^{\alpha} [u_{h}^{[\epsilon]}]\|_{0}\] \[\leq K\epsilon^{\sigma}+C\Big{(}\big{(}\delta^{4-\sigma}+\delta^{ 2(2-\sigma)}k^{2}\big{)}\|D^{4}u_{h}^{[\epsilon]}\|_{0}+\big{(}h^{2}k^{-2}+h^ {2}\delta^{-\sigma}\big{)}\|D^{2}u_{h}^{[\epsilon]}\|_{0}\Big{)}:=B_{\varepsilon}.\]
Hence \(u_{h}^{[\epsilon]}-\frac{C}{\lambda}B_{\epsilon}\) is a subsolution of equation (2.1), and the comparison principle for (2.1) then implies that \(u_{h}^{[\epsilon]}-u\leq\frac{C}{\lambda}\,B_{\epsilon}.\) Therefore by Lemma 5.8 and the bounds on \(\|D^{4}u_{h}^{[\epsilon]}\|_{0}\) and \(\|D^{2}u_{h}^{[\epsilon]}\|_{0}\) from Lemma 5.7, we get
\[u_{h}- u\leq C\Big{(}\delta+\varepsilon^{\sigma}+\delta^{2-\sigma} \varepsilon^{2(\sigma-1)}+k^{2}\delta^{1-\sigma}+\frac{h^{2}}{k^{2}}\delta^{ \sigma-1}\Big{)}\] \[+C\big{(}\delta^{4-\sigma}+\delta^{2(2-\sigma)}k^{2}\big{)}\frac{1 +(\delta^{4-\sigma}+\delta^{2(2-\sigma)}k^{2})\frac{1}{\varepsilon^{3}}+(h^{2 }k^{-2}+h^{2}\delta^{-\sigma})\frac{1}{\varepsilon}}{\varepsilon^{4-\sigma}}\] \[+C\big{(}h^{2}k^{-2}+h^{2}\delta^{-\sigma}\big{)}\frac{1+(\delta^{ 4-\sigma}+\delta^{2(2-\sigma)}k^{2})\frac{1}{\varepsilon^{3}}+(h^{2}k^{-2}+h^ {2}\delta^{-\sigma})\frac{1}{\varepsilon}}{\varepsilon^{2-\sigma}}.\]
As in the proof of the upper bound, we now take \(k^{2}=O\big{(}\delta^{\sigma}\big{)}\) so that
\[u_{h}-u\leq K\Big{(}\varepsilon^{\sigma}+\delta^{2-\sigma}\varepsilon^{2(\sigma-1 )}+\delta+h^{2}\delta^{-1}\Big{)}\] \[\quad+C\Big{(}\frac{h^{2}\delta^{-\sigma}}{\varepsilon^{2-\sigma}} +\frac{h^{4}\delta^{-2\sigma}}{\varepsilon^{3-\sigma}}+\frac{\delta^{4-\sigma} }{\varepsilon^{4-\sigma}}+2\frac{h^{2}\delta^{4-2\sigma}}{\varepsilon^{5- \sigma}}+\frac{\delta^{8-2\sigma}}{\varepsilon^{7-\sigma}}\Big{)}\] \[= A_{1}+A_{2}.\]
To continue, note that we can factor the second term:
\[A_{2}=C\frac{1}{\varepsilon^{1-\sigma}}\Big{(}\frac{h^{2}}{\varepsilon\delta^{ \sigma}}+\frac{\delta^{4-\sigma}}{\varepsilon^{3}}\Big{)}\Big{(}1+\frac{h^{2} }{\varepsilon\delta^{\sigma}}+\frac{\delta^{4-\sigma}}{\varepsilon^{3}}\Big{)}.\]
Taking \(\delta=O\big{(}h^{\frac{1}{2}}\varepsilon^{\frac{1}{2}}\big{)}\) as in the upper bound, we balance terms in \(A_{2}\), and \(A_{2}=\frac{1}{\varepsilon^{1-\sigma}}a(1+a)\) for \(a^{2}=O(\frac{h^{4-\sigma}}{\varepsilon^{2+\sigma}})\). Finally (as for the upper bound) we take \(\varepsilon=O\big{(}h^{\frac{4-\sigma}{4+\sigma}}\big{)}\). Then it is easy to check (for \(h<1\)) that \(a=O(\varepsilon)\) and
\[A_{2}\leq O\Big{(}\frac{a}{\varepsilon^{1-\sigma}}\Big{)}=O(\varepsilon^{ \sigma})=O(h^{\frac{\sigma(4-\sigma)}{4+\sigma}}).\]
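Indeed, with \(\varepsilon=h^{\frac{4-\sigma}{4+\sigma}}\) the claim \(a=O(\varepsilon)\) is a short exponent computation:

\[a^{2}=O\Big{(}\frac{h^{4-\sigma}}{\varepsilon^{2+\sigma}}\Big{)}=O\Big{(}h^{(4-\sigma)-\frac{(4-\sigma)(2+\sigma)}{4+\sigma}}\Big{)}=O\Big{(}h^{\frac{2(4-\sigma)}{4+\sigma}}\Big{)}=O(\varepsilon^{2}),\]

so indeed \(a=O(\varepsilon)\) and \(a/\varepsilon^{1-\sigma}=O(\varepsilon^{\sigma})\).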
In the remaining \(A_{1}\) term, using \(h\leq\delta\leq\varepsilon\) to estimate the second and fourth terms, and a direct computation for the \(\delta\)-term, we find that the second and fourth terms are \(O(\varepsilon^{\sigma})\) and \(O(h)\), while \(\delta=O(h^{\frac{4}{4+\sigma}})\). Since \(\frac{\sigma(4-\sigma)}{4+\sigma}\leq\frac{4}{4+\sigma}\leq 1\), we conclude that
\[u_{h}-u\leq Ch^{\frac{\sigma(4-\sigma)}{4+\sigma}}.\]
This completes the proof of the theorem.
## 6. Proof of error bound for powers of discrete Laplacian
We start with a bound analogous to the one in Theorem 5.4, uniform in \(h\).
**Theorem 6.1**.: _Assume (**A.1**)-(**A.5**), (**B.1**)-(**B.2**), and \(u_{h}\) solves (4.3). Then there is \(K>0\) independent of \(h\) such that_
\[\|(-\Delta_{h})^{\frac{\sigma}{2}}[u_{h}]\|_{0}\leq K.\]
We omit the proof, which is similar to the proof of Theorem 5.4 but simpler, since this time there is no diffusion correction term in the approximation. Next we state the analogues of Lemmas 5.6 and 5.7 for the regularisations \(u_{h}^{[\varepsilon]}(x)\) by the fractional heat semigroup defined in (5.7).
**Lemma 6.2**.: _Assume \(\sigma>1\), (**A.1**)-(**A.7**), (**B.1**)-(**B.2**), \(u_{h}\) solves (4.3), and \(u_{h}^{[\varepsilon]}\) is defined in (5.7). Then there is \(K>0\) independent of \(h\) and \(\varepsilon\) such that_
\[\|u_{h}^{[\varepsilon]}\|_{1,\sigma-1}\leq K\Big{(}1+\frac{h^{2}}{\varepsilon^ {3}}\Big{)}, \tag{6.1}\]
_and for \(m\geq 2\),_
\[\|D^{m}u_{h}^{[\varepsilon]}\|_{0}\leq\frac{K}{\varepsilon^{m-\sigma}}\Big{(} 1+\frac{h^{2}}{\varepsilon^{3}}\Big{)}.\]
Proof.: By Theorem 6.1 and properties of \(\tilde{K}^{\sigma}\), \(\|(-\Delta_{h})^{\frac{\sigma}{2}}[u_{h}^{[\varepsilon]}]\|_{0}\leq C\|(- \Delta_{h})^{\frac{\sigma}{2}}[u_{h}]\|_{0}\leq CK\), and we conclude from the truncation error bound (4.5) that
\[\|(-\Delta)^{\frac{\sigma}{2}}u_{h}^{[\varepsilon]}\|_{0}\leq K_{1}\Big{(}\|( -\Delta_{h})^{\frac{\sigma}{2}}[u_{h}]\|_{0}+h^{2}(\|D^{4}u_{h}^{[\varepsilon] }\|_{0}+\|u_{h}^{[\varepsilon]}\|_{0})\Big{)}. \tag{6.2}\]
Since \(\|D^{m}u_{h}^{[\varepsilon]}\|_{0}\leq\frac{C}{\varepsilon^{m-1}}\|u_{h}\|_{0,1}\) by Lemma 5.3, estimate (6.1) follows from the regularity estimate [58, Theorem 1.1(a)] by Ros-Oton and Serra for fractional Laplace operators. The second part follows from (6.1) and Lemma 5.3.
We give a version of Lemma 5.8 for powers of the discrete fractional Laplacian.
**Lemma 6.3**.: _Assume \(\sigma>1\), \(0<h\leq\epsilon^{\frac{4-\sigma}{2}}\), (**A**.1)-(**A**.5), (**B**.1)-(**B**.2), \(u_{h}\) solves (4.3), and \(u_{h}^{[\varepsilon]}\) is defined in (5.7). Then_
\[\|u_{h}^{[\varepsilon]}-u_{h}\|_{0}\leq K\Big{(}\varepsilon^{\sigma}\|(- \Delta_{h})^{\frac{\sigma}{2}}[u_{h}]\|_{0}+h^{\frac{2}{4-\sigma}}\|u_{h}\|_{0,1}\Big{)}. \tag{6.3}\]
Proof.: The proof is similar to the proof of Lemma 5.8. By definition (5.7), \(u_{h}^{[\varepsilon]}=S_{r}(u_{h})\) where \(\varepsilon=r^{\frac{1}{\sigma}}\) and \(S_{t}\) is the fractional heat semigroup. Therefore using properties of heat kernels, estimate (6.2), and the first part of Lemma 5.3, we have
\[|S_{t}(u_{h})-S_{s}(u_{h})|\leq\int_{s}^{t}K\Big{(}\|(-\Delta_{h} )^{\frac{\sigma}{2}}u_{h}\|_{0}+\frac{h^{2}}{r^{\frac{3}{\sigma}}}\|u_{h}\|_{0,1}\Big{)}\,dr\] \[\quad\leq K(t-s)\|(-\Delta_{h})^{\frac{\sigma}{2}}u_{h}\|_{0}+Kh^ {2}\|u_{h}\|_{0,1}\Big{(}\frac{1}{s^{\frac{3-\sigma}{\sigma}}}-\frac{1}{t^{ \frac{3-\sigma}{\sigma}}}\Big{)},\] \[|S_{s}(u_{h})-u_{h}|=\Big{|}\int_{\mathbb{R}^{N}}\Big{(}u_{h}(x-s ^{\frac{1}{\sigma}}y)-u_{h}(x)\Big{)}K^{\sigma}(y)\,dy\Big{|}\leq Ks^{\frac{1} {\sigma}}\|u_{h}\|_{0,1}.\]
Moreover,
\[|S_{t}(u_{h})-u_{h}|\leq K\Big{(}t\,\|(-\Delta_{h})^{\frac{\sigma}{2}}u_{h}\|_ {0}+\Big{(}\frac{h^{2}}{s^{\frac{3-\sigma}{\sigma}}}+s^{\frac{1}{\sigma}} \Big{)}\|u_{h}\|_{0,1}\Big{)}.\]
The result now follows by taking \(t=\varepsilon^{\sigma}\) and \(s=h^{\frac{2\sigma}{4-\sigma}}\), noting that \(s\leq t\) by the assumption that \(h\leq\varepsilon^{\frac{4-\sigma}{2}}\).
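Explicitly, the assumption \(h\leq\varepsilon^{\frac{4-\sigma}{2}}\) gives

\[s=h^{\frac{2\sigma}{4-\sigma}}\leq\big{(}\varepsilon^{\frac{4-\sigma}{2}}\big{)}^{\frac{2\sigma}{4-\sigma}}=\varepsilon^{\sigma}=t.\]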
Proof of Theorem 4.2.: The case \(\sigma<1\) uses no more than Lipschitz continuity of solutions and follows in a straightforward way from the local truncation error bound in Lemma 4.1 and the regularisation/comparison arguments in the proof of Theorem 3.6. Therefore we focus on the case \(\sigma>1\). The arguments are the same as in the proof of Theorem 3.8. To prove the upper bound on \(u-u_{h}\), we regularize equation (4.1) and use the truncation error bound (4.5) to find that
\[\lambda\,u^{(\epsilon)}+\sup_{\alpha\in\mathcal{A}}\Big{\{}f^{ \alpha}(x)+a^{\alpha}(-\Delta_{h})^{\frac{\sigma}{2}}u^{(\epsilon)}\Big{\}}\] \[\leq\sup_{\alpha\in\mathcal{A}}\|f^{\alpha}-(f^{\alpha})^{( \epsilon)}\|_{0}+\sup_{\alpha\in\mathcal{A}}a^{\alpha}\|(-\Delta_{h})^{\frac{ \sigma}{2}}u^{(\epsilon)}-(-\Delta)^{\frac{\sigma}{2}}u^{(\epsilon)}\|_{0}\] \[\leq K\epsilon^{\sigma}+Ch^{2}\Big{(}\|D^{4}u^{(\epsilon)}\|_{0}+ \|u^{(\epsilon)}\|_{0}\Big{)}.\]
Hence \(u^{(\epsilon)}-\frac{C}{\lambda}\big{(}\epsilon^{\sigma}+h^{2}(\|D^{4}u^{( \epsilon)}\|_{0}+\|u^{(\epsilon)}\|_{0})\big{)}\) is a subsolution of equation (4.3). By the comparison principle for (4.3), regularity of \(u\) given by Theorem 2.7, and the bounds given by Lemma 5.1, we have
\[u(x)-u_{h}(x)\leq K\Big{(}\epsilon^{\sigma}+\frac{h^{2}}{\epsilon^{4-\sigma}} \Big{)}.\]
We optimize the right hand side by choosing \(\epsilon=O\big{(}h^{\frac{1}{2}}\big{)}\) and get
\[u(x)-u_{h}(x)\leq Kh^{\frac{\sigma}{2}}.\]
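The choice of \(\epsilon\) comes from balancing the two competing terms:

\[\epsilon^{\sigma}=\frac{h^{2}}{\epsilon^{4-\sigma}}\iff\epsilon^{4}=h^{2}\iff\epsilon=h^{\frac{1}{2}},\]

at which point both terms are of order \(h^{\frac{\sigma}{2}}\).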
To prove the lower bound we mollify/regularize the scheme (4.3) using the fractional heat semigroup. Then by Lemma 5.2 for \(f^{\alpha}\) and the truncation error (4.5),
\[\lambda\,u_{h}^{[\epsilon]}+\sup_{\alpha\in\mathcal{A}}\Big{\{}f^{\alpha}(x)+a^{ \alpha}(-\Delta)^{\frac{\sigma}{2}}u_{h}^{[\epsilon]}\Big{\}}\leq C\epsilon^{ \sigma}+Ch^{2}\Big{(}\|D^{4}u_{h}^{[\epsilon]}\|_{0}+\|u_{h}^{[\epsilon]}\|_{0 }\Big{)}.\]
Therefore \(u_{h}^{[\epsilon]}-\frac{C}{\lambda}\big{(}\epsilon^{\sigma}+h^{2}\|D^{4}u_{h}^{[\epsilon]}\|_{0}+h^{2}\|u_{h}^{[\epsilon]}\|_{0}\big{)}\) is a subsolution of equation (4.1), and comparison for (4.1) then yields
\[u_{h}^{[\epsilon]}-u\leq\frac{C}{\lambda}\Big{(}\epsilon^{\sigma}+h^{2}\|D^{ 4}u_{h}^{[\epsilon]}\|_{0}+h^{2}\|u_{h}^{[\epsilon]}\|_{0}\Big{)}.\]
Then by Lemma 6.3 (needs \(h\leq\varepsilon^{\frac{4-\sigma}{2}}\)) and the \(\|D^{4}u_{h}^{[\epsilon]}\|_{0}\)-bound of Lemma 6.2,
\[u_{h}-u\leq C\Big{(}\epsilon^{\sigma}+h^{\frac{2}{4-\sigma}}+\frac{h^{2}}{ \epsilon^{4-\sigma}}+\frac{h^{4}}{\epsilon^{7-\sigma}}\Big{)}.\]
Optimizing in \(\epsilon\) by choosing \(\epsilon=O\big{(}h^{\frac{1}{2}}\big{)}\), we get the final estimate
\[u_{h}-u\leq K\big{(}h^{\frac{\sigma}{2}}+h^{\frac{2}{4-\sigma}}\big{)}.\]
The result now follows since \(\frac{2}{4-\sigma}>\frac{\sigma}{2}\) for \(\sigma>1\) and \(h=\varepsilon^{2}\leq\varepsilon^{\frac{2}{4-\sigma}}\) (for \(h<1\)).
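The comparison of the two exponents reduces to

\[\frac{2}{4-\sigma}>\frac{\sigma}{2}\iff 4>\sigma(4-\sigma)\iff(\sigma-2)^{2}>0,\]

which holds for every \(\sigma\in(1,2)\).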
## 7. Extensions
In this section we discuss two related extensions of our previous results: (i) to nonlocal HJB equations with drift/advection terms, and (ii) to jump diffusions with nonsymmetric singular parts in the sense that we drop condition (A.7). Consider
\[\sup_{\alpha\in\mathcal{A}}\big{\{}f^{\alpha}(x)+c^{\alpha}(x)u(x)-b^{\alpha} \cdot\nabla u(x)-\mathcal{I}^{\alpha}[u](x)\big{\}}=0,\quad\text{in }\mathbb{R}^{N}, \tag{7.1}\]
where \(b^{\alpha}\in\mathbb{R}^{N}\) and a modified version of (A.2) holds:
* **(A.2')** There is a \(K>0\) such that \[\|f^{\alpha}\|_{1}+\|c^{\alpha}\|_{1}+|b^{\alpha}|+\|\eta^{\alpha}\|_{0}\leq K \quad\text{for}\quad\alpha\in\mathcal{A}.\]
Under assumptions (A.1), (A.2'), (A.3), (A.4) equation (7.1) is well-posed, comparison holds, and the \(C_{b}\) and Lipschitz bounds of Proposition 2.2 hold. The proof is the same as for Proposition 2.2. Note that \(x\)-independent \(b^{\alpha}\) is consistent with \(x\)-independent \(\eta^{\alpha}\) in (1.2) and simplifies the presentation below.
Dropping (A.7) means that
\[\tilde{b}^{\alpha,\delta}:=\int_{\delta<|z|<1}\eta^{\alpha}(z)\,\nu_{\alpha}( dz)\neq 0,\]
and there is a new drift term in our equation. We can write the nonlocal term as
\[\mathcal{I}^{\alpha}[\phi](x)=\mathcal{I}^{\alpha}_{\delta}[\phi](x)+\mathcal{ I}^{\alpha,\delta}[\phi](x)-\tilde{b}^{\alpha,\delta}\cdot\nabla\phi(x),\]
where \(\mathcal{I}^{\alpha}_{\delta}\), \(\mathcal{I}^{\alpha,\delta}\) are defined in Section 3. The term \(\tilde{b}^{\alpha,\delta}\) is bounded under a \(C^{1,1}\) condition for \(\eta^{\alpha}\) at \(z=0\), a uniform in \(\alpha\) version of assumption (B.1) (ii):
* **(A.8)** There is \(K>0\) such that \[|\eta^{\alpha}(z)+\eta^{\alpha}(-z)-2\eta^{\alpha}(0)|\leq K|z|^{2}\qquad \text{for}\qquad|z|<1,\quad\alpha\in\mathcal{A}.\]
This assumption is satisfied in most applications. The next result is a version of Lemma 3.1 without (A.7).
**Lemma 7.1**.: _Assume (A.1), (A.2'), (A.3) - (A.6) and \(\delta\in(0,1)\)._
1. _There is_ \(K>0\) _independent of_ \(\delta,\alpha,\phi\) _such that_ \[|\mathcal{I}^{\alpha}_{\delta}[\phi]-tr[a^{\alpha}_{\delta}D^{2}\phi]|\leq K \delta^{3-\sigma}\|D^{3}\phi\|_{0}.\] (7.2)
2. _If also (_A.8_) holds, there is_ \(C>0\) _independent of_ \(\delta,\alpha\) _such that_ \(|\tilde{b}^{\alpha,\delta}|\leq C\)_._
Proof.: (i) The proof is similar to the proof of Lemma 3.1. After a Taylor expansion of \(\phi\), we find that
\[\mathcal{I}^{\alpha}_{\delta}[\phi](x)=tr[a^{\alpha}_{\delta}D^{2}\phi]+Err_{ 1,\delta},\]
where \(Err_{1,\delta}=\sum_{|\beta|=3}\frac{|\beta|}{\beta!}\int_{|z|<\delta}\int_{0}^{1}(1-s)^{|\beta|-1}D^{\beta}\phi(x+s\eta^{\alpha}(z))\,\eta^{\alpha}(z)^{\beta}\,ds\,\nu_{\alpha}(dz)\) and \(a^{\alpha}_{\delta}\) is defined in Lemma 3.1. By (A.6) we have \(|Err_{1,\delta}|\leq C\delta^{3-\sigma}\|D^{3}\phi\|_{0}\).
(ii) Since \(\eta^{\alpha}(0)=0\) by (A.3), assumptions (A.5) and (A.8) lead to
\[\big{|}\tilde{b}^{\alpha,\delta}\big{|}=\frac{1}{2}\Big{|}\int_{\delta<|z|<1} \big{(}\eta^{\alpha}(z)+\eta^{\alpha}(-z)\big{)}\,\nu_{\alpha}(dz)\Big{|}\leq \,K\int_{\delta<|z|<1}|z|^{2}\,\nu_{\alpha}(dz).\]
By (A.4), this completes the proof.
Following the approach of Section 3, to discretize (7.1) we first approximate small jumps by a diffusion. This leads to equation (3.2) with a redefined operator \(\mathcal{L}^{\alpha}_{\delta}\) to account for the drift:
\[\mathcal{L}^{\alpha}_{\delta}[\phi](x):=tr[a^{\alpha}_{\delta}D^{2}\phi](x)+b^ {\alpha}_{\delta}\cdot\nabla\phi(x),\qquad b^{\alpha}_{\delta}=b^{\alpha}- \tilde{b}^{\alpha,\delta}, \tag{7.3}\]
where \(b^{\alpha}_{\delta}\) is bounded under (A.2') and (A.8). Then we approximate \(\mathcal{L}^{\alpha}_{\delta}\) by
\[\bar{\mathcal{L}}^{\alpha}_{\delta,k,h}[\phi]=\mathcal{L}^{\alpha}_{\delta,k,h}[\phi]+\sum_{j=1}^{N}\big{(}b^{\alpha,+}_{j}\,\delta_{h,e_{j}}\phi+b^{\alpha,-}_{j}\,\delta_{h,-e_{j}}\phi\big{)}, \tag{7.4}\]
where \(\mathcal{L}^{\alpha}_{\delta,k,h}\) is defined in (3.7), \(e_{j}\) are the standard basis vectors in \(\mathbb{R}^{N}\), \(b^{\alpha}_{\delta}=(b^{\alpha}_{1},\cdots,b^{\alpha}_{N})\), \(b^{\alpha,\pm}_{j}=\max\{\pm b^{\alpha}_{j},0\}\), and
\[\delta_{h,l}u(x)=\frac{u(x+hl)-u(x)}{h}\qquad\text{for}\qquad l\in\mathbb{R}^ {N},\;l\neq 0.\]
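For instance, in one space dimension with \(b^{\alpha}_{1}>0\) (so that \(b^{\alpha,-}_{1}=0\)), the drift part of (7.4) reduces to the one-sided difference

\[b^{\alpha,+}_{1}\,\delta_{h,e_{1}}\phi(x)=b^{\alpha}_{1}\,\frac{\phi(x+h)-\phi(x)}{h},\]

so each difference quotient enters with a sign that keeps the overall scheme monotone.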
Here the drift term is discretized by an upwind finite difference method10, and the total discretization is still monotone. We estimate the truncation error next.
Footnote 10: This is just an example; many other monotone discretizations would also work here, including semi-Lagrangian (SL) schemes.
**Lemma 7.2**.: _Assume (_A.1_), (_A.2'_), (_A.3_)-(_A.6_), (_A.8_), \(\phi\in C^{4}(\mathbb{R}^{N})\), and \(\mathcal{L}^{\alpha}_{\delta}\) and \(\bar{\mathcal{L}}^{\alpha}_{\delta,k,h}\) are defined by (7.3) and (7.4). Then there is \(K\) independent of \(h,k,\delta\) such that_
\[\big{|}\bar{\mathcal{L}}^{\alpha}_{\delta,k,h}[\phi]-\mathcal{L}^{\alpha}_{ \delta}[\phi]\big{|}\leq K\Big{(}h\|D^{2}\phi\|_{0}+\delta^{2(2-\sigma)}k^{2} \|D^{4}\phi\|_{0}+\frac{h^{2}}{k^{2}}\|D^{2}\phi\|_{0}\Big{)}. \tag{7.5}\]
Proof.: The first term on the right hand side of (7.5) is classical and due to the approximation of the drift. The remaining terms come from Lemma 3.2.
The numerical scheme for equation (7.1) is defined by
\[\sup_{\alpha\in\mathcal{A}}\Big{\{}f^{\alpha}(x)+c^{\alpha}(x)u(x)-\bar{ \mathcal{L}}^{\alpha}_{\delta,k,h}[u](x)-\mathcal{I}^{\alpha,\delta}_{h}[u](x) \Big{\}}=0\quad\text{in}\quad\mathbb{R}^{N}, \tag{7.6}\]
where \(\bar{\mathcal{L}}^{\alpha}_{\delta,k,h}\) and \(\mathcal{I}^{\alpha,\delta}_{h}\) are given by (7.4) and (3.10). This is a consistent, monotone, and \(L^{\infty}\)-stable scheme. In the strongly degenerate case, an error estimate is given by the next result.
**Theorem 7.3**.: _Assume \(\sigma\in(0,2)\), \(h,k\in(0,1)\), \(\delta\geq h\), (**A.1**), (**A.2**'), (**A.3**)-(**A.6**), (**A.8**), and that \(u\) and \(u_{h}\) solve (7.1) and (7.6)._
* _If_ \(k^{2}=O(\frac{h^{2}}{\delta^{2-\frac{\sigma}{2}}})\)_, then there is_ \(C>0\) _such that_ \[|u-u_{h}|\leq Ch^{\min\{\frac{1}{2},\,\frac{2(3-\sigma)}{6+\sigma}\}}. \tag{7.7}\]
* _When (_A.7_) holds and_ \(k^{2}=O(\frac{h^{2}}{\delta^{2-\frac{\sigma}{2}}})\)_, then there is_ \(C>0\) _such that_ \[|u-u_{h}|\leq Ch^{\min\{\frac{1}{2},\,\frac{4-\sigma}{4+\sigma}\}}. \tag{7.8}\]
**Remark 7.4**.: (a) When \(\sigma\leq 1\), the error cannot be better than \(\mathcal{O}(h^{\frac{1}{2}})\) because of the (local) drift term in (7.1). In this case the diffusion correction does not improve the rate as it did in Section 3.
(b) Under assumption (**A.7**), we get improved convergence rates for \(\sigma>1\), see Theorem 7.3 (b). The rate approaches \(\frac{1}{3}\) as \(\sigma\to 2\), compared to \(\frac{1}{4}\) in part (a).
_Sketch of proof:_ The proof is similar to the proof of Theorem 3.6; we only explain the main differences. In view of Lemmas 3.1 and 7.1, replacing Lemma 3.2 by Lemma 7.2 when estimating (5.1), the constant \(M_{\epsilon,\delta}\) in (5.2) gets an \(O(\frac{h}{\epsilon})\) contribution from the drift and becomes
\[M_{\epsilon,\delta}\,=\begin{cases}\delta^{3-\sigma}\,\frac{1}{\epsilon^{2}} +h\,\frac{1}{\epsilon}+k^{2}\,\delta^{2(2-\sigma)}\,\frac{1}{\epsilon^{3}}+ \frac{h^{2}}{k^{2}}\frac{1}{\epsilon}+\frac{h^{2}}{\delta^{\sigma}}\,\frac{1} {\epsilon}&\text{for part (a)},\\ \delta^{4-\sigma}\,\frac{1}{\epsilon^{3}}+h\,\frac{1}{\epsilon}+k^{2}\,\delta ^{2(2-\sigma)}\,\frac{1}{\epsilon^{3}}+\frac{h^{2}}{k^{2}}\frac{1}{\epsilon}+ \frac{h^{2}}{\delta^{\sigma}}\,\frac{1}{\epsilon}&\text{for part (b)}.\end{cases} \tag{7.9}\]
In case (a) the nonlocal operator is not symmetric, so we have used Lemma 7.1 (i) to get the first term. By (5.3) and (5.4) we get \(|u-u_{h}|\leq C(\epsilon+M_{\epsilon,\delta})\) and optimize with respect to \(k,\delta,\) and \(\epsilon\). First we take \(k^{2}=O\big{(}\frac{h\epsilon}{\delta^{2-\sigma}}\big{)}\), then using \(h\leq\delta\), we take \(\delta=O\big{(}h^{\frac{2}{3}}\epsilon^{\frac{1}{3}}\big{)}\) for part (a) and \(\delta=O\big{(}h^{\frac{1}{2}}\epsilon^{\frac{1}{2}}\big{)}\) for part (b) to get
\[|u-u_{h}|\leq\begin{cases}C\big{(}h^{\frac{2}{3}(3-\sigma)}\epsilon^{-\frac{1} {3}(3+\sigma)}+h\frac{1}{\epsilon}+\epsilon\big{)}&\text{for part (a)},\\ C\big{(}h^{\frac{1}{2}(4-\sigma)}\epsilon^{-\frac{1}{2}(2+\sigma)}+h\frac{1}{ \epsilon}+\epsilon\big{)}&\text{for part (b)}.\end{cases} \tag{7.10}\]
For part (a), the rate (7.7) follows by choosing \(\epsilon=O\big{(}\max\big{\{}h^{\frac{1}{2}},h^{\frac{2(3-\sigma)}{6+\sigma}} \big{\}}\big{)}\), i.e. \(\epsilon=O\big{(}h^{\frac{1}{2}}\big{)}\) for \(0<\sigma\leq\frac{6}{5}\), and \(\epsilon=O\big{(}h^{\frac{2(3-\sigma)}{6+\sigma}}\big{)}\) for \(\frac{6}{5}\leq\sigma<2\). For part (b), the convergence rate (7.8) follows by choosing \(\epsilon\) optimally as \(\epsilon=O\big{(}\max\big{\{}h^{\frac{1}{2}},h^{\frac{4-\sigma}{4+\sigma}} \big{\}}\big{)}\), i.e. \(\epsilon=O\big{(}h^{\frac{1}{2}}\big{)}\) for \(0<\sigma\leq\frac{4}{3}\) and \(\epsilon=O\big{(}h^{\frac{4-\sigma}{4+\sigma}}\big{)}\) for \(\frac{4}{3}\leq\sigma<2\).
|
2305.19923 | MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL | Recently, the diffusion model has shone as a promising backbone for the sequence
modeling paradigm in offline reinforcement learning (RL). However, these works
mostly lack the generalization ability across tasks with reward or dynamics
change. To tackle this challenge, in this paper we propose a task-oriented
conditioned diffusion planner for offline meta-RL (MetaDiffuser), which
considers the generalization problem as a conditional trajectory generation task
with contextual representation. The key is to learn a context-conditioned
diffusion model which can generate task-oriented trajectories for planning
across diverse tasks. To enhance the dynamics consistency of the generated
trajectories while encouraging trajectories to achieve high returns, we further
design a dual-guided module in the sampling process of the diffusion model. The
proposed framework enjoys the robustness to the quality of collected warm-start
data from the testing task and the flexibility to incorporate different
task representation methods. The experiment results on MuJoCo benchmarks show
that MetaDiffuser outperforms other strong offline meta-RL baselines,
demonstrating the outstanding conditional generation ability of the diffusion
architecture. | Fei Ni, Jianye Hao, Yao Mu, Yifu Yuan, Yan Zheng, Bin Wang, Zhixuan Liang | 2023-05-31T15:01:38Z | http://arxiv.org/abs/2305.19923v1 | # MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
###### Abstract
Recently, the diffusion model has shone as a promising backbone for the sequence modeling paradigm in offline reinforcement learning (RL). However, these works mostly lack the generalization ability across tasks with reward or dynamics change. To tackle this challenge, in this paper we propose a task-oriented conditioned diffusion planner for offline meta-RL (MetaDiffuser), which considers the generalization problem as a conditional trajectory generation task with contextual representation. The key is to learn a context-conditioned diffusion model which can generate task-oriented trajectories for planning across diverse tasks. To enhance the dynamics consistency of the generated trajectories while encouraging trajectories to achieve high returns, we further design a dual-guided module in the sampling process of the diffusion model. The proposed framework enjoys the robustness to the quality of collected warm-start data from the testing task and the flexibility to incorporate different task representation methods. The experiment results on MuJoCo benchmarks show that MetaDiffuser outperforms other strong offline meta-RL baselines, demonstrating the outstanding conditional generation ability of the diffusion architecture. More visualization results are released on the project page.
## 1 Introduction
Offline Reinforcement Learning (Offline RL) (Levine et al., 2020) aims to learn policies from pre-collected data without interacting with the environment and has achieved many successes in the fields of games (Chen et al., 2021; Li et al., 2022), robotic manipulation (Ebert et al., 2018), and sequential advertising (Hao et al., 2020). However, one of the inherent difficulties of offline RL is the challenge of generalizing to unseen tasks. Recent work in offline meta-RL (Mitchell et al., 2021; Li et al., 2020, 2021; Mu et al., 2022) aims to solve this problem by training a meta-policy from multi-task offline datasets that can efficiently adapt to unseen tasks with small amounts of warm-start data.
Conventional offline meta-RL methods (Li et al., 2021; Yuan and Lu, 2022) learn a context encoder to infer task representations and a meta-policy conditioned on the learned context for generalization across tasks. These works, extended from the online meta-RL setting, still rely on a context-conditioned policy trained by temporal difference (TD) learning, which may potentially cause instability in policy optimization and limited performance (Levine et al., 2020; Ajay et al., 2022). A more recent work, Prompt-DT (Xu et al., 2022), turns to tackle the generalization problem from the sequence modeling perspective, jointly modeling state-action trajectories to avoid TD-learning. This approach uses a prompting method to generalize across unseen tasks without the need for explicit extraction of task representations through a pre-trained context encoder. However, the key limitation is that the pre-collected warm-start data must be of high enough quality to act as an expert prompt for guiding sequence generation, which is challenging to collect in unseen tasks; otherwise performance may suffer with random or medium data. The aforementioned limitations raise a key question: Can we design an offline meta-RL framework that achieves generalization across multiple tasks with robustness to the quality of warm-start data, while utilizing the promising ability of the sequence-modeling paradigm?
Planning with a diffusion model (Janner et al., 2022) provides a promising paradigm for offline RL, which utilizes
Figure 1: Overall few-shot generalization performance comparisons on various environments including 2 domains with dynamics change and 4 domains with reward change. The expert performance in each environment is chosen as normalized baseline.
a diffusion model as a trajectory generator, jointly diffusing the states and actions from noise to formulate the sequential decision-making problem as standard generative modeling. The concurrent works (Ajay et al., 2022; Wang et al., 2022) also showcase the potential of the diffusion model as a highly promising generative model, highlighting its ability to serve as a key backbone for addressing sequence modeling problems in RL while avoiding the limitations of TD-learning. But these works focus on a single task and lack research on generalization across tasks, which leaves conditioned diffusion unexplored for offline meta-RL. Meanwhile, conditioned diffusion models have made significant progress in vision and language tasks (Ho and Salimans, 2022), such as DALL-E (Ramesh et al., 2022) and ImageGen (Saharia et al., 2022) for text-to-image generation. These works demonstrate the powerful conditional generation capabilities of conditioned diffusion models with textual labels, without the need for expert images as prompts.
Inspired by this, we propose a novel framework for offline meta-RL, named MetaDiffuser, that leverages the diffusion model to conduct desired trajectory generation for generalization across unseen tasks. During meta-training, to provide accurate conditional labels for subsequent trajectory generation, we first pre-train an accurate context encoder that can capture task-relevant information from offline trajectories mixed from different tasks. Then the compact task representation is injected as a contextual label into the conditional diffusion model to manipulate the task-oriented trajectory generation. In this way, the diffusion model learns to estimate the conditional distribution of multi-task trajectories based on the task-oriented context. During meta-testing, with the context predicted from provided warm-start data in the testing task, the conditional diffusion model can denoise out desired trajectories for the testing task. The generated trajectories can guide the subsequent action to step into the next state, similar to planning in RL (Yuan et al., 2023). Moreover, to decrease the discrepancy between generated trajectories and real-rollout trajectories, we design an effective dual-guide to enhance the dynamics consistency of generated trajectories while simultaneously encouraging high returns. The contributions of this work are as follows:
* **Generalization Ability**: We propose MetaDiffuser to leverage the diffusion model to conduct conditional trajectory generation to achieve the generalization ability across unseen tasks.
* **Robustness and Flexibility**: MetaDiffuser enjoys the flexibility to incorporate different task representation methods and the robustness to the quality of collected warm-start data at the testing task.
* **Dual-guide Enhanced Planner**: We design the dual-guide of both dynamics and rewards to ensure the feasibility of guided trajectories while encouraging the generated trajectories to achieve high returns.
* **Superior Performance**: The experiments on various benchmarks empirically show that MetaDiffuser much better generalizes to unseen tasks than prior methods.
## 2 Related Work
### Offline Meta-RL
Offline meta-RL investigates learning to learn from offline data, with the aim of quickly adapting to unseen tasks. Recent works (Mitchell et al., 2021; Li et al., 2020, 2021), including FOCAL (Li et al., 2021) and CORRO (Yuan and Lu, 2022), train a context encoder for compact task representations, on which the policy is conditioned to generalize. These methods, extended from the traditional online meta-RL setting, still rely on a context-conditioned policy trained by TD-learning, which may potentially cause instability in policy optimization and limited performance. Prompt-DT (Xu et al., 2022) turns to solve the generalization problem from the sequence modeling perspective, jointly modeling state-action trajectories to avoid TD-learning. This approach can utilize the collected prompt as a prefix to generalize across tasks without the need for an explicit context encoder. However, the key limitation is the high requirement on the quality of warm-start data as a prompt, which is challenging to pre-collect in unseen tasks. See more discussion in Appendix C. To combine the best of both the context-based manner and the sequence-modeling fashion, we propose MetaDiffuser, which not only avoids TD-learning but also enjoys robustness to the quality of warm-start data.
### Diffusion Model for Sequence Decision Making
Recently, many works have emerged that utilize diffusion models to solve sequential decision-making tasks, showing the great potential of the diffusion model as a promising backbone for sequence modeling. Diffuser (Janner et al., 2022) applies a diffusion model as a trajectory generator, trained by diffusing over full trajectories of state-action pairs from noise. A separate reward model is trained to predict the cumulative reward of each trajectory sample, and the gradient guidance from the reward model is injected into the reverse sampling stage. Then the first action in the generated trajectory is executed in the environment to step into the next state, which repeats in a loop until termination. The subsequent work Decision Diffuser (Ajay et al., 2022) frames offline sequential decision-making as conditional generative modeling based on returns, constraints and skills to eliminate the complexities of traditional offline RL. The concurrent work Diffusion-QL (Wang et al., 2022) builds the policy with the reverse chain of a conditional diffusion model, which allows for a highly expressive policy class, serving as a strong policy-regularization method. However, these works mostly focus on a single task and lack the generalization ability to unseen tasks in the setting of
offline meta-RL. Our approach MetaDiffuser leverages the conditioned diffusion model to conduct conditional trajectory generation to achieve the generalization across unseen tasks with different reward functions or dynamics.
### Conditional Diffusion Model
Recently, there have been incredible advances in the field of conditional content generation thanks to the strong generation capabilities of conditioned diffusion models. Conditional diffusion models push the state of the art on text-to-image generation tasks such as DALL-E (Ramesh et al., 2022) and ImageGen (Saharia et al., 2022). The technique of conditioning can be divided into two fashions: classifier-guided (Nichol and Dhariwal, 2021) and classifier-free (Ho and Salimans, 2022). The former improves sample quality while reducing diversity in conditional diffusion models using gradients from a pre-trained classifier \(p_{\phi}(\mathbf{y}|\mathbf{x}_{k})\) during sampling. The latter is an alternate technique that avoids this pre-trained classifier by instead jointly training a single diffusion model on conditional \(\epsilon_{\theta}(\mathbf{x}_{k},\mathbf{y},k)\) and unconditional \(\epsilon_{\theta}(\mathbf{x}_{k},k)\) noise models via randomly dropping the conditional label \(\mathbf{y}\).
In fact, the aforementioned Diffuser (Janner et al., 2022) can also be considered a classifier-guided conditional diffusion model, where the pre-trained reward model is another form of classifier for evaluating sample quality. Our designed MetaDiffuser builds upon Diffuser and additionally incorporates the classifier-free manner, by injecting the context as label \(\mathbf{y}\) into the conditional noise model \(\epsilon_{\theta}(\mathbf{x}_{k},\mathbf{y},k)\), achieving more precise conditional generation. The details about the relationship between the two conditional fashions can be found in Appendix A.
## 3 Preliminaries
### Problem Formulation
The reinforcement learning problem can be generally modeled as a Markov Decision Process (MDP), represented as \(\mathcal{M}=(\mathcal{S},\mathcal{A},T,\rho,R)\), where \(\mathcal{S}\) is the state space, \(\mathcal{A}\) is the action space, \(T(s^{\prime}|s,a)\) is the transition dynamics of the environment, \(\rho(s)\) is the initial state distribution, and \(R(s,a)\) is the reward function. The objective is to find a policy \(\pi(a|s)\) that optimizes the expected cumulative reward, \(\mathbb{E}_{s_{0}\sim\rho,\pi}\sum_{t}\gamma^{t}R(s_{t},a_{t})\), starting from the initial state. In the offline meta-RL setting, aiming to adapt to new tasks quickly via pre-collected data, an agent is given a set of tasks \(\mathcal{T}\), where a task \(\mathcal{T}_{i}\in\mathcal{T}\) is defined as \((\mathcal{M}_{i},\pi_{i})\), containing an MDP \(\mathcal{M}_{i}\) and a behavior policy \(\pi_{i}\). For each task \(\mathcal{T}_{i}\), the agent is provided with a pre-collected dataset \(\mathcal{D}_{i}\), which contains trajectories sampled using \(\pi_{i}\). The agent is trained with a subset of training tasks denoted as \(\mathcal{T}^{train}\) and is expected to find the optimal policies in a set of test tasks \(\mathcal{T}^{test}\), which is disjoint from \(\mathcal{T}^{train}\).
### Diffusion Model
Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) are a type of generative model that consists a forward diffusion process and a reverse denoising process to learn the data distribution \(q(\mathbf{x})\). Here, the data-generating procedure is modelled with a predefined forward noising process \(q(\mathbf{x}_{k+1}|\mathbf{x}_{k})\coloneqq\mathcal{N}(\mathbf{x}_{k+1};\sqrt{\alpha_{k}} \mathbf{x}_{k},(1-\alpha_{k})\mathbf{I})\) and a trainable reverse process \(p_{\theta}(\mathbf{x}_{k-1}|\mathbf{x}_{k})\coloneqq\mathcal{N}(\mathbf{x}_{k-1}|\mu_{ \theta}(\mathbf{x}_{k},k),\Sigma_{k})\), where \(\mathcal{N}(\mu,\Sigma)\) denotes a Gaussian distribution with mean \(\mu\) and variance \(\Sigma\), \(\alpha_{k}\in\mathbb{R}\) determines the variance schedule, \(\mathbf{x}_{0}\coloneqq\mathbf{x}\) is a sample, \(\mathbf{x}_{k}\) are the sequentially sampled latent variables for \(k=1,\dots,K\), and \(\mathbf{x}_{K}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) for carefully chosen \(\alpha_{k}\) and long enough \(K\). Starting with Gaussian noise, samples are then iteratively generated through a series of reverse denoising steps by the predicted noise. The predicted noise \(\epsilon_{\theta}(\mathbf{x}_{k},k)\), parameterized with a deep neural network, estimates the noise \(\epsilon\sim\mathcal{N}(0,I)\) added to the dataset sample \(\mathbf{x}_{0}\) to produce noisy \(\mathbf{x}_{k}\), which can be trained by a simplified surrogate loss (Ho et al., 2020): \(\mathcal{L}_{\text{denoise}}(\theta)\coloneqq\mathbb{E}_{k\sim[1,K],\mathbf{x}_{ 0}\sim q,\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})}[||\epsilon-\epsilon_{\theta} (\mathbf{x}_{k},k)||^{2}]\).
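A standard consequence of this choice of forward process (Ho et al., 2020), used implicitly whenever \(\bar{\alpha}_{k}\) appears below, is the closed-form marginal

\[q(\mathbf{x}_{k}|\mathbf{x}_{0})=\mathcal{N}\big{(}\mathbf{x}_{k};\sqrt{\bar{\alpha}_{k}}\,\mathbf{x}_{0},(1-\bar{\alpha}_{k})\mathbf{I}\big{)},\qquad\bar{\alpha}_{k}=\prod_{i=1}^{k}\alpha_{i},\]

so a noisy sample can be drawn in one shot as \(\mathbf{x}_{k}=\sqrt{\bar{\alpha}_{k}}\,\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{k}}\,\epsilon\) with \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), which is exactly how the pair \((\mathbf{x}_{k},\epsilon)\) in \(\mathcal{L}_{\text{denoise}}\) is constructed during training.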
## 4 Methodology
To tackle the generalization challenge from the sequence modeling perspective, we propose MetaDiffuser, a novel offline meta-RL framework that leverages the conditioned diffusion model to conduct task-oriented trajectory generation for generalization across unseen tasks. As shown in Figure 2, the overall process can be explicitly divided into meta-training and meta-testing. During **meta-training**, to provide accurate conditional labels for subsequent trajectory generation, we first pre-train an accurate context encoder that can capture both reward changes and dynamics changes from trajectories. Then the compact task representation inferred by the context encoder is injected as a contextual label into the step-wise denoising process from Gaussian noise, for estimating the conditional distribution of multi-task trajectories. During **meta-testing**, with the context predicted from the provided warm-start data, the conditional diffusion model can denoise out desired trajectories for the testing task. Moreover, to alleviate the discrepancy between generated trajectories and real-rollout trajectories, the previously trained reward model and dynamics model are utilized as trajectory evaluators to enhance the dynamics consistency and returns of the generated trajectories.
### Task-oriented Context Encoder
To manipulate conditional trajectory generation with a high correlation to the desired specific task, it is necessary to establish an accurate mapping from trajectories to the contextual label they belong to. Considering that environments in the meta-RL setting can change in reward functions and
transition dynamics, we expect the context to fully distinguish between the two types of environmental changes with a unified learning objective. To this end, we propose a simple yet effective context encoder \(E_{\phi}\), trained jointly with a generalized reward model \(R_{\psi}\) and dynamics model \(P_{\omega}\). We augment the state and action with the context to minimize the prediction losses of both dynamics and reward simultaneously.
Specifically, we are given the multi-task offline dataset \(\mathcal{D}\), which contains trajectories \(\tau^{\mathcal{M}}=\{(s_{t},a_{t},r_{t},s_{t+1})\}_{t=1}^{K}\) with horizon \(K\) for each training task \(\mathcal{M}\sim\mathcal{T}^{train}\). For each trajectory, a segment \(\tau_{t}^{\mathcal{M}}=\{(s_{t+i},a_{t+i},r_{t+i},s_{t+i+1})\}_{i=0}^{h}\) of size \(h\) is sampled starting from a randomly selected \(t\). From the historical sub-trajectory, the context encoder \(E_{\phi}\) captures the latent representation \(z_{t}=E_{\phi}(\tau_{t}^{\mathcal{M}})\) as the contextual information of the task. Then the generalized reward model \(R_{\psi}\) and dynamics model \(P_{\omega}\), parameterized with \(\psi,\omega\), are conditioned on \(z\). The context encoder is trained jointly by minimizing the state transition and reward prediction errors conditioned on the learned context:
\[\begin{split}\mathcal{L}_{\phi,\psi,\omega}=-\mathbb{E}_{\left(s_{t},a_{t},r_{t},s_{t+1}\right)\sim\tau_{t}^{\mathcal{M}},\,\mathcal{M}\sim\mathcal{T}^{train}}\Big{[}\mathbb{E}_{z_{t}\sim E_{\phi}(z_{t}|\tau_{t}^{\mathcal{M}})}\\ \big{[}\log P_{\omega}(s_{t+1}|s_{t},a_{t},z_{t})+\log R_{\psi}(r_{t}|s_{t},a_{t},z_{t})\big{]}\Big{]}\end{split} \tag{1}\]
Moreover, our method additionally obtains the generalized reward model and dynamics model as byproducts, which will play a key role as useful classifiers in the later classifier-guided conditional generation module. It should be noted that our framework is flexible with respect to other representation methods; further analysis is illustrated in Section 5.5. The detailed experimental results about the distribution shift of the quality of training data can be found in Section 5.6.3.
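To make the training objective concrete, a minimal PyTorch-style sketch of how Equation (1) could be optimized is given below; the module names, network sizes, and the unit-variance Gaussian assumption (which reduces the log-likelihoods to mean-squared errors up to constants) are illustrative assumptions rather than the exact released implementation.

```python
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    """Hypothetical minimal encoder: maps an (s, a, r, s') segment to a context z."""
    def __init__(self, transition_dim, context_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(transition_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, context_dim))

    def forward(self, segment):                 # segment: (batch, h, transition_dim)
        return self.net(segment).mean(dim=1)    # aggregate over the h transitions

def joint_loss(encoder, dyn_model, rew_model, segment, s, a, r, s_next):
    """Eq. (1) with unit-variance Gaussian heads, so the negative log-likelihoods
    of next state and reward reduce to mean-squared prediction errors."""
    z = encoder(segment)                               # task context z_t
    inp = torch.cat([s, a, z], dim=-1)                 # condition both models on z
    dyn_nll = ((dyn_model(inp) - s_next) ** 2).mean()  # transition prediction error
    rew_nll = ((rew_model(inp) - r) ** 2).mean()       # reward prediction error
    return dyn_nll + rew_nll                           # trains phi, psi, omega jointly
```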
### Conditional Diffusion Architecture
Inspired by the great success of the diffusion model in text-to-image tasks, which generates images from noise based on text labels, we leverage the diffusion model as a trajectory generator conditioned on the task-oriented context. Following Diffuser (Janner et al., 2022), the states and actions in the trajectory are generated simultaneously for each time step \(t\) over the planning horizon \(H\):
\[\mathbf{x}_{k}(\tau)=(s_{t},a_{t},s_{t+1},a_{t+1}...,s_{t+H-1},a_{t+H-1})_{k} \tag{2}\]
where \(k\) denotes the timestep in the denoising process. Now that we have the pre-trained context encoder to infer task labels for different tasks, we can additionally condition the diffusion process on the contextual information of the tasks. In this way, we formulate the meta-RL problem as a conditional generative modeling problem:
\[\theta^{*}=\arg\max_{\theta}\mathbb{E}_{\tau\sim\mathcal{D}}[\log p_{\theta}( \mathbf{x}_{0}(\tau)|\mathbf{y}=E_{\phi}(\tau))] \tag{3}\]
where the conditional label \(y\) denotes the task-oriented context inferred by the context encoder \(E_{\phi}\) from the pre-collected offline data of the current task. The goal is to estimate the conditional data distribution with \(p_{\theta}\), so we can later generate the desired trajectory \(\mathbf{x}_{0}(\tau)\) according to the context label from unseen tasks. The forward diffusion process \(q\) and the reverse denoising process \(p_{\theta}\) can be formulated as:
\[q(\mathbf{x}_{k+1}(\tau)|\mathbf{x}_{k}(\tau)),\quad p_{\theta}(\mathbf{x}_{k-1}(\tau)| \mathbf{x}_{k}(\tau),\mathbf{y}=E_{\phi}(\tau)) \tag{4}\]
Specifically, for each trajectory \(\tau\) in the offline training dataset, we first sample Gaussian noise \(\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and a denoising timestep \(k\in\{1,\dots,K\}\). Then we construct a
Figure 2: The overview of MetaDiffuser. During the **meta-training** phase, a task-oriented context encoder is trained jointly with a conditioned dynamics model and reward model in a self-supervised manner to infer the current task from recent historical transitions. Then the multi-task trajectories can be labeled with the trained context encoder, and the inferred contexts are injected into the conditioned diffusion model to estimate the multi-modal distribution mixed from different training tasks. During the **meta-testing** phase, the context encoder captures the task information from the warm-start data provided for the test task. Then the conditioned diffusion model can manipulate the noise model to denoise out desired trajectories for the test task with the inferred context. Additionally, the pre-trained dynamics model and reward model can serve as classifiers for evaluation, whose gradients guide the conditional generation in a classifier-guided fashion.
noisy array with the same dimension as \(\mathbf{x}_{k}(\tau)\) and finally predict the denoising noise as \(\hat{\epsilon}_{\theta}=\epsilon_{\theta}(\mathbf{x}_{k}(\tau),\mathbf{y}(\tau),k)\) at denoising step \(k\).
For classifier-free conditioned diffusion models (Ho and Salimans, 2022), a commonly used technique is to randomly drop out the conditioning to improve the quality of generated samples. Accordingly, we train the noise model jointly on the conditional and unconditional objectives via randomly dropping the conditioning context label with probability \(\beta\). A proper drop probability can balance the diversity of generated trajectories against their relevance to the context label. The detailed analysis of the effects of different context drop probabilities can be found in Section 5.6.4.
So far, with the mixed trajectory dataset \(\mathcal{D}\), where each trajectory is paired with the contextual information of the task it belongs to, we can train the reverse denoising process \(p_{\theta}\), parameterized through the conditional noise model \(\epsilon_{\theta}\), with the following loss:
\[\mathcal{L}(\theta)=\mathbb{E}_{k,\tau\in\mathcal{D}}\left[\|\epsilon- \epsilon_{\theta}\left(\mathbf{x}_{k}\left(\tau\right),\left(1-\beta\right)E_{ \phi}\left(\tau\right)+\beta\varnothing,k\right)\|^{2}\right] \tag{5}\]
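A compact sketch of one optimization step for this loss is given below; here `ctx` is the label \(E_{\phi}(\tau)\) precomputed with the frozen pre-trained encoder, and representing the dropped label \(\varnothing\) by a zero context vector is an assumption on our part:

```python
import torch

def diffusion_training_step(eps_model, ctx, traj, alphas_cumprod, beta=0.25):
    """One step of Eq. (5): predict the injected noise, with the context label
    randomly replaced by a null (zero) token with probability beta."""
    B, K = traj.shape[0], alphas_cumprod.shape[0]
    k = torch.randint(0, K, (B,), device=traj.device)      # denoising timestep
    eps = torch.randn_like(traj)                            # noise to be predicted
    a_bar = alphas_cumprod[k].view(B, 1, 1)
    x_k = a_bar.sqrt() * traj + (1.0 - a_bar).sqrt() * eps  # one-shot forward noising
    drop = (torch.rand(B, device=traj.device) < beta).float().unsqueeze(-1)
    ctx = (1.0 - drop) * ctx                                # dropped labels -> zeros
    return ((eps - eps_model(x_k, ctx, k)) ** 2).mean()
```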
After training the conditioned diffusion model to imitate expert trajectories in the offline datasets, we now discuss how to utilize it to achieve generalization across unseen tasks. During meta-testing, the context encoder captures the task information from pre-collected trajectories serving as warm-start data and infers the task-oriented context as the conditional label \(\mathbf{y}=E_{\phi}(\tau)\). Then the context label can be injected into the conditioned diffusion model to guide the generation of desired expert trajectories for the current task. \(\mathbf{x}_{0}(\tau)\) is sampled by starting with Gaussian noise \(\mathbf{x}_{K}(\tau)\) and refining \(\mathbf{x}_{k}(\tau)\) into \(\mathbf{x}_{k-1}(\tau)\) at each intermediate timestep with the perturbed noise:
\[\hat{\epsilon}=\omega\epsilon_{\theta}(\mathbf{x}_{k}(\tau),\mathbf{y},k)+(1-\omega) \epsilon_{\theta}(\mathbf{x}_{k}(\tau),\varnothing,k) \tag{6}\]
where the scalar \(\omega\) denotes the guidance weight in the classifier-free conditioned diffusion model. Setting \(\omega=1\) disables classifier-free guidance, while increasing \(\omega>1\) strengthens the effect of guidance. Based on the context-conditioned noise generated iteratively, the desired trajectories containing future states and actions can be denoised from the noise step by step. Given the generated trajectory, the first action is executed in the environment to step into the next state. This procedure repeats in a standard receding-horizon control loop, similar to traditional planning in RL, as described in Appendix D. For architecture details, please refer to Appendix H.
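The sampling loop and the surrounding receding-horizon control could be sketched as follows; the DDPM update with variance \(\beta_{k}=1-\alpha_{k}\), the names `env`, `encoder`, `warm_start`, `state_dim`, and the split of the plan into states and actions are all assumptions for illustration (conditioning the first state of the plan on the current observation is omitted for brevity):

```python
import torch

@torch.no_grad()
def sample_plan(eps_model, ctx, horizon, dim, alphas, alphas_cumprod, w=1.2):
    """Denoise a plan from pure noise with classifier-free guidance (Eq. 6);
    w is the guidance weight omega (w = 1 disables guidance)."""
    x = torch.randn(1, horizon, dim)              # x_K ~ N(0, I)
    null = torch.zeros_like(ctx)                  # stand-in for the dropped label
    for k in reversed(range(len(alphas))):
        t = torch.full((1,), k, dtype=torch.long)
        eps_hat = w * eps_model(x, ctx, t) + (1 - w) * eps_model(x, null, t)
        a, a_bar = alphas[k], alphas_cumprod[k]
        x = (x - (1 - a) / (1 - a_bar).sqrt() * eps_hat) / a.sqrt()
        if k > 0:                                 # no noise added at the last step
            x = x + (1 - a).sqrt() * torch.randn_like(x)
    return x                                      # (1, horizon, state_dim + action_dim)

# Receding-horizon loop: replan every step and execute only the first action.
# ctx = encoder(warm_start)                       # infer the task context once
# obs, done = env.reset(), False
# while not done:
#     plan = sample_plan(eps_model, ctx, horizon, dim, alphas, alphas_cumprod)
#     action = plan[0, 0, state_dim:]             # first action of the plan
#     obs, reward, done, info = env.step(action.numpy())
```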
### Dual-guide Enhanced Planner
Previous work (Janner et al., 2022) trains an extra reward predictor \(\mathcal{J}\) to evaluate the cumulative return of generated trajectories and utilizes the gradient of the return as guidance in the sampling process of the diffusion model, to encourage the generated trajectories to achieve high returns. However, during meta-testing on unseen tasks, as shown in the top part of Figure 3, the conditionally generated trajectories may not always obey dynamics constraints due to aggressive guidance aiming for high returns, making it difficult for the planner to follow the expected trajectories during interaction with the environment. Therefore, we propose a dual-guide to enhance the dynamics consistency of generated trajectories while simultaneously encouraging high returns \(\mathcal{J}\).
To this end, we utilize the previously pre-trained dynamics model to predict the future states of the generated trajectory based on its planned actions, and then compare them to the states in the generated trajectory. The dynamics discrepancy \(\zeta\) serves as an important metric to evaluate the consistency and reachability of the generated trajectory. The gradient from the dual-guide can be formulated as:
\[\begin{split}& g=\nabla\mathcal{J}(\mathbf{x}_{k}(\tau))+\lambda \nabla\zeta(\mathbf{x}_{k}(\tau))\\ &\mathcal{J}\left(\mathbf{x}_{k}(\tau)\right)=\sum_{t=0}^{T}R_{\psi} \left(s_{t},a_{t},z_{t}\right)\\ &\zeta\left(\mathbf{x}_{k}(\tau)\right)=\sum_{t=0}^{T}\left\|s_{t+1}- \hat{s}_{t+1}\right\|^{2},\qquad\hat{s}_{t+1}=P_{\omega}\left(s_{t},a_{t},z_{t}\right)\end{split} \tag{7}\]
where \(\lambda\) denotes the relative scaling coefficient between the dynamics guide and the reward guide, balancing high reward against low discrepancy. The detailed ablation study on this scaling effect can be found in Section 5.6.1. The visualization of an intuitive example is shown in Figure 3.
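A sketch of how this gradient could be obtained with automatic differentiation is shown below. Note that we write the combined objective as \(\mathcal{J}-\lambda\zeta\) so that guidance ascends the predicted return while descending the dynamics discrepancy; the relative sign is a convention that depends on how \(\lambda\) and \(\zeta\) are defined, and the tensor layout is an assumption:

```python
import torch

def dual_guide_gradient(x, ctx, rew_model, dyn_model, lam, state_dim):
    """Gradient of the dual-guide (Eq. 7) w.r.t. a noisy plan x of shape
    (batch, horizon, state_dim + action_dim)."""
    x = x.detach().requires_grad_(True)
    s, a = x[..., :state_dim], x[..., state_dim:]
    z = ctx.view(1, 1, -1).expand(x.shape[0], x.shape[1], -1)  # broadcast context
    inp = torch.cat([s, a, z], dim=-1)
    J = rew_model(inp).sum()                 # cumulative predicted reward
    s_pred = dyn_model(inp[:, :-1])          # predicted successors of s_0 .. s_{T-1}
    zeta = ((s[:, 1:] - s_pred) ** 2).sum()  # dynamics discrepancy of the plan
    (J - lam * zeta).backward()              # ascend return, descend discrepancy
    return x.grad
```

In the sampling loop, this gradient is scaled by \(\sqrt{1-\bar{\alpha}_{k}}\) and combined with the classifier-free noise estimate, as in Equation (8).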
In this way, MetaDiffuser not only adopts the classifier-free manner by injecting the context as label \(\mathbf{y}\) into the conditional noise model \(\epsilon_{\theta}(\mathbf{x}_{k},\mathbf{y},k)\), achieving more precise conditional generation, but also incorporates the classifier-guided fashion of Diffuser, where the single reward guide is expanded to the designed dual-guide to handle the
Figure 3: The visualization of an extreme case of generated trajectories and real trajectories rolled out according to the actions in the generated trajectories in Hopper-Param, an environment with dramatic dynamics changes. With the dual-guide, the generated trajectories are less aggressive in expected rewards and more dynamics-consistent, enhancing the reachability between adjacent states.
complex environment changes in the meta-RL setting. Formally, the denoising process in Equation (6) can be extended as:
\[\begin{split}\hat{\epsilon}&\coloneqq\underbrace{ \omega\epsilon_{\theta}(\mathbf{x}_{k}(\tau),E_{\phi}(\tau),k)+(1-\omega)\epsilon_{ \theta}(\mathbf{x}_{k}(\tau),\varnothing,k)}_{\text{classifier-free}}\\ &-\underbrace{\sqrt{1-\bar{\alpha}_{k}}\nabla_{\mathbf{x}_{k}(\tau) }\Big{[}\mathcal{J}(\mathbf{x}_{k}(\tau))+\lambda\zeta(\mathbf{x}_{k}(\tau))\Big{]}}_ {\text{classifier-guided}}\end{split} \tag{8}\]
The details about the relationship between two different conditional fashions can be found in Appendix A.
## 5 Experiments
We conduct experiments on various tasks to evaluate the few-shot generalization performance of the proposed MetaDiffuser. We aim to empirically answer the following questions: 1) Can MetaDiffuser achieve performance gains on few-shot policy **generalization** compared to other strong baselines? 2) Can MetaDiffuser show **robustness** to the quality of warm-start data? 3) Can MetaDiffuser serve as a **flexible** framework that incorporates any context representation method?
### Environments Settings
We adopt the 2D navigation environment Point-Robot and multi-task MuJoCo control tasks for comparison, as classical benchmarks commonly used in meta-RL (Mitchell et al., 2021; Li et al., 2020, 2021). More details about the environments are available in Appendix E. For each environment, different tasks are randomly sampled from the task distribution and divided into a training set \(\mathcal{T}^{train}\) and a testing set \(\mathcal{T}^{test}\). On each task, we use SAC (Haarnoja et al., 2018) to train a single-task policy independently. The trajectories of the expert policy for each task are collected to form the offline datasets. See more details in Appendix G.
### Baselines
**FOCAL**(Li et al., 2021) proposes a novel negative-power distance metric learning method to train the context encoder for task inference, as an end-to-end offline meta-RL algorithm with high efficiency.
**CORRO**(Yuan & Lu, 2022) proposes a contrastive learning framework for task representations that are robust to the distribution mismatch of behavior policies in training and testing. CORRO demonstrates superior performance to prior context-conditioned policy-based methods.
**Prompt-DT**(Xu et al., 2022) leverages the sequential modeling ability of the Transformer architecture and the prompt framework to achieve few-shot adaptation in offline RL, as a strong meta-RL baseline in sequence modeling fashion.
**CVAE-Planner** To investigate the influence of different generative architectures, we substitute the conditioned diffusion model with a conditioned VAE, which serves the same role as a trajectory generator to guide planning across tasks.
### The Generalization Ability on Task Adaptation
To evaluate performance on task adaptation, we sample tasks from the test set with warm-start data pre-collected by a random policy or an expert policy. Then we measure the few-shot generalization ability of different methods with the average episode accumulated reward. For fairness, all methods are trained with the same expert dataset in each environment, to investigate whether the diffusion model facilitates few-shot generalization and to evaluate the performance of MetaDiffuser.
The testing curves and converged performance are summarized in Figure 4 and Table 1, respectively, covering six environments varying in dynamics and rewards. In relatively simple environments such as Point-Robot and Cheetah-Dir, MetaDiffuser and Prompt-DT significantly outperform the other baselines. In Ant-Dir, MetaDiffuser outperforms the other baselines by a large margin, which shows its strong generalization ability to unseen tasks with different reward functions. Moreover, in Cheetah-Vel, MetaDiffuser is more data-efficient and achieves better asymptotic performance than the others, benefiting from the strong generative capacity of the diffusion model. In environments with dynamics change, such as Hopper-Param and Walker-Param, CORRO, as a context-based method, achieves a more stable improvement than Prompt-DT. The potential reason may be that in complex environments varying in dynamics it is more challenging for Prompt-DT to implicitly capture the dynamics information within a prompt.
MetaDiffuser outperforms CORRO, benefiting from the stability of the sequence-modeling framework instead of TD-learning. The detailed analysis of the context representation method can be found in Section 5.5. The CVAE-Planner struggles to generalize to different tasks, illustrating the strong modeling capability of the diffusion model compared with CVAE when faced with extremely multi-modal distributions. We illustrate the detailed analysis in Section 5.6.2.
### The Robustness to Warm-start Data Quality
Benefiting from the context encoder and the manner of injecting the explicit context as a label into the diffusion model to conduct conditional generation, MetaDiffuser is robust to the quality of warm-start data, similar to traditional context-based methods like CORRO. Prompt-DT is sensitive to the quality of the prompt, and its performance can drop considerably with medium or random prompts, as also mentioned in the original paper (Xu et al., 2022). We conduct a more detailed experiment to investigate the robustness of the two algorithms.
The results in Table 2 show that when the quality of prompt
data is not high enough, the performance of Prompt-DT drops to a large extent except for Cheetah-Dir. This environment contains just two tasks, forward and backward, both included in the training set and testing set, which potentially decreases the reliance on expert warm-start data. The performance of MetaDiffuser may also experience a slight drop, but it remains superior to Prompt-DT. The slight drop may be caused by the distribution shift between the poor-quality warm-start data at meta-testing and the expert data used in the pre-training stage of the context encoder, resulting in a less accurate inferred context. For Prompt-DT, the prompt, as the prefix guiding subsequent sequence generation, should contain enough valuable knowledge about how to solve the current task, not just information about what the current task is. In contrast, MetaDiffuser places no strict demands on the quality of warm-start data, which can even be rolled out with an arbitrary policy. The role of the warm-start data is just to provide task-oriented information so that the context encoder can infer the task context as the label, which is then injected into the conditional denoising process to generate the desired trajectories for planning, enabling fast adaptation.
### The Flexibility in Context Representation Method
The generalization ability of MetaDiffuser arises from capturing task information as context to guide the conditional generation of the diffusion model. We argue that our framework can flexibly integrate different task representation algorithms, and that improved context accuracy can enhance generalization performance. We conduct experiments to investigate the effect of different context representations on the few-shot generalization capability of MetaDiffuser.
To this end, we borrow the representation module of CORRO and integrate it into MetaDiffuser, shown as Ours+CORRO in Table 3, resulting in a slight improvement. This demonstrates that the powerful generalization ability of MetaDiffuser is not achieved merely by improving context representation capability. The simple representation method we design is not better than the
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Environment** & **CORRO** & **Ours** & **Ours+CORRO** & **Ours+GT** \\ \hline
**Point-Robot** & -5.59\(\pm\)0.57 & -4.48\(\pm\)0.28 & -4.43\(\pm\)0.26 & **-4.02\(\pm\)0.13** \\
**Ant-Dir** & 193.3\(\pm\)3.21 & 247.7\(\pm\)16.8 & 251.3\(\pm\)1.72 & **282.9\(\pm\)1.36** \\
**Cheetah-Dir** & 283.5\(\pm\)37.9 & 936.2\(\pm\)17.9 & 936.9\(\pm\)18.1 & **939.7\(\pm\)15.7** \\
**Cheetah-Vel** & -56.2\(\pm\)9.4 & -45.9\(\pm\)4.1 & -44.6\(\pm\)3.9 & **-41.1\(\pm\)3.2** \\ \hline
**Walker-Param** & 300.5\(\pm\)34.2 & 368.3\(\pm\)30.6 & 377.0\(\pm\)29.6 & **394.1\(\pm\)17.5** \\
**Hopper-Param** & 289.3\(\pm\)24.7 & 356.4\(\pm\)16.9 & 361.3\(\pm\)19.2 & **382.5\(\pm\)12.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3: The comparisons of the influence of different context representation methods on generalization ability to unseen tasks.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**Environment** & \multicolumn{3}{c}{**MetaDiffuser**} & \multicolumn{3}{c}{**Prompt-DT**} \\ \cline{2-7} & _Expert_ & _Medium_ & _Random_ & _Expert_ & _Medium_ & _Random_ \\ \hline
**Point-Robot** & **-4.48\(\pm\)0.28** & -4.54\(\pm\)0.31 (\(\downarrow\) 1.5\%) & -4.61\(\pm\)0.21 (\(\downarrow\) 3.2\%) & **-5.04\(\pm\)0.35** & -5.17\(\pm\)0.29 (\(\downarrow\) 3.8\%) & -5.85\(\pm\)0.32 (\(\downarrow\) 23.4\%) \\
**Ant-Dir** & **247.7\(\pm\)16.8** & 238.9\(\pm\)18.1 (\(\downarrow\) 3.6\%) & 213.8\(\pm\)26.5 (\(\downarrow\) 13.7\%) & **213.2\(\pm\)29.1** & 154.7\(\pm\)39.5 (\(\downarrow\) 27.4\%) & 40.1\(\pm\)16.3 (\(\downarrow\) 81.2\%) \\
**Cheetah-Dir** & **936.2\(\pm\)17.9** & 930.3\(\pm\)18.5 (\(\downarrow\) 0.6\%) & 916.7\(\pm\)21.8 (\(\downarrow\) 1.9\%) & **931.7\(\pm\)21.3** & 922.6\(\pm\)28.2 (\(\downarrow\) 1.0\%) & 913.9\(\pm\)30.8 (\(\downarrow\) 1.9\%) \\
**Cheetah-Vel** & **-45.9\(\pm\)4.1** & -50.2\(\pm\)5.2 (\(\downarrow\) 1.9\%) & -55.8\(\pm\)2.3 (\(\downarrow\) 4.4\%) & **-51.3\(\pm\)4.9** & -125.6\(\pm\)7.5 (\(\downarrow\) 33.2\%) & -208.4\(\pm\)1.9 (\(\downarrow\) 76.1\%) \\ \hline
**Walker-Param** & **368.3\(\pm\)30.6** & 357.9\(\pm\)33.7 (\(\downarrow\) 2.8\%) & 341.6\(\pm\)38.4 (\(\downarrow\) 7.2\%) & **287.7\(\pm\)32.1** & 200.1\(\pm\)26.3 (\(\downarrow\) 30.4\%) & 64.7\(\pm\)8.1 (\(\downarrow\) 77.5\%) \\
**Hopper-Param** & **356.4\(\pm\)16.9** & 337.0\(\pm\)21.2 (\(\downarrow\) 5.4\%) & 319.6\(\pm\)14.2 (\(\downarrow\) 10.3\%) & **265.2\(\pm\)37.1** & 159.6\(\pm\)35.7 (\(\downarrow\) 39.8\%) & 82.6\(\pm\)15.3 (\(\downarrow\) 68.9\%) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The comparisons of the performance of Prompt-DT and MetaDiffuser with different qualities of warm-start data provided during the meta-testing phase. The \(\downarrow\) denotes the performance drop relative to expert-quality warm-start data.
Figure 4: **Meta-testing average performance** of MetaDiffuser against baselines, run over five random seeds on unseen tasks. The dashed lines denote the oracle performance of an expert policy trained separately for each test task.
fine-grained representation trained in a contrastive learning manner in CORRO. Considering that the combination of the CORRO representation with MetaDiffuser earns a larger performance gain than in the original conditioned-policy manner, conditional sequence modeling shows great potential as a promising paradigm for generalization tasks.
Although we do not seek improvement in generalization performance through a more complicated context representation design in this paper, the incorporation of a more accurate context representation method is always encouraged. The significant improvement from incorporating the ground-truth task parameters as context into MetaDiffuser, shown as Ours+GT, demonstrates that there is still rich room for improvement in the integration of context methods.
### Ablation Study
#### 5.6.1 The Effect of Dual-guide
During meta-testing on unseen tasks, the real trajectory rolled out with the actions of the generated trajectory often deviates greatly from the expected trajectory, especially in environments with dynamics change. Here we conduct a detailed ablation study to demonstrate the importance of the dual-guide in the meta-RL setting, reporting the performance under different relative scaling coefficients between the reward guide and the dynamics guide in all environments. The visualization in Hopper-Param is shown in Figure 3 and the results are illustrated in Table 4. The utilization of the dual-guide can greatly enhance the feasibility of generated trajectories while still encouraging high value when the tasks shift dramatically. In relatively simple environments such as Point-Robot, or environments with limited task numbers such as Cheetah-Dir, an overly large dynamics guide can cause the diffusion model to generate trajectories that are too conservative and lack high value to guide. We also tried omitting the value guide and solely utilizing the dynamics guide, and found that it yielded relatively poor performance for the same reason.
#### 5.6.2 The comparisons of Generative Models
To investigate the importance of the conditional diffusion model in MetaDiffuser, we substitute the conditional diffusion model with a conditional VAE in the same role of trajectory generator to guide planning across tasks, named CVAE-Planner. For fairness, the length of generated trajectories and the sampling-based planning procedure are kept the same as in MetaDiffuser. The results in Table 5 demonstrate that the fitting capability of the CVAE is significantly inferior to the conditional diffusion model, struggling to generate reasonable trajectories for unseen tasks. Moreover, compared to the end-to-end generative paradigm of the CVAE, MetaDiffuser can fully utilize the gradients from the dual-guide during the step-wise iterative denoising process. Additionally, we also trained an unconditional diffusion model over mixed expert data on all the training tasks, named UDiffuser. UDiffuser, which is the same as the vanilla Diffuser in (Janner et al., 2022), struggles to model such a diverse data distribution and fails to generate the desired trajectories for specific tasks, lacking the ability to infer what the testing task is.
#### 5.6.3 The distribution shift of data quality
The data distribution shift in meta-RL stems from the quality of the warm-start and training data, which may cause reward or transition shift and inaccurate guidance from the dual-guide during the meta-testing phase. The distribution shift caused by warm-start data has already been studied in Section 5.4; we now investigate the distribution shift of the training data. Specifically, we replaced the expert dataset used for training the context encoder with mixed and random data, while the training and sampling parts of the conditional diffusion model remain unchanged. The results in Table 6 show that a completely random dataset performs the worst, while a dataset that mixes random and expert data surpasses the expert dataset in most environments.
The potential reason may be that the diffusion model performs \(M\) denoising steps to transform noise into a desired trajectory. In the early denoising steps, the trajectory may be closer to noise, or similar to the trajectories from random datasets. The reward guide and dynamics guide for
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Environment** & **Random Dataset** & **Mixed Dataset** & **Expert Dataset** \\ \hline
**Point-Robot** & -4.51\(\pm\)0.25 & -4.49\(\pm\)0.26 & **-4.48\(\pm\)**0.28 \\
**Ant-Dir** & 240.2\(\pm\)17.1 & **258.4\(\pm\)**19.3 & 247.7\(\pm\)16.8 \\
**Cheetah-Dir** & 936.3\(\pm\)17.2 & **936.4\(\pm\)**17.6 & 936.2\(\pm\)17.9 \\
**Cheetah-Vel** & -46.8\(\pm\)4.5 & **-43.4\(\pm\)**4.2 & -45.9\(\pm\)4.1 \\ \hline
**Walker-Param** & 359.4\(\pm\)33.0 & **381.5\(\pm\)**28.2 & 368.3\(\pm\)30.6 \\
**Hopper-Param** & 341.1\(\pm\)17.3 & **375.2\(\pm\)**18.4 & 356.4\(\pm\)16.9 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Ablation of the quality of the training data.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Environment** & \(\mathbf{\lambda=0}\) & \(\mathbf{\lambda=0.5}\) & \(\mathbf{\lambda=1}\) & \(\mathbf{\lambda=2}\) \\ \hline
**Point-Robot** & -4.57\(\pm\)0.33 & **-4.48\(\pm\)**0.28 & -4.74\(\pm\)0.26 & -4.89\(\pm\)0.27 \\
**Ant-Dir** & 213.0\(\pm\)17.0 & 214.6\(\pm\)9.5 & **247.7\(\pm\)**16.5 & 238.3\(\pm\)18.1 \\
**Cheetah-Dir** & 902.4\(\pm\)21.96 & **936.2\(\pm\)**17.9 & 929.7\(\pm\)15.1 & 916.0\(\pm\)19.8 \\
**Cheetah-Vel** & -52.9\(\pm\)4.72 & -49.9\(\pm\)2.95 & **-45.9\(\pm\)**4.1 & -48.6\(\pm\)3.75 \\ \hline
**Walker-Param** & 326.5\(\pm\)24.9 & 330.6\(\pm\)23.4 & 347.2\(\pm\)19.3 & **368.3\(\pm\)**30.6 \\
**Hopper-Param** & 293.3\(\pm\)13.8 & 307.2\(\pm\)18.6 & 328.1\(\pm\)16.7 & **356.8\(\pm\)**16.9 \\ \hline \hline
\end{tabular}
\end{table}
Table 4: Ablation of dual-guide and relative scaling coefficient.
Table 5: The comparisons between different generative models on generalization ability to unseen tasks.
such low-quality trajectories need to have seen this poor distribution during the pretraining phase in order to provide more accurate guidance than a dual-guide trained only on the expert dataset. However, in the later stages of denoising, the trajectory improves toward high quality and becomes more similar to the expert dataset, so a dual-guide trained on a completely random dataset may also struggle to guide. The differences are not significant in the remaining environments, possibly because the state space of these environments is relatively small and distribution shift is not a very important factor. Training the context encoder on datasets with a more diverse distribution can provide accurate guidance for the whole denoising process, as trajectories gradually denoise from low-quality noise to high-quality desired trajectories.
#### 5.6.4 The Effect of Context Drop Probability
A proper context drop probability can balance the diversity and the relevance to the conditional label of generated samples (Ho and Salimans, 2022). We conduct an ablation study to investigate the effect of the context drop probability in the training of the conditional diffusion model. When \(\beta\) reaches 1, MetaDiffuser devolves into the unconditional version previously mentioned as UDiffuser in Table 5. The results in Table 7 show that dropping the conditional context with a proper probability can improve generalization ability, but the best probability differs across environments. One possible explanation could be the varying levels of information sharing among tasks in different environments: complex or diverse environments may have higher requirements for conditional generation.
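A minimal sketch of how such a drop probability \(\beta\) might be applied during training is shown below, in the spirit of classifier-free guidance; the helpers `q_sample` and `null_context` are illustrative assumptions rather than the paper's exact code.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(denoiser, x0, context, beta=0.2):
    """Sketch: train the conditional denoiser while dropping the task context
    with probability beta, so the model also learns an unconditional
    distribution (beta = 1 recovers the unconditional UDiffuser)."""
    b = x0.shape[0]
    t = torch.randint(0, denoiser.num_timesteps, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    x_t = denoiser.q_sample(x0, t, noise)          # forward (noising) process
    drop = torch.rand(b, device=x0.device) < beta  # which samples lose context
    context = torch.where(drop[:, None], denoiser.null_context, context)
    pred = denoiser(x_t, t, context)               # predict the injected noise
    return F.mse_loss(pred, noise)
```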
#### 5.6.5 The comparisons of denoising steps
Additionally, we also compared the effect of different numbers of denoising steps on trajectory generation. The experimental results in Table 8 show that relatively more denoising steps can better denoise the desired trajectories from noise, slightly improving the quality of generated trajectories. Increasing the denoising steps provides more chances for the dual-guide to precisely manipulate the direction and intensity of denoising, further emphasizing its effectiveness. Overall, MetaDiffuser is relatively robust to the choice of denoising steps \(k\), and its performance still outperforms all baselines. Because more denoising steps mean longer generation time, DDPM (Ho et al., 2020), as used in this paper, could be replaced with DDIM (Song et al., 2021) or the DPM solver (Lu et al., 2022) to reduce the number of denoising steps and meet the requirements of real-time control.
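As a sketch of this step-reduction idea, a model trained with 100 denoising steps can be sampled on an evenly strided subsequence of its timesteps, in the spirit of DDIM; the exact schedule below is an illustrative assumption.

```python
def strided_timesteps(num_train_steps=100, num_sample_steps=20):
    """Sketch: pick an evenly spaced, descending subsequence of timesteps so
    sampling performs num_sample_steps denoising iterations (e.g. k = 20)
    instead of num_train_steps (e.g. k = 100)."""
    stride = num_train_steps // num_sample_steps
    return list(range(num_train_steps - 1, -1, -stride))[:num_sample_steps]

# strided_timesteps(100, 20) -> [99, 94, 89, ..., 9, 4]
```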
## 6 Conclusion
We propose MetaDiffuser, a novel framework for offline meta-RL that leverages the diffusion model for conditional trajectory generation to achieve generalization across unseen tasks. By combining a context representation module with a task-oriented conditional diffusion model to generate the desired trajectories for unseen tasks, MetaDiffuser demonstrates that the conditional diffusion model can be a promising backbone for offline meta-RL. Moreover, we design the dual-guide to improve the quality of generated trajectories in the sampling process, ensuring dynamics-transition consistency with the real world while encouraging the generated trajectories to achieve high returns. Experiments on various benchmarks empirically show that MetaDiffuser generalizes to unseen tasks much better than prior methods, while also enjoying both the flexibility to incorporate other task representation methods and robustness to the quality of the warm-start data collected at the testing task.
**Limitation.** Although MetaDiffuser enjoys robustness to warm-start data, the framework still requires expert training data in the meta-training phase, which is a common dilemma in offline meta-RL. Besides, MetaDiffuser has not been evaluated on real robots, where the requirements of real-time control may be challenging.
**Future Work.** Further improving the speed of real-time trajectory generation in planning and supporting high-dimensional image inputs are directions for future work. Additionally, combining a large language model (LLM), with its reasoning ability in complex control tasks, with MetaDiffuser is an interesting research direction.
## Acknowledgements
This work is supported by the National Key R&D Program of China (Grant No. 2022ZD0116402), the National Natural Science Foundation of China (Grant No. 62106172), and the Natural Science Foundation of Tianjin (No. 22JCQNJC00250).
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Environment** & \(\mathbf{\beta=0}\) & \(\mathbf{\beta=0.1}\) & \(\mathbf{\beta=0.2}\) & \(\mathbf{\beta=0.3}\) \\ \hline
**Point-Robot** & -4.86\(\pm\)0.22 & -4.61\(\pm\)0.34 & -4.71\(\pm\)0.30 & **-4.48\(\pm\)**0.28 \\
**Ant-Dir** & 234.9\(\pm\)12.8 & 241\(\pm\)31.6 & **247.7\(\pm\)**16.8 & 228.4\(\pm\)25.3 \\
**Cheetah-Dir** & **936\(\pm\)**21.79 & 915.4\(\pm\)9.80 & 909.6\(\pm\)22.1 & 873.8\(\pm\)28.6 \\
**Cheetah-Vel** & -48.3\(\pm\)2.7 & -49.8\(\pm\)3.5 & -47.4\(\pm\)3.7 & **-45.9\(\pm\)**4.1 \\ \hline
**Walker-Param** & 346.5\(\pm\)31.4 & 349.6\(\pm\)35.7 & **368.3\(\pm\)**30.6 & 357.8\(\pm\)29.0 \\
**Hopper-Param** & 347.1\(\pm\)17.3 & **356.8\(\pm\)**16.9 & 336.9\(\pm\)12.4 & 343.3\(\pm\)18.6 \\ \hline \hline
\end{tabular}
\end{table}
Table 7: Ablation of context drop probability.
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Environment** & \(\mathbf{k=20}\) & \(\mathbf{k=50}\) & \(\mathbf{k=100}\) \\ \hline
**Point-Robot** & **-4.41\(\pm\)**0.30 & -4.52\(\pm\)0.31 & -4.48\(\pm\)0.28 \\
**Ant-Dir** & 243.1\(\pm\)15.1 & **248.0\(\pm\)**17.2 & 247.7\(\pm\)16.8 \\
**Cheetah-Dir** & 927.2\(\pm\)21.4 & 935.2\(\pm\)16.4 & **936.2\(\pm\)**17.9 \\
**Cheetah-Vel** & -47.6\(\pm\)5.2 & **-45.4\(\pm\)**4.3 & -45.9\(\pm\)4.1 \\ \hline
**Walker-Param** & 360.3\(\pm\)28.7 & 364.8\(\pm\)31.3 & **368.3\(\pm\)**30.6 \\
**Hopper-Param** & 351.5\(\pm\)15.0 & 356.4\(\pm\)16.9 & **360.2\(\pm\)**16.2 \\ \hline \hline
\end{tabular}
\end{table}
Table 8: The comparisons between different denoising steps \(k\). |
2309.08537 | Operational Integration Potential of Regional Uncrewed Aircraft Systems
into the Airspace System | As part of newly developing aviation markets, fixed-wing Uncrewed Aircraft
Systems (UAS) are projected to impact airspace systems and conventional air
traffic in the future. The initial introduction of fixed-wing cargo UAS for
regional operations is anticipated to occur at smaller under-utilized airports.
Therefore, this paper assesses the integration potential of regional fixed-wing
cargo UAS into the airspace system. A baseline is established to identify
potential airports for cargo UAS operations in different areas. Additionally,
using 2022 data, regional aircraft eligible for future cargo UAS operations are
investigated. Finally, the accessibility of these regional aircraft at the
identified airports was analysed. Based on the availability of current
certified landing systems needed for initial UAS operations, potential airports
in the areas Germany, Texas, and California for UAS operations are compared.
Additionally, based on the maximum takeoff weight allowances of airport
runways, current air transport operations at airports, and airspace classes,
individual airports with a high potential for the introduction of initial cargo
UAS operations with and without the availability of landing systems needed for
UAS are identified and compared among the investigated areas. Despite a total
of 173 identified airports for potential UAS operations in Germany, 376 in
Texas, and 231 in California, only eleven of these airports currently have the
certified landing systems needed for initial UAS operations. However, other
landing system technologies that are currently under development, such as
vision-based landing systems, might support UAS accessibility at the identified
airports for potential UAS operations in the future. | Tim Felix Sievers, Jordan Sakakeeny, Nadezhda Dimitrova, Husni Idris | 2023-09-15T16:56:18Z | http://arxiv.org/abs/2309.08537v1 | # Operational Integration Potential of Regional Uncrewed Aircraft Systems Into the Airspace System
###### Abstract
As part of newly developing aviation markets, fixed-wing Uncrewed Aircraft Systems (UAS) are projected to impact airspace systems and conventional air traffic in the future. The initial introduction of fixed-wing cargo UAS for regional operations is anticipated to occur at smaller under-utilized airports. Therefore, this paper assesses the integration potential of regional fixed-wing cargo UAS into the airspace system. A baseline is established to identify potential airports for cargo UAS operations in different areas. Additionally, using 2022 data, regional aircraft eligible for future cargo UAS operations are investigated. Finally, the accessibility of these regional aircraft at the identified airports was analysed. Based on the availability of current certified landing systems needed for initial UAS operations, potential airports in the areas Germany, Texas, and California for UAS operations are compared. Additionally, based on the maximum takeoff weight allowances of airport runways, current air transport operations at airports, and airspace classes, individual airports with a high potential for the introduction of initial cargo UAS operations with and without the availability of landing systems needed for UAS are identified and compared among the investigated areas. Despite a total of 173 identified airports for potential UAS operations in Germany, 376 in Texas, and 231 in California, only eleven of these airports currently have the certified landing systems needed for initial UAS operations. However, other landing system technologies that are currently under development, such as vision-based landing systems, might support UAS accessibility at the identified airports for potential UAS operations in the future.
Uncrewed aircraft systems, UAS, regional air mobility, regional aircraft, air cargo, regional airport
## 1 Introduction
The United States (US) and Europe both have an extensive network of airports and dense airspace. Airspace in the US is denser, on average, and airports are generally busier in terms of flight movements, enplaned passengers, and cargo per airport, than in Europe [1]. Despite the high overall number of flight movements, many US and European airports operate under capacity because travellers and air cargo are consolidated into fewer, larger aircraft on high-traffic routes via major hubs [2]. In fact, only around 0.6% of all airports in the US serve 70% of passenger flights and 1.8% of all airports in Europe are responsible for 50% of air transport services [2, 3]. Moreover, most US and European local and regional airports are increasingly under-utilized [2, 4]. The introduction of next-generation air transport systems, such as fixed-wing Uncrewed Aircraft Systems (UAS), may help to revitalize traffic at these under-utilized airports [5, 6]. UAS are highly automated aircraft without pilots on board, and the most promising initial use case for the development of these increasingly autonomous aircraft systems is expected to be regional air cargo operations [6].
In recent years, congestion at major hub airports, the emergence of electric and other non-conventionally powered aircraft, and a significant pilot shortage in the regional sector have created a desire to revitalize Regional Air Mobility (RAM) and to rethink the typical hub-and-spoke air cargo model [2]. Cargo UAS provide a proving ground for increasingly autonomous technologies because they will be subject to fewer regulations in terms of safety compared to operations that transport passengers without a pilot. These fixed-wing cargo UAS will be either conversions of existing aircraft or new designs. To safely and efficiently integrate these fixed-wing UAS, whether they include new entrant aircraft or conversions, with conventional traffic, it is critical to consider and analyse the environment in which the UAS are operating. This paper aims to answer the questions, "What kind of airports are accessible to regional air cargo aircraft eligible for UAS operations, given current assumptions about technological capabilities? Where and how many of these airports are in the airspace system?" Answering these questions provides an important input to performing studies and simulations that assess the impact of cargo UAS on the airspace system and its different entities.
For the regional cargo UAS use case, it is likely that, initially, existing aircraft will be converted to UAS. Therefore, a previous study to obtain a baseline on current regional air cargo operations in the US and Europe determined three areas (Germany, Texas, and California) as good candidates for initial cargo UAS operations due to their large number of under-utilized airports and importance to the air cargo network. It was also found that turboprop aircraft dominate the regional air cargo network. In this paper, current air traffic and airport data from 2022 for Germany, Texas, and California were analysed to provide a baseline of how the introduction of fixed-wing UAS may evolve and impact airspace systems differently in different areas.
The research shown in the following Sections 1-4 has previously been published in [7]. Section 2 reviews previous work and establishes background differences between US and European airspace. Section 3 describes the derivation of a baseline and the methodology for how that baseline will be used for comparison. Using that baseline, Section 4 compares the potential for identified airports to support UAS operations by distinguishing between different Instrument Approach Procedures (IAP) needed for initial UAS operations and Maximum Takeoff Weight (MTOW) allowances of airport runways. Section 5 assesses individual airports for potential UAS operations based on IAP availability, airspace classes, current air transport operations, and MTOW allowances. Section 6 presents concluding remarks and future work.
## 2 Background and previous work
An airspace system can be considered a network of different entities in controlled and uncontrolled airspace [8]. Among others, entities include airports and aviation services, procedures, and personnel managing the air traffic. When analysing and comparing US and European airspace systems, it is important to consider the different characteristics of each's Air Traffic Management (ATM) systems. The US and European ATM systems have many fundamental similarities in terms of their operational concepts. However, in Europe, 37 different national Air Navigation Service Provider (ANSP) organizations are responsible for different geographic areas, whereas in the US, airspace management is provided by one single national organization, the Federal Aviation Administration (FAA) [1, 9]. Thus, ATM in Europe occurs primarily within individual European country borders. The Single European Sky (SES) initiative was introduced by the European Union (EU) in 2004 to de-fragment the European airspace and jointly improve efficiencies towards safety, performance, technological contribution, human factors, and airport infrastructure [9].
### Differences in airspace classes
EUROCONTROL, on behalf of the EU, regularly publishes a joint report with the FAA on "ATM operational performance comparisons between the US and Europe". The latest report published in 2019 shows that, on average, the density of operations in the airspace of the Conterminous United States (CONUS) is higher than in Europe, because the US controls almost 50% more Instrument Flight Rules (IFR) flights than Europe, even though its airspace is 10% smaller geographically [1]. Table 1 provides a comparison of airspace classes in terms of being controlled by Air Traffic Control (ATC) and the separation services provided, using Germany (GER) as a European example compared to the US [10, 11, 12].
ATC is responsible for providing separation services to aircraft by ensuring minimum separation. In the US, airspace Classes A and B exist in which all flights must be separated by ATC, whereby only IFR flights are permitted in airspace Class A. In the only uncontrolled airspace, Class G, there is no separation of flights by ATC. Furthermore, there are additional rules for separation, such as in Special Visual Flight Rules (SVFR) operations when weather conditions are not within the Visual Flight Rules (VFR) limits [10, 12, 13].
Additionally, Germany operates Radio Mandatory Zones (RMZ), which are specially created for IFR approaches at airports in uncontrolled airspace. The RMZ begins on the ground (GND) and extends to the above bordering airspace Class E, which starts between 1,000 feet and 2,500 feet Above Ground Level (AGL). Within the RMZ, carrying radio communication equipment is mandatory. However, entry into the RMZ does not require ATC clearance, only voice communication capability and a radio listening watch [10].
Within the different airspace classes there are further differences between Germany and the US, such as the altitude AGL to which airspace extends. For example, in the US, Class D typically covers the airspace from GND to 2,500 feet AGL [11]. In Germany, Class D airspace can reach 10,000 feet Mean Sea Level and is utilized as a Controlled Traffic Region (CTR) at 32 public airports and airfields in controlled airspace [10]. In the US, however, Classes B, C, and D are utilized as controlled airspaces around airports depending on the level of flight activities (with Class B airspace being used for the busiest airports). Additionally, some towered airports in Class C or D airspace in the US become non-towered at less traffic-intensive times, such as late evening or night, and move to Class E or G airspace accordingly. For example, Waco Regional Airport (KACT) lies in Class D airspace between 0600-2400 local time and in Class E when the tower is not operating (i.e., from 0000-0600 local time). For simplicity, airports with a physical air traffic control tower receiving separation by ATC will be counted as "towered" in this study, although some airports might not always have this tower operational.
The existence of an air traffic control tower is an important integration factor when it comes to how a remotely piloted UAS flying under IFR will integrate into the terminal airspace surrounding an airport. It is debatable whether initial entry into the airspace will occur at low-traffic towered airports or at non-towered airports. Considering towered airports first, an air traffic controller can provide separation and other services for the UAS and its remote pilot. The process of flying into and out of a towered airport will tend to be more standardized and predictable than at non-towered airports without ATC separation. However, towered airports have a tower because they are busy enough to necessitate the services an air traffic control tower provides. Integrating into a towered airport typically will mean integrating into an environment with more traffic than a non-towered airport. That additional traffic may lead to inefficient UAS operations, should the UAS not be able to integrate with the same performance as conventionally crewed aircraft. Additionally, should the UAS face an off-nominal situation, there is a much higher chance of causing disruptions with other aircraft.
Typically, non-towered airports are less busy than towered airports and therefore aircraft in their terminal area do not receive ATC separation services. Due to the "one in, one out" rule, whereby ATC will only allow one IFR aircraft operating at a non-towered airport at a time, it is guaranteed that there will be only one IFR aircraft, for example the UAS flying in or out of the airport. However, the major integration hurdle at non-towered airports is aircraft flying under VFR, especially non-cooperative VFR traffic that operates with unknown intention and thus will not actively cooperate to resolve a potential conflict. Conventionally crewed aircraft operations utilize the pilot on board to "see and avoid" other traffic. Without a pilot on board, that requirement to "see and avoid" falls to "detect and avoid" systems, which need to have minimal latency to guarantee safe operations. Because VFR aircraft may fly less predictably than IFR aircraft, a larger buffer between Uncrewed Aircraft (UA) and VFR aircraft may be needed than between UA and IFR aircraft. This increased buffer could lead to potentially inefficient integration of UAS, as they may fly a more circuitous routing to mitigate interactions with VFR. An analogy can be found in "self-driving" cars: it is relatively straightforward to automate driving on a highway, as the path is roughly fixed, and the movement of other vehicles is fairly predictable. However, "self-driving" in the city is more difficult because non-cooperatives, such as other cars pulling out of parking spots without looking, have the freedom to do what they will, making operations much more difficult to predict.
### _Differences in network and distribution of airports_
Generally, it can be observed that there are a considerable number of under-utilized airports in the US and Europe, which may be candidates for initial UAS operations. In the US, about 70% of passenger flights are operated from just 30 airports (operated in the relatively busy airspace Class B), although there are over 5,000 public US airports [2]. In Europe, a similar phenomenon exists with over 2,500 less-busy airports [3, 4]. Likewise, air cargo traffic is primarily oriented around hub-and-spoke operations, namely through major international hubs [5, 14, 15]. Smaller airports are responsible for feeder traffic to the hub-and-spoke system or for point-to-point flights, with many of these less-busy airports focused on passenger transport rather than air cargo [5, 15].
Looking at the year 2022, the aforementioned trends of US airports being busier than their European counterparts, as investigated in [1], can be observed by comparing the most recent annual data from Eurostat, the statistical office of the European Union, and the US Bureau of Transportation Statistics (BTS). For commercial flight movements, multiple values, including flight movements with passengers and/or
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline
\multirow{2}{*}{**Airspace class\({}^{a}\)**} & \multicolumn{2}{c|}{**Controlled**} & \multicolumn{2}{c}{**ATC separation**} \\ \cline{2-5}
 & **GER** & **US** & **GER** & **US** \\ \hline
A (Alpha) & - & Yes & - & IFR to IFR (_no VFR traffic_) \\ \hline
B (Bravo) & - & Yes & - & V/IFR to V/IFR \\ \hline
C (Charlie) & Yes & Yes & IFR to V/IFR & IFR to V/IFR \\ \hline
D (Delta) & Yes & Yes & IFR to IFR & IFR to IFR \\ \hline
E (Echo) & Yes & Yes & IFR to IFR & IFR to IFR \\ \hline
G (Golf) & No & No & No & No \\ \hline
\end{tabular}
* a. In addition to these six airspace classes, there are designated airspace areas with limitations and special use, such as for military operations. Unlike some other European countries, Germany has neither Class A nor Class B airspace in operation. France, e.g., operates Class A airspace around its capital, Paris. Class A airspace in the United States is not around airports at all; rather, it incorporates the airspace between 18,000 feet and 60,000 feet.
\end{table}
TABLE I: Comparison of different national airspace classes
cargo on board (all operations1), enplaned passengers, cargo-only flight movements, and enplaned cargo in metric tonnes (t), can be found for the 34 busiest European and US airports in Table 2 [16, 17].
Although Table 2 indicates that the main airports in the US are busier on average, [1] states that Europe's airports have a higher number of IFR flights per active runway and airports operate closer to their capacity limits than in the US. In 2022, 8,302,587 IFR flights were operated in Europe (based on the 27 states of the European Union plus Norway and Switzerland), with 35.8% of IFR flights (2,971,433) in France and 32.7% of IFR flights in Germany (2,712,552) [18]. In the US, 15,416,640 IFR flights were handled by the FAA in FY2022 [19]. 13.7% of the IFR operations in the US took place at just three airports: Atlanta (KATL), Chicago O'Hare (KORD), and Dallas-Fort Worth (KDFW).
Footnote 1: The air cargo on board "all operations" flight movements is any of cargo-only (no passengers transported), belly freight (cargo transported in the lower deck of a passenger aircraft), or combi freight (the main cabin of the aircraft is split into separate passenger seating and cargo areas).
Previous analysis showed that the aircraft flying into the airports likely to be used for the introduction of cargo UAS are small, fixed-wing aircraft, also known as regional aircraft [20]. The term regional aircraft2, in this work, refers to fixed-wing aircraft that have a payload <9 tonnes and a MTOW <25 tonnes, regardless of propulsion type. The analysis of the potential for regional air cargo operations with UAS also showed that most of the domestic3 cargo flight movements by regional aircraft were operated within a flight distance under 1,000 kilometers [20]. 49% of the domestic cargo-only flight movements by all aircraft in Europe and 97% of the domestic cargo-only flight movements by regional aircraft in the US were operated within this flight distance. Likewise, this definition of a regional flight distance is in accordance with NASA's definition of RAM, in which regional flights are conducted in ranges between 50 and 500 nautical miles (93-926 kilometers) [21].
Footnote 3: Note that, in [20], regional aircraft referred to only piston and turboprop aircraft. The term has been expanded to include jet aircraft in this work because there is a strong desire by industry to expand beyond just turboprop aircraft into larger jet aircraft.
The same analysis proved that a higher number of flight movements by smaller regional aircraft in the US (e.g., Cessna 208 Caravan) are used to transport an equal amount of cargo (3.7 versus 3.9 million tonnes) relative to Europe, where a lower overall number of larger turboprop aircraft dominated the regional air cargo domain [20]. Considering regional turboprop aircraft types, larger aircraft are used in Europe, such as the ATR 42, ATR 72, and Embraer EMB 120. Almost 60% of European cargo flight movements were operated over longer regional flight distances between 300 and 700 kilometers. However, in the US, over 60% of cargo flight movements by regional aircraft were operated on flights less than 300 kilometers in flight distance.
Despite its high number of small commercial airports and the highest number of intra- and extra-European cargo flight movements compared to those in any other European country, Germany had fewer than 400 domestic cargo flight movements by regional aircraft in 2021 [20]. Because of the widespread existence of small commercial airports as necessary infrastructure requirements for future UAS operations in the RAM realm [22, 2], Germany can be considered a potential country for the introduction of regional cargo UAS. However, since almost no domestic cargo flights are currently operated in Germany, existing cargo flights can rarely be replaced by UAS at present. Given the benefits of highly automated cargo UAS operations such as increased flexibility in operations and reduced personnel requirements as well as lower costs [23], it can be assumed that regional cargo UAS in Germany might be introduced via additional regional cargo operations on new flight routes.
The same analysis has shown that California and Texas appear to be well suited for regional fixed-wing cargo operations in the US [20]. California, a large, populous state in the western US of similar size to Germany, and Texas, another large, populous state, in the south-central region of the US, have a similar percentage (~15%) of intra-state cargo flight movements being performed by regional aircraft (i.e., eligible for potential UAS replacement). Both Texas and California also have important large cargo sorting hubs. However, the share of airports by sizes relevant for cargo UAS operations is different in the two US states. California has a high share of small4 airports (73, more than any other US state, except for Alaska5) whereas Texas has the highest share of medium-sized airports (that Eurostat refers to as other airports) compared to any other US state. These other airports, being busier than small airports, may present more challenges with respect to the integration of cargo UAS. In this context, according to Eurostat, Germany has 141 small public, commercial airports with the majority being under-utilized [16]. Germany, Texas, and California are relatively busy in terms of total number of cargo flight movements compared to other US states and European countries (see Table 3).
Footnote 5: According to Eurostat, small airports are defined as airports with <15,000 annual passenger units (where one passenger unit corresponds to either one passenger or 100 kilograms of cargo); other airports have between 15,000 and 150,000 annual passenger units, and main airports >150,000.
Footnote 6: While Alaska is a potentially very interesting use case for cargo UAS, the choice was made to study in-depth only states in the CONUS, as those results would likely be more applicable to other US states.
\begin{table}
\begin{tabular}{l|r|r} \hline
**Median value at main airports** & **Europe** & **United States** \\ \hline
All operations flight movements\({}^{a}\) & 140,566 & 300,489 \\ \hline
Enplaned passengers & 18,752,120 & 30,750,214 \\ \hline
Cargo-only\({}^{b}\) flight movements & 4,433 & 9,906 \\ \hline
Enplaned cargo on board cargo-only flights (t)\({}^{b}\) & 141,206 & 198,554 \\ \hline
\end{tabular}
* a. Flight movements refer to the sum of an arrival and a departure for all national and international commercial flights, both scheduled and non-scheduled.
* b. Cargo consists of both freight and mail. Cargo-only flights have no passengers on board the aircraft.
\end{table}
Table 2: Median values based on 34 busiest main airports by commercial flight movements in 2022
Likewise, the investigated areas have a significant share of less-busy airports relevant for the introduction of initial UAS operations, which Eurostat refers to as small and other airports. However, Germany has a comparatively low share of domestic cargo flights by regional aircraft that have the potential to become UAS by replacing current flight routes. California and Texas, on the other hand, might be prime locations with the required airport infrastructure as well as current air cargo routes for replacement by UAS [20].
## 3 Methodology of the analysis of airspace system characteristics
The methodology section describes the baseline that is applied to identify potential airports for UAS operations in different areas. The current certified landing systems needed for initial UAS operations at the potential airports are introduced before concluding with the data sources used for this study.
### Derivation of a baseline for analysis
To assess how the introduction of UAS may evolve and impact airspace systems in different areas, a baseline of accessible airports for potential UAS operations needs to be identified. In the first step, potential airports are defined based on the air transport services they provide. In the second step, potential airports are classified based on their annual number of IFR flight movements to identify less busy airports. Finally, a maximum on the number of flight movements at an airport is applied to provide a baseline of potential airports for the introduction of UAS in different areas. This methodology was applied to airports in Germany, Texas, and California.
In Germany, airports and airfields are collectively referred to as aerodromes by the German ANSP, Deutsche Flugsicherung (DFS). Here, DFS distinguishes between airports, which "require protection by a construction protection area in accordance with § 12 of the Air Traffic Act", and airfields, which do not. The construction protection area ensures that the construction of buildings within a 1.5-kilometer radius around the airport reference point, as well as on the takeoff and landing areas and safety areas, requires approval by the aviation authority [24]. In this paper, for simplicity and to better align with FAA terminology, both airfields and airports will be referred to as airports.
It can be assumed that the introduction of cargo UAS will initially occur at publicly accessible airports with less busy air transport services [22]. Public airports are open for public access and do not require individual operating permissions from the airport operator as private airports do, which likely increases the flexibility of air transport operations by cargo UAS. Due to this factor and the added difficulty of interacting with military aircraft, private4 airports, as well as military and military-public joint-use airports, are excluded from consideration. Therefore, only public airports will be analysed. Public airports can be further distinguished by whether they provide commercial and/or non-commercial air transport services. Eurostat defines commercial air transport operators and commercial purposes as "scheduled or non-scheduled air transport services, or both, which are available to the public for carriage of passengers, mail, and/or cargo" [25]. The FAA defines airports with "commercial services" as airports that are publicly owned "with at least 2,500 annual enplanements and scheduled air carrier service" [26]. In this study, the term public airport will refer to airports that are publicly accessible (regarding potential UAS operations), regardless of whether the airport currently has commercial air transport operations. For example, Heringsdorf (EDAH), despite its relatively few (688) IFR flight movements in 2022, is a public airport because it is publicly accessible for use by both commercial and general aviation aircraft [10].
Footnote 4: German airports are distinguished by their type of operating obligation. German airports with no operating obligation (because they are privately owned) are called special airports and special airfields. Only the operator and, upon request, third parties are allowed to operate on them.
According to DFS, Germany operates 15 towered International Airports, of which four serve as so-called Hub airports, six as International Access Airports 1 (IAA1), and five as International Access Airports 2 (IAA2). In addition to the 15 towered International Airports, DFS defines 20 more towered airports as Regional Airports [27]. In 2022, the four German Hub airports, including Berlin (EDDB), Frankfurt (EDDF), Dusseldorf (EDDL), and Munich (EDDM), had a median of 222,483 IFR flight movements, followed by the IAA1 with a median of 77,145 annual IFR flight movements. In total, the Hub Airports and the IAA1 accounted for 87.7% of all annual IFR flight movements of all the towered airports in Germany. Looking at the IFR flight movements at IAA1 airports, Cologne/Bonn (EDDK) was the busiest IAA1 airport (119,117) and Nuremberg (EDDN) the least busy (35,714). The IAA2 had a median of 11,909 annual IFR flight movements, with the greatest number of annual IFR flight movements operated at Bremen (EDDW) with 19,423 IFR flight movements and Erfurt (EDDG) as the least busy with 2,865 annual IFR flight movements. The subsequent category of airports by DFS are so-called Regional Airports, with a median of 6,483 annual IFR flight movements in 2022. The most IFR flights operated at a Regional Airport was at Dortmund (EDLW) with 21,476 annual IFR flight movements; the fewest IFR flight movements operated at a Regional Airport was at Schwerin-Parchim (EDOP), with just one single annual IFR flight movement.
For the US, the FAA distinguishes between primary airports classified as Hub (large, medium, and small) and Non-hub airports, as well as between non-primary airports classified as National, Regional, Local, Basic, and Unclassified (limited activity) airports [26]. Primary airports are airports with commercial services that handle more than 10,000 passenger boardings annually. The categorization of US airports also includes special facilities such as seaplane
\begin{table}
\begin{tabular}{l|c|c|c} \hline
**Air cargo flight movements** & **Germany** & **Texas** & **California** \\ \hline
Total\({}^{a}\) by all aircraft & 157,764 & 98,007 & 178,792 \\
Intra-state\({}^{b}\) by all aircraft & 15,816 & 44,504 & 138,180 \\ \hline
Total\({}^{a}\) by regional aircraft & 9,870 & 18,575 & 28,370 \\
Intra-state\({}^{b}\) by regional aircraft & 392 & 15,026 & 27,952 \\ \hline
\end{tabular}
* a. Refers to flight movements within the US and to intra- and extra-European cargo flight movements.
* b. Intra-state refers to flight movements within a US state and within Germany.
\end{table}
Table 3: Air cargo flight movements in 2021 [20]
bases or heliports, though those are excluded from this analysis. Additionally, as in Germany, the US operates military-civil joint-use airports, which, as discussed previously, will be excluded.
In this study, the term potential UAS airports, or P2 airports for short, is used to establish a listing of airports to which cargo UAS might fly. P2 airports include and refer to: 1) Public towered airports with annual IFR flight movements percentages under 2.2% for the given area (country/state) and 2) Public non-towered airports.
The <2.2% threshold was selected because the least busy IAA1 airport (EDDN) had 2.2% of the total annual IFR flights in Germany. Using this cutoff includes the five towered IAA2 (all public) and the 20 towered Regional Airports (17 public), as defined by DFS. The towered airports that receive <2.2% of the annual IFR traffic were selected because it is unlikely that initial UAS operations will occur at the busier airports (>2.2% of IFR flight movements). Rather, it is more likely that initial UAS operations will take place at less busy airports. Additionally, there are numerous airports in Germany that are non-towered and for which there is no record of IFR and VFR flight data provided by DFS. It can be assumed that these non-towered airports have fewer flight movements than the towered airports, and thus they are also included in the definition of P2 airports in this study. Following these assumptions, there are 173 P2 airports (22 towered) out of 183 public airports (32 towered) in Germany.
In Texas, there are a total of 2,080 airports (383 of which are public use) with 210 commercial airports included in the National Plan of Integrated Airport Systems (NPIAS) (47 being towered). California has a total of 899 airports (242 available for public access), with 188 commercial airports included in the NPIAS (55 being towered). Applying the <2.2% cutoff for towered US airports, Texas has 376 P2 airports (40 being towered) and California has 231 P2 airports (44 being towered) [28]. Similar to Germany, a significant share of current IFR flight movements is operated at the airports with annual IFR flight movement percentages >2.2% in Texas (72.4%) and in California (78.5%) [29]. For the year 2022, Fort Worth Alliance (KAFW) was the busiest P2 airport in Texas with 48,119 annual IFR flight movements and Palm Springs International (KPSP) was the busiest P2 airport in California with 47,982 annual IFR flight movements [29].
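In effect, this baseline amounts to a simple filter over airport records. A minimal sketch is given below, where the record fields (`is_public`, `is_towered`, `ifr_movements`) and the handling of unrecorded movement counts are illustrative assumptions rather than the exact data schema used in this study.

```python
def p2_airports(airports, threshold=0.022):
    """Sketch: select potential UAS (P2) airports for one area.

    airports: list of dicts with keys 'is_public', 'is_towered', and
              'ifr_movements' (annual IFR flight movements, taken as 0
              where no data are recorded, as at most non-towered fields).
    """
    total_ifr = sum(a["ifr_movements"] for a in airports)
    return [
        a for a in airports
        if a["is_public"] and (
            not a["is_towered"]                            # all public non-towered
            or a["ifr_movements"] / total_ifr < threshold  # towered, <2.2% of area IFR
        )
    ]
```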
### _Introduction of current certified landing systems for initial UAS operations_
IAP are used to land in Instrument Meteorological Conditions, in which visual landing is not possible. It is anticipated that UAS will utilize IAP to land at airports. However, no regulations yet exist that specify required IAP for UAS. Regulations and standards regarding UAS automatic landing capabilities and technologies will need to be put forth before UAS can fly routine operations. Nonetheless, when integrating UAS into the airspace system, it is important to consider other air traffic participants in the airspace as well as the availability of enabling procedures and technologies for initial UAS operations, such as needed IAP present at airports.
RTCA, Inc. highlights the need for automatic landing systems for UAS in its Guidance Material and Considerations for Unmanned Aircraft Systems (RTCA DO-304A, Section 2.4.6) [30]. Although automatic landing systems not based on ground-based navigational aids would provide the most operational freedom for UAS, Instrument Landing System (ILS) Category (CAT) III systems are the only current systems that enable automatic landing8 in nominal operations. Although no US operator has received approval for ILS CAT IIIc, with a decision height of 0 feet and a runway visual range of 0 feet, it is nonetheless the only regulatory path to automatic landing at present [31]. Therefore, until such time as alternative systems are developed and certified, it is assumed that for future UAS operations at airports, the most likely current IAP for UAS is ILS CAT III, even if the existing regulations need to be adapted for UAS. Other landing systems, such as vision-based landing systems [32], are also in development, and existing Global Positioning System (GPS) landing systems are in use in limited situations, but do not currently meet civilian aviation safety standards. Therefore, only currently certified systems are considered in this work [33]. ILS CAT III are the most stringent IAP that exist today and require the highest level of technology of all the IAP. For ILS CAT III approaches, automatic landing systems and rollout control systems are needed to control the approaching aircraft. For more information about ILS categories, see [33].
Footnote 8: To operate in true zero visibility conditions, surface operations, such as taxiing, also need to be automated.
However, ILS, especially CAT III systems, do have their downsides. They are expensive to implement and maintain, and they only serve a single runway end. As such, they are not installed at many airports (only 68 throughout the US [30]). Far more common are the less stringent CAT I (decision height >200 feet) and CAT II (decision height 100-200 feet) ILS. Another class of systems already in use that can be considered for future airport accessibility of UAS are Ground Based Augmentation System (GBAS) Landing Systems (GLS) [34]. GLS generally need only one installation per airport. Once installed, the Global Navigation Satellite System localizer works for all runways, making it a cheaper system to install, maintain, and upgrade than ILS [35]. Of course, aircraft must be equipped with the necessary on-board systems to utilize GLS (the same is true for ILS). The categories (CAT I, II, and III) of GLS are the same as for ILS, though only CAT I and II are operational as of this writing.
Of the five different landing systems, ILS CAT I, II, and III and GLS CAT I and II, the latter three are considered UAS IAP insofar as they provide a higher potential for utilization by UAS operations. ILS CAT III is included because it is the highest-level IAP currently in use. The GLS approaches are included because they can be upgraded to CAT III more easily than ILS, once CAT III systems become available [36]. According to a SESAR estimate, full GLS rollout at airports across Europe may be achieved as early as 2036 [37]. Based on the availability of UAS IAP, this study further distinguishes between 1) P2 airports providing UAS IAP (P2W airports) and 2) P2 airports without UAS IAP (P2N airports). Thus, the airport types in this paper are as follows, with a classification sketch after the list:
1. **P2 Airports**: Potential UAS airports (those airports that are public use and have <2.2% of the area's IFR flight movements)
2. **P2W Airports**: P2 airports with UAS IAP (i.e., ILS CAT III or GLS CAT I/II)
3. **P2N Airports**: P2 airports without UAS IAP
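Operationally, this classification reduces to checking each P2 airport's published approach procedures against the set of UAS IAP. A minimal sketch follows, where the `procedures` field is an illustrative assumption about how the airport records are represented.

```python
UAS_IAP = {"ILS CAT III", "GLS CAT I", "GLS CAT II"}

def classify_p2(airport):
    """Sketch: label a P2 airport P2W if it publishes at least one UAS IAP,
    and P2N otherwise. airport['procedures'] is assumed to be a set of IAP
    strings, e.g. {'ILS CAT I', 'ILS CAT III'}."""
    return "P2W" if airport["procedures"] & UAS_IAP else "P2N"
```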
P2W airports have a higher potential to be initially utilized for UAS operations than P2N airports. Here, P2N airports refer to all other airports that do not currently have ILS CAT III or GLS in place, regardless of whether they provide any ILS. However, P2N airports will still be considered for future UAS operations, as they could be retrofitted with required UAS IAP at any time. Additionally, there will likely be further technological advancements that could enable UAS accessibility at these P2N airports.
### Data sources
The data on operational airports in Germany were accessed from the Aeronautical Information Publication Germany from DFS, which has been publicly accessible since January 2023 [10]. In addition to general national regulations and requirements, specific information on airports and air navigation services can be retrieved. For this paper, information was collected about the name and operational type of each airport, the availability of IAP, the aircraft permitted by MTOW at the airports, and the hours of operation for all available German operating airports. Additional data on individual German airports were accessed from DESTATIS, the German Federal Statistical Office [38, 39].
For the US, airport and runway data (e.g., landing systems available, runway weight restrictions) were gathered from the FAA's National Airspace System Resource [40]. Airport classification information was obtained from the FAA's NPIAS [41]. IFR movement counts at lowered airports were sourced from the FAA's Operations Network database [29].
The statistics on commercial flight movements by regional aircraft for Europe and Germany were retrieved from Eurostat [16]. Here, a commercial flight movement represents the sum of the arrival and departure of an aircraft at an airport. In this context, specific data of the year 2022 on all domestic (i.e., flight movements within Germany) and international (i.e., flight movements between Germany and another country) flight movements for passenger and cargo air transports were analysed. The data for domestic European flight movements include data for 35 European countries, although complete data were not available for every country. Note that domestic operations within a European country can also be referred to as "intra-state" flight movements. Such intra-state flight movements for the US indicate a flight within a single US state, whereas domestic US flight movements could move between any US state or territory.
Statistics for flight movements in the United States1 and individual airports in Texas and California were sourced from the BTS T-100 Segment data [17]. BTS data combine segment data by aircraft type, origin, destination, and airline. The data denote the number of passengers, the amount of freight, and the amount of mail per segment. Flight movements with both origin and destination outside the US are excluded from the BTS data. Generally, the flight movement values at airports calculated from the BTS data will be lower than those shown in the FAA Operational Network because only airlines with annual operating revenues of 20 million USD or more are included in the BTS data, so some smaller airlines are excluded from the database and thus this study.
Footnote 1: Unless otherwise specified, data for the United States includes Puerto Rico and other US territories. A flight from Miami, Florida, to San Juan, Puerto Rico, for example, would be counted as domestic.
## 4 Analysis of UAS Accessibility Potential
This section focuses on the airspace system accessibility of flights eligible for UAS operations based on availability of UAS IAP. The potential to use UAS for regional aircraft at the identified P2 airports is discussed.
### Availability of IAP at airports
Table 4 shows the count of all public and non-public airports (excluding military use airports) and P2 airports, sorted by towered and non-towered, in Germany, Texas, and California that are equipped with different categories of ILS/GLS procedures. Airports that provide multiple ILS/GLS procedures are counted in all applicable categories.
In Germany, a total of 41 airports have ILS/GLS approach procedures. An ILS CAT III approach is available at 20 airports. In addition to ILS CAT III, two German airports, Bremen (EDDW) and Frankfurt (EDDF), provide GLS CAT I procedures. Additionally, Frankfurt is the only German airport with GLS CAT II [35]. The only airports in California and Texas that have GLS procedures (CAT I at both) are Houston George Bush (KIAH) and San Francisco (KSFO).
Texas and California have about the same number of airports with ILS availability as Germany (see Table 4). The two US states have more P2 airports with ILS/GLS availability than Germany (36 in Texas and 36 in California versus 24 in Germany). However, Germany has more P2 airports providing UAS IAP (one in Texas and one in California versus nine in Germany).
### UAS accessibility potential for regional aircraft at P2 airports
In the previous analysis on the potential of regional air cargo operations for UAS [20], regional aircraft with turboprop engines were the focus of the investigation. In the US, the Cessna 208 Caravan aircraft was the dominant cargo-only aircraft with more than 83% of domestic US
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline
\multirow{2}{*}{**ILS/GLS availability**} & \multicolumn{3}{c|}{**Count of airports** (towered / non-towered)} \\ \cline{2-4}
 & **Germany** & **Texas** & **California** \\ \hline
**Total at all airports** & **35 / 6** & **38 / 5** & **37 / 8** \\ \hline
ILS CAT I & 27 / 6 & 38 / 5 & 37 / 8 \\ \hline
ILS CAT II & 3 / 0 & 7 / 0 & 9 / 0 \\ \hline
ILS CAT III (_UAS IAP_) & 20 / 0 & 5 / 0 & 6 / 0 \\ \hline
GLS CAT I (_UAS IAP_) & 2 / 0 & 1 / 0 & 1 / 0 \\ \hline
GLS CAT II (_UAS IAP_) & 1 / 0 & 0 / 0 & 0 / 0 \\ \hline
**Total with UAS IAP at all airports** & **20 / 0** & **5 / 0** & **6 / 0** \\ \hline
**Total at P2 airports** & **20 / 4** & **31 / 5** & **28 / 8** \\ \hline
ILS CAT I & 17 / 4 & 31 / 5 & 28 / 8 \\ \hline
ILS CAT II & 2 / 0 & 1 / 0 & 3 / 0 \\ \hline
ILS CAT III (_UAS IAP_) & 9 / 0 & 1 / 0 & 1 / 0 \\ \hline
GLS CAT I (_UAS IAP_) & 1 / 0 & 0 / 0 & 0 / 0 \\ \hline
**Total at P2W airports** & **9 / 0** & **1 / 0** & **1 / 0** \\ \hline
\end{tabular}
\end{table}
Table 4: Availability of ILS/GLS procedures at airports
cargo flight movements in 2021. In Europe, the ATR 42, ATR 72, and Embraer EMB 120 aircraft account for more than 94% of domestic European cargo flight movements by regional aircraft in 2021. Discussions with industry experts indicated that, in addition to regional turboprop aircraft, larger regional jet-powered aircraft may also be considered for UAS operations. Previous research by the German Aerospace Center (DLR) investigated the development and validation of a concept for the operation of unmanned cargo as part of the "Unmanned Freight Operations" (UFO) project between 2014 and 2017 [42]. In that work, different aircraft were analysed covering three use cases: express freight (Boeing 777F), company internal transport (Cessna 208), and disaster relief flights (no specific aircraft type). However, as discussed in Section 2.2, current efforts focus on using fixed-wing aircraft in the RAM realm at relatively small and under-utilized airports that typically do not service widebody aircraft such as a Boeing 777F. Hence this study was limited to regional aircraft, as defined in Section 2.2.
#### 4.2.1 Types of regional aircraft eligible for UAS
It was assumed that domestic flights have the highest potential for initial UAS operations because different countries are likely to have different regulations regarding UAS operations. Table V provides an overview of aircraft types used for domestic flight movements at P2 airports [16, 17].
In Table V, domestic cargo-only flight movements and flight movements with passengers and/or cargo on board (all operations) are compared. The regional aircraft in the table have turboprop engines, unless labelled (piston) or (jet). Note here that the data are at the domestic level to give a more general picture of what types of regional aircraft are operating within different European countries versus the US. Significant differences in the total number of flights within European countries and the US are partially due to not counting flights between European countries.
For domestic cargo-only flight movements in Europe, three turboprop aircraft types (ATR 42, ATR 72, and Embraer EMB 120) are again as dominant as in the previous 2021 analysis, with a combined total of just under 90% of the operations. In fact, the only jet aircraft type with a notable number of domestic cargo-only flight movements is the Bombardier CL-600 (Bombardier Challenger 600) aircraft, which accounts for 8.7% of the operations in Europe (and 15% of all domestic operations in Europe). Cargo-only regional jet aircraft usage is even rarer in the US. Only 0.6% of cargo-only flights are operated by a single type of regional jet (Canadair RJ200). Conversely, the common aircraft in the US, the Cessna 208/208B and 402 or Beech 18 aircraft (see footnote d. in Table V), are not used in Europe. Nonetheless, these regional aircraft types combined account for a significant share (68.7%) of cargo-only operations in the US.
Looking at the engine type of regional aircraft, Table VI shows significant differences by type of operation between regional jet aircraft and regional turboprop/piston aircraft (termed prop in Table VI) [16, 17].
#### 4.2.2 IAP availability at P2 airports
Table IV shows that all P2W airports are towered across Germany, Texas, and California. Yet, non-towered airports are far more numerous than towered airports (see Section 3.1). To assess the availability of ILS/GLS (all CATs) and UAS IAP (only ILS CAT III and GLS CAT I and II), Table VII breaks down the IAP by class of airspace and presence of an air traffic control tower (towered) at P2 airports.
Table VII shows that Germany has a significant number of regional airports in uncontrolled Class G airspace. However, of these 151 non-towered P2 airports, only four provide ILS procedures, and none have UAS IAP. There exist 22 towered P2 airports in controlled airspace, 20 of which have ILS or GLS (nine with UAS IAP).
In the two US states analysed, Texas has 62.8% more P2 airports than California. Moreover, Texas has 117.3% more P2 airports than Germany. Looking at the share of non-towered airports, the results are again similar: Texas has 79.7% more non-towered P2 airports than California and 122.5% more than Germany. Both US states have only one P2W airport (Fort Worth Alliance, KAFW, in Texas and Fresno Yosemite International, KFAT, in California).
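These relative differences follow directly from the P2 airport counts reported in the conclusion (173 for Germany, 376 for Texas, 231 for California). The short Python check below is our own illustration, not part of the study's tooling:

```python
# Verify the "X% more P2 airports" comparisons from the P2 airport counts
# reported in Section 6 (173 Germany, 376 Texas, 231 California).
counts = {"Germany": 173, "Texas": 376, "California": 231}

def pct_more(a: str, b: str) -> float:
    """Percentage by which area a's P2 airport count exceeds area b's."""
    return (counts[a] / counts[b] - 1.0) * 100.0

print(f"Texas vs. California: {pct_more('Texas', 'California'):.1f}% more")  # 62.8%
print(f"Texas vs. Germany:    {pct_more('Texas', 'Germany'):.1f}% more")     # 117.3%
```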
The visualization of all public airports, with P2 airports denoted by a circle, including their IAP configurations, is shown in Figs. 1-3. For each public airport, the highest available IAP category is indicated, with GLS being higher than ILS.
Figs. 1-3 show that many of the smaller airports are located close to the areas where larger airports provide ILS CAT III and/or GLS, near the relatively larger cities. In Germany, there is a relatively high density of P2 airports in the west of the country, in the Rhine-Main region around Frankfurt (EDDF) and Cologne/Bonn (EDDK). In Texas, airport density around the metropolitan areas of Dallas-Fort Worth (KDFW), Austin (KAUS), and Houston (KIAH) is higher. California shows a similar picture, where the density of smaller P2 airports increases around the metropolitan areas of Los Angeles (KLAX), San Francisco (KSFO), and Sacramento (KSMF).
#### 4.2.3 Discussion of UAS accessibility potential for regional operations
After identifying regional aircraft types eligible for UAS operations and P2 airports in Germany, Texas, and California in the previous section, the next step is to analyse and discuss the accessibility potential of these regional aircraft at these P2 airports. For this analysis, regional aircraft are classified based on their operational empty weight (OEW11) and MTOW in tonnes (t). As regional aircraft carry a wide variety of payload tonnages, the range between OEW and MTOW was considered for the UAS accessibility assessment to give a feasible range. According to a regional cargo industry expert, regional aircraft are often volumetrically filled before the aircraft's MTOW is exceeded. Therefore, if the OEW or MTOW of an aircraft is less than or equal to the rated gross weight capacity of the airport runway for the aircraft's wheel configuration, it was included in the accessibility assessment of the respective airport at that weight. UAS accessibility of regional aircraft is differentiated between the total number of P2 airports as well as between towered (twrd) and non-towered (ntwrd) P2 airports.
Footnote 11: The OEW is the empty weight of an aircraft plus operational items including supplies necessary for full operations such as airline equipment and engine oil. Usable fuel that is needed to power the aircraft engines and the actual aircraft payload are excluded from the OEW.
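The accessibility rule just described reduces to a per-airport weight comparison. The Python sketch below illustrates it; the data class, field names, and sample capacities are our own illustrative assumptions, with the ATR 72 weights (OEW 11.80 t, MTOW 23.00 t) taken from Table VIII:

```python
# Sketch of the UAS accessibility rule: an aircraft counts as able to serve an
# airport when its weight (OEW for a light mission, MTOW when fully loaded)
# does not exceed the runway's rated gross weight capacity.
from dataclasses import dataclass

@dataclass
class Airport:
    name: str
    rated_capacity_t: float  # rated gross weight capacity of the runway (t)
    towered: bool            # twrd vs. ntwrd differentiation used in Table VIII

def accessible_counts(airports, oew_t, mtow_t):
    """Return (count at MTOW, count at OEW), matching the 'MTOW (OEW)' cells."""
    at_mtow = sum(1 for a in airports if mtow_t <= a.rated_capacity_t)
    at_oew = sum(1 for a in airports if oew_t <= a.rated_capacity_t)
    return at_mtow, at_oew

# Illustrative capacities only; real values come from AIP/airport databases.
sample = [Airport("A", 25.0, True), Airport("B", 12.0, False), Airport("C", 5.7, False)]
print(accessible_counts(sample, oew_t=11.80, mtow_t=23.00))  # ATR 72 -> (1, 2)
```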
Table VIII provides an overview of the most widely used regional aircraft types in Europe and the US (see Table V) that are likely to be eligible for UAS operations and their accessibility potential at P2 airports. The OEW and MTOW in tonnes of each aircraft are listed after the regional aircraft types. The metrics were used for the following regional aircraft type variants: ATR 42-600 (ATR 42) [43], ATR 72-600F (ATR 72) [44], Bombardier Challenger 650 (CL-600) [45], Bombardier DHC-8 Q200-100 (Dash 8-100) [46], Embraer EMB 120 Brasilia (EMB 120) [47], Embraer ERJ 145 EP (ERJ 145) [48], Cessna 208 Caravan with cargo pod (C 208) [49], Cessna 208B Grand Caravan with cargo pod (C 208B) [50], and Canadair RJ200 ER (CRJ200) [51].
Taking the ATR 72, with an OEW of 11.80 tonnes and an MTOW of 23.00 tonnes, as an example, this regional aircraft type can serve a total of 36 to 61 German12 P2 airports, depending on how much usable fuel and payload is carried. Based on the rated gross weight capacity of the runways, 61 P2 airports allow an aircraft weight of >10.50 tonnes (with the next higher airport MTOW allowance being 12.00 tonnes) and 36 P2 airports allow an aircraft weight of >20.00 tonnes (with the next higher airport MTOW allowance being 25.00 tonnes), at which the ATR 72 would be allowed to operate in Germany. For each regional aircraft type analysed in Table VIII, the accessible German P2 airports (out of 173 in total) include all 20 towered P2 airports with ILS/GLS, nine of which have a UAS IAP.
Footnote 12: Some of the German airports impose operation hours and permits for MTOW operations. Upon request (PPR: Prior Permission Required), airports can be opened for air transport services outside of normal operating hours and for MTOW operations.
For the comparatively smaller regional aircraft types that are only used in the US for air cargo operations (e.g., Cessna 208), Table VIII also indicates the number of German P2 airports that are eligible for fixed-wing UAS operations. However, it is not clear at present whether such aircraft would be utilized for cargo operations in Germany or Europe in the future.
Overall, the analysis of current IFR flight movements in Section 3.1 shows that most of the flights are not operated at P2 airports today. The ten German towered airports that are not considered as P2 airports (Hub and IAA) account for 87.7% of all annual IFR flight movements [27]. Similarly, a significant share of all IFR flight movements is operated at airports not considered as P2 airports in Texas (72.5%) and in California (78.7%) [29]. IFR flights are heavily concentrated at a few, large airports, supporting the assumption that there exist many under-utilized airports, many of which can be considered for initial UAS operations. Looking at the regional aircraft analysed, there are numerous different P2 airports in the investigated areas where an initial integration of fixed-wing UAS into the airspace system could be realized. Depending on the actual operating weight of the investigated regional aircraft based
| Aircraft type: MTOW (OEW) | Airports | Germany | Texas | California |
| --- | --- | --- | --- | --- |
| **ATR 42**, 18.60 t (11.75 t) | **total** | **40 (61)** | **66 (73)** | **75 (104)** |
| | twrd | 20 (21) | 36 (36) | 36 (37) |
| | ntwrd | 20 (40) | 30 (37) | 39 (67) |
| **ATR 72**, 23.00 t (11.80 t) | **total** | **36 (61)** | **56 (73)** | **67 (100)** |
| | twrd | 20 (21) | 35 (36) | 35 (36) |
| | ntwrd | 16 (40) | 21 (37) | 32 (64) |
| **CL-600**, 21.86 t (12.32 t) | **total** | **36 (59)** | **62 (73)** | **72 (100)** |
| | twrd | 20 (21) | 36 (36) | 36 (36) |
| | ntwrd | 16 (38) | 26 (37) | 36 (64) |
| **Dash 8-100**, 16.47 t (10.48 t) | **total** | **40 (72)** | **72 (73)** | **75 (104)** |
| | twrd | 20 (22) | 36 (36) | 36 (37) |
| | ntwrd | 20 (50) | 36 (37) | 39 (67) |
| **EMB 120**, 11.50 t (7.07 t) | **total** | **61 (76)** | **73 (74)** | **77 (118)** |
| | twrd | 21 (22) | 36 (36) | 36 (38) |
| | ntwrd | 40 (54) | 37 (38) | 41 (80) |
| **ERJ 145**, 20.99 t (11.95 t) | **total** | **36 (61)** | **63 (73)** | **72 (100)** |
| | twrd | 20 (21) | 36 (36) | 36 (36) |
| | ntwrd | 16 (40) | 27 (37) | 36 (64) |
| **C 208**, 3.63 t (2.21 t) | **total** | **148 (158)** | **267 (279)** | **192 (200)** |
| | twrd | 22 (22) | 37 (38) | 44 (44) |
| | ntwrd | 126 (136) | 230 (241) | 148 (156) |
| **C 208B**, 4.00 t (2.41 t) | **total** | **146 (158)** | **266 (278)** | **192 (199)** |
| | twrd | 22 (22) | 37 (38) | 44 (44) |
| | ntwrd | 124 (136) | 229 (240) | 148 (155) |
| **CRJ200**, 23.13 t (13.84 t) | **total** | **36 (59)** | **56 (72)** | **57 (67)** |
| | twrd | 20 (21) | 35 (36) | 33 (35) |
| | ntwrd | 16 (38) | 21 (36) | 24 (32) |

Each cell gives the count of accessible potential UAS airports at the aircraft's MTOW, with the count at its OEW in parentheses.

TABLE VIII: P2 airport accessibility by aircraft types eligible for UAS
on its individual mission, a maximum of 158 P2 airports, mainly accessible by smaller turboprop aircraft (e.g., Cessna 208), and a minimum of 36 P2 airports would be accessible for fixed-wing UAS operations in Germany. In the US, a maximum of 279 and 200 P2 airports in Texas and California, respectively, would be accessible, again, mainly by smaller turboprop aircraft (e.g., Cessna 208/208B). On the other hand, a minimum of 56 and 57 P2 airports in Texas and California, respectively, would be accessible by regional aircraft. In this context, the share of P2 airports located in controlled and uncontrolled airspace varies. All three areas investigated have more P2 airports in uncontrolled airspace (non-towered airports) that are eligible for initial UAS operations.
## 5 Analysis of individual high P2 airports
This section investigates and compares individual airports in Germany, Texas, and California that have the highest potential to be utilized as P2 airports for the initial introduction of cargo UAS operations, based on their runway MTOW allowances and current air transport operations. Here, both P2W and P2N airports can be considered as high P2. High P2N airports might need to be retrofitted with UAS IAP or other landing technologies first (thereby making that airport a high P2W airport) to enable widespread cargo UAS operations.
### _Current operations at (non-)P2 airports_
As introduced in Section 3.2, P2W airports are likely to have a higher potential to be utilized for initial UAS operations than P2N airports. Nine P2W airports provide UAS IAP in Germany, while Texas and California have one such airport each (Table IV). Given the relatively low number of airports that have the potential to be used for initial UAS operations with currently certified landing systems in Germany and the two US states, many P2N airports will need a retrofit of UAS IAP or other landing technologies in the future to enable widespread fixed-wing UAS operations. P2N airports could be upgraded with certified landing technologies such as ILS CAT III or GLS, as well as with landing technologies that are currently under development, such as vision-based landing systems. In addition to P2W airports, P2N airports with appropriate runway MTOW allowances and commercial air cargo operations (that could be replaced by cargo UAS, for example) are defined as high P2 airports having a high potential for the introduction of initial UAS operations (i.e., high P2N airports) in the following sections.
Table IX gives an overview of the commercial air transport operations at the main airports (non-P2 airports because they have annual IFR flight movement percentages >2.2%), the P2W airports, the P2N airports, and all other airports. The commercial air transport operations at these airports are distinguished by enplaned cargo in tonnes and enplaned passengers handled during annual flight movements, as well as by all-operations flight movements (all ops flight mov) in 2022 [17, 38, 39].
In Germany, over 90% of enplaned cargo and passengers are handled at the ten main airports. Accordingly, in Germany, between 4 and 5% are handled at the nine P2W airports. A similar picture is seen in Texas and California, where over 78% of enplaned cargo is operated at seven main airports in Texas and over 92% is handled at eleven main airports in California.
Whereas the absolute numbers of enplaned passengers and all-operations flight movements at the main airports are relatively comparable among the three investigated areas, the absolute amount of enplaned cargo in tonnes varies among the areas. Germany (4.92 million tonnes) and California (4.86 million tonnes) have a similar amount of enplaned cargo handled at their main airports, more than double that of Texas (1.85 million tonnes). Note that the US numbers may be undercounted because Ameriflight, a major regional air cargo carrier based in Texas, is not included in the BTS data.
With respect to the enplaned cargo at P2W airports, Texas clearly dominates (377,719 tonnes, at KAFW alone). This
| Air transport operations at | Enplaned cargo (t)^a,b | Enplaned passengers | All ops flight mov^c |
| --- | --- | --- | --- |
| **Germany** | | | |
| Total main airports^d | 4,919,963 (95.6%)^e | 152,114,000 (91.8%) | 1,374,303 (47.6%) |
| Total P2W airports | 223,220 (4.3%) | 7,471,780 (4.5%) | 124,195 (4.3%) |
| Total P2N airports^f | 1,150 (<0.1%) | 248,579 (0.2%) | 224,208 (7.8%) |
| Total other airports^g | 310 (<0.1%) | 5,939,636 (3.6%) | 1,163,902 (40.3%) |
| **Total combined** | **5,144,633** | **165,773,995** | **2,886,608** |
| **Texas** | | | |
| Total main airports^d | 1,844,497 (78.1%)^e | 174,839,029 (96.2%) | 1,580,645 (91.2%) |
| Total P2W airports | 377,719 (16.0%) | 7,845 (<0.1%) | 22,911 (1.3%) |
| Total P2N airports^f | 134,310 (5.7%) | 6,708,421 (3.7%) | 122,655 (7.0%) |
| Total other airports^g | 4,222 (0.2%) | 283,297 (0.2%) | 8,269 (0.5%) |
| **Total combined** | **2,360,748** | **181,838,592** | **1,732,480** |
| **California** | | | |
| Total main airports^d | 4,862,023 (92.7%)^e | 190,776,761 (95.5%) | 1,673,377 (91.3%) |
| Total P2W airports | 14,438 (0.3%) | 2,155,276 (1.1%) | 25,125 (1.4%) |
| Total P2N airports^f | 338,285 (6.4%) | 6,853,761 (3.4%) | 124,395 (6.8%) |
| Total other airports^g | 30,077 (0.6%) | 70,971 (<0.1%) | 10,731 (0.6%) |
| **Total combined** | **5,244,823** | **199,856,769** | **1,833,628** |

a. Enplaned cargo on board cargo-only, belly freight, or combi freight flights.
b. Cargo consists of both freight and mail.
c. Flight movements refer to the sum of an arrival and a departure for all national and international commercial flights that are both scheduled and non-scheduled.
d. Main airports refer to airports with annual IFR flight movement percentages >2.2% for the given area (country/state).
e. Percentage of total combined airport operations for the given area (country/state).
f. The listing of total P2N airports only includes airports that had >0 tonnes of enplaned cargo in 2022.
g. Other airports include P2N airports that did not have commercial air cargo operations in 2022 and all other airports such as private and military use airports. These data do not exclusively contain commercial flight data from common fixed-wing aircraft but also from aerial vehicles such as helicopters and piloted balloons. This especially affects all-operations flight movements in the "total other airports" category.

TABLE IX: Total commercial air transport operations at (non-)P2 airports in 2022
high number is due to the fact that KAFW is a significant cargo hub for both FedEx and Amazon. Germany's total enplaned cargo at the nine P2W airports (223,220 tonnes) is almost entirely handled at Frankfurt-Hahn (220,127 tonnes), and California's lone P2W airport, KFAT13 (14,438 tonnes), has a much lower volume of cargo.
Footnote 13: Although KFAT hosts the California Air National Guard 144\({}^{\text{th}}\) Fighter Wing, among others, it is not considered as a joint-use airport in the official FAA database and therefore was included in our analysis.
Looking at the P2N airports that handled commercial air cargo, Germany has a comparatively low absolute amount (1,150 tonnes) and share (<0.1%) of enplaned cargo compared to Texas (134,310 tonnes with 5.7%) and California (338,285 tonnes with 6.4%).
#### 5.1.1 Individual high P2W airports
To identify high P2W airports and assess the potential of future UAS operations for air cargo missions, airports are ranked by the enplaned cargo in tonnes handled at these airports. The more enplaned cargo currently handled at an airport, the more likely it can be assumed that the initial introduction of cargo UAS will start at that airport.
Table X lists all P2W airports in Germany, Texas, and California ranked by their enplaned cargo in tonnes. Data are split between operations by aircraft of all sizes (all aircraft) and by aircraft that meet the definition of regional aircraft (<25 tonnes MTOW) [17, 38]. Enplaned cargo can be considered one of the main indicators of whether cargo UAS are eligible candidates for the replacement of current operations. However, if an airport already has a comparatively high amount of all-operations flight movements but a low amount of enplaned cargo, there might be potential for an expansion of air transport operations handled by cargo UAS at that airport in the future (i.e., increased cargo service to that airport).
Airports in Texas and California that are in controlled airspace providing traffic separation service by ATC are marked as towered (twrd) airports, followed by their airspace class in Table X. German airports in controlled airspace are marked as CTR (as introduced in Section 2.1, a CTR is controlled Class D airspace).
All airports listed in Table X are found to be suitable for regional cargo UAS operations in terms of regional aircraft accessibility, as certified landing technologies are already in place, the airports are in controlled airspace, and the airports have an MTOW allowance that exceeds the MTOW of the regional aircraft investigated in this study. Based on current air cargo operations, Table X shows that Frankfurt-Hahn (EDFH) in Germany and Fort Worth Alliance (KAFW) in Texas stand out with the highest amount of annual enplaned cargo among the investigated areas. However, only 0.05% of enplaned cargo is transported by regional aircraft at EDFH and 4.11% at KAFW. These small percentages are partially explained simply by the fact that a large cargo jet (e.g., a Boeing 767) can carry significantly more tonnage than a regional cargo aircraft. Nonetheless, a comparison of the flight movements by all commercial air transport aircraft to those by regional aircraft shows that EDFH has only ~2 flight movements by aircraft <25 t MTOW per day. By comparison, KAFW has ~15 such flights per day.
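These per-day figures are simple annualized rates from the movement counts in Table X; the following one-off Python check (our own illustration) reproduces them:

```python
# Annual all-operations flight movements by aircraft <25 t MTOW (Table X).
annual_movements = {"EDFH": 668, "KAFW": 5477}
for airport, n in annual_movements.items():
    print(f"{airport}: ~{n / 365:.1f} movements per day")
# EDFH: ~1.8 per day (rounded to ~2 in the text); KAFW: ~15.0 per day
```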
The German P2W airport with the second highest amount of enplaned cargo, Karlsruhe/Baden-Baden (EDSB), handles significantly less enplaned cargo than EDFH or KAFW, but over 99% of it is handled by regional aircraft at EDSB that are likely candidates for conversion to cargo UAS. Likewise, EDSB has the second-highest amount of all-operations flight movements (cargo and/or passenger flight movements) after Muenster/Osnabrueck (EDDG), with over half of the flight movements operated by regional aircraft. It can be concluded that, although Germany has a handful of airports that could be used for the introduction of cargo UAS, most of these airports currently receive little to no enplaned cargo. Therefore, cargo handling infrastructure at these airports may need to be
| P2W airport | Enplaned cargo (t), all aircraft | Enplaned cargo (t), aircraft <25 t MTOW | All ops flight mov, all aircraft | All ops flight mov, aircraft <25 t MTOW |
| --- | --- | --- | --- | --- |
| **Germany** | | | | |
| Frankfurt-Hahn (EDFH) (CTR-D) | 220,127 | 114 | 13,264 | 668 |
| Karlsruhe/Baden-Baden (EDSB) (CTR-D) | 1,784 | 1,768 | 21,089 | 12,742 |
| Erfurt-Weimar (EDDE) (CTR-D) | 933 | 12 | 2,873 | 1,664 |
| Bremen (EDDW) (CTR-D) | 290 | 90 | 18,656 | 5,129 |
| Dresden (EDDC) (CTR-D) | 61 | 1 | 11,425 | 2,324 |
| Muenster/Osnabrueck (EDDG) (CTR-D) | 21 | - | 23,072 | 15,320 |
| Kassel-Calden (EDVK) (CTR-D) | 4 | - | 13,723 | - |
| Friedrichshafen (EDNY) (CTR-D) | - | - | 8,407 | 4,996 |
| Niederrhein (EDLV) (CTR-D) | - | - | 11,686 | 5,132 |
| **Total** | **223,220** | **1,985** | **124,195** | **47,995** |
| **Texas** | | | | |
| Fort Worth Alliance (KAFW) (twrd-D) | 377,719 | 15,520 | 22,911 | 5,477 |
| **California** | | | | |
| Fresno Yosemite International (KFAT) (twrd-C) | 14,438 | 16 | 25,125 | 4,556 |

TABLE X: P2W airports ranked by enplaned cargo in Germany, Texas, and California in 2022
installed, though the investigation of specific cargo handling infrastructure and capabilities at specific airports is outside the scope of this work.
In both Texas and California, only a single airport has the needed landing technology to enable cargo UAS operations.
Overall, all P2W airports can be considered relevant for the introduction of initial cargo UAS operations, since the needed landing technologies at these airports are already available. In Germany, EDFH dominates the amount of enplaned cargo of all P2W airports. In the US, the only P2W airports in Texas and California both have a significant amount of enplaned cargo (377,719 tonnes at KAFW and 14,438 tonnes at KFAT), with over 22,000 annual flight movements each and a significant share handled by regional aircraft (23.8% at KAFW and 18.1% at KFAT).
#### 5.1.2 Individual high P2N airports
This section identifies P2N airports that have a high potential to be upgraded with UAS IAP or other needed landing technologies to enable initial cargo UAS operations. Airports in Germany, Texas, and California are ranked by their enplaned cargo in tonnes to identify airports with commercial air cargo operations suitable for a potential cargo UAS replacement or expansion of operations.
Table XI ranks all 17 P2N airports that had commercial air cargo operations in Germany in 2022 [38]. Airports located in uncontrolled Class G airspace that do not receive separation by ATC are marked as non-towered Class G airports (ntwrd-G or RMZ-G). Airports in uncontrolled airspace marked as RMZ-G and airports in controlled airspace marked as CTR-D allow for IFR approaches and can be considered to have a higher potential for the initial introduction of UAS, since fixed-wing UAS are expected to operate under IFR [25]. As introduced in Section 2.1, an RMZ is specially created for IFR approaches at German airports in uncontrolled Class G airspace.
In 2022, 17 P2N airports handled commercial air cargo operations in Germany. Four of these airports are located on islands in the German North Sea, namely Juist (EDWJ), Wangerooge (EDWG), Borkum (EDWR), and Norderney (EDWY). However, these airports on the German islands have an MTOW allowance of just 5.7 tonnes. As indicated in Table V, European regional aircraft with potential for cargo UAS applications (e.g., ATR 42 and 72, CL-600, EMB 120) start at an OEW of 7.07 tonnes with an MTOW of up to 23.00 tonnes (see Table VIII). Accordingly, the four P2N airports located in the German North Sea are not considered to have a high initial potential for early regional cargo UAS use cases, since the dominant regional cargo aircraft types eligible for UAS operations are not able to operate there.
Excluding the airports in the German North Sea due to their MTOW allowance, the remaining 13 P2N airports in Germany (569.1 tonnes of annual enplaned cargo) can be considered high P2N airports. Eleven of these airports can be assigned a higher potential for early cargo UAS operations based on their availability of a CTR or RMZ. Eight of these eleven airports have MTOW allowances of ≤20 tonnes that limit the maximum operating weight of the analysed regional aircraft in Table VIII. However, based on this analysis, 13 airports can be identified as high P2N airports that have a comparatively high potential to be upgraded with UAS IAP or other needed landing technologies.
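The 569.1-tonne figure follows from the per-airport cargo values in Table XI below; the short Python check here (our own illustration, with minor rounding in the published values) reproduces it:

```python
# Enplaned cargo (t) at the 17 German P2N airports, from Table XI.
cargo = {"EDFM": 546.2, "EDWJ": 486.2, "EDWG": 60.1, "EDWR": 27.9, "EDWE": 11.1,
         "EDWY": 7.0, "EDMS": 4.1, "EDAY": 3.1, "EDVE": 1.5, "EDLN": 1.4,
         "EDFE": 0.5, "EDQM": 0.5, "EDGS": 0.3, "EDAB": 0.2, "EDAZ": 0.1,
         "EDGE": 0.1, "EDWI": 0.1}
north_sea = {"EDWJ", "EDWG", "EDWR", "EDWY"}  # island airports, 5.7 t MTOW limit
high_p2n = {k: v for k, v in cargo.items() if k not in north_sea}
print(len(high_p2n), round(sum(high_p2n.values()), 1))  # -> 13 airports, ~569 t
```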
The 22 high P2 airports in Germany are highlighted in Fig. 4, with nine being P2W airports and 13 being P2N airports. Since all P2W airports are towered, the P2N airports are distinguished by towered (twrd, denoted by a triangle) and non-towered (ntwrd, denoted by a circle) operations. Main airports (all towered) include all airports with annual IFR flight movement percentages >2.2% and are therefore considered non-P2 airports (see Section 3.1).
| Commercial air transport operations at airports^a | MTOW allowance (t) | Enplaned cargo (t) | All ops flight mov^b |
| --- | --- | --- | --- |
| Mannheim City (EDFM) (CTR-D) | 10.0 | 546.2 | 11,364 |
| Juist (EDWJ) (ntwrd-G) | 5.7 | 486.2 | 10,106 |
| Wangerooge (EDWG) (ntwrd-G) | 5.7 | 60.1 | 17,035 |
| Borkum (EDWR) (ntwrd-G) | 5.7 | 27.9 | 3,436 |
| Emden (EDWE) (RMZ-G) | 14.0 | 11.1 | 9,542 |
| Norderney (EDWY) (ntwrd-G) | 5.7 | 7.0 | 2,089 |
| Straubing (EDMS) (RMZ-G) | PCN 40^c | 4.1 | 4,477 |
| Strausberg (EDAY) (RMZ-G) | 14.0 | 3.1 | 24,100 |
| Braunschweig-Wolfsburg (EDVE) (CTR-D) | - | 1.5 | 11,805 |
| Moenchengladbach (EDLN) (CTR-D) | PCN 30^c | 1.4 | 35,312 |
| Frankfurt-Egelsbach (EDFE) (ntwrd-G) | 20.0 | 0.5 | 40,459 |
| Hof-Plauen (EDQM) (CTR-D) | 14.0 | 0.5 | 2,456 |
| Siegerland (EDGS) (RMZ-G) | PCN 53^c | 0.3 | 19,836 |
| Bautzen (EDAB) (RMZ-G) | PCN 44^c | 0.2 | 7,514 |
| Schoenhagen (EDAZ) (RMZ-G) | 14.0 | 0.1 | 15,774 |
| Eisenach-Kindel (EDGE) (ntwrd-G) | 20.0 | 0.1 | 1,913 |
| Wilhelmshaven (EDWI) (RMZ-G) | 14.0 | 0.1 | 6,990 |

a. Data for operations by regional aircraft that have an MTOW <25 tonnes are not available.
b. All-operations flight movements do not exclusively contain commercial flight data from fixed-wing aircraft but also from aerial vehicles such as helicopters and piloted balloons.
c. The Pavement Classification Number (PCN) indicates the load-carrying capacity of the runway pavement of an airport.

TABLE XI: P2N airports ranked by enplaned cargo in Germany in 2022
The German P2N airports can be distinguished by towered airports in controlled Class D airspace (i.e., airports having a CTR) and airports in uncontrolled Class G airspace (i.e., airports having an RMZ or being non-towered). The eleven high P2N airports that either have a CTR or an RMZ, and therefore allow for the IFR approaches likely to be required for initial UAS operations, are comparatively evenly distributed across Germany. However, the P2W airports having a CTR are more heavily concentrated in the western part of Germany. In contrast, north-eastern Germany has few P2 airports.
Some of the high P2N airports (e.g., Schoenhagen Airport EDAZ and Strausberg Airport EDAY) are located near relatively busy main airports (e.g., Berlin-Brandenburg Airport EDDB). Air cargo operations potentially performed by UAS could therefore be shifted from the latter to these smaller P2N airports, which would relieve the larger main airports.
Table XII lists the top ten and the remaining twelve other P2N airports that had commercial air cargo operations in Texas in 2022 [17]. Each airport is indicated as towered (twrd) or non-towered (ntwrd), together with the airspace in which it is located.
Texas has 22 P2N airports in operation that had commercial air cargo operations in 2022. In total, these airports in Texas operate significantly higher amounts of enplaned cargo than German P2N airports (134,310 tonnes versus 1,150 tonnes). Nineteen of these airports are located in Class C or D airspace. Due to the MTOW allowances at the airports (>5.66 tonnes) that exceed the MTOW of regional aircraft dominant in the US (e.g., C208/B with a MTOW of 3.63/4.00 tonnes), all P2N airports in Texas can be considered as having a high potential for initial UAS operations.
Figure 5 visualizes the 23 high P2 airports with and without UAS IAP in Texas (22 P2N airports plus one P2W airport). The top ten P2N airports that are towered are almost all located in larger cities that are a several-hour drive from other cities. These airports may be good candidates for the introduction of UAS IAP to enable cargo UAS operations. Many of the other towered P2N airports are located in either the Dallas-Fort Worth metroplex or along major highways between major cities. Another interesting note is that none of Houston (KIAH, KHOU), San Antonio (KSAT), Austin (KAUS), or El Paso (KELP), four major metropolitan areas with main airports, have P2 airports. Some potential routes that could be serviced by cargo UAS are KAFW to the West Texas airports (KLBB, KMAF, KABI, and KSJT) or to airports in South Texas (KLRD, KMFE, KHRL, and KBRO).
California has 40 P2N airports that had commercial air cargo services in 2022. In total, these airports have higher volumes of enplaned cargo (338,285 tonnes) than the corresponding airports in Texas (134,310 tonnes) and Germany (1,150 tonnes). Like Texas, all these airports in California can be considered high P2N airports due to the MTOW allowances at all of the airports (>5.44 tonnes) exceeding the MTOW of the regional aircraft dominant in the US. Eighteen of these airports are located in controlled Class C and D airspace. The 41 high P2 airports with and without UAS IAP (40 P2N airports plus one P2W airport) in California are visualized in Fig. 6.
Like Texas, many of the top ten towered P2N airports in California are in cities hours away by truck from major metropolitan areas. California overall has more P2 airports and, like Texas, few are near the main airports. California is a very mountainous state, and many of the major metropolitan areas along the western coast are hemmed in by mountains, leaving only a few overland routes to the smaller communities away from these areas. As such, route distances that might be driven by truck in a flatter location (e.g., much of Texas) are flown due to the mountainous terrain. This terrain, along with California's large population, has led to a robust regional air cargo network. However, this same terrain may cause difficulties with reliable cargo UAS command and control links, hindering introduction. The most likely initial area for the introduction of cargo UAS could be the Central Valley, a large agricultural region in the center of the state. Possible routes here could be KFAT to surrounding communities.
### Discussion of high P2 airports suitable for initial cargo UAS operations
Germany, Texas, and California each present unique challenges and opportunities for the introduction of regional air cargo UAS. In terms of cargo tonnage at those airports most able to accept UAS (i.e., P2W airports), Texas (377,719 tonnes) has significantly more tonnage than Germany and California combined (223,220 and 14,438 tonnes, respectively). However, Texas currently has only one P2W airport, meaning that at least one additional airport would need to have the appropriate technology for flights between such airports to occur. Similarly, California also has only one P2W airport. Both states do, however, have a healthy demand for cargo across several P2N airports, with Texas' 22 such airports receiving 134,310 tonnes of cargo in 2022 and California's 40 such airports receiving 338,285 tonnes. With the introduction of the needed IAP/landing systems, existing air cargo traffic in these states could be converted to UAS. One can conclude that, in Texas and California, it is the IAP/landing systems that are lacking, whereas the cargo handling infrastructure is likely already in place at many of the airports.
Conversely, Germany has nine P2W airports and 17 P2N airports. Two of the nine German P2W airports can be highlighted for the introduction of cargo UAS based on the total amount of enplaned cargo and the enplaned cargo by regional aircraft. Frankfurt-Hahn (EDFH) handles 98.6% of all enplaned cargo of the nine German P2W airports, and Karlsruhe/Baden-Baden (EDSB) handles 89.1% of all enplaned cargo operated by regional aircraft. None of the other airports received more than 1,000 tonnes of cargo in 2022 (and EDSB only barely passed that threshold). In fact, the 17 P2N airports combined received two orders of magnitude less cargo (1,150 tonnes) than similar airports in
| Commercial air transport operations at airports | MTOW allowance (t) | Enplaned cargo (t) | All ops flight mov |
| --- | --- | --- | --- |
| San Bernardino (KSBD) (twrd-D) | 120 | 212,306 | 8,784 |
| Sacramento (KMHR) (twrd-D) | 127 | 68,156 | 3,064 |
| Stockton (KSCK) (twrd-D) | 68 | 48,175 | 2,823 |
| Santa Barbara (KSBA) (twrd-C) | 73 | 2,039 | 17,436 |
| Visalia (KVIS) (ntwrd-E) | 45 | 1,097 | 1,667 |
| Santa Maria (KSMX) (twrd-D) | 82 | 1,030 | 1,821 |
| Imperial (KIPL) (ntwrd-E) | 36 | 757 | 4,072 |
| Redding (KRDD) (twrd-D) | 58 | 742 | 5,044 |
| Bakersfield (KBFL) (twrd-D) | 70 | 731 | 6,086 |
| San Luis Obispo (KSBP) (twrd-D) | 67 | 685 | 11,103 |
| Other P2N airports combined (30 airports) | >5.44 | 2,567 | 62,495 |

TABLE XIII: P2N airports ranked by enplaned cargo in California in 2022
Figure 6: Visualization of P2 airports with and without UAS IAP (P2W and P2N, respectively), along with main, non-P2 airports in California
Texas. Thus, in Germany, there is much less of an existing regional air cargo route network. The introduction of cargo UAS in Germany is made easier due to the greater number of P2W airports but is hampered by a lack of existing regional air cargo and, possibly, the accompanying cargo infrastructure at airports.
Across all three areas investigated, all P2W airports have a high potential for initial cargo UAS operations because currently certified landing technologies suitable for initial fixed-wing UAS operations are already available. On the one hand, a comparatively high amount of current enplaned cargo at P2W airports, such as at Frankfurt-Hahn (EDFH) in Germany, Fort Worth Alliance (KAFW) in Texas, and Fresno Yosemite International (KFAT) in California, could indicate the potential of these airports for the initial introduction of cargo UAS via a one-to-one replacement of operations. On the other hand, if a P2W airport has a comparatively low amount of enplaned cargo but a high amount of all-operations flight movements, such as at the German airports EDSB, EDDW, and EDDG, it could indicate the relevance of these airports due to their high volume of air transport operations with potential for an expansion of services via cargo UAS.
The 13 high P2N airports in Germany have significantly less enplaned cargo handled (569 tonnes) than the 22 high P2N airports in Texas (134,310 tonnes) and the 40 high P2N airports in California (338,285 tonnes). Thus, the relevance of the 13 German high P2N airports for an upgrade with UAS IAP or other landing technologies appears quite small compared to the amount of enplaned cargo that is operated at the high P2N airports in Texas and California. Nevertheless, 11 of the 13 German high P2N airports are located in controlled airspace or have an RMZ that allows for the IFR approaches likely to be required for UAS operations.
However, highly automated cargo UAS operations, especially for regional use cases with the availability of many under-utilized airports, could become relevant in Germany in the future, as air transportation is used for high-value and short-time-frame deliveries. This makes air transport a critical part of the freight infrastructure, despite its low tonnage percentage [52]. Even though the entire air cargo transport was only 0.1% of total freight tonnage transported in Germany in 2021 [53], Germany is an important country for intra- and extra-European logistics due to its central geographical location in Europe and excellent ground and air infrastructure. In Germany, freight transport is currently dominated by road and rail transport, which accounted for a combined 87.1% of total freight tonnage transported in 2021. Although air cargo traffic is relatively small compared to freight transport and passenger traffic by road and rail, it is important for overall economic performance [52]. Its importance could increase as freight traffic in Germany is expected to grow by 40% by 2030 compared to 2010 [54] and highly automated aircraft operations, such as cargo UAS, might create viable business cases.
## 6 Conclusion and Future Work
Regional aircraft eligible for UAS operations and their accessibility potential at airports were analysed using 2022 data to assess the integration potential of regional fixed-wing cargo UAS into the airspace system. This study builds on previous research that identified Germany, Texas, and California as suitable areas for an initial integration of regional cargo UAS due to their relatively high number of smaller airports and/or current air cargo traffic. This paper investigates operations of regional piston, turboprop, and jet aircraft to identify airports suitable to serve regional aircraft eligible for UAS. All airports in Germany, Texas, and California were analysed according to their current IAP, with those procedures best suited to initial fixed-wing UAS operations (i.e., ILS CAT III or GLS), termed UAS IAP, given special attention. Emphasis was also given to the investigation of less busy airports (i.e., P2 airports), as it is anticipated that cargo UAS will initially start operating from under-utilized airports.
To establish a baseline for the comparative analysis of different areas, airports were defined as P2 airports if they provide public air transport services and have <2.2% of the IFR flight movements of all towered airports in the country/state. Additionally, all non-towered airports were classified as P2 airports. The total number of P2 airports with public air transport services was identified, with 173 in Germany, 376 in Texas, and 231 in California. However, currently, only nine P2 airports in Germany, one in Texas, and one in California provide UAS IAP. In the future, it is likely that P2 airports without UAS IAP will be equipped with GLS rather than ILS CAT III for UAS operations, since only one GLS installation per airport is required, as opposed to one installation per runway end for ILS CAT III. This analysis shows that there is currently a dearth of P2 airports equipped with UAS IAP. Either more UAS IAP will need to be installed, or other landing technologies, such as vision-based technologies, will need to be developed to enable UAS accessibility at many under-utilized airports. Should other landing technologies be developed, however, the results of this study indicate that future fixed-wing UAS could access a high number of P2 airports, regardless of powerplant.
Based on runway MTOW allowances, current air transport operations at airports, and airspace classes, individual high P2 airports were identified in Germany, Texas, and California. Since only eleven airports in the investigated areas provide UAS IAP, individual high P2 airports are distinguished by the availability of UAS IAP. High P2 airports without UAS IAP might be upgraded with UAS IAP or other landing technologies first to enable widespread cargo UAS operations. Among the investigated areas, Germany has 13 high P2 airports without UAS IAP, Texas has 22, and California has 40 that have a comparatively high potential for the retrofitting of ILS CAT III, GLS, or other needed landing technologies for fixed-wing UAS operations. Alternatively, should technologies onboard the aircraft advance such that, for example, an ILS CAT I or area navigation (RNAV) approach with vertical guidance could be used, this work showcases many high P2N airports at which cargo UAS operations could occur.
Although this study focused on UAS accessibility based on the availability of UAS IAP at airports, other challenges also limit UAS operations. Future work will attempt to quantify these limitations, including the availability of reliable command and control (C2) link performance, interactions with other IFR and VFR traffic, availability of contingency airports, and plans to mitigate the loss of the C2 link. The analysis presented in this paper will also provide inputs to fast-time simulation studies, whereby different percentages of current regional air cargo operations may be replaced with UAS operations and extended to additional routes operated by UAS. |
2309.11066 | New Signature of low mass $Z^\prime$ in $J/ψ$ decays | We explore a new approach to search for a low-mass $Z^{\prime}$ particle
through $J/\psi$ decays by identifying its existence through parity-violating
phenomena in the isospin-violating final states of
$\Lambda\overline{\Sigma}^{0}$ and the corresponding charge conjugated states
of $\overline{\Lambda}\Sigma^{0}$. Our investigation centers on a
generation-independent and leptophobic $Z^{\prime}$ with its mass below 10 GeV.
Given the present experimental conditions at the Beijing Spectrometer
III~(BESIII) and the anticipated opportunities at the Super Tau Charm
Factory~(STCF), we conduct Monte-Carlo simulations to predict possible events
at both facilities. Notably, we foresee a substantial enhancement in the
precision of the lower limit estimation of $\alpha_{\text{NP}}$ as well as a
reduction in statistical uncertainty with upcoming STCF experiments.
Furthermore, it is essential to highlight that a null result in the measurement
of $\alpha_{\text{NP}}$ would impose stringent constraints, requiring the
$Z^{\prime}-q-q$ couplings to be on the order of $10^{-2}$. | Chao-Qiang Geng, Chia-Wei Liu, Jiabao Zhang | 2023-09-20T05:07:26Z | http://arxiv.org/abs/2309.11066v3 | # New Signature of low mass \(Z^{\prime}\) in \(J/\psi\) decays
###### Abstract
We explore a new approach to search for a low-mass \(Z^{\prime}\) particle through \(J/\psi\) decays by identifying its existence through parity-violating phenomena in the isospin-violating final states of \(\Lambda\overline{\Sigma}^{0}\) and the corresponding charge conjugated states of \(\overline{\Lambda}\Sigma^{0}\). Our investigation centers on a generation-independent and leptophobic \(Z^{\prime}\) with its mass below 10 GeV. Given the present experimental conditions at the Beijing Spectrometer III (BESIII) and the anticipated opportunities at the Super Tau Charm Factory (STCF), we conduct Monte-Carlo simulations to predict possible events at both facilities. Our simulations indicate that BESIII experiments hold the potential to detect \(Z^{\prime}\) signals in \(J/\psi\to\Lambda\overline{\Sigma}^{0}\) if the polarization asymmetry parameter \(\alpha_{\rm NP}\) attains a minimum threshold of 0.02. Notably, we foresee a substantial enhancement in the precision of the lower limit estimation of \(\alpha_{\rm NP}\) as well as a reduction in statistical uncertainty with upcoming STCF experiments. Furthermore, it is essential to highlight that a null result in the measurement of \(\alpha_{\rm NP}\) would impose stringent constraints, requiring the \(Z^{\prime}\) coupling to be on the order of \(10^{-2}\).
As an extra neutral U(1) gauge boson, \(Z^{\prime}\) manifests itself in many extensions of the standard model (SM), such as the Grand Unified Theories (GUTs) [1, 2, 3, 4, 5, 6], heterotic string theory [7], left-right symmetric models [8, 9, 10, 11], and gauged B-L models [12, 13, 14]. Searching for such a gauge boson helps us to gain more insight into the fundamental theory beyond the SM. Experimentally, direct searches for the \(Z^{\prime}\) boson are conducted at various types of high-energy colliders, including \(e^{+}e^{-}\) colliders like LEP, and hadron colliders such as the Tevatron and the LHC. Various mass ranges of \(Z^{\prime}\) have been scanned, and the couplings of \(Z^{\prime}\) with both leptons and quarks have been constrained. In the case of lepton collider searches, the agreement between LEP-II measurements and the SM predictions regarding the cross-section of \(e^{+}e^{-}\to f\bar{f}\) implies that either \(M_{Z^{\prime}}>209\,\)GeV, or that the \(Z^{\prime}\) couplings with leptons are smaller than \(10^{-2}\) [15, 16, 17]. Similar constraints have also been found through the dilepton mass spectrum searches in the ATLAS experiments [18]. Besides, some indirect searches through neutrino-electron scatterings have also been proposed, and severe constraints are given by various neutrino experiments [19, 20].
These searches have led to the consideration of the leptophobic \(Z^{\prime}\) boson, which interacts exclusively with quarks and is extensively searched for at hadron colliders. Through extensive scanning of the dijet mass spectrum, upper limits on the \(Z^{\prime}\) couplings have been established by the CMS collaboration in the mass range from several TeV down to 10 GeV [21, 22, 23]. For \(Z^{\prime}\) bosons with masses below 10 GeV, comprehensive explorations at hadron colliders are limited due to significant background interferences. While some progress has been made through nonstandard quarkonium decays [24], there remains a pressing need for additional strategies to comprehensively investigate this specific low mass range.
In addressing this critical research gap, we propose to conduct the search for the \(Z^{\prime}\) boson at lepton colliders, such as the Beijing Spectrometer III (BESIII) and the forthcoming Super Tau Charm Factory (STCF), which have a very clean background as well as large data samples. The BESIII collaboration achieved a significant milestone, accumulating a staggering 10 billion \(J/\psi\) events by 2019, with a considerable amount of events producing polarized baryon-antibaryon pairs [28]. Utilizing the
entanglement of final states has enabled the extraction of observables at an unprecedented level of accuracy, offering an excellent platform for probing new physics (NP) phenomena [29, 30]. Moreover, the future STCF anticipates a remarkable dataset of approximately \(3.4\times 10^{12}\)\(J/\psi\) events [31], promising even higher precision in relevant processes. Our specific focus lies in parity violation in \(J/\psi\to\Lambda\overline{\Sigma}^{0}\) and its charge conjugate. Dominated by a single virtual photon exchange, the nonperturbative effects stemming from gluon exchanges in such decays are comparatively suppressed, allowing for a factorizable amplitude at first order in theoretical calculations [32, 33]. Furthermore, the SM prediction for parity violation in \(J/\psi\to\Lambda\overline{\Sigma}^{0}+c.c.\) is vanishingly small, leading to a clean background for the detection of \(Z^{\prime}\). A similar model has also been proposed to relieve the tensions in the \(J/\psi\to\pi^{+}\pi^{-}\) and \(\psi(2S)\to\pi^{+}\pi^{-}\) branching fractions with fitted pion form factors [34].
The parity violating effect is characterized by the polarization asymmetry parameter \(\alpha_{\rm NP}\) for the decay of \(J/\psi\to\Lambda\overline{\Sigma}^{0}\). Experimentally, \(\alpha_{\rm NP}\) is available from the angular distribution as follows:
\[\frac{1}{\Gamma}\frac{\partial\Gamma}{\partial\cos\theta_{p}}=1+\alpha_{\rm NP }\alpha_{\Lambda}\cos\theta_{p}\,, \tag{1}\]
where \(\alpha_{\Lambda}=0.748(7)\) is the asymmetry parameter in \(\Lambda\to p\pi^{-}\) [23], and \(\theta_{p}\) is the angle between \(\vec{p}_{\Lambda}\) and \(\vec{p}_{p}\), defined in the rest frames of \(J/\psi\) and \(\Lambda\), respectively. Theoretically, \(\alpha_{\rm NP}\) is defined as
\[\alpha_{\rm NP}=\frac{|h_{++}|^{2}+|h_{+-}|^{2}-|h_{-+}|^{2}-|h_{--}|^{2}}{|h_ {++}|^{2}+|h_{+-}|^{2}+|h_{-+}|^{2}+|h_{--}|^{2}}\,, \tag{2}\]
where \(h_{\lambda\overline{\lambda}}\) are the helicity amplitudes of \(J/\psi\to\Lambda\overline{\Sigma}^{0}\) with \(\lambda\) and \(\overline{\lambda}\) the helicities of \(\Lambda\) and \(\overline{\Sigma}^{0}\), respectively. When parity symmetry holds, we have \(|h_{\lambda\overline{\lambda}}|^{2}=|h_{-\lambda-\overline{\lambda}}|^{2}\) and consequently \(\alpha_{\rm NP}=0\). Therefore, a nonzero value of \(\alpha_{\rm NP}\) indicates the violation of parity symmetry.
Additionally, the angular distribution for the charge-conjugate process, namely \(J/\psi\to\Sigma^{0}\overline{\Lambda}(\to\overline{p}\pi^{+})\), is given by simply substituting \((\overline{\alpha}_{\rm NP},\overline{\alpha}_{\Lambda},\theta_{\overline{p}})\) for \((\alpha_{\rm NP},\alpha_{\Lambda},\theta_{p})\) in Eq. (1). It is important to note that \(\overline{\alpha}_{\Lambda}\) denotes the asymmetry parameter for \(\overline{\Lambda}\to\overline{p}\pi^{+}\), with a measured value of \(-0.757(4)\) according to the Particle Data Group (PDG) [23]. Under the assumption that CP symmetry is conserved in the decay of \(\Lambda\to p\pi^{-}\),
we construct the CP-even and CP-odd observables as \(\alpha_{\pm}=(\alpha_{\rm NP}\pm\overline{\alpha}_{\rm NP})/2\). It is worth highlighting that within the SM, both \(\alpha_{+}\) and \(\alpha_{-}\) remain below the threshold of \(10^{-3}\). Furthermore, by considering two-fold cascade decays, such as the case depicted in Fig. 1, more observables can be extracted.
We adopt the general effective Lagrangian describing the \(Z^{\prime}\) boson, as prescribed by the PDG [23]. In the context of the \(J/\psi\to\Lambda\overline{\Sigma}^{0}\) decay process, our analysis focuses exclusively on the isovector-axial vector current, denoted as \((\bar{u}\gamma_{\mu}\gamma_{5}u-\bar{d}\gamma_{\mu}\gamma_{5}d)\), and the vector current of \(\bar{c}\gamma_{\mu}c\). Consequently, the effective Lagrangian tailored for our investigation is as follows:
\[Z^{\prime}_{\mu}\left[g_{A}\left(\overline{u}\gamma^{\mu}\gamma_{5}u-\overline {d}\gamma^{\mu}\gamma_{5}d\right)+g_{V}\overline{c}\gamma^{\mu}c\right]+ \mathcal{C}\,. \tag{3}\]
where \(g_{A}=(g_{u}^{R}-g_{d}^{R})/4\) and \(g_{V}=(g_{u}^{R}+g^{L})/2\) represent the pertinent coupling constants, with \(g_{u}^{R}\) and \(g_{d}^{R}\) the right-handed quark couplings and \(g^{L}\) the common left-handed quark coupling, all taken to be generation independent. Other terms in the Lagrangian are collectively designated as \(\mathcal{C}\) and do not affect the detection of \(Z^{\prime}\). In the presence of such a \(Z^{\prime}\) boson, parity violation arises from the interference between the amplitudes associated with \(J/\psi\to Z^{\prime*}/\gamma^{*}\to\Lambda\overline{\Sigma}^{0}\). These amplitudes are labeled as \(\mathcal{A}_{Z^{\prime}/\gamma}\) and, at first order, they are given as:
\[\mathcal{A}_{Z^{\prime}}=2g_{A}g_{V}f_{\psi}M_{\psi}S_{Z^{\prime}}\epsilon_{ \mu}\langle\Lambda\overline{\Sigma}^{0}|\overline{u}\gamma^{\mu}\gamma_{5}u|0 \rangle\,,\,\,\mathcal{A}_{\gamma}=\frac{8}{3}\pi\alpha_{em}\frac{f_{\psi}}{M_ {\psi}}\epsilon_{\mu}\langle\Lambda\overline{\Sigma}^{0}|\overline{u}\gamma^{ \mu}u|0\rangle \tag{4}\]
where \(S_{Z^{\prime}}=(M_{\psi}^{2}-M_{Z^{\prime}}^{2}+i\Gamma_{Z^{\prime}}M_{Z^{ \prime}})^{-1}\) is the propagator of \(Z^{\prime}\), and \(M_{Z^{\prime}}\) (\(\Gamma_{Z^{\prime}}\)) corresponds to its mass (decay width). Here, \(f_{\psi}\) and \(M_{\psi}\) represent the decay constant and mass of \(J/\psi\), respectively, while \(\alpha_{em}\) corresponds to the QED fine structure constant. Incorporating the interference between amplitudes outlined in Eq. (4), we arrive at a
first-order approximation of the polarization asymmetry \(\alpha_{\rm NP}\) as:
\[\alpha_{\rm NP}=\frac{3g_{A}g_{V}}{2\pi\alpha_{em}}\frac{1-r}{(r-1)^{2}+y^{2}}{ \cal F}_{0}\propto\frac{2|A_{Z^{\prime}}|}{|A_{\gamma}|}\,, \tag{5}\]
where the ratios of \(M_{Z^{\prime}}^{2}/M_{\psi}^{2}\) and \(\Gamma_{Z^{\prime}}/M_{Z^{\prime}}\) are written as \(r\) and \(y\), respectively. It is crucial to emphasize that \({\cal F}_{0}\) depends only on the ratios of the timelike baryonic form factors, which reduces certain uncertainties. We adopt \({\cal F}_{0}=0.67\) from the \({}^{3}P_{0}\) model, which aligns well with experimental measurements, as detailed in Ref. [32]. Due to the computation of \(\Gamma_{Z^{\prime}}\) requiring a comprehensive knowledge of the effective Lagrangian, which introduces additional unknown coefficients, and the observation that the dependence of \(\alpha_{\rm NP}\) on \(\Gamma_{Z^{\prime}}\) can be safely neglected under the narrow width assumption, we have opted to set \(y=0.01\) in our subsequent evaluation.
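Equation (5) is simple to evaluate numerically. The Python sketch below is our own illustration (not from the paper); it uses the adopted values \({\cal F}_{0}=0.67\) and \(y=0.01\), while the coupling product \(g_{A}g_{V}\) is an arbitrary example input:

```python
import math

ALPHA_EM = 1.0 / 137.035999  # QED fine-structure constant
M_PSI = 3.0969               # J/psi mass (GeV)
F0 = 0.67                    # form-factor ratio F_0 from the 3P0 model
Y = 0.01                     # Gamma_Z' / M_Z' under the narrow-width assumption

def alpha_np(gA_gV: float, m_zprime: float) -> float:
    """Polarization asymmetry of Eq. (5), first order in the Z'-photon interference."""
    r = (m_zprime / M_PSI) ** 2
    return 3.0 * gA_gV / (2.0 * math.pi * ALPHA_EM) * (1.0 - r) / ((r - 1.0) ** 2 + Y**2) * F0

print(alpha_np(gA_gV=1e-4, m_zprime=1.0))  # example coupling product g_A * g_V
```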
We are now prepared to evaluate the discovery potential of the \(Z^{\prime}\) boson, both within the existing BESIII experiment and in anticipation of future experiments at the STCF. The total number of events is given as \(N_{\rm event}=N_{J/\psi}\times{\cal B}_{\Lambda\overline{\Sigma}^{0}+c.c}\times\epsilon\), where \(N_{J/\psi}\) represents the number of produced \(J/\psi\) particles, and \(\epsilon\) denotes the detector efficiency concerning the considered final states. For the BESIII and STCF experiments, \(N_{J/\psi}\) is estimated to be \(10^{10}\) and \(3.4\times 10^{12}\), respectively [28, 31]. The detector efficiencies at the BESIII with respect to \(\Lambda\overline{\Sigma}^{0}\) and \(\overline{\Lambda}\Sigma^{0}\) are 17.6% and 21.7% [33], respectively. We take \(\epsilon=0.2\) in the following for the sake of simplicity. We have also adopted a theoretical value of \(\alpha_{\rm NP}=0.02\), which is well within the reach of the BESIII experiment and can be easily surpassed by the STCF. Based on the anticipated events \(N_{\rm event}\), along with the specified \(\alpha_{\rm NP}\), we conducted simulations of the angular distribution using the Monte Carlo method. Our findings are illustrated in Fig. 2.
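A minimal version of such a Monte Carlo study samples \(\cos\theta_{p}\) from the distribution of Eq. (1), applies the fiducial cut \(|\cos\theta|<0.7\) used below, and extracts \(\alpha_{\rm NP}\). The NumPy sketch that follows (accept-reject sampling plus a method-of-moments fit) is our own illustration of the procedure, not the collaboration's analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)
ALPHA_LAMBDA = 0.748      # asymmetry parameter of Lambda -> p pi^-
ALPHA_NP_TRUE = 0.02      # assumed new-physics polarization asymmetry
N_EVENTS = 200_000        # stand-in for N_J/psi x branching ratio x efficiency

# Accept-reject sampling of cos(theta_p) from 1 + a*cos(theta_p), a = alpha_NP*alpha_Lambda.
a = ALPHA_NP_TRUE * ALPHA_LAMBDA
cos_t = rng.uniform(-1.0, 1.0, size=4 * N_EVENTS)
keep = rng.uniform(0.0, 1.0 + a, size=cos_t.size) < (1.0 + a * cos_t)
cos_t = cos_t[keep][:N_EVENTS]

# Fiducial cut mimicking the detector's angular blind spot, |cos theta| < 0.7.
c = 0.7
cos_t = cos_t[np.abs(cos_t) < c]

# On [-c, c], the truncated pdf 1 + a*x has mean a*c^2/3, so invert the sample mean.
a_fit = 3.0 * cos_t.mean() / c**2
print(f"fitted alpha_NP = {a_fit / ALPHA_LAMBDA:.4f}")  # near 0.02, up to statistics
```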
Owing to the BESIII detector's angular blind spot, we exclusively utilize the data within the range \(\cos\theta\in[-0.7,0.7]\) to fit the parameter \(\alpha_{\rm NP}\). As depicted in Fig. 2a, it is evident that despite the presence of a relatively broad error margin, signals of the existence of the \(Z^{\prime}\) boson are discernible within the BESIII experiment, consistent with our assumed value for \(\alpha_{\rm NP}\). Furthermore, as we anticipate a substantial enhancement in statistical precision at the forthcoming STCF, the prospects of detecting the \(Z^{\prime}\) boson become considerably more promising, even for smaller values of \(\alpha_{\rm NP}\).
Importantly, it should be recognized that such an exploration bears significance even when no significant signal has been found. In such case, stringent constraints on the gauge coupling of \(Z^{\prime}\) relative to its mass are established, taking into account the promising precision of \(\alpha_{\rm NP}\) measurements at the BESIII and STCF. These constraints are depicted in Fig. 3.
In Fig. 3a, we consider the model-independent scenario, where the exclusion regions are clearly depicted above the solid lines. Our constraints on \(\sqrt{g_{V}g_{A}}\) span the range of \(10^{-2}\sim 10^{-1}\), surpassing the existing bounds established by the CMS experiment [22]. It is worth noting that the mass of the \(Z^{\prime}\) boson exerts only a minimal influence on the exclusion curves, with the exception being the vicinity of the resonance region where \(M_{Z^{\prime}}\approx M_{\psi}\). For
Figure 3: Coupling-Mass curves for (a) the general case and (b) a specific \(Z^{\prime}\) model.
Figure 2: Simulated number of events for (a) BESIII and (b) STCF.
specific models, we can derive constraints on the gauge coupling \(g_{Z^{\prime}}\), provided that we have knowledge of the quantum numbers of the \(U(1)^{\prime}\) gauge group. As an illustration, we consider the \(d-xu\) model, where \(x\) can adopt any rational value, and the corresponding quantum numbers of \((u_{L},d_{L}),u_{R},d_{R}\) are given in Table 87.1 of the PDG [23]. In Fig. 3b, we present the excluded parameter space for various values of \(x\), assuming an upper limit of \(\alpha_{\rm NP}\) at 0.02. Our approach is a valuable complement to other research efforts in the study of \(Z^{\prime}\) bosons with masses below 10 GeV.
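The exclusion curves follow from inverting Eq. (5): an upper limit on \(|\alpha_{\rm NP}|\) translates into an upper bound on \(\sqrt{g_{V}g_{A}}\) at each \(Z^{\prime}\) mass. Reusing the constants from the earlier sketch, an illustrative inversion (again our own code, not the paper's) reads:

```python
def coupling_limit(alpha_np_limit: float, m_zprime: float) -> float:
    """Upper bound on sqrt(g_V * g_A) implied by |alpha_NP| < alpha_np_limit (Eq. (5) inverted)."""
    r = (m_zprime / M_PSI) ** 2
    kinematic = abs(1.0 - r) / ((r - 1.0) ** 2 + Y**2)
    gA_gV_max = alpha_np_limit * 2.0 * math.pi * ALPHA_EM / (3.0 * F0 * kinematic)
    return math.sqrt(gA_gV_max)

for m in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"M_Z' = {m:4.1f} GeV: sqrt(gV*gA) < {coupling_limit(0.02, m):.3f}")
# The bound sits at the 10^-2 level away from M_Z' ~ M_psi, where it degrades.
```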
In conclusion, we have explored the new possibility of discovering the \(Z^{\prime}\) boson with a mass below 10 GeV, a range currently accessible at the BESIII. Our simulations indicate that these signals could be detected at the BESIII at a significance of approximately \(2\,\sigma\). There is also a potential for improved signal detection at the future STCF. If no clear signal emerges, we can still derive useful information by establishing general constraints on the couplings of the \(Z^{\prime}\) boson to quarks, typically falling within the range of \(10^{-2}\) to \(10^{-1}\), regardless of the \(Z^{\prime}\) mass. Our approach offers a competitive and complementary method for hunting down the \(Z^{\prime}\) boson with a mass below 10 GeV. Even in less favorable scenarios, it can still make valuable contributions to the constraints on the couplings of the \(Z^{\prime}\) boson in the low mass range.
|
2309.14678 | Theory of defect-mediated ionic transport in Li, Na and K beta and beta
prime prime aluminas | Alkali metal $\beta$/$\beta^{\prime\prime}$ aluminas are among the fastest
ionic conductors, yet little is understood about the role of defects in the ion
transport mechanism. Here, we use density functional theory (DFT) to
investigate the crystal structures of $\beta$ and $\beta^{\prime\prime}$
phases, and vacancy and interstitial defects in these materials. We find that
charge transport is likely to be dominated by alkali metal interstitials in
$\beta$-aluminas and by vacancies in $\beta^{\prime\prime}$ aluminas. Lower
bounds for the activation energy for diffusion are found by determining the
minimum energy paths for defect migration. The resulting migration barriers are
lower than the experimental activation energies for conduction in Na $\beta$
and $\beta^{\prime\prime}$ aluminas, suggesting a latent potential for
optimization. The lowest activation energy of about 20 meV is predicted for
correlated vacancy migration in K $\beta^{\prime\prime}$ alumina. | Suchit Negi, Alexandra Carvalho, A. H. Castro Neto | 2023-09-26T05:05:57Z | http://arxiv.org/abs/2309.14678v2 | Theory of defect-mediated ionic transport in Li\({}^{+}\), Na\({}^{+}\) and K\({}^{+}\)\(\beta\) and \(\beta^{\prime\prime}\) aluminas
###### Abstract
Alkali metal \(\beta/\beta^{\prime\prime}\) aluminas are among the fastest ionic conductors, yet little is understood about the role of defects in the ion transport mechanism. Here, we use density functional theory (DFT) to investigate the crystal structures of \(\beta\) and \(\beta^{\prime\prime}\) phases, and vacancy and interstitial defects in these materials. We find that charge transport is likely to be dominated by alkali metal interstitials in \(\beta\)-aluminas and by vacancies in \(\beta^{\prime\prime}\) aluminas. Lower bounds for the activation energy for diffusion are found by determining the minimum energy paths for defect migration. The resulting migration barriers are lower than the experimental activation energies for conduction in Na \(\beta\) and \(\beta^{\prime\prime}\) aluminas, suggesting a latent potential for optimization. The lowest activation energy of about 20 meV is predicted for correlated vacancy migration in K \(\beta^{\prime\prime}\) alumina.
## I Introduction
All-solid-state batteries are one of the possible solutions to address the current safety and capacity limitations of conventional batteries[1; 2; 3]. Solid-state systems offer superior chemical and thermal stability compared to Li-ion batteries with traditional liquid electrolytes, which are toxic, flammable and unstable in contact with electrode materials[4]. In contrast, some solid-state electrolytes are stable in contact with metallic anodes, allowing for higher energy density storage[5]. Additionally, they can serve concurrently as ion conduction layers, electronic insulators, and as mechanical separators between the anode and the cathode, allowing for easier design and assembly[6].
The Na aluminates \(\beta\)-alumina and \(\beta^{\prime\prime}\)-alumina combine high conductivity and a wide electrochemical stability window with air processability, making them exceptional materials for Na all-solid-state batteries[7]. \(\beta\)-alumina has an ionic conductivity of 0.01-0.03 Scm\({}^{-1}\) in single crystals at room temperature[8], of the same order of magnitude as conventional organic liquid electrolytes[9]; \(\beta^{\prime\prime}\)-alumina solid electrolytes, which are more difficult to grow, can have even higher ionic conductivity despite the lower crystallinity[7; 8; 10]. \(\beta^{\prime\prime}\)-alumina solid electrolytes have been employed in molten-sodium batteries as well as planar-type Na-MH batteries[11].
The \(\beta/\beta^{\prime\prime}\) alumina solid electrolytes can also conduct Li, K, Ag and other ions[12; 10]. The measured room-temperature conductivity for Li is about one order of magnitude lower than for Na[13]. This is generally attributed to the fact that while Na\({}^{+}\) ions form a plane between the spinel blocks, Li\({}^{+}\) ions hop between positions above and below the conduction plane, leading to slower conduction in the plane.
In contrast, the single-crystal ionic conductivity of K \(\beta^{\prime\prime}\)-alumina has been found to be even higher than that of the Na \(\beta^{\prime\prime}\)-alumina at room temperature by several groups[14; 12]. Others, however, found poorer conductivity in ceramic samples, possibly due to the presence of grain boundaries and phase transformations[15; 16]. The increase of the bulk contribution to the impedance was also found to be inconsistent with previous studies[16]. Clarifying whether the ionic conductivity of K \(\beta^{\prime\prime}\)-alumina can achieve such high values as claimed by the earlier studies, and in what conditions, is highly desirable due to possible application of this electrolyte in K-S batteries[17], liquid metal flow batteries[18] and other devices[19; 20].
In this study, we use density functional theory calculations to investigate the basic ion-transport mechanisms in Li, Na, and K \(\beta\)-aluminas. We will demonstrate that in \(\beta\)-alumina interstitial mechanisms dominate, whereas in \(\beta^{\prime\prime}\)-alumina vacancy mechanisms dominate, indicating that control of the occupation of the conduction plane sites is of paramount importance to increase conductivity. We suggest lower bounds for the activation energies for diffusion, indicating that both Na and K \(\beta^{\prime\prime}\)-aluminas have the potential to offer nearly ideal ion conduction.
## II Methods
First-principles DFT calculations were carried out using the Quantum ESPRESSO package[21]. The exchange-correlation functional of Perdew, Burke and Ernzerhof (PBE)[22] was used together with ultrasoft pseudopotentials to account for the core electrons[23]. We employed a plane-wave basis set with kinetic energy cutoffs of 66 Ry to expand the electronic wave functions. The Brillouin zone was sampled using a \(\Gamma\)-centered 1\(\times\)1\(\times\)1 Monkhorst-Pack (MP) grid[24] for all supercell calculations.
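As a concrete illustration of this setup (a sketch rather than the actual production inputs), the corresponding calculator can be assembled through ASE's Quantum ESPRESSO interface. The pseudopotential filenames and the SCF convergence threshold below are placeholders, and recent ASE releases additionally require an `EspressoProfile` pointing at the `pw.x` executable.

```python
from ase.calculators.espresso import Espresso

# Hypothetical ultrasoft PBE pseudopotential files; the actual set is not named here.
pseudopotentials = {"Na": "Na.pbe-spn-rrkjus_psl.UPF",
                    "Al": "Al.pbe-n-rrkjus_psl.UPF",
                    "O":  "O.pbe-n-rrkjus_psl.UPF"}

calc = Espresso(
    pseudopotentials=pseudopotentials,
    input_data={
        "control": {"calculation": "scf"},
        "system": {"ecutwfc": 66},        # plane-wave cutoff in Ry, as quoted above
        "electrons": {"conv_thr": 1e-8},  # assumed SCF threshold (not quoted)
    },
    kpts=(1, 1, 1),  # Gamma-centred 1x1x1 Monkhorst-Pack grid for the supercells
)
```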
We used the nudged elastic band (NEB)[25; 26] method
to find the minimum energy path (MEP) on the potential energy surface (PES). The activation energy for migration was obtained from the difference between the MEP highest saddle point energy and the absolute energy minimum. NEB calculations were performed between energy minima or between an energy minima and a saddle point derived using symmetry considerations. A total of 9 images were used to construct the MEPs.
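A minimal sketch of this workflow with ASE's NEB implementation is given below; the climbing-image option and the force-convergence threshold are our assumptions rather than settings quoted in the paper.

```python
import numpy as np
from ase.neb import NEB        # ase.mep.NEB in recent ASE releases
from ase.optimize import BFGS

def neb_barrier(initial, final, calc_factory, n_images=9):
    """Relax a 9-image band between two relaxed endpoint structures and return
    the migration barrier, i.e. (highest saddle-point energy) - (lowest energy)."""
    images = [initial.copy() for _ in range(n_images - 1)] + [final.copy()]
    for image in images:
        image.calc = calc_factory()   # one calculator instance per image
    neb = NEB(images, climb=True)     # climbing image refines the saddle point
    neb.interpolate()                 # linear initial guess for the MEP
    BFGS(neb).run(fmax=0.05)          # assumed force threshold in eV/A
    energies = np.array([img.get_potential_energy() for img in images])
    return energies.max() - energies.min(), energies
```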
## III Results and discussion
### Crystal Structure
\(\beta\)-aluminas present a variety of stoichiometries and are often a mixture of the \(\beta\) and \(\beta^{\prime\prime}\) alumina phases[27], with compositions in the ranges Na\({}_{2}\)O\(\cdot n\)Al\({}_{2}\)O\({}_{3}\) (8\(<\)\(n\)\(<\)11 for Na \(\beta\)-alumina) and Na\({}_{2}\)O\(\cdot m\)Al\({}_{2}\)O\({}_{3}\) (5\(<\)\(m\)\(<\)7 for Na \(\beta^{\prime\prime}\)-alumina)[7]. In this section, we describe the structures and compositions used in our models, which typify both \(\beta\) and \(\beta^{\prime\prime}\) aluminas.
#### iii.1.1 Na and K \(\beta\) - aluminas
The nominal phase formula of stoichiometric \(X\beta\)-alumina, where \(X\) = {Li, Na, K}, is \(X\)Al\({}_{11}\)O\({}_{17}\), as determined in the seminal works of Beevers _et al._[10; 28; 29; 30]. The Na and K \(\beta\)-aluminas were found to belong to space group 194 (\(D_{\text{\tiny{6h}}}^{\text{\tiny{4}}}\))[29]. Figure 1-a) shows the Na \(\beta\)-alumina unit cell containing two formula units (f.u.). The calculated lattice parameters can be found in Appendix A Tables 2 and 3, and are within 0.2 A of the experimental values.
The key features of the structure are the Na planes, also referred to as the 'conduction region', which alternate with Al-O blocks, also referred to as 'spinel blocks'[7; 30; 31], by analogy with the MgAl\({}_{2}\)O\({}_{4}\) spinel structure[32]. In the spinel block, Al atoms are surrounded by oxygen octahedra or tetrahedra. This block is non-ion-conducting and remains nearly undisturbed when the alkali metal ions move. Figure 1-b) shows the Na or K sublattices at the conduction plane. The sites occupied by Na ions are named 'Beevers-Ross' (BR) sites[31]. The unoccupied but crystallographically equivalent site is named the 'anti Beevers-Ross' (aBR) site[31; 14]. All the processes of interest to ion conduction happen in this conduction region.
#### iii.1.2 Li \(\beta\) - alumina
The Li \(\beta\)-alumina structure is similar to the Na and K \(\beta\)-alumina structures [Fig. 1-c)]. The main difference is that the Li atoms are displaced 0.56 A above or below the BR sites, bonding to the neighboring oxygen atoms of the spinel layers immediately above or below, with a consequent doubling of the primitive unit cell along the \(\hat{x}\) direction. The corresponding distortion energy is 0.34 eV per unit cell. The resulting space group is 18 (\(D_{2}^{3}\)), as determined with tolerances of 0.1 A and 0.5\({}^{\circ}\) for distances and angles, respectively.
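The text does not name the symmetry-detection tool; as a sketch, the same assignment can be reproduced with spglib, which accepts precisely these distance and angle tolerances.

```python
import numpy as np
import spglib

def space_group(lattice, frac_positions, atomic_numbers,
                symprec=0.1, angle_tolerance=0.5):
    """Space-group symbol of a cell at the stated tolerances (0.1 A, 0.5 deg)."""
    cell = (np.asarray(lattice), np.asarray(frac_positions), list(atomic_numbers))
    return spglib.get_spacegroup(cell, symprec=symprec,
                                 angle_tolerance=angle_tolerance)
```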
Experimental evidence of the displacement of Li atoms from the BR sites can be found in the frequency of the Raman bands of the Li translational modes, found at 340--410 cm\({}^{-1}\), higher than expected from the corresponding values reported for other alkali-metal ions[33]. Besides, the probability of occupation of the out-of-plane position by Li can also be derived from an analysis of NMR \({}^{7}\)Li quadrupole interactions below 100 K[34; 35]. An activation energy for out-of-plane motion of 29 meV has been determined in 93%Li/7%Na \(\beta\)-alumina, consistent with the value of 42 meV derived from our calculations[35]. Another study using neutron and X-ray diffraction in single crystals with approximately 61%Li/39%Na suggested that the Li atoms are placed 1 A above the BR site at low temperature[36]. The displacement is larger than predicted by our calculations and could possibly be due to the presence of the larger Na ions in the experiment. In contrast, a neutron diffraction study in 50%Li/50% sodium \(\beta\) alumina[37] determined
Figure 1: Structure of \(\beta\)- and \(\beta^{\prime\prime}\)-alumina crystals: (a) unit cell of Na or K \(\beta\)-aluminas (NaAl\({}_{11}\)O\({}_{17}\)), and (b) detail of the sites in the conduction region; unit cell of (c) Li \(\beta\)-alumina (LiAl\({}_{11}\)O\({}_{17}\));(d) idealized \(\beta^{\prime\prime}\)-alumina (NaAl\({}_{5}\)O\({}_{8}\)); (e) Mg-stabilized \(\beta^{\prime\prime}\)-alumina (Na\({}_{2}\)MgAl\({}_{10}\)O\({}_{17}\)) and (f) detail of the conduction plane of \(\beta^{\prime\prime}\)-alumina. The structures of the \(\beta^{\prime\prime}\)-aluminas are similar for all the three alkali metals.
that the Li atoms sit 1 A above the mO site at 4.2 K, which is in conflict with our calculations, where such a position is found to be a saddle point, as will be discussed.
#### ii.1.3 Idealized \(\beta^{\prime\prime}\) aluminas
The key difference between \(\beta\) and \(\beta^{\prime\prime}\) aluminas is the stacking sequence of the combined spinel and conduction double layers: in the \(\beta\) structure, they have AB stacking, while in the \(\beta^{\prime\prime}\) structure they have ABC stacking; the experimental structure belongs to space group 166 (\(D_{3d}^{5}\))[38]. The idealized phase formula of Na \(\beta^{\prime\prime}\)-alumina is NaAl\({}_{5}\)O\({}_{8}\), of which three f.u. make up a primitive unit cell. We have not found experimental or theoretical reports of the structure of the ideal stoichiometric \(\beta^{\prime\prime}\) phase. Rather, \(\beta^{\prime\prime}\) alumina is often stabilized by extrinsic divalent cations such as Mg\({}^{2+}\)[7].
Our idealized structure [Fig. 1-d)], given as Supplementary Material, is based on the structure of potassium \(\beta^{\prime\prime}\)-aluminogallate[39], where 30 Al atoms are distributed over 36 Al sites (octahedral and tetrahedral) and 48 O atoms distributed over 51 O sites. For the purpose of the DFT calculations, we have randomly chosen the positions of the Al and O atoms in the spinel block as they have little influence on ion conduction. The unit cell has twice the number of \(X\) ions per conduction plane compared to \(\beta\) alumina, and instead of forming a planar lattice, they are staggered 0.22 A or 0.14 A above or below the BR/aBR sites for \(X=\)Li, Na and K, respectively (Fig. 1-d).
The Na \(\beta^{\prime\prime}\) in-plane lattice parameter is 1.6% larger than that of Na \(\beta\)-alumina, and 1.5% larger than the experimental value (see Appendix A). The \(c\) parameter is 3.2% smaller than the experimental value[30].
Inspired by the phase diagram of the Na\({}_{2}\)O/Al\({}_{2}\)O\({}_{3}\) system[40], we compare the formation energy of the idealized Na\(\beta^{\prime\prime}\) phase with respect to the Na\(\beta\) and \(\alpha\) alumina phases, obtaining
\[\mathrm{NaAl_{5}O_{8}(\beta^{\prime\prime})+3Al_{2}O_{3}(\alpha)\to NaAl_{11}O _{17}(\beta)+1.9\,eV.} \tag{1}\]
Thus the undoped Na\(\beta^{\prime\prime}\) phase is unstable (Fig. 2). Moreover, the calculated DFT bandstructure of idealized Na\(\beta^{\prime\prime}\) is \(p\)-type doped, with the Fermi level at the valence band top, possibly due to the Al vacancies (see Appendix A). This justifies theoretically the need to dope the material with stabilizing species. These have to be taken into account in our model to reproduce the electron-insulating behavior and the right defect charge states, and will be considered in the next sub-section.
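The bookkeeping behind Eq. (1) is simply a difference of DFT total energies per formula unit. The snippet below illustrates it with placeholder energies chosen only so that the quoted 1.9 eV release comes out; the actual totals are not listed in the text.

```python
# Placeholder total energies per formula unit (eV); only the 1.9 eV release of
# Eq. (1) is taken from the text.
E = {"NaAl5O8_bpp": -310.0, "Al2O3_alpha": -250.0, "NaAl11O17_beta": -1061.9}

# NaAl5O8(b'') + 3 Al2O3(alpha) -> NaAl11O17(beta); a positive release means the
# idealized beta'' phase is unstable against decomposition into beta + alpha.
released = E["NaAl5O8_bpp"] + 3 * E["Al2O3_alpha"] - E["NaAl11O17_beta"]
print(f"Energy released: {released:.1f} eV")   # -> 1.9 eV with these placeholders
```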
#### ii.1.4 Mg-stabilized \(\beta^{\prime\prime}\) aluminas
Divalent cations such as Mg\({}^{2+}\) act as stabilizers of the \(\beta^{\prime\prime}\) phase by compensating the charge of the additional \(X\) ions without the necessity to change the spinel layer structure[7; 41; 42]. The ideal formula for Mg-stabilized Na \(\beta^{\prime\prime}\)-alumina is Na\({}_{1+x}\)Mg\({}_{x}\)Al\({}_{11-x}\)O\({}_{17}\) [Fig. 1e-f)], where the Mg content \(x\) can be varied while keeping the number of additional Na atoms equal to the number of substitutional Mg at Al sites (Mg\({}_{\mathrm{Al}}\)) so as to maintain charge neutrality. We estimate \(x=2\), with three Na atoms per unit cell and per plane, to be the highest possible Na packing, with a minimum Na-Na distance of 2.9 A, comparable to twice the ionic radius of Na\({}^{+}\) (1.02 A). Experimentally, \(x\sim 0.66\) has been reported[43; 31], along with a possible imbalance of Na relative to Mg due to sodium evaporation above 1600\({}^{\circ}\)C[8].
Adding a small fraction of Mg lowers the formation energy relative to the Na\(\beta\) phase (Fig. 2), thus making the Mg-stabilized Na\(\beta^{\prime\prime}\) phase more favorable than the Na\(\beta\) phase. While the formation energy of idealized Na\(\beta^{\prime\prime}\) is positive, that of the Mg-stabilized Na\(\beta^{\prime\prime}\) phases is negative. Mg-stabilized Na \(\beta^{\prime\prime}\)-aluminas with \(x\)=1 and \(x\)=2 in the formula Na\({}_{1+x}\)Mg\({}_{x}\)Al\({}_{11-x}\)O\({}_{17}\) are insulators with a clean bandgap (see Appendix A Fig. 9).
Figure 1-e) shows the structure for \(x=1\), with composition Na\({}_{2}\)O-MgO-5Al\({}_{2}\)O\({}_{3}\), of which three f.u. make up a primitive unit cell. The three Mg\({}_{\mathrm{Al}}\) are placed one in each spinel block and as far as possible from the conduction planes. The resulting structure resembles that of undoped Na \(\beta^{\prime\prime}\)-alumina (Fig. 1-c,d), with the same stacking sequence, and similar up and down staggering of the Na ions in the conduction region, which occupy all the equivalent BR/aBR sites. Its optimized lattice parameters are given in Appendix A Table 2.
Figure 2: Formation energies of idealized Na\(\beta^{\prime\prime}\) and Mg-stabilized Na\(\beta^{\prime\prime}\) phases relative to Na \(\beta\)-alumina, in MgO and Na\({}_{2}\)O conditions. The blue curve corresponds to the Mg-stabilized Na\(\beta^{\prime\prime}\) phase with phase formula Na\({}_{1+x}\)Mg\({}_{x}\)Al\({}_{11-x}\)O\({}_{17}\). The special case for \(x=0\) yields \(\beta\) alumina. The red dot corresponds to the idealized Na\(\beta^{\prime\prime}\) alumina phase (NaAl\({}_{5}\)O\({}_{8}\)).
The structures of Mg-doped Li and K \(\beta^{\prime\prime}\)-aluminas are very similar to that of Na \(\beta^{\prime\prime}\)-alumina. The structure with \(x=1\) has the same number of \(X\) ions in every conduction plane and therefore we will use it as a model in subsequent calculations for all \(\beta^{\prime\prime}\)-aluminas, unless otherwise stated.
### Defect Structures
#### iii.2.1 Vacancy
We created a single vacancy in one of the conduction planes for each of the materials studied.
In the Li \(\beta\)-alumina, the removal of the Li\({}^{+}\) ion leads to an expansion of the distance between its two nearest bridge oxygen neighbors in the conduction region by 18%, whereas the neighboring Li remains bonded to the respective oxygen neighbors. In the Na and K \(\beta\) aluminas, the vacancies retain the trigonal symmetry of the original sites, with the triangle of bridge oxygen nearest neighbors, in the same (0001) plane, contracting by 18% and 8%, respectively.
In the \(\beta^{\prime\prime}\) aluminas, the distance between the V\({}^{\prime}_{X}\) nearest \(X\) neighbors (\(d\)) contracts by 2%, 54% and 30% for \(X\)=Li, Na and K, respectively. The structure of the reconstructed vacancy is shown in Fig. 3. In the resulting structure, rings of three \(X\) atoms form around the vacant site, and the adjacent rings of \(X\) atoms around the oxygen sites become five-atom rings instead of six-atom rings. This planar arrangement is most prominent for K, with an effective bond length \(d\sim 4\) A compared to the K\({}^{+}\) ionic radius of \(\sim\) 1.38 A.
In the case of the \(\beta\) phases, the presence of the vacancy disturbs the positions of the Na atoms in the other conduction plane as well. In the case of the \(\beta^{\prime\prime}\) phases however, the relaxation in the other conduction planes was negligible. In all cases changes to the spinel structures are negligible.
#### iii.2.2 Interstitial
As considered in sec. III.1, the conduction planes of both \(\beta\) and \(\beta^{\prime\prime}\) aluminas have four sites equidistant to the bridging oxygens. In \(\beta\) alumina, two of these are occupied by \(X\) (the BR sites), and two are unoccupied (the aBR sites) - see Fig. 4. However, in \(\beta^{\prime\prime}\) alumina all four sites are occupied by \(X\) and are crystallographically equivalent. Additionally, there is another high-symmetry interstitial site named the mid-oxygen (mO) site, equidistant from an aBR site and a BR site. Lastly, we have considered split-interstitials consisting of two \(X\) atoms at adjacent mO sites, replacing the original atom at the BR site.
In Li \(\beta\)-alumina, the lowest energy configuration is a distorted \([1\bar{1}0]\) split-interstitial, with two Li atoms approximately situated between the original Li site and the aBR site. In Na \(\beta\)-alumina, a \(\langle 10\bar{1}0\rangle\) split-interstitial is the only stable configuration. Both the aBR interstitial and the mO interstitial relax to the split-interstitial configuration. In K \(\beta\)-alumina, the \(\langle 10\bar{1}0\rangle\) split-interstitial and the aBR interstitial are distinct but degenerate in energy.
Experimental studies on Na \(\beta\) alumina suggest Na ions start occupying the mO site in Na-rich \(\beta\)-alumina[43; 44; 45]. This is consistent with the results of our calculations, since in the split-interstitial configuration, both Na atoms are only 0.4 A away from the respective nearest mO sites.
As for the Mg-stabilized Na \(\beta^{\prime\prime}\)-alumina, Na ions occupy all the available BR/aBR sites in the conduction region. Thus only the vicinity of mO interstitial sites is available for interstitial Na ions, as shown in Fig. 4. The split-interstitial, with two atoms occupying adjacent mO sites instead of the original BR/aBR site, was found to be the only stable structure. The presence of the Na interstitials has little influence on the atoms outside the
Figure 3: Schematic representation of a vacancy in Mg-stabilized Na or K \(\beta^{\prime\prime}\) alumina before relaxation (left) and after relaxation (right). ‘V’ indicates a vacant site. Only atoms at the conduction plane are shown for clarity.
Figure 4: Schematic representation of the interstitial sites at the conduction planes of \(\beta\) alumina (top) or \(\beta^{\prime\prime}\) alumina (bottom). The mO and aBR sites are unstable with respect to relaxation to the split-interstitial configuration. The split-interstitial configurations are shown at the relaxed geometries obtained for Na \(\beta/\beta^{\prime\prime}\) aluminas. Only atoms at the conduction plane are shown.
conduction region.
### Defect Formation Energies
We now investigate the role of vacancy and interstitial defects in ionic diffusion and charge conduction. If the defects are thermally generated, in equilibrium conditions, the activation energy for conduction is the sum of two terms: the energy for defect formation plus the energy for defect migration.
However, defects may be present due to non-adiabatic processes during growth. For example, the \(\beta^{\prime\prime}\)-aluminas are usually Na-deficient, with the presence of Na vacancies[46], and this is believed to result from Na evaporation above 1600\({}^{\circ}\)C[8]. In such a material, the activation energy for conduction measured in a closed system is the migration energy only. Thus, migration energies set a lower bound for the activation energy.
In an alkali metal battery context, the alkali metal chemical potential can vary across the electrolyte due to the proximity to the cathode or anode. Here, we calculate the formation energy of alkali metal vacancies (V\({}_{X}\)) and interstitials (\(X_{\rm i}\)) in \(X\)-rich conditions, where \(X\) = {Li, Na, K }.
Even though \(\beta\)-aluminas are wide-gap insulators, we assume that the Fermi level (\(E_{F}\)) is well defined and take it to be the chemical potential for electrons. The defect formation energy in relative charge state \(q\) is then given by
\[E_{\rm f}(D^{q})=E_{\rm t}(X\beta:D^{q})-E_{\rm t}(X\beta)\pm E_{\rm t}({\rm bcc \text{-}}X)+qE_{F}, \tag{2}\]
where \(E_{\rm t}(X\beta:D^{q})\), \(E_{\rm t}(X\beta)\) and \(E_{\rm t}({\rm bcc\text{-}}X)\) are the total energies of the supercell with the defect, the pristine supercell and the metallic reservoir of element \(X\), respectively, and the \(-/+\) signs are for interstitial/vacancy defects. The results for vacancies and interstitials are shown in Fig. 5.
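Eq. (2) translates directly into a small routine. In the usage example the supercell energies are hypothetical; only the sign convention of Eq. (2) and the 4.61 eV Na \(\beta\) bandgap of Table 2 are taken from the text.

```python
import numpy as np

def formation_energy(E_defect, E_pristine, E_reservoir_per_atom, q, E_fermi,
                     vacancy=True):
    """Defect formation energy of Eq. (2): the alkali atom is exchanged with a
    bcc-metal reservoir (+ for a vacancy, - for an interstitial) and q electrons
    with a reservoir at E_fermi, measured from the valence-band top."""
    sign = 1.0 if vacancy else -1.0
    return E_defect - E_pristine + sign * E_reservoir_per_atom + q * E_fermi

# Hypothetical supercell energies (eV); sweep E_F across the 4.61 eV Na-beta gap.
Ef = [formation_energy(-5230.0, -5234.5, -1.3, q=-1, E_fermi=mu)
      for mu in np.linspace(0.0, 4.61, 5)]
print(np.round(Ef, 2))   # the charged vacancy gets cheaper as E_F approaches the CBM
```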
The vacancy (\(-/0\)) transition level is close to the top of the valence band (\(E_{v}\)), indicating that vacancies are always negatively charged (V\({}_{X}^{\prime}\)). Similarly the (\(0/+\)) transition levels of interstitial defects are close to the conduction band (\(E_{c}\)) indicating that interstitials are always positively charged (\(X_{\rm t}^{\prime}\)). Thus both \(X\) vacancies and interstitials can in principle be responsible for ionic charge conduction.
In the case of the \(\beta\)-aluminas, if the Fermi level is close to the conduction band, both vacancies and interstitials can be created with nearly vanishing or negative formation energy, but interstitials are more favorable over a wide range of Fermi level energies.
In the case of the \(\beta^{\prime\prime}\) aluminas, the formation energies of interstitial defects are higher than in the corresponding \(\beta\)-aluminas, because the conduction plane is more densely packed. However, neutral vacancy formation energies are still high, around 4 eV in \(X\)-rich conditions.
The definition of the chemical potentials in \(X\)-poor conditions in battery systems depends on the electrodes used. In NaS batteries, Na-poor conditions can be defined by assuming equilibrium with a reservoir of the sodium polysulfide Na\({}_{2}\)S\({}_{4}\) and sulfur, which results in vacancy formation energies that are about 2 eV lower than in Na-rich conditions. In general, battery voltages are of the order of \(\sim\) eV, corresponding to the difference in the \(X\) chemical potential at the \(X\)-poor side and at the \(X\)-rich side of the battery.
Finally, we note that the calculated bandgap is underestimated as expected using the DFT-PBE functional. Assuming that the defect levels are pinned to the band edges, correcting the bandgap could lead to vanishing defect formation energies near mid-gap.
Figure 5: Vacancy and interstitial formation energies in \(X\)-rich conditions (\(X\) = {Li, Na, K }). The formation energies for defects in the \(\beta\)-alumina phases are shown on the left, and those in the Mg-stabilized \(\beta^{\prime\prime}\) alumina phases are shown on the right. The bandgap values were calculated and can be found in Appendix (Tables 2,3).
### Migration Energies
The activation energies for migration of \(X\) vacancies and interstitials have been calculated assuming that they are always in their respective charged states (V\({}^{\prime}_{X}\) and \(X^{\bullet}_{i}\)). The activation energies are found by searching the minimum energy path between two equivalent positions separated by a lattice vector, using the NEB method (see Methods section), and taking the difference between the energies at the saddle point and at the minimum energy point.
#### iii.4.1 Vacancy Migration
_\(\beta\)-alumina._ Figure 7 shows a hop of the Na vacancy in Na \(\beta\)-alumina from one BR site to another, and the corresponding energy profile. The saddle point is at the middle plane, equidistant from the initial and final configurations. The migration path is similar for K \(\beta\)-alumina. In Li \(\beta\)-alumina, the Li vacancy is not trigonal, due to the original position of the Li atoms closer to the bridge oxygen atoms. The Li vacancy migration involves a jump of one of its Li neighbors to the bridge oxygen atom near the vacancy, breaking two Li-O bonds, but it still requires a lower activation energy than in the cases of Na or K. The respective activation energies can be found in Table 1.
_Mg-stabilized \(\beta^{\prime\prime}\) aluminas._ In the \(\beta^{\prime\prime}\) conduction region, the \(X\) atoms form a honeycomb lattice and are \(1/\sqrt{3}a_{0}\) apart, closer than in \(\beta\)-alumina. The migration energy for the vacancy hop is 0.33, 0.03 and 0.03 eV for Li, Na and K, respectively (Table 1). The small migration energies obtained for Na and K are already close to the uncertainty of our calculations.
We have also investigated the two-atom correlated process, in which one atom jumps into the neighboring vacancy and its neighbor jumps into the site just vacated (Fig. 7); it has an activation energy of just 0.32, 0.08 and 0.02 eV for Li, Na and K, respectively. The concerted movement allows two atoms to move with approximately the same energy cost as one atom for Li and K. It is possible that concerted migration involving more atoms is also energetically favored, but we could not investigate this due to the size limit imposed by the supercell dimensions.
Since the \(\beta^{\prime\prime}\) aluminas are often alkali metal deficient, these migration energies can be considered lower bounds for the activation energy for the conductivity, which have been experimentally determined to be 0.30, 0.03 (at high temperature) and 0.15 eV in Li, Na and K \(\beta^{\prime\prime}\) aluminas, respectively[10].
The exceptionally low activation energy found for V\({}^{\prime}_{\text{K}}\) in K \(\beta^{\prime\prime}\): Mg corroborates the experimental observation that the room-temperature ionic conductivity of the K \(\beta^{\prime\prime}\) phase[12] is 2000 times higher than that of K \(\beta\) and 10 times higher than that of Na \(\beta^{\prime\prime}\)-alumina.
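To convey the scale of these differences, the barriers can be converted into transition-state hop rates at room temperature; the attempt frequency of \(10^{13}\) Hz below is a typical phonon-scale assumption, not a computed quantity.

```python
import numpy as np

kB = 8.617e-5   # Boltzmann constant in eV/K
nu0 = 1e13      # assumed attempt frequency (Hz)

# Calculated single-vacancy migration barriers in the Mg-stabilized b'' phases (eV)
barriers = {"Li": 0.33, "Na": 0.03, "K": 0.03}

for ion, Ea in barriers.items():
    rate = nu0 * np.exp(-Ea / (kB * 300.0))
    print(f"{ion} b'':Mg  Ea = {Ea:.2f} eV  ->  hop rate ~ {rate:.1e} Hz at 300 K")
```

With these numbers, the Na and K vacancies hop roughly five orders of magnitude more frequently than the Li vacancy at room temperature.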
#### iii.4.2 Interstitial Migration
Since the \(X\) interstitials in all \(\beta\) and \(\beta^{\prime\prime}\) alumina materials considered are split-interstitials, their migration necessarily involves at least two atom jumps.
_\(\beta\)-aluminas._ Figure 8 shows schematically the migration of a split-interstitial in \(\beta\)-alumina along one of the principal in-plane crystal directions. The saddle point is an interstitial at the aBR site, a configuration at the mirror plane equidistant from the initial and final configurations. While one of the split-interstitial atoms hops to the vacant site, the other one hops through the aBR site, knocking on the next \(X\) atom to create another split-interstitial one lattice spacing away.
The energy profile shows a monotonic increase from the split-interstitial to the aBR site for Na, but not for Li and K (Fig. 8). The migration energies can be found in Table 1. The interstitial migration energies are lower than the vacancy migration energies for all \(\beta\)-aluminas. Coupled with the low \(X\) interstitial formation energies, this indicates that an interstitial-mediated mechanism is the dominant process of ionic conduction. The activation energies of 0.17, 0.09 and 0.08 eV for Li, Na and K, respectively, can be considered lower bounds for the activation energy for conduction, which has been experimentally measured to be 0.24, 0.15 and 0.28 eV, respectively[10].
_Mg-stabilized \(\beta^{\prime\prime}\) aluminas._ As for Mg-stabilized \(\beta^{\prime\prime}\) aluminas, the \(X\) interstitials migrate through the available mO sites (Fig. 8). One of the atoms in the split-interstitial migrates to the vacant site while repelling the other, which knocks one of its nearest neighbors into an available mO site. This mO interstitial can then relax to a split-interstitial centered on another lattice site. The process is not always confined to the plane, as the \(X\) atoms in the two BR/aBR sublattices are staggered, and more densely packed than in the \(\beta\)-aluminas. The respective activation energies for this process are 0.45, 0.12 and 0.37 eV
Figure 6: Vacancy and interstitial formation energies in Na-poor conditions (with chemical potentials defined by Na\({}_{2}\)S\({}_{4}\) and S\({}_{8}\) reservoirs). The formation energies for defects in the \(\beta\)-alumina phases are shown on the left, and those in the Mg-stabilized \(\beta^{\prime\prime}\) alumina phases are shown on the right. The bandgap values were calculated and can be found in Appendix (Tables 2,3).
for Li, Na and K, respectively. All these are higher than the respective vacancy migration energies (Table 1). The energy profiles for the interstitial migration in the three \(\beta^{\prime\prime}\) materials can be found in Fig. 8.
Figure 8: Schematic representation of Li, Na and K interstitial migration in the respective \(\beta\)-alumina (top) and Mg-stabilized \(\beta^{\prime\prime}\)-alumina (bottom), alongside the calculated energy profile. Unlike all other \(X\) self-interstitials, which have the orientation represented in the figure, the Li split-interstitial in Li \(\beta\)-alumina is oriented approximately along [110], but follows a similar migration path. The migration path runs from a split-interstitial position to a neighboring split-interstitial position (at 0 and 100%), through the aBR site in the case of \(\beta\)-alumina, or through the mO site in the case of Mg-stabilized \(\beta^{\prime\prime}\) alumina. For orientation and color code, please refer to Fig. 4.
Figure 7: Schematic of the vacancy migration pathways in \(\beta\)-aluminas (top) and Mg-stabilized \(\beta^{\prime\prime}\)-aluminas (bottom) along with their respective energy profiles.
\begin{table}
\begin{tabular}{l l l l}
\hline
Host & \multicolumn{2}{c}{calc. \(W_{\rm mig}\)} & exp. \(E_{\rm a}\) \\
 & V\({}_{\rm X}^{\prime}\) & \(X^{\star}_{\rm i}\) & \\
\hline
Li\(\beta\) & 0.27 & 0.17 & _0.24\({}^{a}\)_ \\
 & & & _0.27\({}^{a}\)_ \\
Li\(\beta^{\prime\prime}\): Mg & 0.33 & 0.45 & 0.30\({}^{b}\) \\
 & [0.32] & & \\
Na\(\beta\) & 0.30 & 0.09 & 0.15\({}^{b}\) \\
 & & & _0.16\({}^{a}\)_ \\
Na\(\beta^{\prime\prime}\): Mg & 0.03 & 0.12 & 0.28-0.33\({}^{c}\) (LT) \\
 & [0.08] & & _0.20-0.31\({}^{b}\)_ (LT) \\
 & & & 0.03\({}^{b}\) (HT) \\
 & & & 0.09-0.12\({}^{g}\) (HT) \\
K\(\beta\) & 0.72 & 0.08 & 0.28-0.56\({}^{d}\) \\
K\(\beta^{\prime\prime}\): Mg & 0.03 & 0.37 & 0.15\({}^{b}\) \\
 & [0.02] & & _0.186\({}^{f}\)_ \\
\hline
\end{tabular}
\({}^{a}\) Ref. [13]
\({}^{b}\) Ref. [10]
\({}^{c}\) Ref. [46]
\({}^{d}\) Ref. [47]
\({}^{e}\) Ref. [35] and [48]
\({}^{f}\) Ref. [16]
\({}^{g}\) Ref. [12]
\end{table}
Table 1: Calculated activation energies for migration (\(W_{\rm mig}\)) of alkali metal vacancy and interstitial defects, and experimental activation energy derived from conductivity experiments (\(E_{\rm a}\)). Values in square brackets are for a correlated two-atom migration. LT and HT refer to low temperature and high temperature, respectively. Values in italic were reported for single crystals. The calculated and experimental values can be directly compared if the material has pre-existing defects of either type, or in thermodynamic equilibrium if the formation energy of one of the defects is zero or negative.
Conclusion
We have carried out DFT calculations of the structure, formation energy and migration of intrinsic defects in \(\beta\) and \(\beta^{\prime\prime}\) aluminas of Li, Na and K. We have confirmed that both alkali metal self-interstitials and vacancies in the conduction plane can carry charge for almost the whole range of Fermi level energies potentially available.
Alkali metal self-interstitials have low or even negative formation energy in \(\beta\)-aluminas in alkali metal-rich conditions, and this is consistent with Na excess reported experimentally[47]. The \(X_{i}\) activation energies for migration are slightly lower than the experimental activation energies, which could be due to the formation energy contribution or to crystal imperfection.
The formation energies of alkali metal vacancies in \(\beta^{\prime\prime}\)-aluminas are positive over most of the DFT bandgap even in the case of Na-poor Na \(\beta^{\prime\prime}\)-alumina. However, such defects can possibly be present due to high-temperature processing. Similarly, the structure can be made less Li/Na/K stuffed (but stoichiometric) by adding less Mg\({}_{\rm Al}\).
In the \(\beta\)-aluminas, alkali metal vacancy migration energies increase with increasing ionic radius, but the alkali metal interstitial migration energy decreases with ionic radius, which is somewhat counterintuitive. In the \(\beta^{\prime\prime}\)-aluminas, the vacancy migration energies for Na and K are about one order of magnitude lower than for Li. The ionic radii of Na and K are ideally suited for moving in a snug fit within the interlayer spacing when it is close to maximum packing.
The vacancy migration energy that we have obtained for Na \(\beta^{\prime\prime}\)-alumina is closer to the experimental value for the high-temperature regime. It has been suggested that the higher activation energy at low temperatures is due to vacancy ordering[46; 49]. However, in our calculations, the vacancies are reconstructed and, due to the periodic boundary conditions, have a 2\(\times\)2 ordering without jeopardizing the low activation energy. Recently, it has been proposed that interaction between the vacancies and the Mg\({}_{\rm Al}\) is at the origin of the higher activation energy in the low-temperature regime[50], an explanation that reconciles the experimental interpretation with the results of our calculations. Comparing the energy of the correlated two-atom process with the one-atom process for vacancy migration in \(\beta^{\prime\prime}\)-aluminas, we found that the energy of the two is nearly the same in the cases of Li and K, indicating that the uphill movement of one atom on its potential energy surface is correlated with the downhill movement of the other atom[51]. Such processes involving two or more atoms may contribute significantly to diffusion. Unfortunately, there is a lack of measurements of the Haven ratio in \(\beta^{\prime\prime}\)-aluminas.
The calculated activation energy for V\({}_{\rm K}^{\prime}\) in K \(\beta^{\prime\prime}\)-alumina is found to be only about 20 meV, suggesting that this material can present nearly ideal ion conduction if the amount of K is carefully controlled. The smaller energy barrier found in K \(\beta^{\prime\prime}\)-alumina is due to its optimal ionic radius together with the lower affinity of K for O compared to the other alkali metals (see Table 4). We therefore believe that K \(\beta^{\prime\prime}\)-alumina deserves further attention as an ionic conductor.
###### Acknowledgements.
This research project is supported by the Ministry of Education, Singapore, under its Research Centre of Excellence award to the Institute for Functional Intelligent Materials, National University of Singapore (I-FIM, project No. EDUNC-33-18-279-V12). This work used computational resources of the supercomputer Fugaku provided by RIKEN (Project ID: hp230186); the Centre of Advanced 2D Materials (CA2DM), funded by the National Research Foundation, Prime Ministers Office, Singapore; and the Singapore National Supercomputing Centre (NSCC).
## Appendix A Calculated lattice parameters and bandstructures
The calculated lattice parameters \(a\), \(b\) and \(c\) are given in Tables 2 and 3. For hexagonal systems, \(\mathbf{a}\) and \(\mathbf{c}\) are aligned with the \(\hat{x}\) and \(\hat{z}\) directions, respectively. The respective crystallographic information files (CIF) are given as Supplementary Information.
For direct comparison, we represent the orthorhombic structure of the Li\(\beta\) phase in the same orientation, with \(\mathbf{a}\) and \(\mathbf{c}\) lattice vectors aligned with the \(\hat{x}\) and \(\hat{z}\) directions respectively. The \(\mathbf{b}\) vector is perpendicular to \(\mathbf{c}\) and makes an angle of 120\({}^{\circ}\) with \(\mathbf{a}\). For a standard crystallographic representation, refer to the CIF file in Supplementary Information.
The electronic bandstructures and bandgaps were obtained using DFT in the PBE approximation as detailed in Section II.
\begin{table}
\begin{tabular}{l l l l}
Structure & \(a\) (Å) & \(c\) (Å) & bandgap (eV) \\
\hline
Na\(\beta\) & 5.597 (5.594\({}^{\rm a}\)) & 22.485 (22.53\({}^{\rm a}\)) & 4.61 \\
Na\(\beta^{\prime\prime}\) & 5.680 (5.60\({}^{\rm a}\)) & 34.059 (34.11\({}^{\rm a}\)) & 1.98 \\
Na\(\beta^{\prime\prime}\): Mg & 5.698 & 33.784 & 3.58 \\
\hline
K\(\beta\) & 5.597 (5.61\({}^{\rm b}\)) & 22.527 (22.75\({}^{\rm b}\)) & 4.80 \\
K\(\beta^{\prime\prime}\) & 5.689 (5.595\({}^{\rm c}\)) & 34.689 (34.226\({}^{\rm a}\)) & 1.90 \\
K\(\beta^{\prime\prime}\): Mg & 5.704 & 34.421 & 4.21 \\
\end{tabular}
\end{table}
Table 2: Lattice parameters and electronic bandgaps of Na and K \(\beta\)-aluminas, idealized \(\beta^{\prime\prime}\)-aluminas and Mg-stabilized \(\beta^{\prime\prime}\)-aluminas (\(x=1\)). Experimental values are given in brackets.
## Appendix B Cohesive energies of alkali metal oxides
Cohesive (atomization) energies of the ground state oxides are given in Table 4.
|
2309.05775 | All symmetries of near-horizon scattering | Asymptotic symmetries are known to constrain the infrared behaviour of
scattering processes in asymptotically flat spacetimes. By the same token, one
expects symmetries of the black hole horizon to constrain near-horizon
gravitational scattering. In this paper, we make an important advance towards
establishing this connection. We find all near-horizon symmetries relevant for
gravitational scattering near the horizon of the Schwarzschild black hole. We
study large diffeomorphisms of linearised perturbations of the Schwarzschild
black hole in a partial wave basis and in gauge that allows for gravitational
radiation crossing the event horizon. This setup is ideally suited to study
processes involving near-horizon gravitons like scattering and black hole
evaporation. We find the most general near-horizon symmetries that are
consistent with the perturbations being finite at the horizon. With no further
restriction on the boundary conditions than regularity on the horizon, we find
the associated covariant charges to be finite and non-zero. These symmetries
are therefore physical. The complete symmetry algebra, however, does not close.
The maximal subset of symmetries that forms a closed algebra turns out to be $
{\rm Diff}(S^2) $ in a semi-direct sum with two supertranslations. Our boundary
conditions in fact allow us to extend the asymptotic Killing vectors everywhere
outside the horizon and we show that a sub-algebra closes to all orders in the
near-horizon expansion parameter. Interestingly, for a large black hole, the
dominant symmetries are just two copies of $ u(1)$, one of which is not present
in the maximal closed sub-algebra. | Ankit Aggarwal, Nava Gaddam | 2023-09-11T19:11:13Z | http://arxiv.org/abs/2309.05775v2 | # All symmetries of near-horizon scattering
###### Abstract
Asymptotic symmetries are known to constrain the infrared behaviour of scattering processes in asymptotically flat spacetimes. By the same token, one expects symmetries of the black hole horizon to constrain near-horizon gravitational scattering. In this paper, we make an important advance towards establishing this connection. We find all near-horizon symmetries relevant for gravitational scattering near the horizon of the Schwarzschild black hole. We study large diffeomorphisms of linearised perturbations of the Schwarzschild black hole in a partial wave basis and in a gauge that allows for gravitational radiation crossing the event horizon. This setup is ideally suited to study processes involving near-horizon gravitons, like scattering and black hole evaporation. We find the most general near-horizon symmetries that are consistent with the perturbations being finite at the horizon. With no further restriction on the boundary conditions than regularity on the horizon, we find the associated covariant charges to be finite and non-zero. These symmetries are therefore physical. The complete symmetry algebra, however, does not close. The maximal subset of symmetries that forms a closed algebra turns out to be \(\mathrm{Diff}(S^{2})\) in a semi-direct sum with two supertranslations. Our boundary conditions in fact allow us to extend the asymptotic Killing vectors everywhere outside the horizon, and we show that a sub-algebra closes to all orders in the near-horizon expansion parameter. Interestingly, for a large black hole, the dominant symmetries are just two copies of \(u(1)\), one of which is not present in the maximal closed sub-algebra.
## 1 Introduction
Classical analyses of perturbations of the Schwarzschild black hole have a rich history. Seeking an understanding of the classical stability of the background, influential work on the subject dates back several decades [1; 2; 3]. These led to further important developments due to Chandrasekhar [4]. In addition to various applications [5; 6; 7; 8], an effort to cast the perturbations in gauge-invariant form has culminated in a rather practical and covariant avatar [9; 10; 11; 12; 13; 14], which also allowed for a study of classical gravitational radiation both crossing the horizon and reaching infinity.
Owing to Hawking's seminal work [15; 16], quantum aspects of black holes form an equally captivating and yet, as is widely acknowledged, incomplete story. While the problem of information loss has been widely debated, quantum aspects of the most general perturbations of the Schwarzschild black hole have received comparatively little attention. Only recently has a formalism to study scattering processes in the near-horizon region been developed [17; 18; 19; 20; 21].1 These processes naturally include metric perturbations, resulting in a new regime of quantum
gravity where elastic \(2-2\) amplitudes eikonalise. This regime is reached when the centre-of-mass energies of scattering processes satisfy \(EM\gg M_{Pl}^{2}\)[17; 18], where \(M\) is the mass of the black hole. The formalism also allows for calculations of S-matrix elements away from the black hole eikonal regime. Inelastic processes relevant for black hole evolution can also be explicitly calculated [20]. A new soft limit is also expected to emerge near the horizon, where graviton momenta scale inversely with the Schwarzschild radius in the large black hole limit [26]. These developments are inherently based on the covariant avatar of the black hole perturbations alluded to earlier.
Black holes notwithstanding, it has become increasingly apparent that the infrared structure of gravity is far richer than previously thought even perturbatively about flat space. Gravitational radiation reaching null infinity is known to be intricately tied to infinite dimensional symmetries arising from large gauge transformations that survive at future and past null infinities [27; 28; 29]. In turn, a diagonal subgroup of these future and past Bondi-Metzner-Sachs (BMS) groups has been argued to be a symmetry of the asymptotically flat quantum gravity S-matrix [30]. The Ward identities associated with these symmetries have been shown to be the same as Weinberg's soft graviton theorem in flat space [31; 32; 33].
It is then natural to ask if the radiation crossing the horizon has an analogously rich relationship with asymptotic symmetries of the horizon, and if these are tied to the near-horizon scattering processes. Infinite dimensional symmetries have in fact been shown to arise near the horizon of non-extremal black holes [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. Their relevance to the black hole information problem has also been debated [50; 51]. These symmetries can be thought of as emergent near the horizon, once the collapse process of a large black hole has settled. They are valid for as long as the black hole is semi-classically large (\(M\gg M_{Pl}\)). It is not known if the symmetries in the literature form the complete set of all symmetries of the horizon. Furthermore, any potential relationship between these symmetries and scattering processes near the horizon remains a glaringly open problem.
**Results:** In this article, we take an important step towards addressing the questions raised in the previous paragraph. Within the covariant approach to black hole perturbations, we derive the set of all near-horizon symmetries assuming only that the perturbations remain finite on the horizon. Remarkably, with no further boundary conditions on the perturbations than their finiteness at the horizon, we find the corresponding asymptotic charges to be finite. The asymptotic Killing vectors thus found form a closed abelian algebra at the linearised level, but there are non-linear obstructions to closure. The maximal closed sub-algebra without such obstructions turns out to arise from a restricted boundary condition that sets certain non-radiative data to zero. The resulting sub-algebra contains \(\mathrm{Diff}\left(S^{2}\right)\) in semi-direct sum with two supertranslations. In fact, we find the Killing vectors to all orders in the near-horizon expansion parameter and see that their algebra does not close to all orders. But a restriction of it (which now includes arbitrary functions of the sphere and the near-horizon parameter) does in fact close. Moreover, in the large black hole limit, the dominant symmetries form two copies of \(u(1)\), one of which is not contained in the maximal sub-algebra. Despite some open
questions which we discuss at the end, in this paper, we report the set of all near-horizon symmetries relevant for near-horizon scattering.
**Organisation of this paper:** In Section 2, we set up the perturbations of the Schwarzschild black hole in a partial wave basis and explain our choice of gauge that allows for gravitational radiation crossing the horizon. In Section 3, we derive the residual diffeomorphisms that preserve the said gauge choice. We also impose regular boundary conditions for the gravitons on the horizon to find the near-horizon symmetries, and work out the associated charges using the covariant phase space formalism. In Section 4, we turn to the algebra of these near-horizon Killing vectors and identify the sub-algebras that close. We also study the symmetries relevant for large black holes. Finally, we consider the extension of our near-horizon symmetries to all orders in the distance away from the horizon. We also find a subset of symmetries that forms a closed algebra to all orders. We conclude with a brief summary of results and some open questions in Section 5.
## 2 Black hole perturbations in partial waves
We are interested in metric perturbations, \(h_{\mu\nu}\), defined via \(g_{\mu\nu}~{}=~{}\bar{g}_{\mu\nu}+\kappa h_{\mu\nu}\) with \(\kappa^{2}=8\pi G\). The background is the Schwarzschild black hole, denoted by \(\bar{g}_{\mu\nu}\), which is specified by
\[\mathrm{d}s^{2}~{}=~{}A\left(u,v\right)\mathrm{d}u\mathrm{d}v+r \left(u,v\right)^{2}\mathrm{d}\Omega^{2}\quad\text{where}\quad A\left(u,v \right)=\frac{R}{r}\exp\left(1-\frac{r}{R}\right)\,, \tag{1}\]
where \(R\) is the Schwarzschild radius and \(r\left(u,v\right)\) is implicitly determined from the relation:
\[uv~{}=~{}2R^{2}\left(1-\frac{r}{R}\right)\exp\left(\frac{r}{R}-1 \right)\,. \tag{2}\]
The horizon is located at \(r=R\), implying that either \(u=0\) or \(v=0\). We call \(u=0\) and \(v=0\) the future and past horizons, respectively. While we restrict our attention to the past horizon (wherefrom outgoing radiation emanates) in this article, analogous considerations directly apply to the future horizon.
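Since Eq. (2) only defines \(r\left(u,v\right)\) implicitly, it is convenient for numerics to note that it is solved exactly by the Lambert-W function: with \(w=r/R-1\), the relation becomes \(we^{w}=-uv/(2R^{2})\). The small utility below (ours, not part of the original analysis) implements and verifies this inversion.

```python
import numpy as np
from scipy.special import lambertw

def r_of_uv(u, v, R=1.0):
    """Invert uv = 2 R^2 (1 - r/R) exp(r/R - 1) for the areal radius r,
    using r = R (1 + W_0(-uv / (2 R^2))) on the principal branch."""
    w = lambertw(-u * v / (2.0 * R**2), k=0)
    return R * (1.0 + np.real(w))

# The horizon (u = 0 or v = 0) sits at r = R, and uv < 0 gives the exterior:
assert np.isclose(r_of_uv(0.0, 1.7), 1.0)
r = r_of_uv(-1.0, 1.0)
assert np.isclose(-1.0, 2.0 * (1.0 - r) * np.exp(r - 1.0))  # plug back into Eq. (2)
```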
Perturbations of the Schwarzschild black hole are most naturally studied in the partial wave decomposition2 of Regge and Wheeler [1]. In this basis, metric perturbations are split into the so-called even and odd modes, respectively, as follows:
Footnote 2: In this paper, following [17; 18], we use the real representation of the spherical harmonics which satisfy the same orthogonality relations as the complex ones.
\[h^{+}_{\mu\nu} = \sum_{\ell,m}\,\begin{pmatrix}H_{ab}&-h^{+}_{a}D_{A}\\ -h^{+}_{a}D_{A}&r^{2}\left(K+\frac{\ell(\ell+1)}{2}G\right)\gamma_{AB}+r^{2} GD_{A}D_{B}\end{pmatrix}Y_{\ell m}\,, \tag{3a}\] \[h^{-}_{\mu\nu} = \sum_{\ell,m}\,\begin{pmatrix}0&-h^{-}_{a}\epsilon_{A}{}^{B}D_{B }\\ -h^{-}_{a}\epsilon_{A}{}^{B}D_{B}&-h_{\Omega}\epsilon_{(A}{}^{C}D_{B)}D_{C} \end{pmatrix}Y_{\ell m}\,. \tag{3b}\]
All metric components in this decomposition naturally carry partial wave indices which we have suppressed to avoid a clutter of notation. We will continue to leave the partial wave indices implicit except when we choose to remind ourselves of their presence. The longitudinal Kruskal coordinates \((u,\,v)\) are labelled by lowercase Latin indices, while the corresponding uppercase ones stand for the angular coordinates. Moreover, \(D_{A}\) stands for the covariant derivative on the unit two-sphere parametrised by the round metric \(\gamma_{AB}\). Finally, \(\epsilon_{A}{}^{B}\) is the completely antisymmetric tensor on the sphere with \(\epsilon_{\theta}{}^{\phi}=\sin\theta\).
### Radiation gauge
Consider a generic vector field decomposed into vector spherical harmonics as
\[\bar{\chi}_{a}\ =\ \kappa\sum_{\ell m}\chi_{a}^{\ell m}Y_{\ell m}\quad\text{and} \quad\bar{\chi}_{A}\ =\ \kappa\sum_{\ell m}\left(\chi_{\ell m}^{+}\partial_{A}+\chi_{\ell m}^{-} \epsilon_{A}{}^{B}\partial_{B}\right)Y_{\ell m}\,. \tag{4}\]
It is evident that three of the components of the vector field are even modes while \(\chi_{\ell m}^{-}\) is an odd mode. As discussed in [17, 18, 20, 52, 53], a convenient choice of gauge (the "Regge-Wheeler gauge") that leaves no residual gauge symmetry is one where the vector fields are chosen such that \(G=0\), \(h_{\Omega}=0\), \(h_{a}^{+}=0\). This ensures that the even graviton mode is block diagonal whereas the odd graviton is entirely off-diagonal. However, this gauge does not allow for any radiative data.
We will use a gauge that allows for gravitational radiation crossing the past horizon, called the "radiation gauge", first proposed in [14]:
\[H_{uv}\ =\ 0\,,\quad H_{uu}\ =\ 0\,,\quad\text{and}\quad h_{u}^{\pm}\ =\ 0\,. \tag{5}\]
A similar condition can be imposed to study radiation on the future horizon and analogous results can be obtained. Near the past (future) horizon, \(v=0\) (\(u=0\)), it was argued in [14] that the free radiative data can be chosen to be the leading component of \(G\) when expanded in a small \(v\) series (\(u\) series). This component is at \(\Theta(v^{0})\). Moreover, as pointed out in [14, 17, 18] and further exploited in [20, 52, 53], it may be worth noting that there are no propagating degrees of freedom in the monopole (\(\ell=0\)) and dipole (\(\ell=1\)) sectors. Only the multipole modes with \(\ell>1\) are physical and propagating.
## 3 Symmetries and covariant charges
In this section, we find the residual symmetries of the radiation gauge, demand that the perturbations be regular at the horizon as boundary conditions, and compute the near-horizon charge using the covariant phase space method.
### Residual gauge symmetry
With the parameterisation (4) of a generic diffeomorphism decomposed in partial waves, the metric components in (3) transform as
\[\delta H^{\ell m}_{ab}\ =\ \tilde{\nabla}_{a}\chi_{b}^{\ell m}+\tilde{\nabla}_{b }\chi_{a}^{\ell m}\, \tag{6a}\]
\[\delta h^{+}_{a,\ell m} = \chi^{\ell m}_{a}+\tilde{\nabla}_{a}\chi^{+}_{\ell m}-\frac{2}{r}\left(\partial_{a}r\right)\,\chi^{+}_{\ell m}\,, \tag{6b}\] \[\delta K_{\ell m} = \frac{2}{r}g^{ab}\left(\partial_{a}r\right)\chi^{\ell m}_{b}-\frac{\ell\left(\ell+1\right)}{r^{2}}\chi^{+}_{\ell m}\,, \tag{6c}\] \[\delta G_{\ell m} = \frac{2}{r^{2}}\chi^{+}_{\ell m}\,, \tag{6d}\] \[\delta h^{-}_{a,\ell m} = \nabla_{a}\chi^{-}_{\ell m}-\frac{2}{r}\left(\partial_{a}r\right)\,\chi^{-}_{\ell m}\,, \tag{6e}\] \[\delta h^{\ell m}_{\Omega} = 2\chi^{-}_{\ell m}\,. \tag{6f}\]
Here all the covariant derivatives are with respect to the background metric and \(\tilde{\nabla}\) refers to the covariant derivative restricted to the longitudinal coordinates. We are interested in finding those gauge transformations that leave the gravitons in the radiation gauge (5). The radiation gauge imposes constraints on the diffeomorphisms in (4). Demanding \(H^{\ell m}_{uu}=0\) leads to
\[0\ =\ \tilde{\nabla}_{u}\chi^{\ell m}_{u}\ =\ \partial_{u}\chi^{\ell m}_{u}-\frac{\partial_{u}A\left(r\right)}{A\left(r\right)}\chi^{\ell m}_{u}\,. \tag{7}\]
Therefore, we find that the most general solution for \(\chi^{\ell m}_{u}\) in terms of an arbitrary integration constant \(f^{\ell m}_{1}\left(v\right)\) is of the form
\[\chi^{\ell m}_{u}\ =\ A\left(r\right)f^{\ell m}_{1}\left(v\right)\,. \tag{8}\]
Next, we have that \(h^{+}_{u,\ell m}=0\) implies
\[0\ =\ \left(\partial_{u}-\frac{2}{r}\partial_{u}r\right)\chi^{+}_{\ell m}+\chi^{\ell m}_{u}\,, \tag{9}\]
resulting in the solution
\[\chi^{+}_{\ell m}\ =\ r^{2}\left(f^{\ell m}_{2}\left(v\right)-f^{\ell m}_{1}\left(v\right)\int\mathrm{d}u\,\frac{A\left(r\right)}{r^{2}}\right)\,, \tag{10}\]
where we introduced a new integration constant \(f_{2}\left(v\right)\). The last gauge condition, \(H^{\ell m}_{uv}=0\), implies that
\[0\ =\ \partial_{u}\chi^{\ell m}_{v}+\partial_{v}\chi^{\ell m}_{u}\,, \tag{11}\]
where we used that, in the background (1), \(g^{ab}\Gamma^{c}_{ab}=0\). Therefore, we find the following solution3 in terms of yet another integration constant \(f_{3}\left(v\right)\):
Footnote 3: Alternatively, the solution to this equation may also be written in terms of an auxillary function \(q^{\ell m}\left(u,v\right)\) such that \(\chi^{\ell m}_{a}=g^{bc}\epsilon_{ac}\partial_{b}q^{\ell m}\) which implies \(\chi^{\ell m}_{v}=-\frac{1}{A\left(r\right)}\partial_{v}q^{\ell m}\).
\[\chi^{\ell m}_{v}\ =\ f^{\ell m}_{3}\left(v\right)-\partial_{v}f^{\ell m}_{1}\left(v\right)\int\mathrm{d}u\,A\left(r\right)-f^{\ell m}_{1}\left(v\right)\int\mathrm{d}u\,\partial_{v}A\left(r\right)\,. \tag{3.7}\]
Finally, requiring \(h^{-}_{u,\ell m}=0\) yields the equation
\[0\ =\ \partial_{u}\left(\frac{\chi^{-}_{\ell m}}{r^{2}}\right)\quad\text{which has the solution}\quad\chi^{-}_{\ell m}\ =\ r^{2}f_{4}^{\ell m}\left(v\right)\,. \tag{3.8}\]
Collecting these components, we find the residual Killing vectors of the radiation gauge to be
\[\chi^{\ell m}_{u} =\ A\left(r\right)f^{\ell m}_{1}\left(v\right)\,,\quad\chi^{\ell m}_{v} =\ f^{\ell m}_{3}\left(v\right)-\partial_{v}f^{\ell m}_{1}\left(v\right)\int\mathrm{d}u\,A\left(r\right)-f^{\ell m}_{1}\left(v\right)\int\mathrm{d}u\,\partial_{v}A\left(r\right)\,, \tag{3.9a}\] \[\chi^{-}_{\ell m} =\ r^{2}f^{\ell m}_{4}\left(v\right)\,,\qquad\chi^{+}_{\ell m} =\ r^{2}\left(f^{\ell m}_{2}\left(v\right)-f^{\ell m}_{1}\left(v\right)\int\mathrm{d}u\,\frac{A\left(r\right)}{r^{2}}\right)\,. \tag{3.9b}\]
The derivation of these equations did not require any near-horizon expansion and they are valid to all orders in \(v\).
### Boundary conditions and equations of motion near the horizon
The only boundary conditions we demand are that the gravitons do not diverge at the horizon. As we will see, this rather mild constraint leads to finite near-horizon charges. The requirement that the perturbations remain finite allows us to expand all six remaining off-shell fields \(\Phi\in\{G,\,K,\,h_{v}^{\pm},\,H_{vv},\,h_{\Omega}\}\) in a Madhava–Taylor expansion around \(v=0\):
\[\Phi\ =\ \sum_{n=0}^{\infty}\Phi^{(n)}v^{n}. \tag{3.10}\]
It may be checked that the \(uu\)-component of the leading order linearised vacuum equations of motion implies that \(K^{(0)}\) is non-radiative free data determined by two arbitrary constants\({}^{4}\), \(c_{K^{(0)}}\) and \(d_{K^{(0)}}\):
Footnote 4: In [14], \(K^{(0)}\) was set to \(0\) which turns out to be very constraining for the near-horizon symmetries.
\[\partial_{u}^{2}K^{(0)}\ =\ 0\quad\text{which implies that}\quad K^{(0)}\ =\ c_{K^{(0)}}u+d_{K^{(0)}}\,. \tag{3.11}\]
All the other even graviton fields can be expressed in terms of \(G^{(0)}\) together with other non-radiative free data
\[\partial_{u}G^{(1)} =\ \frac{1}{2R^{2}}\left(u\partial_{u}G^{(0)}+2h_{v}^{(0)}\right) \tag{3.12}\] \[\partial_{u}K^{(1)} =\ -\frac{1}{2R^{2}}\left(\ell\left(\ell+1\right)\partial_{u}h_{v}^{(0)}- u\partial_{u}K^{(0)}+R^{2}\partial_{u}^{2}H_{vv}^{(0)}\right)\] \[\partial_{u}^{2}h_{v}^{+(0)} =\ -\frac{1}{2}\left(\ell-1\right)\left(\ell+2\right)\partial_{u}G^{(0)}- \partial_{u}K^{(0)}\] \[\partial_{u}^{2}H_{vv}^{(0)} =\ \frac{1}{2R^{2}}\left(\left(\ell-1\right)\ell\left(\ell+1 \right)\left(\ell+2\right)G^{(0)}+2\left(\left(\ell+2\right)\left(\ell-1 \right)\right)K^{(0)}-2u\partial_{u}K^{(0)}\right)\,,\]
The first of these equations arises from the \(\theta\phi\) component of the equations of motion, the second from the \(\phi\phi\) component, and the third from the \(u\phi\) component. Therefore, it is evident
that near the past (future) horizon, \(v=0\) (\(u=0\)), the free radiative data for the even sector may be chosen to be the leading \(\mathcal{O}\left(1\right)\) component of \(G\) when expanded in a small \(v\) series (\(u\) series). Of course, the leading components of \(h_{v}^{+}\) or \(H_{vv}\) are equally valid choices. The above equations can be solved to yield
\[K^{(1)} = \frac{1}{2R^{2}}\left[2c_{K^{(0)}}u^{2}-\left\{\ell\left(\ell+1 \right)c_{h}+\left(\ell+2\right)\left(\ell-1\right)d_{K^{(0)}}\right\}u\right]+ d_{K^{(1)}}\,\] \[h_{v}^{+(0)} = -\frac{1}{2}\left(\ell-1\right)\left(\ell+2\right)\left(\int \mathrm{d}uG^{(0)}\right)-\frac{u^{2}}{2}c_{K^{(0)}}+c_{h}u+d_{h}\,\] \[\partial_{u}H_{vv}^{(0)} = \frac{1}{2R^{2}}\big{[}\left(\ell-1\right)\ell\left(\ell+1\right) \left(\ell+2\right)\left(\int\mathrm{d}u\ G^{(0)}\right)+\left(\ell^{2}+\ell- 3\right)c_{K^{(0)}}u^{2} \tag{3.13}\] \[\quad+\ 2\left(\ell+2\right)\left(\ell-1\right)d_{K^{(0)}}u+d_{H} \big{]}\.\]
Here we introduced additional constants of integration: \(c_{h}\), \(d_{K^{(1)}}\), \(d_{h}\), and \(d_{H}\). In the odd sector, on the other hand, the leading order linearised vacuum equations of motion imply
\[\partial_{u}h_{\Omega}^{(0)}\ =\ -\frac{2R^{2}}{\left(\ell+2\right)\left(\ell-1 \right)}\partial_{u}^{2}h_{v}^{-(0)}. \tag{3.14}\]
All the subleading fields can be expressed in terms of \(h_{v}^{-(0)}\) or \(h_{\Omega}^{(0)}\) together with some non-radiative data. Thus, near the past (future) horizon, \(v=0\) (\(u=0\)), the free radiative data for the odd sector may be chosen to be the leading component of \(h_{\Omega}\) when expanded in a small \(v\) series (\(u\) series).
### Asymptotic Killing vectors
The near-horizon expansions (3.10) together with (3.9) lead to the following expansions for the Killing vector near the past horizon
\[\chi_{a}^{\ell m}\ =\ \sum_{n=0}^{\infty}\chi_{a}^{(n)}v^{n}\,\quad\chi_{\ell m}^{+} \ =\ \sum_{n=0}^{\infty}\chi^{+(n)}v^{n}\,\quad\chi_{\ell m}^{-}\ =\ \sum_{n=0}^{\infty}\chi^{-(n)}v^{n}\, \tag{3.15}\]
where the leading order components can be found to be\({}^{5}\)
Footnote 5: We introduced additional factors of \(R\) in some coefficients to ensure that all constants are dimensionless. These may be fixed by noting the dimensions of the metric perturbations in (2.3) and deducing the dimensions of the Killing vectors from (3.1). Based on this analysis, it can be checked that \([\chi_{a}]=L^{0}\), \(\left[\chi^{+}\right]=L^{1}\), and \(\left[\chi^{-}\right]=L\).
\[\chi_{v}^{(0)} = -\frac{\alpha_{1}u^{2}}{2R^{2}}+\frac{\alpha_{2}u}{R}+\alpha_{3}\,, \tag{3.16a}\] \[\chi_{u}^{(0)} = \alpha_{1}\,,\] (3.16b) \[\chi^{+(0)} = -\alpha_{1}u+R\beta_{1}\,\] (3.16c) \[\chi^{-(0)} = R\gamma_{1} \tag{3.16d}\]
where the partial-wave dependent constants \(\alpha_{i}\), \(\beta_{i}\), and \(\gamma_{i}\) are determined in terms of the integration constants in (3.9). Writing all integration constants appearing in (3.9) also in a power series as \(f_{i}(v)=\sum_{n=0}^{\infty}f_{i}^{(n)}v^{n}\), we have that \(\alpha_{1}=f_{1}^{(0)}\), \(\alpha_{2}=-Rf_{1}^{(1)}\), \(\alpha_{3}=f_{3}^{(0)}\), \(\beta_{1}=Rf_{2}^{(0)}\), and \(\gamma_{1}=Rf_{4}^{(0)}\). These solutions follow from (3.9), where we used that, in the Schwarzschild background, \(\partial_{a}A\left(r\right)=\partial_{a}r\,\partial_{r}A\left(r\right)\) and that
\[r\left(u,v\right)=R-\frac{uv}{2R}-\frac{u^{2}v^{2}}{4R^{3}}+\mathcal{O}\left(v^{3}\right)\,,\quad A(u,v)=1+\frac{uv}{R^{2}}+\frac{9u^{2}v^{2}}{8R^{4}}+\mathcal{O}\left(v^{3}\right) \tag{3.17}\]
near the horizon. The subleading terms can similarly be determined to be
\[\chi_{v}^{(1)}(u) = -\frac{3\alpha_{1}}{4R^{4}}u^{3}+\frac{\alpha_{2}}{R^{3}}u^{2}+\frac{\alpha_{4}u}{R^{2}}+\frac{\alpha_{5}}{R}\,, \tag{3.18a}\] \[\chi_{u}^{(1)}(u) = \frac{\alpha_{1}u}{R^{2}}-\frac{\alpha_{2}}{R}\,,\] (3.18b) \[\chi^{+(1)}(u) = \left(\frac{\alpha_{2}}{R}-\frac{\beta_{1}}{R}\right)u+\beta_{2}\,,\] (3.18c) \[\chi^{-(1)} = \gamma_{2}-\frac{\gamma_{1}u}{R}\,. \tag{3.18d}\]
The new coefficients appearing in this expression are determined as \(\alpha_{4}=-2R^{2}f_{1}^{(2)}\), \(\alpha_{5}=Rf_{3}^{(1)}\), \(\beta_{2}=R^{2}f_{2}^{(1)}\), and \(\gamma_{2}=R^{2}f_{4}^{(1)}\). Plugging these solutions into the variations (3.1), we find the following leading order variations of the metric perturbations
\[\delta\partial_{u}H_{vv}^{(0)} = -\frac{3\alpha_{1}u^{2}}{2R^{4}}+2\frac{\alpha_{4}-\alpha_{3}}{R^{2}}\,, \tag{3.19a}\] \[\delta h_{v}^{+(0)} = -\frac{3\alpha_{1}u^{2}}{2R^{2}}+\frac{2\alpha_{2}u}{R}+\alpha_{3}+\beta_{2}\,,\] (3.19b) \[\delta K^{(1)} = \frac{\left(\ell^{2}+\ell+1\right)\alpha_{1}u^{2}}{R^{4}}-\frac{\ell\left(\ell+1\right)\alpha_{2}u}{R^{3}}+\frac{\alpha_{3}-\ell\left(\ell+1\right)\beta_{2}}{R^{2}}\,,\] (3.19c) \[\delta G^{(0)} = -\frac{2\alpha_{1}u}{R^{2}}+\frac{2\beta_{1}}{R}\,,\] (3.19d) \[\delta K^{(0)} = \frac{\left(\ell^{2}+\ell+1\right)\alpha_{1}u}{R^{2}}-\frac{\ell\left(\ell+1\right)\beta_{1}}{R}\,,\] (3.19e) \[\delta h_{v}^{-(0)} = \gamma_{2}\,,\] (3.19f) \[\delta h_{\Omega}^{(0)} = 2R\gamma_{1}\,. \tag{3.19g}\]
It can be verified that these variations are consistent with the equations of motion we found in (3.12) and (3.14).
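As an aside, the background expansions (3.17) used in these computations can themselves be checked directly. The sketch below assumes the Kruskal-type parameterisation \(\frac{uv}{2R^{2}}=\left(1-\frac{r}{R}\right)e^{\left(r-R\right)/R}\) and \(A=\frac{R}{r}\,e^{\left(R-r\right)/R}\), one standard normalisation with \(A\to 1\) at the horizon; the precise conventions of the background (2.1) may differ by inessential rescalings.

```latex
% Set r = R + \delta and expand the assumed defining relation to second order:
\frac{uv}{2R^{2}} \;=\; -\frac{\delta}{R}\,e^{\delta/R}
                  \;\simeq\; -\frac{\delta}{R}-\frac{\delta^{2}}{R^{2}}
\quad\Longrightarrow\quad
\delta \;=\; -\frac{uv}{2R}-\frac{u^{2}v^{2}}{4R^{3}}+\mathcal{O}(v^{3})\,,
% which reproduces r(u,v) in (3.17). For the conformal factor,
A \;=\; \frac{R}{r}\,e^{(R-r)/R}
  \;\simeq\; \Bigl(1-\frac{\delta}{R}+\frac{\delta^{2}}{R^{2}}\Bigr)
             \Bigl(1-\frac{\delta}{R}+\frac{\delta^{2}}{2R^{2}}\Bigr)
  \;=\; 1-\frac{2\delta}{R}+\frac{5\delta^{2}}{2R^{2}}\,,
% and substituting \delta yields
A \;=\; 1+\frac{uv}{R^{2}}+\Bigl(\tfrac{1}{2}+\tfrac{5}{8}\Bigr)\frac{u^{2}v^{2}}{R^{4}}
  \;=\; 1+\frac{uv}{R^{2}}+\frac{9u^{2}v^{2}}{8R^{4}}+\mathcal{O}(v^{3})\,.
```

In particular, the coefficient \(9/8\) and the power \(R^{4}\) in (3.17) follow from combining the linear and quadratic pieces of the expansion.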
### Covariant charge
The gravitational charges associated with the asymptotic Killing vectors (10) can be found using the covariant phase space formalism [54, 55] (also see [56] for a review):
\[\not{\delta}Q_{\bar{\chi}}\left[\bar{g}_{\rho\sigma};h_{\rho\sigma}\right] = \frac{\kappa}{16\pi G}\int\left(\mathrm{d}^{2}x\right)_{\mu\nu}\sqrt{-\bar{g}}\left[\bar{\chi}^{\nu}\nabla^{\mu}h-\bar{\chi}^{\nu}\nabla_{\sigma}h^{\mu\sigma}+\bar{\chi}_{\sigma}\nabla^{\nu}h^{\mu\sigma}+\frac{1}{2}h\nabla^{\nu}\bar{\chi}^{\mu}\right.\]
\[\left.+\frac{1}{2}h^{\nu\sigma}\left(\nabla^{\mu}\bar{\chi}_{\sigma}-\nabla_{\sigma}\bar{\chi}^{\mu}\right)-\left(\mu\leftrightarrow\nu\right)\right]\,, \tag{3.20}\]
where \(\bar{\chi}\) was defined in (2.4), \(\not{\delta}\) indicates that the charges are not integrable in general and \(\left(\mathrm{d}^{2}x\right)_{\mu\nu}=\frac{1}{4}\epsilon_{\mu\nu\rho\sigma} \mathrm{d}x^{\rho}\wedge\mathrm{d}x^{\sigma}\). Plugging in our solutions for the asymptotic Killing vectors from Section 3.3 and using the decomposition of the graviton into partial waves as before, we find the charge in the even sector to be
\[Q\left[\chi_{a}^{(0)\ell m},\chi_{\ell m}^{+(0)}\right]\ =\ \frac{1}{2}\bigg{[}\ell(\ell+1) \left(\partial_{u}h_{v}^{(0)}\left(R\beta_{1}-\alpha_{1}u\right)+2 \alpha_{1}h_{v}^{(0)}\right)\] \[+2R^{2}\alpha_{1}K^{(1)}+\left(2R\alpha_{2}-3u\alpha_{1}\right)K ^{(0)}\] \[+\left(u^{2}\alpha_{1}-2Ru\alpha_{2}-2R\alpha_{2}\right)\partial_ {u}K^{(0)}\Bigg{]}. \tag{3.21}\]
Here, we used the following orthogonality relation for spherical harmonics to integrate over the two-sphere
\[\int\mathrm{d}\Omega\, D^{A}Y_{\ell m}(\Omega)\,D_{A}Y_{\ell^{\prime}m^{\prime}}(\Omega)\ =\ \ell(\ell+1)\,\delta_{\ell\ell^{\prime}}\,\delta_{mm^{\prime}}\,. \tag{3.22}\]
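This relation is a one-line consequence of integration by parts on the unit sphere, combined with the eigenvalue equation \(D^{A}D_{A}Y_{\ell m}=-\ell\left(\ell+1\right)Y_{\ell m}\) and the orthonormality \(\int\mathrm{d}\Omega\,Y_{\ell m}Y_{\ell^{\prime}m^{\prime}}=\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}\) (real harmonics, matching the absence of complex conjugation above):

```latex
\int\mathrm{d}\Omega\, D^{A}Y_{\ell m}\, D_{A}Y_{\ell^{\prime}m^{\prime}}
 \;=\; -\int\mathrm{d}\Omega\, Y_{\ell m}\, D^{A}D_{A}Y_{\ell^{\prime}m^{\prime}}
 \;=\; \ell^{\prime}\left(\ell^{\prime}+1\right)\int\mathrm{d}\Omega\, Y_{\ell m}\,Y_{\ell^{\prime}m^{\prime}}
 \;=\; \ell\left(\ell+1\right)\delta_{\ell\ell^{\prime}}\,\delta_{mm^{\prime}}\,.
```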
The \(\ell,m\) labels are again implicit in the charges. Upon imposing equations of motion (3.12), this charge reduces to
\[Q\left[\chi_{a}^{\ell m},\chi_{\ell m}^{+}\right]\ =\ \frac{1}{2}\bigg{[}\ell \left(\ell+1\right) \left(\partial_{u}h_{v}^{(0)}\left(R\beta_{1}-\alpha_{1}u\right)+2 \alpha_{1}h_{v}^{(0)}\right)\] \[-u\left(\ell\left(\ell+1\right)c_{h}+d_{K^{(0)}}\right)\alpha_{1 }-\ell\left(\ell+1\right)\alpha_{1}c_{K^{(0)}}\] \[+2R\left(\alpha_{2}d_{K^{(0)}}-R\alpha_{3}c_{K^{(0)}}\right)\] \[+2\ell(\ell+1)\alpha_{1}d_{h}+2R^{2}\alpha_{1}d_{K^{(1)}}\bigg{]}\,. \tag{3.23}\]
The first line of this expression contains free radiative data (which we chose to be \(h_{v}^{(0)}\)), whereas the terms in the second line and below only contain non-radiative data. Such terms may be eliminated by an appropriate choice of boundary conditions that fix the integration constants appearing in the solutions to equations of motion in (3.13).6 Similarly, in the odd sector, we find the following linearised charge
Footnote 6: Similar conditions were used in the study of higher dimensional supertranslations in [57].
\[Q\left[\chi_{\ell m}^{-}\right]\ =\ \frac{R}{2}\left[\ell\left(\ell+1\right) \gamma_{1}^{\ell,m}\partial_{u}h_{v}^{-(0)}\right]\,, \tag{3.24}\]
where we used the following orthogonality relation to integrate over the two-sphere
\[\int\mathrm{d}\Omega\ \epsilon^{AB}D_{B}Y_{\ell m}(\Omega)\epsilon_{A}{}^{C}D_{C }Y_{\ell^{\prime}m^{\prime}}(\Omega)\ =\ \ell(\ell+1)\delta_{\ell\ell^{\prime}}\delta_{m,m^{\prime}}. \tag{3.25}\]
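This odd-sector relation in fact reduces to the even-sector one (3.22): using the two-dimensional identity \(\epsilon_{A}{}^{C}\epsilon^{AB}=\gamma^{BC}\) on the unit sphere,

```latex
\int\mathrm{d}\Omega\;\epsilon^{AB}D_{B}Y_{\ell m}\,\epsilon_{A}{}^{C}D_{C}Y_{\ell^{\prime}m^{\prime}}
 \;=\; \int\mathrm{d}\Omega\;\gamma^{BC}D_{B}Y_{\ell m}\,D_{C}Y_{\ell^{\prime}m^{\prime}}
 \;=\; \ell\left(\ell+1\right)\delta_{\ell\ell^{\prime}}\,\delta_{mm^{\prime}}\,.
```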
We note that the Iyer-Wald [54] and Barnich-Brandt [55] charges coincide. Furthermore, there are no central extensions in the charge algebra.
### More restrictive boundary conditions
Some of the non-radiative data may be fixed by a slightly stronger boundary condition
\[\partial_{u}K^{(0)}\ =\ c_{K^{(0)}}\ =\ 0\,. \tag{3.26}\]
This condition implies that \(\alpha_{1}=0\) in the asymptotic Killing vector, as is evident from the variation \(\delta K^{(0)}\) in (3.19): it contains a term linear in \(u\) proportional to \(\alpha_{1}\), so preserving \(\partial_{u}K^{(0)}=0\) forces \(\alpha_{1}=0\). As we will see in Section 4.1, the most general asymptotic Killing vectors do not form a closed algebra. However, setting \(\alpha_{1}=0\) indeed leads to a closed algebra. Therefore, these boundary conditions are more natural in this sense. The charge in the even sector then simplifies to
\[Q\left[\chi_{a}^{\ell m},\chi_{\ell m}^{+}\right]\Bigg{|}_{c_{K^{(0)}}=0,\alpha_{1}=0}\ =\ \frac{R}{2}\bigg{[}\ell\left(\ell+1\right)\left(\partial_{u}h_{v}^{(0)}\beta_{1}\right)+2\alpha_{2}d_{K^{(0)}}\bigg{]}\,. \tag{3.27}\]
## 4 Symmetry algebra
In this section, we study the algebra of the near-horizon symmetries we found in Section 3.3 and its closed sub-algebras, as well as the algebra of the symmetries (3.9) to all orders in \(v\). We examine the closure of these algebras under the usual Lie bracket and determine necessary conditions for closure. Of the two resulting closed sub-algebras of the near-horizon asymptotic Killing symmetries, one is a maximal sub-algebra while the other is the dominant one for large black holes. It must be noted that in the linearised theory, the algebra is abelian and therefore always closes. This can be seen from (2.4) by noting that \(\bar{\chi}\) is \(\mathcal{O}(\kappa)\), so that the Lie bracket is \(\mathcal{O}(\kappa^{2})\). Therefore, all the statements we make below about closure (or otherwise) go beyond the linear approximation.
### Near-horizon symmetry algebra
From the solutions for the asymptotic Killing vectors in (3.16) written in terms of arbitrary partial wave dependent constants, we notice that the complete asymptotic Killing vector can be parameterised in terms of arbitrary functions \(F\left(\Omega\right)\), \(M\left(\Omega\right)\), \(P\left(\Omega\right)\) and \(Z^{A}\left(\Omega\right)\) on the unit sphere as follows:
\[\bar{\chi}\left[F,M,P,Z^{A}\right] = \kappa\bigg{[}\left(F\left(\Omega\right)\frac{u^{2}}{2R^{2}}+M \left(\Omega\right)\frac{u}{R}+P\left(\Omega\right)\right)\partial_{u}-F \left(\Omega\right)\partial_{v}-\frac{u}{R^{2}}\gamma^{AC}\partial_{C}F\left( \Omega\right)\partial_{A} \tag{4.1}\] \[\ +\frac{Z^{A}\left(\Omega\right)}{R}\partial_{A}+\ \cdots\bigg{]}\,\]
with the dots representing sub-leading terms and \(\Omega\) a point on the unit 2-sphere. Here,
\[F(\Omega) = \sum_{\ell,m}\alpha_{1}^{\ell m}Y_{\ell m}(\Omega)\,\] \[M(\Omega) = -\sum_{\ell,m}\alpha_{2}^{\ell m}Y_{\ell m}(\Omega)\,\]
\[P(\Omega) = -\sum_{\ell,m}\alpha_{3}^{\ell m}Y_{\ell m}(\Omega)\,\] \[Z^{A}(\Omega) = \sum_{\ell,m}\beta_{1}^{\ell m}\gamma^{AC}D_{C}Y_{\ell m}(\Omega)+ \gamma_{1}^{\ell m}\epsilon^{AB}D_{B}Y_{\ell m}(\Omega). \tag{4.2}\]
Note that the spacetime index of the asymptotic Killing vector (3.16) must be raised with the background metric \((\bar{g}^{\mu\nu}\bar{\chi}_{\nu})\) to arrive at this expression. The even and odd modes result in independent even and odd functions on the sphere, which have been combined into the two components of an arbitrary vector field \(Z^{A}\left(\Omega\right)\) on the sphere. It can be checked that the Lie bracket of (4.1) does not close,
\[[\bar{\chi}(F_{1},M_{1},P_{1},Z_{1}^{A}),\ \bar{\chi}(F_{2},M_{2},P_{2},Z_{2}^ {A})]\ \neq\ \bar{\chi}\left(F_{12},M_{12},P_{12},Z_{12}^{A}\right)\, \tag{4.3}\]
for any arbitrary functions \(F_{12},\ M_{12},\ P_{12},\ Z_{12}\). However, this is a non-linear statement, since the right hand side is \(\mathcal{O}(\kappa^{2})\).
It is conceivable that the non-linear closure of this algebra requires the addition of more terms to the asymptotic Killing vector (4.1) that may resemble [43, 48, 49] where diffeomorphisms along the horizon (in addition to \(\mathrm{Diff}(S^{2})\)) were found. Indeed, it can be checked that the Lie bracket (4.3) generates terms of the type \(F_{n}(\Omega)u^{n}\partial_{u}\) for some arbitrary function on the sphere \(F_{n}(\Omega)\) and an arbitrary integer \(n\). We defer further discussion to Section 5.
#### 4.1.1 Maximal closed sub-algebra
The maximal sub-algebra that closes is that of vector fields with \(F\left(\Omega\right)=0\):
\[[\bar{\chi}(M_{1},P_{1},Z_{1}^{A}),\ \bar{\chi}(M_{2},P_{2},Z_{2}^{A})]= \bar{\chi}\left(M_{12},P_{12},Z_{12}^{A}\right) \tag{4.4}\]
with
\[M_{12} = \kappa\left[\frac{Z_{1}^{A}\partial_{A}M_{2}-Z_{2}^{A}\partial_{A }M_{1}}{R}\right]\, \tag{4.5a}\] \[P_{12} = \kappa\bigg{[}M_{1}P_{2}-M_{2}P_{1}+\frac{Z_{1}^{A}\partial_{A}P _{2}-Z_{2}^{A}\partial_{A}P_{1}}{R}\bigg{]}\,\] (4.5b) \[Z_{12}^{A} = \kappa\left[\frac{Z_{1}^{B}\partial_{B}Z_{2}^{A}-Z_{2}^{B}\partial _{B}Z_{1}^{A}}{R}\right]\,. \tag{4.5c}\]
As pointed out in Section 3.5, this sub-algebra naturally arises from a stronger boundary condition (3.26) than finiteness of metric perturbations on the horizon. It can be verified that it consists of diffeomorphisms of the two-sphere, \(\mathrm{Diff}\left(S^{2}\right)\) generated by \(Z^{A}\left(\Omega\right)\), in semi-direct sum with two supertranslations, generated by \(M\left(\Omega\right)\) and \(P\left(\Omega\right)\) on the horizon. This is in agreement with the results of [40] and also the results found in [36] where two copies of the Virasoro algebra were found and the complete set of \(\mathrm{Diff}\left(S^{2}\right)\) was anticipated.
It is interesting to note that in our setup, in contrast to the analysis of [36, 40], we see that one of the supertranslations (associated with \(M\left(\Omega\right)\)) and \(\mathrm{Diff}(S^{2})\) are suppressed in powers of \(R\) for a large black hole, \(R\gg\kappa\).
#### 4.1.2 Symmetries of large black holes
While we saw that \(F(\Omega)=0\) was necessary for the maximal closed algebra defined by the conventional Lie bracket, it is evident that for very large black holes, the Killing vector (4.1) reduces to
\[\bar{\chi}\left[F,P\right]\ =\ \kappa\bigg{[}P\left(\Omega\right)\partial_{u}-F\left(\Omega\right)\partial_{v}+\mathcal{O}\left(\frac{1}{R}\right)\bigg{]}\,. \tag{4.6}\]
The algebra generated by this asymptotic Killing vector clearly closes, with a trivial commutator, leading to an algebra of two copies of \(u(1)\). In the maximal sub-algebra discussed in the previous subsection, the near-horizon symmetry associated with \(F\left(\Omega\right)\) did not contribute. However, we see that for a large black hole this symmetry dominates, without any obstruction to the closure of the algebra. Thus, it has physical significance for scattering processes near large black holes, which are of semi-classical interest.
### Symmetry algebra to all orders in \(v\)
Our boundary conditions allow us to write the asymptotic Killing vector fields (4.1) to all orders in \(v\) using (3.9):
\[\bar{\chi}\ =\ \kappa\bigg{[}\left(-\frac{\hat{\mathbf{F}}\left(v, \Omega\right)}{A\left(r\right)}+\frac{\hat{A}(u,v)}{A\left(r\right)}\partial_ {v}\mathbf{F}\left(v,\Omega\right)+\mathbf{F}\left(v,\Omega\right)\frac{ \partial_{v}\hat{A}(u,v)}{A\left(r\right)}\right)\partial_{u}-\mathbf{F} \left(v,\Omega\right)\partial_{v}\] \[\qquad\qquad\qquad-\ \tilde{A}(u,v)\ \gamma^{AC}\partial_{C} \mathbf{F}\left(v,\Omega\right)\partial_{A}+\mathbf{Z}^{A}\left(v,\Omega \right)\partial_{A}\bigg{]} \tag{4.7}\]
where we defined the background functions
\[\hat{A}(u,v)\ :=\ \int\mathrm{d}u\,A\left(r\right)\,,\quad\text{and}\quad \tilde{A}(u,v)\ :=\ \int\mathrm{d}u\,\frac{A\left(r\right)}{r^{2}}\,. \tag{4.8}\]
Here,
\[\mathbf{F}(v,\Omega) = \sum_{\ell,m}f_{1}(v)^{\ell m}Y_{\ell m}(\Omega)\,\] \[\hat{\mathbf{F}}(v,\Omega) = \sum_{\ell,m}f_{3}(v)^{\ell m}Y_{\ell m}(\Omega)\,\] \[\mathbf{Z}^{A}(v,\Omega) = \sum_{\ell,m}f_{2}(v)^{\ell m}\gamma^{AC}D_{C}Y_{\ell m}(\Omega) +f_{4}(v)^{\ell m}\epsilon^{AB}D_{B}Y_{\ell m}(\Omega). \tag{4.9}\]
The algebra of these vector fields does not close in general (since the asymptotic algebra did not close (4.3))
\[\left[\bar{\chi}(\hat{\mathbf{F}}_{1},\mathbf{F}_{1},\mathbf{Z}_{1}^{A}),\ \bar{\chi}(\hat{\mathbf{F}}_{2},\mathbf{F}_{2},\mathbf{Z}_{2}^{A})\right]\ \neq\ \bar{\chi}\left(\hat{\mathbf{F}}_{12},\mathbf{F}_{12},\mathbf{Z}_{12}^{A}\right). \tag{4.10}\]
However, it can easily be checked that the sub-algebra formed by vector fields with \(\mathbf{F}\left(v,\Omega\right)=0\) does indeed close with
\[\hat{\mathbf{F}}_{12}\ =\ \kappa\left[\mathbf{Z}_{1}^{A}\partial_{A} \hat{\mathbf{F}}_{2}-\mathbf{Z}_{2}^{A}\partial_{A}\hat{\mathbf{F}}_{1}\right] \quad\text{and}\quad\mathbf{Z}_{12}^{A}\ =\ \kappa\left[\mathbf{Z}_{1}^{B}\partial_{B}\mathbf{Z}_{2}^{A}-\mathbf{Z}_{2}^{B} \partial_{B}\mathbf{Z}_{1}^{A}\right]\,. \tag{4.11}\]
Note that this sub-algebra coincides with the one formed by (4.1) for \(F=M=0\), in the near-horizon region. Therefore, it consists of only one supertranslation instead of two. Notice that the Killing vector now contains arbitrary functions of \(v\) and a dependence on \(u\) via the background function \(A\left(r\right)\). It would be interesting to find less stringent restrictions on the function \(\mathbf{F}\left(v,\Omega\right)\) that may still lead to a closed algebra. In general, it is also of interest to find the general role and meaning of these symmetries that do not lead to a closed algebra, both on the horizon to leading order and also possibly to all orders in the distance to the horizon, parametrised by \(v\).
## 5 Conclusions and outlook
In this article, we studied the perturbations of the Schwarzschild black hole in a partial wave basis in a gauge that allows for radiation crossing the horizon. This setup is ideally suited for studying near-horizon scattering. We found the residual symmetries that preserve the aforementioned "radiation gauge". By requiring only that the perturbations be finite on the horizon, we found the Killing vectors not only near the horizon but also to all orders in \(v\) in Section 4.2. Remarkably, with no further restrictions, we found the charges on the horizon corresponding to these symmetries to be finite and non-vanishing. The physical significance of the sub-leading terms is not clear, as they do not contribute to the charges. Nevertheless, there is some evidence that such sub-leading terms can be related to sub-leading soft theorems for scattering in flat spacetime [58, 59]. Our leading order vector fields correspond to near-horizon symmetries that were not previously known. In the linearised theory, the algebra is abelian and therefore closes to leading order. However, it is curious that the complete algebra on the horizon does not close with the conventional Lie bracket, owing to sub-leading non-linear obstructions. We identified two important sub-cases that lead to a closed algebra. The first is obtained by setting \(F(\Omega)=0\) and leads to an algebra of two supertranslations in semi-direct sum with all diffeomorphisms of the two-sphere, \(\mathrm{Diff}\left(S^{2}\right)\). We found a specific restriction on the boundary conditions that naturally leads to this maximal sub-algebra. The second sub-algebra emerges in the large black hole limit and is considerably smaller, containing only two copies of \(u(1)\). It would be interesting to check if a modified bracket, like the one due to Barnich and Troessaert [60, 61], closes the complete algebra with no restrictions.\({}^{7}\)
Footnote 7: AA would like to thank Glenn Barnich for insightful discussions related to the modified bracket.
**Comparison to previous work:** Several authors have studied the symmetries of the black hole horizon in recent years, as mentioned in the introduction. Here we list a comparison between some of these works and our results. In [36], the gauge that was used is different from the radiation gauge that we employ. Consequently, we find different near-horizon symmetries. In [37], the choice of gauge fixed the size of the horizon. In our paper, we found that some symmetries do change the size of the black hole, while others affect its shape. In [40, 42, 49], further restrictions on the symmetries were imposed in order to, for instance, preserve the location of the (bifurcate) horizon. In this work, we allow for symmetries that may change the location of the horizon. Indeed, scattering processes near the black hole horizon may change the location of the horizon in general. In the linearised theory, the perturbations (including but not limited to changes in the location of the horizon) are small when \(\kappa/R\ll 1\). However, we may improve our results order by order in \(\kappa\), thereby generating non-linear effects, all the while preserving our relaxed boundary conditions. It would be interesting to compare the results of such non-linear effects with the recent work of [48, 49]. In particular, it is of interest to compare the resulting (potentially) closed algebra (obtained by adding more terms to our vector fields) with what is found there. The maximal sub-algebra that does close is in line with the previous literature [36, 40]. However, we found an additional symmetry which finds its place in a smaller commuting sub-algebra that survives in the large black hole limit. Finally, in contrast to all the above works, we find a hierarchy between the different supertranslations in the large black hole limit.
The primary motivation for studying linear perturbations in the partial wave basis, as opposed to the non-linear theory, is that scattering processes in the near-horizon region arise naturally in this basis [17, 18, 19, 20, 21]. The results of this article can therefore be seen as capturing all the symmetries of near-horizon scattering. The next natural step is to study the implications of these symmetries for near-horizon scattering. This warrants an understanding of the Ward-Takahashi identities of these near-horizon symmetries. The relation between these identities and emergent soft graviton theorems near the horizon is an important question that we will report on in an upcoming work [26].
In a dynamical problem of collapse, boundary conditions imposed on past null infinity would in principle dictate boundary conditions to be imposed on the horizon. Similarly, if the black hole is to evaporate in its entirety, boundary conditions on future null infinity would also determine those on the horizon. However, determining these boundary conditions on the horizon in practice is a difficult task. One may approach this by setting up a WKB-like analysis where equations of motion are solved order by order both near the horizon and near infinity. An appropriate matching condition in the intermediate region will then determine boundary conditions on the horizon in terms of the choices at infinity.\({}^{8}\) This might be possible in our formalism because the symmetries are known to all orders in \(v\) and therefore at any point in spacetime.
Footnote 8: NG would like to thank Godwin Martin for discussions regarding this idea.
Recently, in [62], a scattering algebra of 't Hooft's associated with shockwaves was shown to be related to the soft algebra near infinity. This can be exploited to provide a physical interpretation of the familiar antipodal matching condition at spatial infinity from the perspective of scattering in the bulk [63]. Our results in this paper, especially given their direct relevance for near-horizon scattering and an analogous shockwave algebra in black hole backgrounds, may potentially imply a similar antipodal matching condition on the bifurcation sphere.9
While we have entirely focussed on graviton perturbations in this paper, it is also possible to study charged particle scattering near the black hole horizon [67]. Our analysis can be extended in a straightforward manner10 to study emergent and potentially new symmetries of QED near the black hole horizon and ask about their relationship with an emergent soft-photon theorem near the horizon.
Footnote 10: See also [68].
Asymptotic symmetries near null infinity are subtle in different dimensions [69, 70, 68, 44, 82] because the desirable asymptotic charges seem to diverge in a naive \(r\to\infty\) limit. Such divergences do not arise in the case of higher dimensional black holes where the near-horizon charges are computed in the \(r\to R\) limit. Therefore, our formalism is easily adapted to the case of higher dimensional black holes.11 An appropriate choice of spherical harmonics would imply that in the final expressions, only the eigenvalues would have to be adapted in comparison to our results. Further potential subtleties, if any, may be interesting to understand.
Footnote 11: A similar observation was made in [83].
Finally, it is conceivable that a version of the memory effect in the near-horizon region, owing to the symmetries near the horizon, may have observable consequences. For instance, it would be of great importance to understand whether the radiation emerging from the horizon can leave an imprint on the luminosity fluctuations of the spectra of stellar oscillations (of the S-stars orbiting Sagittarius A*, for example). While there are other (potentially stronger) sources for the said luminosity fluctuations, such as those due to internal pressure and density dynamics of the orbiting stars, those due to the near-horizon radiation considered in this paper are likely to occur on a much shorter time-scale (set by the inverse of the speed of light as opposed to the inverse of the speed of sound). This separation of scales may make these signals potentially observable in the near future. However, such measurements depend greatly on an asteroseismological understanding of the spectral type, or any other signature, that captures the average and/or dominant luminosity fluctuations of the stellar oscillations of interest.
## Acknowledgements
It is a pleasure to thank Anupam A.H., Uddipan Banik, Glenn Barnich, Chandramouli Chowdhury, Fabiano Feleppa, Nico Groenenboom, and Alok Laddha for various helpful discussions. We are also grateful to the Cargese Summer School 2021 where this collaboration bore fruit, and Perimeter Institute, where a part of this work was carried out, for hosting both authors.
AA is a Research Fellow of the Fonds de la Recherche Scientifique F.R.S.-FNRS (Belgium). AA is partially supported by IISN - Belgium (convention 4.4503.15) and by the Delta ITP consortium, a program of the NWO that is funded by the Dutch Ministry of Education, Culture and Science (OCW).
NG was supported by the Delta-Institute for Theoretical Physics (D-ITP) that is funded by the Dutch Ministry of Education, Culture and Science (OCW) during the initial stages
of this work and is currently supported by project RTI4001 of the Department of Atomic Energy, Govt. of India.
|
2309.08251 | CartoonDiff: Training-free Cartoon Image Generation with Diffusion
Transformer Models | Image cartoonization has attracted significant interest in the field of image
generation. However, most of the existing image cartoonization techniques
require re-training models using images of cartoon style. In this paper, we
present CartoonDiff, a novel training-free sampling approach which generates
image cartoonization using diffusion transformer models. Specifically, we
decompose the reverse process of diffusion models into the semantic generation
phase and the detail generation phase. Furthermore, we implement the image
cartoonization process by normalizing high-frequency signal of the noisy image
in specific denoising steps. CartoonDiff doesn't require any additional
reference images, complex model designs, or the tedious adjustment of multiple
parameters. Extensive experimental results show the powerful ability of our
CartoonDiff. The project page is available at: https://cartoondiff.github.io/ | Feihong He, Gang Li, Lingyu Si, Leilei Yan, Shimeng Hou, Hongwei Dong, Fanzhang Li | 2023-09-15T08:55:59Z | http://arxiv.org/abs/2309.08251v1 | # CartoonDiff: Training-free Cartoon Image Generation with Diffusion Transformer Models
###### Abstract
Image cartoonization has attracted significant interest in the field of image generation. However, most of the existing image cartoonization techniques require re-training models using images of cartoon style. In this paper, we present CartoonDiff, a novel training-free sampling approach which generates image cartoonization using diffusion transformer models. Specifically, we decompose the reverse process of diffusion models into the semantic generation phase and the detail generation phase. Furthermore, we implement the image cartoonization process by normalizing high-frequency signal of the noisy image in specific denoising steps. CartoonDiff doesn't require any additional reference images, complex model designs, or the tedious adjustment of multiple parameters. Extensive experimental results show the powerful ability of our CartoonDiff. The project page is available at: [https://cartoondiff.github.io/](https://cartoondiff.github.io/)
Feihong He\({}^{1}\), Gang Li\({}^{2,3}\), Lingyu Si\({}^{2}\), Leilei Yan\({}^{1}\), Shimeng Hou\({}^{4}\), Hongwei Dong\({}^{2}\), Fanzhang Li\({}^{\dagger,1}\)
\({}^{1}\)School of Computer Science and Technology, Soochow University, \({}^{2}\)Institute of Software, Chinese Academy of Sciences, \({}^{3}\)University of Chinese Academy of Sciences, \({}^{4}\)Northwestern Polytechnical University
Footnote \(\dagger\): Corresponding Author. This work is supported in part by the National Key R&D Program of China under Grant (2018YFA0701700, 2018YFA0701701), by the National Natural Science Foundation of China under Grant (61672364, 62176172, 62301539), and by the China Postdoctoral Science Foundation under Grant 2023M733615.
Diffusion models, cartoon image generation, training-free cartoonization
## 1 Introduction
Cartoon-style art has experienced tremendous popularity and has been employed in various fields, e.g., animations, comics, and games. Image cartoonization has therefore attracted the attention of many researchers in the field of image generation. Current mainstream image cartoonization methods are mainly GAN-based [2]. Early on, generative models were conventionally trained using paired datasets of real images and cartoon images [3, 4]. However, obtaining such paired data in real-world scenarios proves to be quite challenging. To address this challenge, CycleGAN [5] and CartoonGAN [6] learn to translate between domains (real and cartoon images) without paired input-output examples. Furthermore, research efforts have expanded beyond the singular goal of generating cartoonized images, with a significant body of work making remarkable progress in various dimensions, e.g., interpretability [7, 8, 9], the generation of diverse cartoon styles [10], and guided cartoon generation [11, 12].
As a currently mainstream class of generative models, diffusion models [13, 14, 15, 16] have gained widespread attention because of their stable convergence and diverse generation in visual content synthesis compared to GANs [2]. Recently, Back-D [17] introduced training-free cartoonization image generation with diffusion models, employing rollback to perturb specific steps in the reverse process of classifier-free diffusion models.
In this paper, we introduce a novel image cartoonization approach based on diffusion models, called CartoonDiff. Compared to Back-D [17], our approach is simpler and more efficient. We first analyze classifier-free sampling and the denoising process in diffusion models, shown in Figure 1 (a) and Figure 1 (b), respectively. Figure 1 (a) clearly illustrates how classifier-free guided diffusion models align with different class directions under related class guidance. This ensures that the model generates images according to class conditions during the denoising process. By exploring and analyzing this denoising process, we can broadly divide it into two stages: the semantic generation phase and the detail generation phase. As shown in Figure 1 (b), diffusion models build the overall semantic information of the image during the semantic generation phase and capture fine-grained texture details during the detail generation phase. To achieve the cartoonization of generated images, we employ a straightforward approach: we insert token normalization layers at specific steps during the denoising process to perturb the predicted noise, thereby restraining the generation of image textures and enhancing the image contours. Compared to our closest counterpart, Back-D [17], our method differs in that it eliminates the need for a complex rollback process, requires no additional image information, and entails minimal parameter adjustments. It is worth emphasizing that ours is the first work to achieve training-free image cartoonization with the DiT [1] model. Despite its simplicity, it still exhibits superior performance. In summary, our work makes the following contributions:
* Our work involves a comprehensive exploration and analysis of the denoising process in diffusion models. We discover that the diffusion model generates image semantic information in the early stages and detailed information in the later stages during the reverse process.
* Based on the aforementioned observations, we introduce a new training-free cartoonization method named CartoonDiff. It achieves image cartoonization by adding disturbances at specific steps during the reverse process of the diffusion model.
* We have achieved superior generation results compared to the concurrent training-free method, exemplified by Back-D, which is applied to stable diffusion models.
## 2 Method
### Preliminaries
In diffusion models, the forward process involves gradually adding noise to the image, resulting in a fully Gaussian-noised image. This process can be represented as follows: \(X_{t}=\sqrt{\bar{\alpha}_{t}}X_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\), where \(\bar{\alpha}_{t}=\prod_{s=0}^{t}\alpha_{s}\) with \(\alpha_{s}\in\left(0,1\right)\), and \(\epsilon\) stands for the result of random sampling from a Gaussian distribution \(N\left(0,I\right)\). In the reverse process of the diffusion model, noise prediction can be formulated using Bayes' theorem and the Markov property as follows: \(P\left(X_{t-1}|X_{t},X_{0}\right)=\frac{P\left(X_{t}|X_{t-1},X_{0}\right)P\left(X_{t-1}|X_{0}\right)}{P\left(X_{t}|X_{0}\right)}\). Diffusion models are optimized by measuring the KL divergence between the forward noise and the predicted noise distributions; constructing the variational lower bound (VLB) yields the optimization objective \(Loss_{t}\left(\theta\right)=E_{t\sim\left[1:T\right],X_{0},\epsilon_{t}}\left[\left\|\,\epsilon_{t}-\epsilon_{\theta}\left(\sqrt{\bar{\alpha}_{t}}X_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon_{t},t\right)\,\right\|^{2}\right]\).
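To make these formulas concrete, the following minimal PyTorch-style sketch implements the closed-form forward step and the noise-prediction objective; the linear \(\beta\) schedule and the `model(x_t, t, c)` interface are illustrative assumptions rather than details fixed by the paper.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)           # assumed linear schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, eps):
    """Closed-form forward process: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    ab = alphas_bar[t].view(-1, 1, 1, 1)  # broadcast over image dimensions
    return ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps

def ddpm_loss(model, x0, c):
    """Noise-prediction objective E || eps - eps_theta(x_t, t, c) ||^2."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)
    return torch.mean((eps - model(q_sample(x0, t, eps), t, c)) ** 2)
```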
Classifier-free guidance addresses the issues of additional training costs, classifier dependency, and adversarial attack effects associated with classifier guidance, making it widely adopted in numerous diffusion models. We utilize class information as conditional guidance, denoted as \(c\). The guided noise prediction at step \(t\) can be represented as follows: \(\hat{\epsilon}_{\theta}\left(X_{t}|c\right)=\epsilon_{\theta}\left(X_{t}|\emptyset\right)+\lambda\left(\epsilon_{\theta}\left(X_{t}|c\right)-\epsilon_{\theta}\left(X_{t}|\emptyset\right)\right)\). Here, \(\epsilon_{\theta}\left(X_{t}|\emptyset\right)\) represents the noise predicted under the null-class condition \(\emptyset\) within the classifier-free approach, while \(\epsilon_{\theta}\left(X_{t}|c\right)\) represents the noise predicted under class-conditional guidance \(c\). This formula facilitates the model's extrapolation for class-conditional image generation.
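The classifier-free extrapolation amounts to two forward passes per step combined linearly; a sketch, with `None` standing in for the null class \(\emptyset\) and the same hypothetical `model` interface as above:

```python
def cfg_eps(model, x_t, t, c, lam):
    """Guided noise: eps(x|null) + lam * (eps(x|c) - eps(x|null))."""
    eps_uncond = model(x_t, t, None)  # null-class prediction
    eps_cond = model(x_t, t, c)       # class-conditional prediction
    return eps_uncond + lam * (eps_cond - eps_uncond)
```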
### CartoonDiff
To further analyze the diffusion model's sampling process, we visualize the intermediate results at different denoising steps. Figure 2 showcases the results obtained through model denoising at different steps during the inference process, specifically at steps 1000, 400, 300, 200, 100, and 0, with a total of 1000 DDPM steps. It is evident that during the denoising process, the images first capture semantic information and then gradually acquire the finer details.
Figure 1: Methodology overview and model structure diagram. The diagram labeled (a) expresses the classifier-free extrapolation the models to specific classes under conditional guidance. The (b) illustrates our investigation of the denoising process for the diffusion model, categorizing it into the semantic generation phase and the detail generation phase based on the generated images for relative frequency information. In (c), we present the improvements made to the model based on the DiT[1] structure.
Figure 2: The intermediate results during the sampling process of DiT.
We analyze the experimental results concerning the denoising process in Figure 2 and make minimal modifications to the DiT architecture to generate cartoon-style images. As illustrated in Figure 1(c), we incorporate a token normalization block into the output of DiT, performing normalization on tokens at specific steps to inject cartoon-style statistics:
\[Norm\left(v_{token}\right)=\frac{v_{token}}{\max\left(\|v_{token}\|_{1}, \epsilon\right)} \tag{1}\]
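Concretely, Eq. (1) is a per-token rescaling by the \(\ell_{1}\) norm, guarded by a small \(\epsilon\) against division by zero. A minimal sketch, assuming DiT-style tokens of shape `(batch, num_tokens, dim)` with the norm taken over the channel dimension (the exact axis is our assumption, not stated in the paper):

```python
import torch

def token_norm(v, eps=1e-6):
    """Eq. (1): v / max(||v||_1, eps), applied independently to each token."""
    l1 = v.abs().sum(dim=-1, keepdim=True)  # per-token l1 norm over channels
    return v / torch.clamp(l1, min=eps)
```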
We achieve image cartoonization by normalizing the predicted noise in the latent space to suppress the generation of fine texture details. In order to preserve low-frequency signals such as image contours and lines, we maintain the diffusion model's ability to capture important image information by setting the hyperparameter \(\sigma\). The denoising process is divided into two stages:
\[\begin{cases}\epsilon_{\theta}\left(X_{t}|\emptyset\right)+\lambda\left( \epsilon_{\theta}\left(X_{t}|c\right)-\epsilon_{\theta}\left(X_{t}|\emptyset \right)\right),t\geq\sigma\\ Norm\left(\epsilon_{\theta}\left(X_{t}|\emptyset\right)+\lambda\left( \epsilon_{\theta}\left(X_{t}|c\right)-\epsilon_{\theta}\left(X_{t}|\emptyset \right)\right)\right),t<\sigma\end{cases} \tag{2}\]
where the hyperparameter \(\sigma\) indicates that we introduce perturbations to the generated noise starting at step \(\sigma\) during the inference process and continuing until step 0.
In Algorithm 1, we provide a brief algorithmic overview of CartoonDiff. We perform token normalizing on the generated noise during the denoising process when \(t<\sigma\) to achieve image cartoonization.
```
Input: A pre-trained diffusion model \(\epsilon_{\theta}\left(\cdot\right)\), conditional guidance \(c\), guidance scale \(\lambda\), disturbance time \(\sigma\), temperature parameters \(\bar{\alpha}_{t}\).
Output: the cartoon-style image \(X_{0}^{*}\)
Initialize: \(X_{T}\sim\mathcal{N}\left(0,I\right)\)
for \(t\) from \(T\) to \(0\) do
    \(\epsilon_{t}=\epsilon_{\theta}\left(X_{t}|\emptyset\right)+\lambda\left(\epsilon_{\theta}\left(X_{t}|c\right)-\epsilon_{\theta}\left(X_{t}|\emptyset\right)\right)\)
    if \(t<\sigma\) then
        \(\epsilon_{t}=Norm\left(\epsilon_{t}\right)\)
    end if
    \(X_{t-1}=\sqrt{\bar{\alpha}_{t-1}}\left(\frac{X_{t}-\sqrt{1-\bar{\alpha}_{t}}\epsilon_{t}}{\sqrt{\bar{\alpha}_{t}}}\right)+\sqrt{1-\bar{\alpha}_{t-1}}\epsilon_{t}\)
end for
return \(X_{0}\) as \(X_{0}^{*}\)
```
**Algorithm 1** Training-free generation of the cartoon-style image with CartoonDiff
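Putting the pieces together, the following minimal sketch mirrors Algorithm 1, reusing the hypothetical `cfg_eps`, `token_norm`, and `alphas_bar` definitions from the sketches in Section 2.1; tensor layout details (DiT predicts noise over a token sequence) and the equidistant step subsampling used in the experiments are glossed over.

```python
import torch

@torch.no_grad()
def cartoondiff_sample(model, c, lam, sigma, shape, T=1000):
    """Training-free cartoon-style sampling (a sketch of Algorithm 1)."""
    x = torch.randn(shape)                                     # X_T ~ N(0, I)
    for t in reversed(range(T)):
        t_batch = torch.full((shape[0],), t)
        eps = cfg_eps(model, x, t_batch, c, lam)               # classifier-free guidance
        if t < sigma:                                          # detail generation phase
            eps = token_norm(eps)                              # suppress fine textures
        ab_t = alphas_bar[t]
        ab_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
        x0_pred = (x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()  # implied clean image
        x = ab_prev.sqrt() * x0_pred + (1 - ab_prev).sqrt() * eps
    return x                                                   # cartoonized X_0^*
```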
## 3 Experiments
In this section, we demonstrate the effectiveness of CartoonDiff through experimental results. We conducted experiments
Figure 3: Based on DiT XL/2[1], with a hyperparameter \(\sigma\) set to 250, we present the results of DiT’s generation and the results generated using CartoonDiff. In each pair of images, the left side shows the original image generated by DiT, while the right side shows the image cartoonized using CartoonDiff.
using the DiT-XL/2 model [1] pre-trained on ImageNet \(512\times 512\)[18] as the foundational generative network. DiT-XL/2 is a transformer architecture network comprising 28 layers of transformer blocks with a patch size of \(2\times 2\). During the inference phase, we simply append token normalization modules after its transformer blocks [19] to introduce perturbations at specific steps in the inference process, as illustrated in Figure 1(c).
### Comparasion of generation Results
We adopt equidistant sampling with a sampling step setting of 100 (out of 1000 total steps), while the hyperparameter \(\sigma\) was configured as 250. In Figure 3, we present the cartoon images generated by applying our method to the DiT-XL/2 pre-trained model. From the experimental results, we successfully achieve image cartoonization while preserving the essential details of the images. Additionally, we notice that the cartoonized images exhibit diverse cartoon styles, because of subtle variations in their original art styles. For instance, the cartoonization results for "limpkin" and "basset" in the images exhibit a flat cartoon style, whereas the cartoonization results for "parrot", "crab", and "golden retriever" display a more three-dimensional cartoon style.
Additionally, we conducted a comparison between CartoonDiff and the existing training-free cartoonization method based on diffusion models. In the context of Back-D, we adhere to the recommended hyperparameter settings, with both hyperparameters b and s set to 300. Likewise, when configuring CartoonDiff, we maintain the aforementioned parameterization, specifying the hyperparameter \(\sigma\) as 250.
In Figure 4, we present comparative experimental results between Back-D and CartoonDiff. From the visualization, it is evident that our approach consistently achieves the desired cartoonization results. Conversely, Back-D tends to excessively enhance image contour information during the cartoonization process, resulting in a sense of dissonance in certain images, as exemplified by the "cat" and "daisy" images in Figure 4. It is worth noting that Back-D's core generative model is trained on the large-scale dataset LAION-5B [20], whereas our model is trained exclusively on ImageNet, which does not include cartoon images.
### Ablation Study
We perform ablation experiments on the hyperparameter \(\sigma\) of our proposed method in Figure 5. We observe that as \(\sigma\) increases, image details gradually smooth out and eventually disappear. Simultaneously, within the \(\sigma\) range of 0 to 400, low-frequency image information becomes both smoother and more pronounced. Based on our extensive experimentation, we find that \(\sigma\) between 200 and 300 produces the best results.
## 4 Conclusions
In this paper, we first explore and analyze classifier-free sampling and the denoising process in diffusion models. We discover that the denoising process can be roughly divided into two phases: the semantic generation phase and the detail generation phase. Based on this, we introduce the CartoonDiff method, which normalizes the high-frequency details of the noisy image in specific denoising steps (the detail generation phase). Experimental results show the effectiveness of our CartoonDiff. It is worth noting that, to the best of our knowledge, CartoonDiff is currently the simplest and most effective training-free method among cartoonization approaches based on diffusion models.
Figure 4: The comparative experiment between CartoonDiff and Back-D[17] includes images where each small image in the upper-left corner represents the original image output, while the larger image represents the output after cartoonization.
Figure 5: We adjusted the hyperparameter \(\sigma\) to examine its impact on salient image details and on the degree of cartoonization. |
2309.17371 | Adversarial Imitation Learning from Visual Observations using Latent
Information | We focus on the problem of imitation learning from visual observations, where
the learning agent has access to videos of experts as its sole learning source.
The challenges of this framework include the absence of expert actions and the
partial observability of the environment, as the ground-truth states can only
be inferred from pixels. To tackle this problem, we first conduct a theoretical
analysis of imitation learning in partially observable environments. We
establish upper bounds on the suboptimality of the learning agent with respect
to the divergence between the expert and the agent latent state-transition
distributions. Motivated by this analysis, we introduce an algorithm called
Latent Adversarial Imitation from Observations, which combines off-policy
adversarial imitation techniques with a learned latent representation of the
agent's state from sequences of observations. In experiments on
high-dimensional continuous robotic tasks, we show that our model-free approach
in latent space matches state-of-the-art performance. Additionally, we show how
our method can be used to improve the efficiency of reinforcement learning from
pixels by leveraging expert videos. To ensure reproducibility, we provide free
access to our code. | Vittorio Giammarino, James Queeney, Ioannis Ch. Paschalidis | 2023-09-29T16:20:36Z | http://arxiv.org/abs/2309.17371v3 | # Adversarial Imitation Learning from Visual Observations using Latent Information
###### Abstract
We focus on the problem of imitation learning from visual observations, where the learning agent has access to videos of experts as its sole learning source. The challenges of this framework include the absence of expert actions and the partial observability of the environment, as the ground-truth states can only be inferred from pixels. To tackle this problem, we first conduct a theoretical analysis of imitation learning in partially observable environments. We establish upper bounds on the suboptimality of the learning agent with respect to the divergence between the expert and the agent latent state-transition distributions. Motivated by this analysis, we introduce an algorithm called Latent Adversarial Imitation from Observations, which combines off-policy adversarial imitation techniques with a learned latent representation of the agent's state from sequences of observations. In experiments on high-dimensional continuous robotic tasks, we show that our algorithm matches state-of-the-art performance while providing significant computational advantages. Additionally, we show how our method can be used to improve the efficiency of reinforcement learning from pixels by leveraging expert videos. To ensure reproducibility, we provide free access to our code.
## 1 Introduction
Learning from videos represents a compelling opportunity for the future, as it offers a cost-effective and efficient way to teach autonomous agents new skills and behaviors. Compared to other methods, video recording is a faster and more flexible alternative for gathering data. Moreover, with the abundance of high-quality videos available on the internet, learning from videos has become increasingly accessible in recent years. However, despite the potential benefits, this approach remains challenging as it involves several technical problems that must be addressed simultaneously in order to succeed. These problems include representation learning, significant computational demands due to the high-dimensional observation space, the partial observability of the decision process, and the lack of expert actions. Our objective is to establish algorithms capable of overcoming all of these challenges, enabling the learning of complex robotics tasks directly from videos of experts.
Formally, our focus is on the problem of _Visual Imitation from Observations (V-IfO)_. In V-IfO, the learning agent does not have access to a pre-specified reward function, and instead has to learn by imitating an expert's behavior. Additionally, in V-IfO, expert actions are not accessible during the learning process, and the pixel-based observations we obtain from video frames result in partial observability. The absence of expert actions and the partial observability of the environment distinguish V-IfO from other types of imitation from experts. Specifically, we identify three other frameworks previously addressed in the literature: _Imitation Learning (IL)_[1, 2, 3, 4, 5] where
states are fully observable and expert state-action pairs are accessible, _Visual Imitation Learning (V-IL)_[6] which explores the idea of imitating directly from pixels but still assumes that expert actions are provided to the learning agent, and _Imitation from Observations (IfO)_[7, 8, 9] which retains full observability but considers only the availability of expert states. Table 1 summarizes these frameworks.
In order to address the V-IfO problem, this paper introduces both theoretical and algorithmic contributions. First, we provide a theoretical analysis of the problem and demonstrate that the suboptimality of the learning agent can be upper bounded by the divergence between the expert and the agent latent state-transition distributions. Our analysis motivates the reduction of the V-IfO problem to two subproblems: \((i)\) estimating a proper latent representation from sequences of observations and \((ii)\) efficiently minimizing the divergence between expert and agent distributions in this latent space. Next, we propose practical solutions to these subproblems. By doing so, we formalize a novel algorithm called _Latent Adversarial Imitation from Observations (LAIfO)_, which tackles the divergence minimization step using off-policy adversarial imitation techniques [10] and recovers a latent representation of the ground-truth state by means of observation stacking [11, 12] and data augmentation [13, 14]. We evaluate our algorithm on the DeepMind Control Suite [15], demonstrating that we can match state-of-the-art performance while significantly reducing wall-clock time due to our model-free approach in latent space. We conclude by showing how LAIfO can be used to improve the efficiency of Reinforcement Learning (RL) from pixels by leveraging expert videos. Our approach successfully solves challenging environments, such as the humanoid from pixels [15], in one-third of the interactions required by state-of-the-art RL from pixels algorithms.
The remainder of the paper is organized as follows: Section 2 provides a summary of the most related works to this paper. Section 3 introduces notation and background on RL and IL. Section 4 provides a theoretical analysis of the V-IfO problem. Section 5 introduces our algorithm, LAfO, and outlines how it can leverage expert videos to improve data efficiency of RL from pixels. Finally, Section 6 presents our experimental results and Section 7 concludes the paper providing a general discussion on our findings.
## 2 Related work
In recent years, several studies have focused on the IL problem [1, 2, 3, 4, 5] and, in particular, on the generative adversarial IL framework [16], which has emerged as one of the most promising approaches for IL. Adversarial IL builds upon a vast body of work on inverse RL [2, 17, 18, 19, 20, 21]. The primary goal of inverse RL is to identify a reward function that makes expert trajectories (i.e., state-action pairs) optimal. The reward function obtained from the inverse RL step is then used to train agents in order to match the expert's expected reward. In the fully observable setting, adversarial IL was originally formalized in [16, 22] and extended to the observation-only setting in [7, 8]. Furthermore, the adversarial IfO problem has been theoretically analyzed in [23, 24]. Note that all of these studies are built upon on-policy RL [25], which provides good learning stability but is known for poor sample efficiency. In recent works, this efficiency issue has been addressed by using off-policy RL algorithms in the adversarial optimization step [26, 27]. These include DAC [28], SAM [29], and ValueDICE [30] for the adversarial IL problem, and OPOLO [31] and MobILE [32] for the adversarial IfO problem. Another line of research has tackled IfO by directly estimating expert actions and subsequently deploying IL techniques on the estimated state-action pairs [9, 33, 34, 35, 36, 37, 38].
All of the aforementioned works consider fully observable environments modeled as _Markov Decision Processes (MDPs)_. However, when dealing with pixels, individual observations alone are insufficient for determining optimal actions. As a result, recent works [6, 39] have treated the V-IL problem
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & IL & IfO & V-IL & V-IfO \\ \hline Fully observable environment & ✓ & ✓ & ✗ & ✗ \\ Access to expert actions & ✓ & ✗ & ✓ & ✗ \\ \hline \hline \end{tabular}
\end{table}
Table 1: A summary of imitation from experts: Imitation Learning (IL), Imitation from Observations (IfO), Visual Imitation Learning (V-IL), and Visual Imitation from Observations (V-IfO).
as a _Partially Observable Markov Decision Process (POMDP)_[40]. In particular, the work in [6] addressed the V-IL problem by proposing a model-based extension [41, 42] of generative adversarial IL called VMAIL. The work in [43] also considered IL in a POMDP in order to handle missing information in the agent state, but did not directly focus on learning from pixels. The more difficult V-IFO problem, on the other hand, has received less attention in the literature. To the best of our knowledge, this problem has only been considered by the recent algorithm PatchAIL [44], where off-policy adversarial IL is performed directly on the pixel space. Different from [44], we first study V-IFO from a theoretical perspective, which motivates an algorithm that performs imitation on a _latent representation of the agent state_ rather than directly on the pixel space as in PatchAIL. This difference is crucial to ensure improved computational efficiency.
Finally, our work is also related to the RL from pixels literature which tackles the challenge of maximizing an agent's expected return end-to-end, from pixels to actions. This approach has proven successful in playing Atari games [11, 12]. Recently, RL from pixels has also been extended to tackle continuous action space tasks, such as robot locomotion, by leveraging either data augmentation techniques [13, 14, 45, 46, 47, 48] or variational inference [41, 42, 46, 49].
## 3 Preliminaries
Unless indicated otherwise, we use uppercase letters (e.g., \(S_{t}\)) for random variables, lowercase letters (e.g., \(s_{t}\)) for values of random variables, script letters (e.g., \(\mathcal{S}\)) for sets, and bold lowercase letters (e.g., \(\mathbf{\theta}\)) for vectors. Let \([t_{1}:t_{2}]\) be the set of integers \(t\) such that \(t_{1}\leq t\leq t_{2}\); we write \(S_{t}\) such that \(t_{1}\leq t\leq t_{2}\) as \(S_{t_{1}:t_{2}}\). We denote with \(\mathbb{E}[\cdot]\) expectation, with \(\mathbb{P}(\cdot)\) probability, and with \(\mathbb{D}_{f}(\cdot,\cdot)\) an \(f\)-divergence between two distributions of which the total variation (TV) distance, \(\mathbb{D}_{\text{TV}}(\cdot,\cdot)\), and the Jensen-Shannon divergence, \(\mathbb{D}_{\text{JS}}(\cdot||\cdot)\), are special cases.
We model the decision process as an infinite-horizon discounted POMDP described by the tuple \((\mathcal{S},\mathcal{A},\mathcal{X},\mathcal{T},\mathcal{U},\mathcal{R},\rho_{0},\gamma)\), where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) is the set of actions, and \(\mathcal{X}\) is the set of observations. \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\to P(\mathcal{S})\) is the transition probability function, where \(P(\mathcal{S})\) denotes the space of probability distributions over \(\mathcal{S}\); \(\mathcal{U}:\mathcal{S}\to P(\mathcal{X})\) is the observation probability function; and \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\) is the reward function, which maps state-action pairs to scalar rewards. Alternatively, the reward function can also be expressed as \(\mathcal{R}:\mathcal{S}\times\mathcal{S}\to\mathbb{R}\), mapping state-transition pairs to scalar rewards rather than state-action pairs. Finally, \(\rho_{0}\in P(\mathcal{S})\) is the initial state distribution and \(\gamma\in[0,1)\) the discount factor. The true environment state \(s\in\mathcal{S}\) is unobserved by the agent. Given an action \(a\in\mathcal{A}\), the next state is sampled such that \(s^{\prime}\sim\mathcal{T}(\cdot|s,a)\), an observation is generated as \(x^{\prime}\sim\mathcal{U}(\cdot|s^{\prime})\), and a reward \(\mathcal{R}(s,a)\) or \(\mathcal{R}(s,s^{\prime})\) is computed. Note that an MDP is a special case of a POMDP where the underlying state \(s\) is directly observed.
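To make this generative process concrete, here is a minimal Python sketch of a POMDP environment, assuming the tuple's components are supplied as sampling callables; all names are illustrative and not taken from any library.

```python
class POMDP:
    """Minimal generative POMDP: the true state s stays hidden, and the
    agent only ever receives observations x ~ U(.|s) and scalar rewards."""

    def __init__(self, T, U, R, rho0, gamma=0.99):
        # T(s, a) samples s' ~ T(.|s, a); U(s) samples x ~ U(.|s);
        # R(s, a) returns the scalar reward; rho0() samples the initial state.
        self.T, self.U, self.R = T, U, R
        self.rho0, self.gamma = rho0, gamma

    def reset(self):
        self._s = self.rho0()          # hidden state s0 ~ rho_0
        return self.U(self._s)         # the agent sees only x0

    def step(self, a):
        r = self.R(self._s, a)         # reward computed on the hidden state
        self._s = self.T(self._s, a)   # s' ~ T(.|s, a), still hidden
        return self.U(self._s), r      # the agent receives (x', r)
```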
**Reinforcement learning.** Given an MDP and a stationary policy \(\pi:\mathcal{S}\to P(\mathcal{A})\), the RL objective is to maximize the expected total discounted return \(J(\pi)=\mathbb{E}_{\tau}[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t})]\) where \(\tau=(s_{0},a_{0},s_{1},a_{1},\dots)\). A stationary policy \(\pi\) induces a normalized discounted state visitation distribution defined as \(d_{\pi}(s)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathbb{P}(s_{t}=s|\rho_{0},\pi,\mathcal{T})\), and we define the corresponding normalized discounted state-action visitation distribution as \(\rho_{\pi}(s,a)=d_{\pi}(s)\pi(a|s)\). Finally, we denote the state value function of \(\pi\) as \(V^{\pi}(s)=\mathbb{E}_{\tau}[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t})|S_{0}=s]\) and the state-action value function as \(Q^{\pi}(s,a)=\mathbb{E}_{\tau}[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t})|S_{0}=s,A_{0}=a]\). When a function is parameterized with parameters \(\mathbf{\theta}\in\Theta\subset\mathbb{R}^{k}\) we write \(\pi_{\mathbf{\theta}}\).
**Generative adversarial imitation learning.** Assume we have a set of expert demonstrations \(\tau_{E}=(s_{0:T},a_{0:T})\) generated by the expert policy \(\pi_{E}\), a set of trajectories \(\tau_{\mathbf{\theta}}\) generated by the policy \(\pi_{\mathbf{\theta}}\), and a discriminator network \(D_{\mathbf{\chi}}:\mathcal{S}\times\mathcal{A}\to[0,1]\) parameterized by \(\mathbf{\chi}\). Generative adversarial IL [16] optimizes the min-max objective
\[\min_{\mathbf{\theta}}\max_{\mathbf{\chi}}\ \mathbb{E}_{\tau_{E}}[\log(D_{\mathbf{\chi}}(s,a))]+ \mathbb{E}_{\tau_{\mathbf{\theta}}}[\log(1-D_{\mathbf{\chi}}(s,a))]. \tag{1}\]
Maximizing (1) with respect to \(\mathbf{\chi}\) is effectively an inverse RL step where a reward function, in the form of the discriminator \(D_{\mathbf{\chi}}\), is inferred by leveraging \(\tau_{E}\) and \(\tau_{\mathbf{\theta}}\). On the other hand, minimizing (1) with respect to \(\mathbf{\theta}\) can be interpreted as an RL step, where the agent aims to minimize its expected cost. It has been demonstrated that optimizing the min-max objective in (1) is equivalent to minimizing \(\mathbb{D}_{\text{JS}}(\rho_{\pi_{\mathbf{\theta}}}(s,a)||\rho_{\pi_{E}}(s,a))\), thereby recovering the expert state-action visitation distribution [10].
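As a hedged illustration of how objective (1) is optimized in practice, the following PyTorch-style sketch alternates a discriminator maximization step (written as binary cross-entropy minimization) with a surrogate reward for the RL step; the reward form \(-\log(1-D)\) is a common choice, not necessarily the exact one used in [16], and `D` is assumed to take concatenated \((s,a)\) tensors.

```python
import torch
import torch.nn.functional as F

def discriminator_step(D, opt, expert_sa, agent_sa):
    # Maximize (1) in chi: labels 1 for expert (s, a) pairs, 0 for agent pairs.
    d_exp, d_agent = D(expert_sa), D(agent_sa)
    loss = F.binary_cross_entropy(d_exp, torch.ones_like(d_exp)) + \
           F.binary_cross_entropy(d_agent, torch.zeros_like(d_agent))
    opt.zero_grad()
    loss.backward()
    opt.step()

def surrogate_reward(D, sa):
    # The RL step then maximizes a reward that grows as the agent fools D.
    with torch.no_grad():
        return -torch.log(1.0 - D(sa) + 1e-8)
```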
**Latent representation in POMDP.** When dealing with a POMDP, a policy \(\pi_{\mathbf{\theta}}(x_{t})\) that selects an action \(a_{t}\) based on a single observation \(x_{t}\in\mathcal{X}\) is likely to perform poorly since \(x_{t}\) lacks enough information about the actual state \(s_{t}\). It is therefore beneficial to estimate a distribution of the true state from prior experience. To do so, a latent variable \(z_{t}\in\mathcal{Z}\) is introduced such that \(z_{t}=\phi(x_{\leq t},a_{<t})\), where \(\phi\) maps the history of observations and actions to \(\mathcal{Z}\). Alternatively, when actions are not observable, we have \(z_{t}=\phi(x_{\leq t})\). If \(z_{t}\) is learned such that \(\mathbb{P}(s_{t}|x_{\leq t},a_{<t})\approx\mathbb{P}(s_{t}|z_{t})\), meaning that \(z_{t}\) effectively represents a sufficient statistic of the history, it can be used as a latent representation of \(s_{t}\) and the agent can be effectively trained on \(z_{t}\).
## 4 Theoretical analysis
In the following, we build the theoretical foundations for our algorithm. Recall that we consider the V-IfO problem where expert actions are not available and the ground-truth states \(s\in\mathcal{S}\) are not observable (see Table 1). As a result, a latent representation \(z\in\mathcal{Z}\) is inferred from the history of observations and used by the learning agent to make decisions.
Throughout the paper we make the following assumptions: \((i)\) the expert and the agent act on the same POMDP and \((ii)\) the latent variable \(z\) can be estimated from the history of observations as \(z_{t}=\phi(x_{\leq t})\) such that \(\mathbb{P}(s_{t}|z_{t},a_{t})=\mathbb{P}(s_{t}|z_{t})=\mathbb{P}(s_{t}|x_{\leq t },a_{<t})\). Assumption \((i)\) is instrumental for both our derivations and experiments. Relaxing this assumption would lead to dynamics mismatch [50] and visual domain adaptation problems [51], which represent interesting extensions for future work. On the other hand, assumption \((ii)\) explicitly states the characteristics required by the latent variable \(z\); i.e., \(z_{t}\) can be successfully estimated from the history of observations \(x_{\leq t}\) in order to approximate a sufficient statistic of the history. Note that this is a common assumption in the IL literature for POMDPs [6, 43], and estimating such a variable is a non-trivial problem that we address in the next section. We further discuss the importance of this assumption from a theoretical perspective in Appendix B (Remark 1).
On the latent space \(\mathcal{Z}\), we can define the normalized discounted latent state visitation distribution as \(d_{\pi_{\mathbf{\theta}}}(z)=(1-\gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathbb{P}(z_ {t}=z|\rho_{0},\pi_{\mathbf{\theta}},\mathcal{T},\mathcal{U})\) and the normalized discounted latent state-action visitation distribution as \(\rho_{\pi_{\mathbf{\theta}}}(z,a)=d_{\pi_{\mathbf{\theta}}}(z)\pi_{\mathbf{\theta}}(a|z)\). Further, we define the latent state-transition visitation distribution as \(\rho_{\pi_{\mathbf{\theta}}}(z,z^{\prime})=d_{\pi_{\mathbf{\theta}}}(z)\int_{\mathcal{ A}}\mathbb{P}(z^{\prime}|z,\bar{a})\pi_{\mathbf{\theta}}(\bar{a}|z)d\bar{a}\) and the normalized discounted joint distribution as \(\rho_{\pi_{\mathbf{\theta}}}(z,a,z^{\prime})=\rho_{\pi_{\mathbf{\theta}}}(z,a)\mathbb{ P}(z^{\prime}|z,a)\), where
\[\mathbb{P}(z^{\prime}|z,a)= \int_{\mathcal{S}}\int_{\mathcal{S}}\int_{\mathcal{X}}\mathbb{P} (z^{\prime}|x^{\prime},a,z)\mathcal{U}(x^{\prime}|s^{\prime})\mathcal{T}(s^{ \prime}|s,a)\mathbb{P}(s|z)dx^{\prime}ds^{\prime}ds. \tag{2}\]
Finally, we obtain \(\mathbb{P}_{\pi_{\mathbf{\theta}}}(a|z,z^{\prime})\) as
\[\mathbb{P}_{\pi_{\mathbf{\theta}}}(a|z,z^{\prime})=\frac{\mathbb{P}(z^{\prime}|z, a)\pi_{\mathbf{\theta}}(a|z)}{\int_{\mathcal{A}}\mathbb{P}(z^{\prime}|z,\bar{a})\pi_{ \mathbf{\theta}}(\bar{a}|z)d\bar{a}}.\]
Note that we write \(\mathbb{P}_{\pi_{\mathbf{\theta}}}\), with \(\pi_{\mathbf{\theta}}\) as subscript, in order to explicitly denote the dependency on the policy and omit the subscript, as in (2), when such probability depends only on the environment.
We start by considering the case in which \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) and \(J(\pi)=\mathbb{E}_{\tau}[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t})]\). The following Theorem shows how the suboptimality of \(\pi_{\mathbf{\theta}}\) can be upper bounded by the TV distance between latent state-transition visitation distributions, reducing the V-IfO problem to a divergence minimization problem in \(\mathcal{Z}\).
**Theorem 1**.: _Consider a POMDP, and let \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) and \(z_{t}=\phi(x_{\leq t})\) such that \(\mathbb{P}(s_{t}|z_{t},a_{t})=\mathbb{P}(s_{t}|z_{t})=\mathbb{P}(s_{t}|x_{\leq t },a_{<t})\). Then, the following inequality holds:_
\[\big{|}J(\pi_{E})-J(\pi_{\mathbf{\theta}})\big{|}\leq \frac{2R_{\max}}{1-\gamma}\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_{ \mathbf{\theta}}}(z,z^{\prime}),\rho_{\pi_{E}}(z,z^{\prime})\big{)}+C,\]
_where \(R_{\max}=\max_{(s,a)\in\mathcal{S}\times\mathcal{A}}|\mathcal{R}(s,a)|\) and_
\[C=\frac{2R_{\max}}{1-\gamma}\mathbb{E}_{\rho_{\pi_{\mathbf{\theta}}}(z,z^{\prime})} \big{[}\mathbb{D}_{\text{TV}}\big{(}\mathbb{P}_{\pi_{\mathbf{\theta}}}(a|z,z^{ \prime}),\mathbb{P}_{\pi_{E}}(a|z,z^{\prime})\big{)}\big{]}. \tag{3}\]
Proof.: Using the definition of \(J(\pi_{\mathbf{\theta}})\), we first upper bound the performance difference between expert and agent by \(\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_{\mathbf{\theta}}}(s,a),\rho_{\pi_{E}}(s,a) \big{)}\). Next, we bound the latter divergence by
\(\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_{\mathbf{\theta}}}(z,a),\rho_{\pi_{E}}(z,a)\big{)}\) using the assumption \(\mathbb{P}(s_{t}|z_{t},a_{t})=\mathbb{P}(s_{t}|z_{t})\) and noticing that \(\mathbb{P}(s_{t}|z_{t})\) is policy independent. Finally, we bound this last divergence in terms of \(\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_{\mathbf{\theta}}}(z,z^{\prime}),\rho_{\pi_{ E}}(z,z^{\prime})\big{)}\) (Lemma 3 in Appendix B). We provide the full derivations in Appendix C.
Theorem 1 addresses the challenge of considering rewards that depend on actions without the ability to observe expert actions. Consequently, in our setting, we cannot compute \(C\) in (3). Similar to the MDP case [23], a sufficient condition for \(C=0\) is the injectivity of \(\mathbb{P}(z^{\prime}|z,a)\) in (2) with respect to \(a\), indicating that there is only one action corresponding to a given latent state transition. This property ensures that \(\mathbb{P}(a|z,z^{\prime})\) remains unaffected by different executed policies, ultimately reducing \(C\) to zero. For the sake of completeness, we formally state this result in Appendix C. However, in our setting, it is difficult to guarantee the injectivity of \(\mathbb{P}(z^{\prime}|z,a)\) due to its dependence on both the environment through \(\mathcal{U}(x^{\prime}|s^{\prime})\) and \(\mathcal{T}(s^{\prime}|s,a)\), and the latent variable estimation method through \(\mathbb{P}(z^{\prime}|x^{\prime},a,z)\) and \(\mathbb{P}(s|z)\). Instead, we demonstrate in Theorem 2 how redefining the reward function as \(\mathcal{R}:\mathcal{S}\times\mathcal{S}\to\mathbb{R}\), which is commonly observed in robotics learning, allows us to reformulate the result in Theorem 1 without the additive term \(C\) in (3).
**Theorem 2**.: _Consider a POMDP, and let \(\mathcal{R}:\mathcal{S}\times\mathcal{S}\to\mathbb{R}\) and \(z_{t}=\phi(x_{\leq t})\) such that \(\mathbb{P}(s_{t}|z_{t},a_{t})=\mathbb{P}(s_{t}|z_{t})=\mathbb{P}(s_{t}|x_{\leq t },a_{<t})\). Then, the following inequality holds:_
\[\big{|}J(\pi_{E})-J(\pi_{\mathbf{\theta}})\big{|}\leq \frac{2R_{\max}}{1-\gamma}\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_ {\mathbf{\theta}}}(z,z^{\prime}),\rho_{\pi_{E}}(z,z^{\prime})\big{)},\]
_where \(R_{\max}=\max_{(s,s^{\prime})\in\mathcal{S}\times\mathcal{S}}|\mathcal{R}(s, s^{\prime})|\)._
Proof.: The proof proceeds similarly to the one for Theorem 1, by using that \(\mathbb{P}(s,s^{\prime}|z,z^{\prime})\) is not characterized by the policy but only by the environment. We show the full proof in Appendix C.
In summary, Theorems 1 and 2 provide theoretical motivation for the two main ingredients of our algorithm: a method for estimating \(z\) such that it can effectively approximate a sufficient statistic of the history, and an efficient algorithm to minimize \(\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_{\mathbf{\theta}}}(z,z^{\prime}),\rho_{\pi _{E}}(z,z^{\prime})\big{)}\). We introduce practical solutions to both of these problems in the next section.
## 5 Latent Adversarial Imitation from Observations
In the following, we introduce the main components of our algorithm LAIfO. First, we outline our adversarial imitation pipeline in the latent space \(\mathcal{Z}\), which minimizes the divergence between the latent state-transition visitation distributions of the agent and expert. Then, we describe a simple and effective approach for estimating the latent state \(z\) from sequences of observations. Finally, we show how LAIfO can leverage expert videos to enhance the efficiency of RL from pixels in a number of highly challenging tasks.
**Off-policy adversarial imitation from observations.** Based on the results in Section 4, given a latent variable \(z\) that captures a sufficient statistic of the history, we can minimize the suboptimality of the policy \(\pi_{\mathbf{\theta}}\) by solving the minimization problem
\[\min_{\mathbf{\theta}}\quad\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_{\mathbf{\theta}}}(z,z^{\prime}),\rho_{\pi_{E}}(z,z^{\prime})\big{)}. \tag{4}\]
We propose to optimize the objective in (4) using off-policy adversarial IfO. We initialize two replay buffers \(\mathcal{B}_{E}\) and \(\mathcal{B}\) to respectively store the sequences of observations generated by the expert and the agent policies, from which we infer the latent state-transitions \((z,z^{\prime})\). Note that we write \((z,z^{\prime})\sim\mathcal{B}\) to streamline the notation. Then, given a discriminator \(D_{\mathbf{\chi}}:\mathcal{Z}\times\mathcal{Z}\to[0,1]\), we write
\[\max_{\mathbf{\chi}}\ \ \mathbb{E}_{(z,z^{\prime})\sim\mathcal{B}_{E}}[\log(D_{\mathbf{\chi}}(z,z^{\prime}))]+\mathbb{E}_{(z,z^{\prime})\sim\mathcal{B}}[\log(1-D_{\mathbf{\chi}}(z,z^{\prime}))]+g\big{(}\nabla_{\mathbf{\chi}}D_{\mathbf{\chi}}\big{)}, \tag{5}\] \[r_{\mathbf{\chi}}(z,z^{\prime})=D_{\mathbf{\chi}}(z,z^{\prime}). \tag{6}\]
As mentioned, alternating the maximization of the loss in (5) with an RL step using the reward function defined in (6) leads to the minimization of \(\mathbb{D}_{\text{JS}}\big{(}\rho_{\pi_{\mathbf{\theta}}}(z,z^{\prime})||\rho_{\pi_{E}}(z,z^{\prime})\big{)}\) [52]. Since \(\mathbb{D}_{\text{JS}}(\cdot||\cdot)\) can be used to upper bound \(\mathbb{D}_{\text{TV}}(\cdot,\cdot)\) (cf. Lemma 1 in Appendix B), this approach effectively minimizes
the loss in (4). In order to stabilize the adversarial training process, it is important to ensure local Lipschitz-continuity of the learned reward function [53]. Therefore, as proposed in [54], we include in (5) the gradient penalty term
\[g\big{(}\nabla_{\mathbf{\chi}}D_{\mathbf{\chi}}\big{)}=\lambda\mathbb{E}_{ (\hat{z},\hat{z}^{\prime})\sim\mathbb{P}_{(\hat{z},\hat{z}^{\prime})}}[(||\nabla_ {\mathbf{\chi}}D_{\mathbf{\chi}}(\hat{z},\hat{z}^{\prime})||_{2}-1)^{2}], \tag{7}\]
where \(\lambda\) is a hyperparameter, and \(\mathbb{P}_{(\hat{z},\hat{z}^{\prime})}\) is defined such that \((\hat{z},\hat{z}^{\prime})\) are sampled uniformly along straight lines between pairs of transitions respectively sampled from \(\mathcal{B}_{E}\) and \(\mathcal{B}\). See [54] for additional details about this choice of gradient penalty term. Finally, from a theoretical standpoint, note that we should perform importance sampling correction in order to account for the effect of off-policy data when sampling from \(\mathcal{B}\) [55, 56]. However, neglecting off-policy correction works well in practice and does not compromise the stability of the algorithm [28].
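A PyTorch-style sketch of one penalized discriminator update combining (5) and (7) is given below. It assumes equal-size minibatches of latent transitions \((z,z^{\prime})\) drawn from the two buffers and concatenated into single tensors, and the default \(\lambda\) is illustrative rather than the value used in our experiments.

```python
import torch
import torch.nn.functional as F

def discriminator_update(D, opt, expert_zz, agent_zz, lam=10.0):
    # BCE form of (5): labels 1 for expert latent transitions, 0 for agent ones.
    d_e, d_a = D(expert_zz), D(agent_zz)
    bce = F.binary_cross_entropy(d_e, torch.ones_like(d_e)) + \
          F.binary_cross_entropy(d_a, torch.zeros_like(d_a))
    # Gradient penalty (7): interpolate uniformly along straight lines between
    # expert and agent samples, then penalize ||grad D|| deviating from 1.
    eps = torch.rand(expert_zz.size(0), 1, device=expert_zz.device)
    interp = (eps * expert_zz + (1 - eps) * agent_zz).requires_grad_(True)
    grad = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)[0]
    gp = lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()
    loss = bce + gp
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```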
**Latent variable estimation from observations.** Note that the problem in (4) is defined on the latent space \(\mathcal{Z}\). Therefore, we now present a simple and effective method to estimate the latent variable \(z\) from sequences of observations. Inspired by the model-free RL from pixels literature, we propose to combine the successful approaches of observation stacking [11, 12] and data augmentation [13, 14]. We stack together the most recent \(d\in\mathbb{N}\) observations, and provide this stack as an input to a feature extractor which is trained during the RL step. More specifically, we define a feature extractor \(\phi_{\mathbf{\delta}}:\mathcal{X}^{d}\rightarrow\mathcal{Z}\) such that \(z=\phi_{\mathbf{\delta}}(x_{t^{-}:t})\) where \(t-t^{-}+1=d\). When learning from pixels, we also apply data augmentation to the observation stack to improve the quality of the extracted features as in [14]. We write \(\text{aug}(x_{t^{-}:t})\) to denote the augmented stack of observations. The latent representations \(z\) and \(z^{\prime}\) are then computed respectively as \(z=\phi_{\mathbf{\delta}}\big{(}\text{aug}(x_{t^{-}:t})\big{)}\) and \(z^{\prime}=\phi_{\mathbf{\delta}}\big{(}\text{aug}(x_{t^{-}+1:t+1})\big{)}\). We train the feature extractor \(\phi_{\mathbf{\delta}}\) with the critic networks \(Q_{\mathbf{\psi}_{k}}\) (\(k=1,2\)) in order to minimize the loss function
\[\mathcal{L}_{\mathbf{\delta},\mathbf{\psi}_{k}}(\mathcal{B}) =\mathbb{E}_{(z,a,z^{\prime})\sim\mathcal{B}}[(Q_{\mathbf{\psi}_{k}} (z,a)-y)^{2}],\] \[y =r_{\mathbf{\chi}}(z,z^{\prime})+\gamma\min_{k=1,2}Q_{\bar{\mathbf{\psi} }_{k}}(z^{\prime},a^{\prime}). \tag{8}\]
In (8), \(a\) is an action stored in \(\mathcal{B}\) used by the agent to interact with the environment, while \(a^{\prime}=\pi_{\mathbf{\theta}}(z^{\prime})+\epsilon\) where \(\epsilon\sim\text{clip}(\mathcal{N}(0,\sigma^{2}),-c,c)\) is a clipped exploration noise with clipping parameter \(c\) and \(\mathcal{N}(0,\sigma^{2})\) a univariate normal distribution with zero mean and standard deviation \(\sigma\). The reward function \(r_{\mathbf{\chi}}(z,z^{\prime})\) is defined as in (6), and \(\bar{\mathbf{\psi}}_{1}\) and \(\bar{\mathbf{\psi}}_{2}\) are the slow-moving weights for the target Q networks. We provide more implementation details and the complete pseudo-code for our algorithm in Appendix D.
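The following sketch shows how the loss in (8) can be computed for one minibatch, assuming \(\phi\) is applied to already-augmented observation stacks; the module names and default hyperparameters here are placeholders rather than the values detailed in Appendix D.

```python
import torch

def critic_loss(phi, Q1, Q2, Q1_tgt, Q2_tgt, actor, r_chi, batch,
                gamma=0.99, sigma=0.2, c=0.5):
    x, a, x_next = batch                      # augmented stacks of d observations
    z, z_next = phi(x), phi(x_next)
    with torch.no_grad():
        eps = (torch.randn_like(a) * sigma).clamp(-c, c)   # clipped noise
        a_next = actor(z_next) + eps
        y = r_chi(z, z_next) + gamma * torch.min(          # target in (8)
            Q1_tgt(z_next, a_next), Q2_tgt(z_next, a_next))
    return ((Q1(z, a) - y) ** 2).mean() + ((Q2(z, a) - y) ** 2).mean()
```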
Note that the feature extractor \(\phi_{\mathbf{\delta}}\) is shared by the critics \(Q_{\mathbf{\psi}_{k}}\), the policy \(\pi_{\mathbf{\theta}}\), and the discriminator \(D_{\mathbf{\chi}}\). However, we stop the backpropagation of the gradient from \(\pi_{\mathbf{\theta}}\) and \(D_{\mathbf{\chi}}\) into \(\phi_{\mathbf{\delta}}\). The rationale for this choice is to obtain a latent variable \(z\) that is not biased towards any of the players in the adversarial IfO game in (5), but only provides the information necessary to determine the expert and agent expected performance.
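A minimal sketch of this gradient-stopping scheme, with placeholder module names:

```python
import torch

def forward_with_stop_grad(phi, actor, critic, disc, x_stack, x_stack_next, a):
    z, z_next = phi(x_stack), phi(x_stack_next)
    q = critic(z, a)                          # critic gradients reach phi
    a_pred = actor(z.detach())                # actor gradients stop at z
    d = disc(z.detach(), z_next.detach())     # discriminator gradients stop at z
    return q, a_pred, d
```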
**Improving RL from pixels using expert videos.** We have so far considered the pure imitation setting where a reward function can only be estimated from expert data. However, for many real-world tasks a simple objective can often be provided to the learning agent. Assuming that videos of experts are also available, we show how we can use LAIfO to accelerate the RL learning process.
We combine the standard RL objective with our V-IfO objective in (4), leading to the combined problem
\[\max_{\mathbf{\theta}} \mathbb{E}_{\tau_{\mathbf{\theta}}}\left[\sum_{t=0}^{\infty}\gamma^{t}\mathcal{R}(s_{t},a_{t})\right]-\mathbb{D}_{\text{TV}}\big{(}\rho_{\pi_{\mathbf{\theta}}}(z,z^{\prime}),\rho_{\pi_{E}}(z,z^{\prime})\big{)}. \tag{9}\]
Using the adversarial IfO pipeline presented in (5), we can rewrite (9) as
\[\max_{\mathbf{\theta}} \mathbb{E}_{\tau_{\mathbf{\theta}}}\left[\sum_{t=0}^{\infty}\gamma^{t }\Big{(}\mathcal{R}(s_{t},a_{t})+r_{\mathbf{\chi}}\big{(}z_{t},z_{t+1}\big{)} \Big{)}\right], \tag{10}\]
with \(r_{\mathbf{\chi}}\) in (6). By learning \(r_{\mathbf{\chi}}\) with LAIfO and optimizing the problem in (10) throughout training, we will show that we are able to solve challenging humanoid from pixels tasks [15] in one-third of the interactions required by state-of-the-art RL from pixels algorithms (DrQV2 [48] and DreamerV2 [49]).
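A sketch of the per-step reward shaping implied by (10) — the environment's task reward plus the discriminator-derived imitation reward on latent transitions — is shown below; `push`, the `env.step` return convention, and the other names are illustrative assumptions of ours.

```python
import torch

def push(stack, frame):
    # Slide the observation window: drop the oldest frame, append the newest.
    return torch.cat([stack[1:], frame.unsqueeze(0)], dim=0)

def shaped_step(env, actor, phi, r_chi, obs_stack):
    z = phi(obs_stack)
    a = actor(z)
    frame, task_r = env.step(a)               # assumed environment interface
    obs_stack_next = push(obs_stack, frame)
    r = task_r + r_chi(z, phi(obs_stack_next))   # combined reward in (10)
    return obs_stack_next, a, r
```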
## 6 Experiments
In this section, we conduct experiments that aim to answer the following questions:
1. For the V-IfO problem, how does LAIfO compare to PatchAIL [44], a state-of-the-art approach for V-IfO, in terms of asymptotic performance and computational efficiency?
2. How does the V-IL version of LAIfO with access to expert actions, named _Latent Adversarial Imitation Learning (LAIL)_, compare to VMAIL [6], a state-of-the-art approach for V-IL?
3. What is the impact on performance due to partial observability and the absence of expert actions?
4. Can LAIfO leverage expert videos to improve the efficiency of RL from pixels in high-dimensional continuous robotic tasks?
For more details about the hardware used to carry out these experiments, all the learning curves, and other implementation details, refer to Appendix E and to our code.
**Visual Imitation from Observations.** In order to address question \((1)\), we evaluate LAIfO and PatchAIL [44], in its weight-regularized version denoted by PatchAIL-W, on \(6\) different locomotion tasks from the DeepMind Control Suite [15]. The results are summarized in Table 2 and Figure 1. Table 2 includes the asymptotic performance of each algorithm, as well as the ratio of wall-clock times between the two algorithms to achieve 75% of expert performance. Figure 1 depicts the average return per episode throughout training as a function of wall-clock time (top row) and as a function of training steps (bottom row). These results demonstrate that LAIfO can successfully solve the V-IfO problem, achieving asymptotic performance comparable to the state-of-the-art baseline PatchAIL. Importantly, _LAIfO is significantly more computationally efficient than PatchAIL_. This is well highlighted both in Table 2 and in the top row of Figure 1, where we show that LAIfO converges _at least twice as fast_ as PatchAIL in terms of wall-clock time. This improved computational efficiency is the result of performing imitation on the latent space \(\mathcal{Z}\), instead of directly on the high-dimensional observation space \(\mathcal{X}\) (i.e., pixel space) as in PatchAIL.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{walker} & \multicolumn{2}{c}{hopper} & cheetah \\ \cline{2-7} & walk & stand & run & stand & hop & run \\ \hline Expert & \(960\) & \(980\) & \(640\) & \(920\) & \(217\) & \(900\) \\ \hline PatchAIL-W [44] & \(955\pm 7.02\) & \(\mathbf{971\pm 10.5}\) & \(569\pm 53.2\) & \(\mathbf{867\pm 33.9}\) & \(191\pm 13.0\) & \(695\pm 312\) \\ LAIfO (our) & \(\mathbf{960\pm 2.2}\) & \(961\pm 20.0\) & \(\mathbf{618\pm 4.6}\) & \(800\pm 46.7\) & \(\mathbf{206\pm 8.5}\) & \(\mathbf{773\pm 41.2}\) \\ \hline Wall-clock time ratio & & & & & & \\ to \(75\%\) expert performance & \(0.15\) & \(0.27\) & \(0.22\) & \(0.16\) & \(0.16\) & \(0.46\) \\ (LAIfO (our) / PatchAIL-W) & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experimental results for V-IfO (i.e., imitation from experts with partial observability and without access to expert actions). We use DDPG to train experts in a fully observable setting and collect \(100\) episodes of expert data. We train all algorithms for \(10^{6}\) frames in walker-walk, walker-stand, and hopper-stand, and \(3\times 10^{6}\) frames for the other tasks. We evaluate the learned policy using average performance over \(10\) episodes. We run each experiment for \(6\) seeds. In the top two rows, we report mean and standard deviation of final performance over seeds. In the bottom row, we report the ratio of wall-clock times between the two algorithms to achieve \(75\%\) of expert performance. For each task, we highlight the highest asymptotic performance.
Figure 1: Learning curves for the results in Table 2. Plots show the average return per episode as a function of wall-clock time (top row) and as a function of training steps (bottom row). Our algorithm LAIfO achieves state-of-the-art asymptotic performance, and significantly reduces computation time compared to PatchAIL.
**Visual Imitation Learning.** To answer question \((2)\), we test LAIL, the V-IL version of LAIfO, and VMAIL [6] using the same experimental setup that we considered in the V-IfO setting. Note that VMAIL stands for _Variational Model Adversarial Imitation Learning_, and represents a model-based version of generative adversarial IL built upon the variational models presented in [41, 42, 49]. The results for these experiments are summarized in Table 3 and Figure 2. Compared to VMAIL, we see that _LAIL achieves better asymptotic performance and better computational efficiency_. While both algorithms perform imitation on a latent space \(\mathcal{Z}\), LAIL is a _model-free_ algorithm that requires a lower number of learnable parameters compared to the model-based VMAIL. VMAIL must learn an accurate world model during training, which can be a challenging and computationally demanding task. The model learning process contributes to higher wall-clock times, and can also lead to instability in the training process for some environments (cf. the bottom row of Figure 2). On the other hand, the model-free approach of LAIL results in stable training that yields faster convergence and better efficiency.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{walker} & \multicolumn{2}{c}{hopper} & cheetah \\ \cline{2-7} & walk & stand & run & stand & hop & run \\ \hline Expert & \(960\) & \(980\) & \(640\) & \(920\) & \(217\) & \(900\) \\ \hline VMAIL [6] & \(939\pm 9.8\) & \(805\pm 309\) & \(516\pm 224\) & \(567\pm 285\) & \(72.3\pm 73.0\) & \(539\pm 367\) \\ LAIL (our) & \(\mathbf{946\pm 8.5}\) & \(\mathbf{893\pm 106}\) & \(\mathbf{625\pm 5.1}\) & \(\mathbf{764\pm 111}\) & \(\mathbf{208\pm 3.1}\) & \(\mathbf{811\pm 67.9}\) \\ \hline Wall-clock time ratio & & & & & & \\ to \(75\%\) expert performance & \(0.4\) & \(0.82\) & \(0.58\) & \(0.12\) & \(0.23\) & \(0.83\) \\ (LAIL (our) / VMAIL) & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experimental results for V-IL (i.e., imitation from experts with partial observability and access to expert actions). The experiments are conducted as in Table 2. In the top rows, we report mean and standard deviation of final performance over seeds. In the bottom row, we report the ratio of wall-clock times between the two algorithms to achieve \(75\%\) of expert performance. For each task, we highlight the highest asymptotic performance.
Figure 2: Learning curves for the results in Table 3. Plots show the average return per episode as a function of wall-clock time (top row) and as a function of training steps (bottom row). LAIL outperforms VMAIL in terms of both asymptotic performance and computational efficiency.
**Ablation study.** In order to answer question \((3)\), we compare performance for each type of imitation from experts in Table 1. For the partially observable setting, we consider our algorithms LAIL and LAIfO. For the fully observable setting, we consider DAC [28] and our implementation of _DAC from Observations (DACfO)_. We provide the full learning curves for DAC and DACfO in Appendix E (cf. Table 5 and Figure 5). The results are summarized in Figure 3, which shows the average normalized return obtained by each algorithm throughout the different tasks in Table 2. These experiments highlight how our algorithms can successfully address the absence of expert actions and partial observability, suffering only marginal performance degradation due to these additional challenges. As explained in our theoretical analysis in Section 4, partial observability is addressed by estimating a latent state representation that successfully approximates a sufficient statistic of the history. On the other hand, marginal degradation due to the absence of expert actions occurs either because we are in the context described by Theorem 2, where the environment reward function does not depend on actions, or because \(C\) in Theorem 1 becomes negligible.
Figure 3: Normalized returns obtained by each type of imitation from experts over the tasks in Table 2. For each run, we normalize the agent’s return with respect to expert performance. For each type of imitation from experts, we plot mean and standard deviation over the full set of runs. The performance of our algorithms in the partially observable setting are comparable to the performance in the fully observable setting, and the absence of expert actions and partial observability leads only to marginal performance degradation.
**Improving RL using expert videos.** We answer question \((4)\) by applying LAIfO to the problem in (9) for the humanoid from pixels environment. We consider the state-of-the-art RL from pixels algorithms DrQV2 [48] and DreamerV2 [49] as baselines. The results are illustrated in Figure 4. By leveraging expert videos, we see that our algorithm significantly outperforms the baselines. Note that we solve these tasks by using only \(10^{7}\) interactions, compared to the \(3\times 10^{7}\) required by the baselines in prior studies [48].
Figure 4: Performance using the multi-objective RL framework in (9) on the humanoid environment. The experiments are designed as in Table 2. We report mean and standard error over seeds.
## 7 Conclusion
In this work, we formally analyzed the V-IfO problem and introduced our algorithm LAIfO as an effective solution. We experimentally showed that our approach matches the performance of state-of-the-art V-IL and V-IfO methods, while requiring significantly less computational effort due to our model-free approach in latent space. Furthermore, we showed how LAIfO can be used to improve the efficiency and asymptotic performance of RL methods by leveraging expert videos.
**Limitations and future work.** Despite the advancement in addressing the V-IfO problem, it is important to understand the limitations of our approach. The primary limitation arises from the assumption that the expert and the agent act within the same POMDP. In realistic scenarios, such alignment rarely occurs, emphasizing the need for methods that can handle dynamics mismatch and visual domain adaptation. This is a crucial next step towards enabling successful learning from expert videos. Furthermore, throughout this work we have used adversarial learning for divergence minimization between distributions. Adversarial learning can introduce optimization challenges and stability issues. While we propose practical solutions to mitigate these problems, exploring alternatives to this framework offers another interesting avenue for future research. Additionally, from an experimental standpoint, our emphasis has been on robotic locomotion tasks. In the future, we plan to address navigation tasks, considering not only third-person perspectives but also egocentric camera viewpoints.
|
2309.05985 | Curve neighborhoods of Seidel products in quantum cohomology | A conjecture of Buch-Chaput-Perrin asserts that the two-pointed curve
neighborhood corresponding to a quantum product of Seidel type is an explicitly
given Schubert variety. We prove this conjecture for flag varieties in type A. | Mihail Ţarigradschi | 2023-09-12T06:30:06Z | http://arxiv.org/abs/2309.05985v1 | # Curve neighborhoods of Seidel products in quantum cohomology
###### Abstract.
A conjecture of Buch-Chaput-Perrin asserts that the two-pointed curve neighborhood corresponding to a quantum product of Seidel type is an explicitly given Schubert variety. We prove this conjecture for flag varieties in type \(A\).
The author was partially supported by NSF grant DMS-2152316.
## 1. Introduction
Given a flag variety \(X=G/P_{X}\), the Seidel representation on the small quantum cohomology ring \(\operatorname{QH}(X)|_{q=1}\) specialized at \(q=1\) can be described combinatorially, see [1, 1]. In the recent paper [1], this action is extended to the quantum K-theory ring \(\operatorname{QK}(X)|_{q=1}\) when \(X\) is cominuscule, and it is found that similar identities hold. On \(\operatorname{QH}(X)\) it is realized as the action of a subgroup \(W^{\operatorname{comin}}\subset W\) through the identity
\[[X^{w}]\star[X^{u}]=q^{d}[X^{wu}], \tag{1}\]
where \(w\in W^{\operatorname{comin}}\), \(u\in W\), and \(d=d_{\min}(w,u)\in H_{2}(X,\mathbb{Z})\) denotes the smallest degree of a rational curve connecting general translates of \(X^{w}\) and \(X^{u}\). The subgroup \(W^{\operatorname{comin}}\) consists of the identity and the minimal length representatives of the longest Weyl group element \(w_{0}\) in \(W^{M}=W/W_{M}\) for each cominuscule flag variety \(M=G/P_{M}\):
\[W^{\operatorname{comin}}=\{1\}\cup\{(w_{0})^{M}:M=G/P_{M}\text{ where }M\text{ is a cominuscule variety}\},\]
here \((w_{0})^{M}\in W\) denotes the representative of \(w_{0}\) in \(W^{M}\).
Equation (1) also implies the following identity:
\[[X^{w}]\star[X^{u}]=q^{d}[\Gamma_{d}(X_{w_{0}w},X^{u})],\]
where \(\Gamma_{d}(X_{w_{0}w},X^{u})\) is the two-pointed curve neighborhood defined as the union of all stable curves of degree \(d\) that pass through \(X_{w_{0}w}\) and \(X^{u}\).
This led the authors of [1] to conjecture that \(\Gamma_{d}(X_{w_{0}w},X^{u})\) is a translate of \(X^{wu}\):
**Conjecture 1** ([1], Conjecture 3.11).: _Let \(X=G/P_{X}\) be any flag variety. For \(u\in W,w\in W^{\operatorname{comin}}\), and \(d=d_{\min}(w,u)\in H_{2}(X,\mathbb{Z})\), we have_
\[\Gamma_{d}(X_{w_{0}w},X^{u})=w^{-1}.X^{wu}.\]
It is known that the conjecture holds in the following cases:
* \(d=0\) (see [1, Proposition 2.2]);
* \(X\) is cominuscule and \(w=w_{0}^{X}\) (see [1, Lemma 3.1]);
* \(X=\operatorname{Gr}(k,n)\) and \(w=w_{0}^{\mathbb{P}^{n-1}}\) (see [13, Proposition 4.5]).
In this paper we consider the following specialization:
**Conjecture 2**.: _Let \(X=G/P_{X}\) be a flag variety where \(P_{X}\) is a **maximal** parabolic subgroup of \(G\). For \(u\in W,w\in W^{\operatorname{comin}}\), and \(d=d_{\min}(w,u)\in H_{2}(X,\mathbb{Z})\), we have_
\[\Gamma_{d}(X_{w_{0}w},X^{u})=w^{-1}.X^{wu}.\]
We prove the following reduction theorem.
**Theorem 3**.: _Conjecture 1 follows from Conjecture 2 for the same \(G\)._
We then prove Conjecture 2 for Grassmannians of type A to conclude that Conjecture 1 holds for flag varieties of type A. Our proof is inspired by a description of two-pointed curve neighborhoods in Grassmannians given in [13, Proposition 4.5].
### Acknowledgements
We thank Anders Buch for useful discussions and bringing to our attention Conjecture 1.
## 2. Notation and Preliminaries
Let \(G\) be a complex semisimple linear algebraic group, let \(T\subset B\) be a maximal torus and a Borel subgroup of \(G\). Denote by \(B^{-}\) the Borel subgroup opposite to \(B\), by \(\Phi\) the set of roots of \(G\), by \(\Delta\subset\Phi\) the set of simple roots, and by \(W\) the Weyl group of \(G\).
Let \(P_{X}\supseteq B\) be a parabolic subgroup of \(G\). In the flag variety \(X=G/P_{X}\) we consider the Schubert varieties \(X_{u}=\overline{BuP_{X}/P_{X}}\) and \(X^{u}=\overline{B^{-}uP_{X}/P_{X}}\) where \(u\in W\). Let \(W_{X}\) be the Weyl group of \(P_{X}\), then we denote \(W^{X}\subset W\) to be the set of minimal length representatives of the cosets in \(W/W_{X}\). For an element \(u\in W\), denote by \(u^{X}\in W^{X}\) the representative of \(uW_{X}\in W/W_{X}\). Let \(\Delta_{P_{X}}\subseteq\Delta\) be the set of simple roots that define \(P_{X}\), i.e. \(\beta\in\Delta_{P_{X}}\iff s_{\beta}\in W_{X}\).
Recall the general construction of curve neighborhoods (described for example in [1]). Let \(\overline{\mathcal{M}}_{0,3}(X,d)\) denote the moduli space of \(3\)-pointed, genus \(0\) stable maps to \(X\) of effective degree \(d\in H_{2}(X,\mathbb{Z})\). We have the evaluation maps \(ev_{i}:\overline{\mathcal{M}}_{0,3}(X,d)\to X\) where \(i\in\{1,2,3\}\) that correspond to taking the image of the \(i\)-th marked point. Given two opposite Schubert varieties \(X_{u},X^{v}\), define the _Gromov-Witten variety_\(M_{d}(X_{u},X^{v})=ev_{1}^{-1}(X_{u})\cap ev_{2}^{-1}(X^{v})\subset\overline{ \mathcal{M}}_{0,3}(X,d)\). It describes the coefficients in the quantum product of Schubert classes in \(\operatorname{QH}(X)\):
\[[X_{u}]\star[X^{v}]=\sum_{d\geq 0}(ev_{3})_{*}[M_{d}(X_{u},X^{v})]q^{d}. \tag{2}\]
We denote \(\Gamma_{d}(X_{u},X^{v})=ev_{3}(M_{d}(X_{u},X^{v}))\). The subvariety \(\Gamma_{d}(X_{u},X^{v})\subset X\) is called a two-pointed curve neighborhood and consists of the union of all stable curves of degree \(d\) that pass through \(X_{u}\) and \(X^{v}\).
## 3. Proof of the reduction to maximal parabolics
In this section, we prove Theorem 3. The main observation is the fact that the intersection of Schubert varieties \(X^{u^{Y}}\) for appropriate "smaller" flag varieties \(Y\) is the Schubert variety \(X^{u}\), this follows from a combinatorial fact from [1].
**Lemma 4**.: _Let \(X=G/P_{X}\) be a flag variety, \(F\subseteq X\) a subvariety, \(w\in W\), and \(g\in G\). Let \(Y=G/P_{Y},Z=G/P_{Z}\) be flag varieties such that \(P_{Y}\cap P_{Z}=P_{X}\). Denote by \(p:X\to Y\) and \(q:X\to Z\) the projection maps, assume that \(p(F)\subseteq g.Y^{w}\) and \(q(F)\subseteq g.Z^{w}\). Then \(F\subseteq g.X^{w}\)._
Proof.: Using [1, Lemma 2.1(e)] we have
\[F\subseteq p^{-1}(p(F))\subseteq p^{-1}(g.Y^{w})=g.X^{w^{Y}}.\]
By the same argument for \(Z\), we have \(F\subseteq g.X^{w^{Z}}\), so that \(F\subseteq g.(X^{w^{Y}}\cap X^{w^{Z}})\). By [1, Theorem 2.6.1], using \(\Delta_{P_{Y}}\cap\Delta_{P_{Z}}=\Delta_{P_{X}}\), the elements \(w^{Y},w^{Z}\) have a well-defined join equal to \(w^{X}\) in the poset \(W^{X}\). In particular \(X^{w^{Y}}\cap X^{w^{Z}}=X^{w^{X}}=X^{w}\), and we are done.
Proof of Theorem 3.: By [1, Corollary 3.3], the two-pointed Gromov-Witten variety \(M_{d}(X_{w_{0}w},X^{u})\subset\overline{\mathcal{M}}_{0,3}(X,d)\) is irreducible, and so is \(\Gamma_{d}(X_{w_{0}w},X^{u})=\operatorname{ev}_{3}(M_{d}(X_{w_{0}w},X^{u}))\subset X\). In the graded ring \(\operatorname{QH}(X)\), by [1, 1] we have the identity \([X_{w_{0}w}]\star[X^{u}]=q^{d}[X^{wu}]\). On the other hand, from Equation (2) we have
\[[X_{w_{0}w}]\star[X^{u}]=q^{d}(ev_{3})_{*}[M_{d}(X_{w_{0}w},X^{u})]=q^{d} \deg(ev_{3})[\Gamma_{d}(X_{w_{0}w},X^{u})],\]
so that \(\deg(ev_{3})=1\), \([X^{wu}]=[\Gamma_{d}(X_{w_{0}w},X^{u})]\), and \(\operatorname{codim}X^{wu}=\operatorname{codim}\Gamma_{d}(X_{w_{0}w},X^{u})\). Since Schubert varieties are irreducible, we are left with proving the inclusion \(\Gamma_{d}(X_{w_{0}w},X^{u})\subseteq w^{-1}.X^{wu}\).
We prove it by induction on \(|\Delta\setminus\Delta_{X}|\). If \(|\Delta\setminus\Delta_{X}|=1\), we are done by assumption of Conjecture 2.
Otherwise, consider parabolic subgroups \(P_{Y},P_{Z}\supsetneq P_{X}\) such that \(P_{Y}\cap P_{Z}=P_{X}\). Let \(p:X\to Y=G/P_{Y}\), \(q:X\to Z=G/P_{Z}\) be the projections. Then \(p(\Gamma_{d}(X_{w_{0}w},X^{u}))\subseteq\Gamma_{p_{*}(d)}(Y_{w_{0}w},Y^{u})\). Furthermore, \(p_{*}(d_{\min}(w,u))=d_{\min}(w,u)\in H_{2}(Y,\mathbb{Z})\) (see [1, Section 2.3] or [1, 1]). Applying induction to \(Y\), \(\Gamma_{p_{*}(d)}(Y_{w_{0}w},Y^{u})\subseteq w^{-1}.Y^{wu}\), so that
\[p(\Gamma_{d}(X_{w_{0}w},X^{u}))\subseteq\Gamma_{p_{*}(d)}(Y_{w_{0}w},Y^{u}) \subseteq w^{-1}.Y^{wu}.\]
Analogously, we have \(q(\Gamma_{d}(X_{w_{0}w},X^{u}))\subseteq w^{-1}.Z^{wu}\). Using Lemma 4 for \(F=\Gamma_{d}(X_{w_{0}w},X^{u})\), we are done.
## 4. Proof of the conjecture in type A
In this section we assume that the group \(G\) is of type A. Identify \(\Delta\) with the integers \(\{1,\dots,n-1\}\) and \(W\) with the symmetric group \(S_{n}\). When \(P_{X}\) is the maximal parabolic corresponding to the simple root \(k\), i.e. \(\Delta_{P_{X}}=\Delta\setminus\{k\}\), then \(X\) is isomorphic to the Grassmannian variety \(\operatorname{Gr}(k,n)\) of \(k\)-planes in \(\mathbb{C}^{n}\).
We have the isomorphism \(W^{\operatorname{comin}}\cong\mathbb{Z}/n\mathbb{Z}\), with a generator \(w\in S_{n}\) given by
\[w(t)=\begin{cases}n,&\text{ if }t=1\\ t-1,&\text{ if }2\leq t\leq n\end{cases}.\]
The element corresponding to the simple root \(i\in\{1,\dots,n-1\}\) is given by \(w^{i}\in S_{n}\).
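As a quick illustration of this combinatorics (the 1-indexed one-line-notation conventions below are our own), the following Python snippet encodes \(w\) and computes the element \(w^{i}\) attached to a simple root \(i\):

```python
def seidel_generator(n):
    # One-line notation for w: w(1) = n and w(t) = t - 1 for 2 <= t <= n.
    return [n] + list(range(1, n))

def compose(u, v):
    # (u v)(t) = u(v(t)) for permutations in 1-indexed one-line notation.
    return [u[v[t] - 1] for t in range(len(v))]

n = 5
w = seidel_generator(n)          # [5, 1, 2, 3, 4]
wi = list(range(1, n + 1))       # identity
for _ in range(2):               # the element for simple root i = 2 is w^2
    wi = compose(w, wi)
print(wi)                        # [4, 5, 1, 2, 3]
```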
When \(X=\operatorname{Gr}(k,n)\) is a Grassmannian, we can describe the two-pointed curve neighborhoods using the _quantum equals classical_ theorems (see [1]). For an effective degree \(0\leq d\leq\min(k,n-k)\) we denote \(Z_{d}=\operatorname{Fl}(k-d,d,k+d;n)\) to be the variety of three-step flags of dimensions \((k-d,d,k+d)\) in \(\mathbb{C}^{n}\). Similarly, we
denote \(Y_{d}=\operatorname{Fl}(k-d,k+d;n)\) to be the variety of two-step flags of dimensions \((k-d,k+d)\) in \(\mathbb{C}^{n}\). We then have the projection maps \(p_{d}:Z_{d}\to X\), \(q_{d}:Z_{d}\to Y_{d}\) and \(\Gamma_{d}(X_{u},X^{v})=p_{d}(Z_{d}(X_{u},X^{v}))\) where \(Z_{d}(X_{u},X^{v})=q_{d}^{-1}(q_{d}(p_{d}^{-1}(X_{u}))\cap q_{d}(p_{d}^{-1}(X^ {v})))\subset Z_{d}\). Analogous to Equation (2),
\[[X_{u}]\star[X^{v}]=\sum_{d\geq 0}(q_{d})_{*}[Z_{d}(X_{u},X^{v})]q^{d}.\]
Recall that Grassmannian Schubert varieties can be equivalently indexed by a partition. Fix a basis \(e_{1},\ldots,e_{n}\) of \(\mathbb{C}^{n}\) such that \(B\) acts on \(\mathbb{C}^{n}\) by upper-triangular matrices. Consider the complete flag \(E_{\bullet}=E_{1}\subset E_{2}\subset\cdots\subset E_{n}\) where \(E_{i}=\operatorname{Span}\{e_{1},\ldots,e_{i}\}\), then \(B.E_{\bullet}=E_{\bullet}\). Similarly, consider the opposite flag \(E_{\bullet}^{\operatorname{opp}}=E_{1}^{\operatorname{opp}}\subset E_{2}^ {\operatorname{opp}}\subset\cdots\subset E_{n}^{\operatorname{opp}}\) where \(E_{i}^{\operatorname{opp}}=\operatorname{Span}\{e_{n},\ldots,e_{n-i+1}\}\), then \(B^{-}.E_{\bullet}^{\operatorname{opp}}=E_{\bullet}^{\operatorname{opp}}\).
Given a partition \(\lambda=(\lambda_{1},\ldots,\lambda_{k})\) such that \(n-k\geq\lambda_{1}\geq\cdots\geq\lambda_{k}\geq 0\), define the \(B\)-stable Schubert variety
\[X_{\lambda}=\{\Sigma\in X:\dim\Sigma\cap E_{i+\lambda_{k-i+1}}\geq i\text{ for }i=1,\ldots,k\},\]
of dimension \(\dim X_{\lambda}=|\lambda|\), and the \(B^{-}\)-stable Schubert variety
\[X^{\lambda}=\{\Sigma\in X:\dim\Sigma\cap E_{n-k+i-\lambda_{i}}^{\operatorname {opp}}\geq i\text{ for }i=1,\ldots,k\}\]
of codimension \(\operatorname{codim}X^{\lambda}=|\lambda|\).
These varieties satisfy \(X_{\lambda}=X_{w}\) and \(X^{\lambda}=X^{w}\) where \(w\in W^{X}\subset S_{n}\) is the permutation of minimal length such that \(\lambda=(w(k)-k,w(k-1)-(k-1),\ldots,w(1)-1)\). We will also think of \(\lambda\) as a Young diagram inside a \(k\times(n-k)\) rectangle such that row \(i\) has \(\lambda_{i}\) boxes.
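As a small illustration of this correspondence (again with 1-indexed one-line notation, an implementation convention of ours), the minimal-length representative in \(S_{9}\) whose first four values are \((2,5,7,9)\) — an illustrative choice — yields the partition \((5,4,3,1)\) used in the example within the proof of Theorem 5:

```python
def partition_from_perm(w, k):
    # lambda = (w(k) - k, w(k-1) - (k-1), ..., w(1) - 1) for a minimal-length
    # representative w in W^X, i.e. w increasing on {1..k} and on {k+1..n}.
    return [w[k - 1 - i] - (k - i) for i in range(k)]

w = [2, 5, 7, 9, 1, 3, 4, 6, 8]
print(partition_from_perm(w, k=4))   # [5, 4, 3, 1]
```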
**Theorem 5**.: _Let \(X=\operatorname{Gr}(k,n)\). For \(u\in W\), \(w\in W^{\operatorname{comin}}\), and \(d=d_{\min}(w,u)\in H_{2}(X,\mathbb{Z})\), we have_
\[\Gamma_{d}(X_{w_{0}w},X^{u})=w^{-1}.X^{wu}\]
Proof.: If \(w=1\), then \(d=0\) and \(\Gamma_{d}(X_{w_{0}w},X^{u})=\Gamma_{0}(X_{w_{0}},X^{u})=X_{w_{0}}\cap X^{u}= X^{u}=w^{-1}.X^{wu}\). Assume \(w\neq 1\).
Let \(\beta\in\Delta\) be a simple root that indexes \(w\). Using the duality isomorphism \(\operatorname{Gr}(k,n)\cong\operatorname{Gr}(n-k,n)\), we may assume \(\beta\geq k\). As an element of \(S_{n}\), \(w_{0}w=(\beta\ (\beta-1)\ \ldots\ 1\ n\ (n-1)\ \ldots(\beta+1))\) in one-line notation. The corresponding partition in \(X\) is given by \((\beta-k,\beta-k,\ldots,\beta-k)=(\beta-k)^{k}\). Denote by \(\lambda\) the partition associated to \(u^{X}\).
We use [10] to get an explicit expression for \(d\). The degree \(d\) is the minimal degree of \(q\) in \([X_{(\beta-k)^{k}}]\star[X^{\lambda}]=[X^{(n-\beta)^{k}}]\star[X^{\lambda}]\). Consider the overlap of \(\lambda\) and the \(180^{\circ}\) rotation of \((n-\beta)^{k}\) in the \(k\times(n-k)\) rectangle. This overlap is a partition, call it \(\lambda^{\prime}\), where we remove the left-most \((\beta-k)\) columns from \(\lambda\) (see Figure 1). Then \(d\) is the length of the longest NW-SE diagonal sequence of boxes in this overlap. Since \(\lambda^{\prime}\) is a partition, we can always move a NW-SE diagonal sequence of boxes to the NW corner of \(\lambda^{\prime}\). This gives the following formula for \(d\):
\[d=\max(\{0\}\cup\{j:\lambda_{j}-(\beta-k)\geq j\}). \tag{3}\]
Here is an example where \(n=9\), \(k=4\), \(\lambda=(5,4,3,1)\), and \(\beta=5\): removing the left-most \(\beta-k=1\) column of \(\lambda\) gives \(\lambda^{\prime}=(4,3,2,0)\), whose longest NW-SE diagonal has length \(d=2\) (see Figure 1).
Figure 1. Finding \(d\) from \(\lambda\) and \(\lambda^{\prime}\).
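The degree in (3) is also easy to compute directly; the following snippet (our own illustration) reproduces \(d=2\) for the example above.

```python
def d_min(lam, k, beta):
    # Formula (3): d = max({0} U {j : lambda_j - (beta - k) >= j}), i.e. the
    # longest NW-SE diagonal of the overlap partition lambda'.
    return max([0] + [j for j in range(1, len(lam) + 1)
                      if lam[j - 1] - (beta - k) >= j])

print(d_min([5, 4, 3, 1], k=4, beta=5))   # 2
```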
By the quantum equals classical theorem, we have that \(\Gamma_{d}(X_{w_{0}w},X^{u})=p_{d}(Z_{d}(X_{w_{0}w},X^{u}))\). Looking at the three-step flags in \(q_{d}^{-1}q_{d}p_{d}^{-1}(X^{u})\subset Z_{d}\), we have
\[q_{d}^{-1}q_{d}p_{d}^{-1}(X^{u}) =q_{d}^{-1}q_{d}p_{d}^{-1}(X^{\lambda})\] \[=\{V_{k-d}\leq V_{k}\leq V_{k+d}: \exists\widetilde{V_{k}}\text{ such that }V_{k-d}\leq\widetilde{V_{k}}\leq V_{k+d}\text{ and }\] \[\dim\widetilde{V_{k}}\cap E_{n-k+i-\lambda_{i}}^{\text{opp}}\geq i \text{ for }1\leq i\leq k\}\] \[\subseteq\{V_{k-d}\leq V_{k}\leq V_{k+d}: \dim V_{k+d}\cap E_{n-k+i-\lambda_{i}}^{\text{opp}}\geq i\text{ for }1\leq i\leq k\text{ and }\] \[\dim V_{k-d}\cap E_{n-k+i-\lambda_{i}}^{\text{opp}}\geq i-d\text { for }d+1\leq i\leq k\},\]
where the last condition follows from \(\dim V_{k-d}\cap E_{n-k+i-\lambda_{i}}^{\text{opp}}\geq\dim\widetilde{V_{k}} \cap E_{n-k+i-\lambda_{i}}^{\text{opp}}-d\geq i-d\). By symmetry, for the special case of \(X_{w_{0}w}\), we have
\[q_{d}^{-1}q_{d}p_{d}^{-1}(X_{w_{0}w}) =q_{d}^{-1}q_{d}p_{d}^{-1}(X_{(\beta-k)^{k}})\] \[\subseteq\{V_{k-d}\leq V_{k}\leq V_{k+d}: \dim V_{k+d}\cap E_{i+\beta-k}\geq i\text{ for }1\leq i\leq k\text{ and }\] \[\dim V_{k-d}\cap E_{i+\beta-k}\geq i-d\text{ for }d+1\leq i\leq k\}\] \[=\{V_{k-d}\leq V_{k}\leq V_{k+d}: V_{k-d}\subset E_{\beta},\ \dim V_{k+d}\cap E_{\beta}\geq k\}.\]
Let \(V_{k}\in\Gamma_{d}(X_{w_{0}w},X^{u})=p_{d}(Z_{d}(X_{w_{0}w},X^{u}))=p_{d}(q_{d }^{-1}q_{d}p_{d}^{-1}(X_{w_{0}w})\cap q_{d}^{-1}q_{d}p_{d}^{-1}(X^{u}))\). If \(1\leq i\leq d\), then \(k-i+\lambda_{i}+1>\beta\) by (3), so that \(E_{n-k+i-\lambda_{i}}^{\text{opp}}\cap E_{\beta}=0\) and
\[\dim V_{k}\cap(E_{n-k+i-\lambda_{i}}^{\text{opp}}+E_{\beta}) \geq\dim V_{k+d}\cap(E_{n-k+i-\lambda_{i}}^{\text{opp}}+E_{\beta} )-d\] \[\geq\dim V_{k+d}\cap E_{n-k+i-\lambda_{i}}^{\text{opp}}+\dim V_{k +d}\cap E_{\beta}-d\geq i+k-d.\]
Otherwise, if \(d+1\leq i\leq k\), then \(k-i+\lambda_{i}+1\leq\beta\) by (3), so that \(E_{n-k+i-\lambda_{i}}^{\text{opp}}+E_{\beta}=\mathbb{C}^{n}\) and
\[\dim V_{k}\cap(E_{n-k+i-\lambda_{i}}^{\text{opp}}\cap E_{\beta}) \geq\dim V_{k-d}\cap(E_{n-k+i-\lambda_{i}}^{\text{opp}}\cap E_{ \beta})\] \[=\dim V_{k-d}\cap E_{n-k+i-\lambda_{i}}^{\text{opp}}\geq i-d.\]
For simplicity, we denote
\[G_{i}=\begin{cases}E_{n-k+(i+d)-\lambda_{i+d}}^{\text{opp}}\cap E_{\beta},& \text{if }1\leq i\leq k-d,\\ E_{n-k+(i-k+d)-\lambda_{i-k+d}}^{\text{opp}}+E_{\beta},&\text{if }k-d+1\leq i\leq k. \end{cases} \tag{4}\]
So that we have a chain of inclusions
\[G_{1}\leq G_{2}\leq\dots\leq G_{k-d}\leq G_{k-d+1}\leq\dots\leq G_{k},\]
and dimension conditions \(\dim V_{k}\cap G_{i}\geq i\) for all \(1\leq i\leq k\).
Note that the subspaces \(G_{i}\) are parts of the complete flag \(F_{\bullet}^{\mathrm{opp}}\) defined by
\[F_{i}^{\mathrm{opp}}=\begin{cases}\mathrm{Span}\{e_{\beta},e_{\beta-1},\dots,e_{ \beta-i+1}\},&\text{if $1\leq i\leq\beta$}\\ \mathrm{Span}\{e_{\beta},e_{\beta-1},\dots,e_{1},e_{n},\dots,e_{n-(i-\beta)+1 }\},&\text{if $\beta+1\leq i\leq n$}\end{cases}\]
which is precisely given by \(F_{\bullet}^{\mathrm{opp}}=w^{-1}.E_{\bullet}^{\mathrm{opp}}\).
The dimension conditions \(\dim V_{k}\cap G_{i}\geq i\) for \(1\leq i\leq k\) define a Schubert variety with respect to the flag \(F_{\bullet}^{\mathrm{opp}}\), it is given by
\[\{V_{k}:\dim V_{k}\cap G_{i}\geq i\text{ for }1\leq i\leq k\} =w^{-1}.\{w.V_{k}:\dim V_{k}\cap G_{i}\geq i\text{ for }1\leq i\leq k\}\] \[=w^{-1}.\{V_{k}:\dim V_{k}\cap w.G_{i}\geq i\text{ for }1\leq i\leq k\}\] \[=w^{-1}.X^{v}\]
for some \(v\in W^{X}\).
We compute using (4)
\[\ell(v) =\operatorname{codim}w^{-1}.X^{v}=\sum(n-k+i-\dim G_{i})\] \[=\sum_{1\leq i\leq k-d}(n-k+i-(\beta-k+i+d-\lambda_{i+d}))\] \[+\sum_{k-d<i\leq k}(n-k+i-(\beta+n-k+(i-k+d)-\lambda_{i-k+d}))\] \[=\sum_{1\leq i\leq k-d}(n-\beta-d+\lambda_{i+d})+\sum_{k-d<i\leq k }(k-\beta-d+\lambda_{i-k+d})\] \[=n(k-d)-\beta k+|\lambda|=n(k-d)-\beta k+\ell(u^{X})\]
On the other hand, from (1) and \([X^{w}]=[X_{w_{0}w}]\) we compute
\[\ell((wu)^{X}) =\operatorname{codim}X^{wu}=\operatorname{codim}X_{w_{0}w}+ \operatorname{codim}X^{u}-d\deg(q)\] \[=\dim X-\dim X_{w_{0}w}+\ell(u^{X})-dn\] \[=k(n-k)-(\beta-k)k+\ell(u^{X})-dn\] \[=n(k-d)-\beta k+\ell(u^{X})=\ell(v)\]
Since \(\Gamma_{d}(X_{w_{0}w},X^{u})\subseteq w^{-1}.X^{v}\) and \(\dim X^{v}=\dim X^{wu}=\dim\Gamma_{d}(X_{w_{0}w},X^{u})\) we get that the inclusion is an equality. Since \([\Gamma_{d}(X_{w_{0}w},X^{u})]=[X^{wu}]\), we get \(v=(wu)^{X}\) from which the conclusion follows.
By the above and Theorem 3, we conclude that Conjecture 1 holds in type A.
|
2309.00143 | Self-supervised Semantic Segmentation: Consistency over Transformation | Accurate medical image segmentation is of utmost importance for enabling
automated clinical decision procedures. However, prevailing supervised deep
learning approaches for medical image segmentation encounter significant
challenges due to their heavy dependence on extensive labeled training data. To
tackle this issue, we propose a novel self-supervised algorithm,
\textbf{S$^3$-Net}, which integrates a robust framework based on the proposed
Inception Large Kernel Attention (I-LKA) modules. This architectural
enhancement makes it possible to comprehensively capture contextual information
while preserving local intricacies, thereby enabling precise semantic
segmentation. Furthermore, considering that lesions in medical images often
exhibit deformations, we leverage deformable convolution as an integral
component to effectively capture and delineate lesion deformations for superior
object boundary definition. Additionally, our self-supervised strategy
emphasizes the acquisition of invariance to affine transformations, which is
commonly encountered in medical scenarios. This emphasis on robustness with
respect to geometric distortions significantly enhances the model's ability to
accurately model and handle such distortions. To enforce spatial consistency
and promote the grouping of spatially connected image pixels with similar
feature representations, we introduce a spatial consistency loss term. This
aids the network in effectively capturing the relationships among neighboring
pixels and enhancing the overall segmentation quality. The S$^3$-Net approach
iteratively learns pixel-level feature representations for image content
clustering in an end-to-end manner. Our experimental results on skin lesion and
lung organ segmentation tasks show the superior performance of our method
compared to the SOTA approaches. https://github.com/mindflow-institue/SSCT | Sanaz Karimijafarbigloo, Reza Azad, Amirhossein Kazerouni, Yury Velichko, Ulas Bagci, Dorit Merhof | 2023-08-31T21:28:46Z | http://arxiv.org/abs/2309.00143v1 | # Self-supervised Semantic Segmentation: Consistency over Transformation
###### Abstract
Accurate medical image segmentation is of utmost importance for enabling automated clinical decision procedures. However, prevailing supervised deep learning approaches for medical image segmentation encounter significant challenges due to their heavy dependence on extensive labeled training data. To tackle this issue, we propose a novel self-supervised algorithm, \(\textbf{S}^{3}\)-**Net**, which integrates a robust framework based on the proposed Inception Large Kernel Attention (I-LKA) modules. This architectural enhancement makes it possible to comprehensively capture contextual information while preserving local intricacies, thereby enabling precise semantic segmentation. Furthermore, considering that lesions in medical images often exhibit deformations, we leverage deformable convolution as an integral component to effectively capture and delineate lesion deformations for superior object boundary definition. Additionally, our self-supervised strategy emphasizes the acquisition of invariance to affine transformations, which is commonly encountered in medical scenarios. This emphasis on robustness with respect to geometric distortions significantly enhances the model's ability to accurately model and handle such distortions. To enforce spatial consistency and promote the grouping of spatially connected image pixels with similar feature representations, we introduce a spatial consistency loss term. This aids the network in effectively capturing the relationships among neighboring pixels and enhancing the overall segmentation quality. The \(\emph{S}^{3}\)-Net approach iteratively learns pixel-level feature representations for image content clustering in an end-to-end manner. Our experimental results on skin lesion and lung organ segmentation tasks show the superior performance of our method compared to the SOTA approaches. Github.
## 1 Introduction
Over the past decade, deep learning approaches have achieved significant success, which can largely be attributed to the progress made in supervised learning research. However, the efficacy of these methods is highly dependent on the availability of a large amount of annotated training data. In situations where annotated data is limited or resource-intensive to obtain, these approaches may prove inefficient. One domain where this scarcity is evident is medical image analysis. Given the large size of medical images and the importance of precise labeling, which requires experts, the process of providing a wide range of manually annotated data is time-consuming, labor-intensive, and expensive [5, 23, 34]. Moreover, the process of manual segmentation and labeling is prone to human error. To mitigate the labor-intensive nature of annotation, several strategies have been proposed in the literature. One such strategy is transfer learning, which serves as a benchmark approach. Transfer learning facilitates the process of representational learning by fine-tuning the pre-trained network for the new task at hand. While knowledge transfer provides a promising starting point for the optimization algorithm, the scarcity of annotated data on the downstream task limits the network's convergence and its ability to learn task-specific features, resulting in less stable models. Moreover, in complex tasks such as segmentation, this approach proves to be inefficient due to the predefined model architecture [4, 3, 31]. Unsupervised methods offer an alternative solution by reformulating the problem based on learning features directly from the data itself [17, 32, 20, 27, 26, 2]. However, the reliability of these approaches is not always guaranteed, as no label or metric is available to validate their effectiveness.
A semi-supervised algorithm, which is a machine learning methodology that combines labeled and unlabeled data to construct predictive models, is also another approach to tackle the problem of data scarcity. The labeled data provides explicit supervision, guiding the learning pro
cess, while the unlabeled data contributes additional information for capturing underlying patterns and data structure [29, 30, 36]. Nevertheless, the effectiveness of semi-supervised approaches can be compromised when the labeled data fails to adequately represent the entire distribution. Furthermore, although semi-supervised learning reduces the need for extensive manual labeling, it still necessitates a small set of labeled data. The process of annotation, even on a smaller scale, can be time-consuming, expensive, and dependent on domain expertise. The high cost associated with labeling data may hinder the scalability and practicality of semi-supervised methods. Additionally, labeling bias is another limitation of this approach.
In contrast to the previously mentioned strategies that rely on modeling the data distribution, the self-supervised learning (SSL) technique has attracted recent interest and uses a different perspective by introducing a set of matching tasks [9, 13, 15, 24]. SSL has gained significant acceptance as a viable technique for learning medical image representations without specialized annotations. This approach allows for learning semantic features by generating supervisory signals from a large set of unlabeled data, effectively eliminating the need for human annotation [12]. SSL consists of two main tasks: the pretext task and the downstream task. In the pretext task, where the self-supervised training takes place, a model is trained in a supervised manner using unlabeled data. Labels are generated from the data to guide the model to learn semantic segmentation from the data. Subsequently, the learned representations obtained from the pretext task are transferred to the downstream task as initial weights. This transfer of weights allows the model to fine-tune and successfully achieve its intended goal.
Contrastive learning (CL), which has been extensively studied by various researchers [38, 19, 16], is a successful variant of SSL that has the ability to achieve the performance of SOTA algorithms even with a small number of annotated data. The CL methods aim to increase the similarity between representations of differently augmented input samples (referred to as positive pairs) while ensuring that representations of distinct samples (referred to as negatives) are dissimilar. The resulting neural network parameters are well-suited for initializing downstream tasks, where the learned representations from the pretext task are fine-tuned to adapt to the specific downstream task. This approach has been extended to handle dense pixel-wise image data, facilitating semantic segmentation even with limited available data [28, 21, 11].
Despite the promising results achieved by CL, we argue that some aspects are relatively unexplored in the existing literature. Addressing these gaps has the potential to improve the current SOTA methods in medical imaging. The first limitation arises when the unlabeled dataset used for training self-supervised contrastive learning contains biases or imbalances. In such scenarios, the learned representations may inherit these biases, as the learned representations are derived only from the intrinsic structure and patterns in the unlabeled data, resulting in biased predictions or limited generalization capabilities. Second, accurate semantic segmentation requires a model that can effectively capture long-range dependencies and maintain local consistency within images. Third, in the medical domain, it is important to consider that lesions often exhibit deformations in their shapes. Therefore, a learning algorithm employed for medical image analysis should possess the capability to capture and understand such deformations. Moreover, the algorithm should be invariant to common transformations encountered in medical images, such as shear, scale, and rotation. This ensures that the algorithm can effectively handle variations and changes in the appearance of lesions.
To address the encountered challenges outlined above, first, we propose I-LKA modules, which serve as a fundamental building block in our network design. These modules are specifically designed to capture contextual information comprehensively while preserving local descriptions. By striking a balance between these two aspects, our architecture facilitates precise semantic segmentation by effectively leveraging both global and local information. Recognizing the prevalence of deformations in medical image lesions, we incorporate deformable convolution as a crucial component in our approach. This enables our model to effectively capture and delineate deformations, leading to improved boundary definition for the identified objects. In order to make our model more robust to geometric transformations commonly encountered in medical scenarios, we integrate a self-supervised algorithm based on contrastive learning. By emphasizing the acquisition of invariance to affine transformations, our approach enhances the model's capacity to handle such transformations effectively. This allows the model to better generalize and adapt to different spatial configurations and orientations, thus improving overall performance in medical image segmentation tasks. To ensure spatial consistency and promote the grouping of spatially connected pixels with similar features, we model a spatial consistency loss term based on edge information. This loss term facilitates the learning process by encouraging the network to capture the relationships among neighboring pixels. Finally, our proposed method (Figure 1) effectively tackles dataset bias by performing the prediction process based on a single image only. This approach helps to mitigate the potential bias that may arise from imbalanced or skewed datasets.
## 2 Related Works
SSL has shown significant benefits for vision tasks by allowing the extraction of semantic and effective representations from unlabeled data in an unsupervised manner. This approach is rooted in the idea that significant performance improvements can be achieved by enhancing representation learning. A specific variant of SSL, known as contrastive learning, has gained substantial attention in recent years. Contrastive learning approaches strive to acquire resilient representations by optimizing similarity constraints,
enabling the discrimination between similar (positive) and dissimilar (negative) pairs within a given dataset. In this direction, Chaitanya et al. [10] introduced a contrastive learning framework specifically designed for segmenting volumetric medical images in a semi-supervised scenario. Their approach involved leveraging contrasting strategies that take advantage of the structural similarity present in volumetric medical images. Additionally, they incorporated a local variant of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation. You et al. [37] introduced SimCVD, a contrastive distillation framework that improves voxel-wise representation learning. This contrastive training strategy involves using two views of an input volume and predicting their signed distance maps, using only two independent dropout masks. Additionally, they performed structural distillation by distilling pair-wise similarities. In another work, Moriya et al. [35] suggested the utilization of k-means clustering for grouping pixels with similar characteristics in micro-CT images, facilitating the extraction of significant feature representations. Nonetheless, self-supervised clustering methods encounter constraints such as the need for manual cluster size selection and difficulties posed by complex regions characterized by fuzzy boundaries, diverse shapes, artifacts, and noise. Despite their efforts to learn task-specific features without relying on annotations, these methods often demonstrate a bias towards the training data, which can lead to reduced performance when confronted with new samples, especially in the presence of domain shifts. The absence of labeled data for fine-tuning or retraining the model on the target domain hampers its ability to adapt and generalize effectively. Hence, the performance of these annotation-free methods may suffer when faced with variations in data distribution, making them less robust in scenarios characterized by domain shifts. Therefore, it is crucial to address this limitation and explore techniques that can enhance the adaptability and generalization capabilities of the method.
To address the concern raised by SSL approaches, recent work by Ahn et al. [1] introduces SGSCN, which is specifically designed for segmenting medical images on a single image. This method utilizes different loss functions to facilitate the grouping of spatially connected image pixels with similar feature representations. During the process of iterative learning, the network simultaneously learns feature representations and clustering assignments for each pixel from a single image. Moreover, a context-aware consistency loss is introduced to enhance the delineation of image regions by enforcing spatial proximity between pixels belonging to the same cluster and its center. More recently, Karimi et al. [25] proposed a novel dual-branch Transformer network. This network aims to capture global contextual dependencies and local information at different scales. Their self-supervised learning approach takes into account the semantic dependency between scales, generating a supervisory signal to ensure inter-scale consistency and enforcing a spatial stability loss within each scale to facilitate content clustering. Additionally, they introduce a cross-entropy loss function applied to the clustering score map, effectively modeling cluster distributions and improving the decision boundary between clusters. Through iterative training, the algorithm progressively learns to assign each pixel to a cluster that is semantically related to it. Building upon this advancement, our method combines a contrastive learning schema with SSL losses to perform accurate and robust semantic segmentation on a single image.
Figure 1: A general overview of the S\({}^{3}\)-Net framework. The I-LKA module captures the intricate relationships between local and global features. Subsequently, it leverages a self-supervised learning strategy to enforce feature consistency throughout the network’s predictions.
## 3 Proposed Method
The framework of our proposed method is depicted in Figure 1. The \(\mathbf{S}^{3}\)-\(\mathbf{Net}\) integrates local and long-range dependencies to perform the feature embedding process. By incorporating these dependencies, we aim to capture comprehensive contextual information while retaining the local details inherent in the input data. To achieve effective content clustering without the need for manual annotations, our approach incorporates auxiliary modules and carefully designed data-driven loss functions. These components synergistically facilitate the learning process and promote the formation of meaningful clusters within the embedded feature space. By leveraging the inherent structure and relationships present in the data, our approach empowers the model to drive the segmentation task with robustness and accuracy. In the subsequent subsections, we will provide detailed explanations of each module integrated into our approach.
### Encoder Network
Our encoder architecture comprises three blocks that collectively encode the input image into the latent space. The first block adopts a sequential structure comprising a \(3\times 3\) convolutional layer, followed by a batch normalization layer. This configuration facilitates the embedding of the input image into a high-dimensional space. In the subsequent stacked \(N\) blocks, we employ the I-LKA module to capture both local and global dependencies. To integrate localized descriptions with the I-LKA module's output, a skip connection path is included, followed by another \(1\times 1\) convolutional layer. This setup ensures the preservation of fine-grained spatial information at the pixel level through the skip connection, while simultaneously guiding the network to capture global dependencies using the I-LKA module. In the last block, we incorporate a deformable convolution layer, specifically designed to effectively model deformations in the lesion boundary, which is a common occurrence in medical images. This additional layer enhances the network's capability to accurately capture and represent deformations, contributing to the overall performance.
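To make this block structure concrete, the following is a minimal PyTorch sketch of the encoder skeleton described above. The channel width, the number of blocks \(N\), the cluster count, and the use of `torchvision.ops.DeformConv2d` with a learned offset branch are assumptions on our part (the paper does not list these hyperparameters); `ILKA` refers to the module sketched in the next subsection.

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    """Deformable 3x3 convolution with a small conv predicting the offsets."""
    def __init__(self, dim):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position -> 2 * 3 * 3 channels
        self.offset = nn.Conv2d(dim, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):
        return self.deform(x, self.offset(x))

class Encoder(nn.Module):
    """Stem conv -> N x (I-LKA + skip + 1x1 conv) -> deformable conv -> head."""
    def __init__(self, in_ch=3, dim=64, n_blocks=4, n_clusters=16):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(dim),
        )
        self.blocks = nn.ModuleList(
            nn.ModuleDict({"ilka": ILKA(dim),           # sketched below
                           "proj": nn.Conv2d(dim, dim, 1)})
            for _ in range(n_blocks)
        )
        self.deform = DeformBlock(dim)
        self.head = nn.Conv2d(dim, n_clusters, 1)        # soft prediction map S

    def forward(self, x):
        x = self.stem(x)
        for blk in self.blocks:
            x = blk["proj"](blk["ilka"](x) + x)          # skip + 1x1 conv
        return self.head(self.deform(x))
```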
#### 3.1.1 Inception Large Kernel Attention (I-LKA)
The attention mechanism, the process of identifying the most informative features or important regions, is a critical step toward learning effective feature representation. Two popular attention mechanisms are the self-attention mechanism and the large kernel convolution mechanism, each with its own advantages and drawbacks [7]. While the self-attention mechanism excels at capturing long-range dependencies, it lacks adaptability to different channels, exhibits high computational complexity for high-resolution images, and disregards the 2D structure of images. On the other hand, large kernel convolution is effective at establishing relevance and generating attention maps, but it introduces computational overhead and increases parameter count.
To overcome these limitations and leverage the strengths of both self-attention and large kernel convolutions, we propose an enhanced approach called the large kernel attention (LKA) with inception design. In our study, we enhance the LKA module [18] by integrating inception strategies. The rationale behind this enhancement is to efficiently capture and integrate information at various spatial resolutions, which is particularly crucial for dense prediction tasks. Unlike the original LKA, which employs fixed-sized filters and thus struggles to fully capture information at different scales within an image, our I-LKA module employs parallel filters of varying sizes to capture both fine-grained details and high-level semantic information concurrently. The LKA module decomposes a \(C\times C\) convolution into three components: a \(\lceil\frac{C}{d}\rceil\times\lceil\frac{C}{d}\rceil\) depth-wise dilation convolution (\(DW\)-\(D\)-\(Conv\)) (representing spatial long-range convolution), a \((2d-1)\times(2d-1)\) depth-wise convolution (\(DW\)-\(Conv\)) (representing spatial local convolution), and a \(1\times 1\) convolution (representing channel convolution). This decomposition enables the extraction of long-range relationships within the feature space while maintaining low computational complexity and parameter count when generating the attention map. Our I-LKA is defined as:
\[\text{Inc(x)}=\{(\mathrm{DW}\text{-}\mathrm{Conv}(\mathrm{F(x)}))_{r}|r\in \mathbb{N}\} \tag{1}\]
\[\text{Attention }=\mathrm{Conv}_{1\times 1}(\mathrm{DW}\text{-}\mathrm{D}\text{-}\mathrm{Conv}(\mathrm{Inc}(x))) \tag{2}\]
\[\text{Output }=\text{ Attention }\otimes\mathrm{F(x)} \tag{3}\]
where \(Inc(x)\) and \(F(x)\in\mathbb{R}^{C\times H\times W}\) show the inception features and convolutional operation, respectively, while \(Attention\in\mathbb{R}^{C\times H\times W}\) represents the attention map. The symbol \(\otimes\) denotes the element-wise product, with the value of the attention map indicating the importance of each feature. Notably, unlike conventional attention methods, the I-LKA approach does not require additional normalization functions such as Sigmoid or SoftMax.
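A possible PyTorch realization of Eqs. (1)-(3) is sketched below. The inception branch sizes \((3,5,7)\), the \(1\times 1\) fusion of the concatenated branches, and the \(7\times 7\) depth-wise dilated convolution with \(d=3\) are our choices for illustration; the paper fixes the decomposition but not these exact sizes.

```python
import torch
import torch.nn as nn

class ILKA(nn.Module):
    """Inception Large Kernel Attention (Eqs. (1)-(3))."""
    def __init__(self, dim, kernels=(3, 5, 7), dilation=3):
        super().__init__()
        self.f = nn.Conv2d(dim, dim, 1)  # F(x)
        # Eq. (1): parallel depth-wise convolutions of varying kernel size
        self.inception = nn.ModuleList(
            nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim) for k in kernels
        )
        self.fuse = nn.Conv2d(dim * len(kernels), dim, 1)
        # Eq. (2): DW-D-Conv (spatial long-range) followed by 1x1 channel conv
        self.dwd = nn.Conv2d(dim, dim, 7, padding=3 * dilation,
                             dilation=dilation, groups=dim)
        self.pw = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        fx = self.f(x)
        inc = self.fuse(torch.cat([branch(fx) for branch in self.inception], 1))
        attn = self.pw(self.dwd(inc))   # no Sigmoid/SoftMax needed (see text)
        return attn * fx                # Eq. (3): element-wise product
```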
### Network Prediction
Given an input image \(X^{H\times W\times C}\), where \(H\times W\) represents the spatial dimensions and \(C\) denotes the number of
channels, our network initiates the segmentation process by utilizing the encoder module to generate a soft prediction map \(S^{H\times W\times K}\), where \(K\) represents the number of clusters. To obtain the final semantic segmentation map \(Y^{H\times W\times K}\), we apply the ArgMax function at each spatial location to activate the corresponding cluster index. During the training phase, we employ an iterative approach to minimize the cross-entropy loss, which measures the discrepancy between the soft prediction map and the segmentation map. By optimizing this loss function, our network learns to produce accurate and meaningful segmentation results:
\[\mathcal{L}_{\text{ce}}\left(\mathbf{S},\mathbf{Y}\right)=-\frac{1}{H\times W }\sum_{i=1}^{H\times W}\sum_{j=1}^{K}\mathbf{Y}_{i,j}\log\left(\mathbf{S}_{i,j }\right). \tag{4}\]
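In code, the target \(\mathbf{Y}\) is simply the argmax of the network's own soft map, so Eq. (4) reduces to a cross-entropy against self-generated labels. A minimal sketch (variable names are ours):

```python
import torch.nn.functional as F

def clustering_loss(logits):
    """Eq. (4): cross-entropy between the soft map S (logits of shape
    (B, K, H, W)) and its own argmax Y, averaged over all pixels."""
    pseudo = logits.argmax(dim=1)           # Y: hard cluster assignment
    return F.cross_entropy(logits, pseudo)  # argmax yields a gradient-free target
```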
While the cross-entropy loss employed in our approach effectively captures the distribution of clusters by promoting the grouping of similar pixels and increasing the separation between different clusters, it falls short in modeling the spatial relationships within local regions. Consequently, it exhibits limitations in accurately merging neighboring clusters, leading to sub-optimal performance. To address this issue, we propose the integration of a spatial consistency loss, which serves as an additional regularization term.
### Spatial Consistency Loss
In addition to the cross-entropy loss, which primarily focuses on capturing the distributional differences among clusters, we introduce the spatial consistency loss to address the limitation of disregarding spatial arrangements. The spatial consistency loss takes into account the local discrepancies in the image by leveraging edge information. By calculating the edges in the \(X\), \(Y\), and \(XY\) directions using the Sobel operator, we can model the spatial relationships and boundaries between regions. By minimizing the pairwise differences based on the edge information, our spatial consistency loss promotes spatial coherence and encourages neighboring pixels with similar visual characteristics to be grouped. This enables our method to not only capture the distributional information but also consider the spatial arrangement of pixels, resulting in more accurate and visually coherent segmentation. The spatial loss function, denoted as \(\mathcal{L}_{\text{S}}\), is formulated as follows:
\[\begin{split}\mathcal{L}_{\text{S}}=\sum_{i,j}(&| (\mathbf{X}_{i,j}-\mathbf{Y}_{i,j})-\mathbf{Z}_{i,j}|+\\ &|(\mathbf{X}_{i,j}-\mathbf{X}\mathbf{Y}_{i,j})-\mathbf{Z}_{i,j}| +\\ &|(\mathbf{Y}_{i,j}-\mathbf{X}\mathbf{Y}_{i,j})-\mathbf{Z}_{i,j}| ),\end{split} \tag{5}\]
where \(\mathbf{X}_{i,j}\), \(\mathbf{Y}_{i,j}\), and \(\mathbf{X}\mathbf{Y}_{i,j}\) represent the edge information at pixel location \((i,j)\) in the \(X\), \(Y\), and \(XY\) directions, respectively. \(\mathbf{Z}_{i,j}\) represents a zero image with the same dimensions as the edge images. The \(L_{1}\) distance is computed between the pairwise differences of edges in different directions and the zero image, and the absolute differences are summed over all pixels in the image. The goal is to minimize the spatial loss, which encourages the alignment of edges and promotes spatial consistency between neighboring pixels. This helps enhance the accuracy and visual coherence of the segmentation results. The overall process of the spatial consistency loss is depicted in Figure 2.
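A sketch of Eq. (5) using the standard Sobel kernels for the \(X\) and \(Y\) directions; the exact diagonal (\(XY\)) kernel is not printed in the paper, so the one below is an assumption:

```python
import torch
import torch.nn.functional as F

SOBEL = {
    "x":  torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]),
    "y":  torch.tensor([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]]),
    "xy": torch.tensor([[0., 1., 2.], [-1., 0., 1.], [-2., -1., 0.]]),  # assumed
}

def _edge(img, kernel):
    """Depth-wise 3x3 edge filter applied to every channel of img."""
    c = img.shape[1]
    k = kernel.to(img).view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    return F.conv2d(img, k, padding=1, groups=c)

def spatial_loss(pred):
    """Eq. (5): pairwise L1 differences between the X, Y and XY edge maps
    (subtracting the zero image Z is a no-op and is kept implicit)."""
    ex, ey, exy = (_edge(pred, SOBEL[d]) for d in ("x", "y", "xy"))
    return ((ex - ey).abs().sum()
            + (ex - exy).abs().sum()
            + (ey - exy).abs().sum())
```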
### Surrogate Task
To further promote the network's robustness against affine transformations, we introduce an additional segmentation head and leverage the concept of consistency in the feature space. By incorporating affine transformations in both the feature space and the ground truth masks (we use the prediction of the main path as a pseudo label), we aim to establish consistency between the transformed features and the corresponding transformed masks. This approach enables the network to learn positive pairs in the form of contrastive learning, facilitating the generation of a feature space that is resilient to affine transformations. The motivation behind incorporating consistency over affine transformations lies in the inherent challenges posed by geometric distortions commonly encountered in medical imaging. Affine transformations, such as translation, rotation, scaling, and shearing, can significantly alter the spatial arrangement and appearance of anatomical structures in medical images. Consequently, accurate segmentation in the presence of such transformations becomes a critical requirement for reliable medical image analysis. To this end, we define the affine loss as:
\[\mathcal{L}_{\text{AT}}\left(\mathbf{Y}^{{}^{\prime}},\mathbf{Y^{a}}\right)=- \frac{1}{H\times W}\sum_{i=1}^{H\times W}\sum_{j=1}^{K}\mathbf{Y^{a}}_{i,j}\log \left(\mathbf{Y}^{{}^{\prime}}_{i,j}\right), \tag{6}\]
where \(\mathbf{Y^{a}}=\mathbf{A}\cdot\mathbf{Y}+\mathbf{t}\) denotes the affine transformation (with affine matrix \(\mathbf{A}\) and translation parameter \(\mathbf{t}\)) applied to the network prediction map \(\mathbf{Y}\), and \(\mathbf{Y}^{{}^{\prime}}\) denotes the prediction map of the second path. This consistency over transformation loss encourages the network to minimize the discrepancy between the predicted masks generated from the transformed features and the transformed ground truth masks. This drives the network to become more robust and capable of accurately modeling and segmenting anatomical structures despite variations caused by transformations.
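A sketch of Eq. (6): the main-path prediction is warped by the affine map to produce the pseudo label \(\mathbf{Y^{a}}\), which then supervises the second head \(\mathbf{Y}^{\prime}\). The use of `affine_grid`/`grid_sample` and the variable names are our implementation choices:

```python
import torch
import torch.nn.functional as F

def affine_loss(main_logits, second_logits, theta):
    """Eq. (6). theta: (B, 2, 3) affine matrices packing [A | t]."""
    grid = F.affine_grid(theta, list(main_logits.shape), align_corners=False)
    warped = F.grid_sample(main_logits, grid, align_corners=False)
    pseudo = warped.argmax(dim=1).detach()     # Y^a, used as a pseudo label
    return F.cross_entropy(second_logits, pseudo)
```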
Figure 2: Illustration of the spatial loss calculation process. The spatial loss, denoted by \(\mathcal{L}_{\text{S}}\), is computed by subtracting pixel values in \(X\), \(Y\), and \(XY\) directions, and then taking the absolute difference between the resultant image and the zero image \(Z\). The summation of these absolute differences is the output of \(\mathcal{L}_{\text{S}}\) loss.
### Joint Objective
The final loss function employed in our training process encompasses three distinct loss terms:
\[\mathcal{L}_{\text{joint}}=\lambda_{1}\mathcal{L}_{ce}+\lambda_{2}\mathcal{L}_{AT }+\lambda_{3}\mathcal{L}_{S}, \tag{7}\]
where the first term, denoted as \(\mathcal{L}_{\text{ce}}\), represents the cross-entropy loss. This term measures the discrepancy between the predicted scores generated by the network and the corresponding maximum index of the ground truth labels. Its purpose is twofold: to ensure prediction confidence by optimizing the agreement between the network's output and the true labels, and to enable the network to learn the distribution characteristics of each cluster. The second term in our loss function, denoted as \(\mathcal{L}_{\text{AT}}\), is designed to enhance the network's invariance against affine transformations. In our self-supervised strategy, we integrate a contrastive learning approach that emphasizes the acquisition of invariance to affine transformations. The final term, denoted as \(\mathcal{L}_{\text{S}}\), is included to promote spatial consistency within each image region. This term aims to reduce local variations and facilitate the smooth merging of neighboring clusters. To control the relative importance of each loss term, we introduce weighting factors \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\). These factors allow us to balance the influence of each term.
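Assembled from the sketches above, Eq. (7) becomes a weighted sum; the default weights below are those reported for the skin dataset in the ablation study, and applying the spatial term to the softmax probabilities is our choice:

```python
def joint_loss(main_logits, second_logits, theta,
               lam1=1.2, lam2=0.3, lam3=0.3):
    """Eq. (7): weighted sum of the three loss terms."""
    probs = main_logits.softmax(dim=1)  # soft prediction map S
    return (lam1 * clustering_loss(main_logits)
            + lam2 * affine_loss(main_logits, second_logits, theta)
            + lam3 * spatial_loss(probs))
```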
## 4 Experiments
**Skin Lesion Segmentation:** For the first task, we focused on segmenting skin lesion regions in dermoscopic images. To evaluate our method, we utilized the PH\({}^{2}\) dataset [33], which consists of 200 RGB images of melanocytic lesions. This dataset offers a diverse range of lesion types, presenting a challenging real-world problem for segmentation. Similar to [1], we utilized all 200 samples from the dataset to assess the performance of our method.
**Lung Segmentation:** In the second task, we addressed lung segmentation in CT images. To conduct this evaluation, we employed the publicly available lung analysis dataset provided by Kaggle, as described in [6]. This dataset includes both 2D and 3D CT images. Following the approach outlined in [6], we prepared the dataset for evaluation. Specifically, we follow [25] and extract 2D slices from the 3D images, and select the first 700 samples for our evaluation. As the organ tissue is usually separable based on the pixel values in this experiment, we have also included the pixel values alongside the score map to predict the lung organ.
### Experimental Setup
**Training**: To learn the trainable parameters, we employ SGD optimization, minimizing the overall loss function iteratively for a maximum of 50 iterations. The SGD optimization is configured with a learning rate of 0.36 and a momentum of 0.9. The experiments are performed using the PyTorch library on a single RTX 3090 GPU.
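Putting everything together, a single-image fitting loop under these settings might look as follows. Warping the input image (rather than intermediate features) and the helper `random_affine_theta` are simplifications/assumptions on our part:

```python
import torch
import torch.nn.functional as F

def fit_single_image(model, image, iters=50):
    """Iterative self-supervised fitting on one image (Sec. 4.1 settings)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.36, momentum=0.9)
    for _ in range(iters):
        theta = random_affine_theta(image.shape[0])   # hypothetical helper
        grid = F.affine_grid(theta, list(image.shape), align_corners=False)
        warped = F.grid_sample(image, grid, align_corners=False)
        loss = joint_loss(model(image), model(warped), theta)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model(image).argmax(dim=1)  # final segmentation map
```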
**Evaluation Protocol**: We employ the Dice (DSC) score, XOR metric, and Hammoude distance (HM) as evaluation metrics. These metrics allow us to compare our method against both the unsupervised \(k\)-means clustering method and recent self-supervised approaches, namely DeepCluster [8], IIC [22], the spatial guided self-supervised strategy (SGSCN) [1], and MS-Former [25]. In accordance with the methodology presented in [1], we only consider the cluster that exhibits the highest overlap with the ground truth (GT) map as the target class prediction for evaluating our method. In our evaluation, the DSC score serves as an indicator of the agreement between the predicted target region and the GT map. Higher DSC scores reflect improved performance. Conversely, the HM and XOR metrics measure the discrepancy between the predicted target and the GT map. Therefore, lower HM and XOR values correspond to superior performance.
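For concreteness, the cluster-selection step of this protocol can be written as follows (a sketch; the helper name is ours): the DSC is computed for every predicted cluster and the best-overlapping one is taken as the target class.

```python
import numpy as np

def best_overlap_dice(pred, gt):
    """pred: (H, W) integer cluster map; gt: (H, W) binary ground-truth mask."""
    best = 0.0
    for k in np.unique(pred):
        mask = pred == k
        inter = np.logical_and(mask, gt).sum()
        dice = 2.0 * inter / (mask.sum() + gt.sum() + 1e-8)
        best = max(best, dice)
    return best
```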
### Evaluation Results
**Skin Lesion Segmentation** In the skin lesion segmentation task (refer to Table 1), our method outperforms the SOTA approaches across all evaluation metrics, demonstrating the effectiveness of our self-supervised content clustering strategy. Notably, our method exhibits superior performance compared to SGSCN and MS-Former by modeling spatial consistency at both the pixel and region levels. This modeling of spatial dependency provides a stronger foundation for accurate segmentation. Furthermore, our approach incorporates consistency over transformations, allowing the network to learn transformation-invariant feature representations, leading to smoother clustering space. This recalibration of feature representations contributes to improved segmentation accuracy. The visual comparison in Figure 3 confirms the superiority of our strategy, as it generates smoother segmentation maps with better delineation of lesion boundaries compared to DeepCluster and \(K\)-means methods. Additionally, our method successfully avoids under-segmentation issues encountered by MS-Former and SGSCN, where edges around the lesion and inside the lesion class are mistakenly treated as a separate class.
**Lung Organ Segmentation** The quantitative results for lung organ segmentation (refer to Table 1) further highlight the superiority of our self-supervised method over SOTA approaches. Our approach demonstrates exceptional performance in addressing the specific challenges encountered when working with CT images, as evidenced by its outstanding results across various evaluation metrics. CT images are known for their inherent noise and the presence of spiky ground truth labels, which can pose significant obstacles to traditional self-supervised methods. However, our self-supervised approach excels in overcoming these challenges and achieves highly accurate segmentation results. Notably, the utilization of the \(k\)-means algorithm proves particularly effective in lung segmentation tasks, benefiting from the comparatively simpler shapes and lower variations observed in localized areas of the lung dataset. The visual segmentation outputs showcased in Figure 3 further validate the efficacy of our approach, as they exhibit noticeably smoother
contour lines compared to alternative methods, affirming the superiority of our proposed methodology. This improvement indicates that the integration of the I-LKA module facilitates the network's ability to accurately perceive the actual boundary of the target.
### Ablation Study
In our proposed method, we integrated two pivotal modules, namely affine transformation invariance and spatial consistency, to improve the feature representation for pixel-level image clustering. This section focuses on conducting an ablation study to explore the process of hyperparameter selection for the loss functions. Furthermore, we examine the influence of these modules on the model's generalization performance.
**Hyper-parameter Tuning:** The hyperparameters for our method were carefully selected and fine-tuned based on empirical evaluations using a small subset of skin lesion segmentation images (10 samples) from the ISIC 2017 dataset [14]. We employed a grid search approach within a limited range (0 - 3) to identify the optimal values for \(\lambda_{1}=1.2\), \(\lambda_{2}=0.3\), and \(\lambda_{3}=0.3\). These obtained hyperparameters were used for both datasets.
To comprehensively evaluate the impact of hyperparameters on the new dataset, we conducted a series of additional experiments. The primary objective was to determine the optimal hyperparameters specifically tailored to the lung segmentation dataset, utilizing a subset of ten samples for this purpose. Through an iterative process, we identified the values of \(\lambda_{1}=1\), \(\lambda_{2}=0.5\), and \(\lambda_{3}=0.6\) as the most effective configuration. Subsequently, we assessed the performance of the model using these updated hyperparameters and observed a slight improvement compared to the original configuration, with a 0.5% increase in the DSC.
**Impact of Affine Transformation:** Our network architecture incorporates a second branch dedicated to modeling robustness against affine transformations and providing a supervisory signal to learn invariant feature representations. To evaluate the specific contribution of this module, we conducted an experiment excluding the affine consistency loss. The results are summarized in Table 2. The omission of the affine consistency loss led to a 2.1% decrease in the DSC score compared to our main strategy. From a qualitative standpoint, as shown in Figure 4, it can be observed that the absence of this loss function resulted in challenges related to accurate boundary separation. Additionally, the absence weakened the condition of multi-scale feature agreement within the network, leading to the incorrect merging of small clusters with neighboring clusters.
**Impact of Spatial Consistency:** Next, we examined the effect of spatial consistency loss on the clustering process. Quantitative results are presented in Table 2, where it is evident that our model without the spatial consistency loss performed poorly across various metrics. This observation underscores the significance of spatial consistency for segmentation purposes. Notably, our spatial consistency approach incorporates edge information in both vertical and horizontal directions to effectively model local consistency. Moreover, visual evidence presented in Figure 4 shows that the absence of spatial consistency resulted in non-consistent cluster predictions and hindered cluster merging. This effect becomes more pronounced when the algorithm deals with complex surfaces.
**Impact of Inception and Deformable convolution:** In order to assess the impact of the inception module in our LKA and the deformable convolution in our architecture, we
\begin{table}
\begin{tabular}{c||c c c||c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{3}{c||}{**PH\({}^{2}\)**} & \multicolumn{3}{c}{**Lung Segmentation**} \\ \cline{2-7} & **DSC \(\uparrow\)** & **HM \(\downarrow\)** & **XOR \(\downarrow\)** & **DSC** \(\uparrow\)** & **HM \(\downarrow\)** & **XOR \(\downarrow\)** \\ \hline \(k\)-means & 71.3 & 130.8 & 41.3 & 92.7 & 10.6 & 12.6 \\ DeepCluster [8] & 79.6 & 35.8 & 31.3 & 87.5 & 16.1 & 18.8 \\ IIC [22] & 81.2 & 35.3 & 29.8 & - & - & - \\ SGSCN[1] & 83.4 & 32.3 & 28.2 & 89.1 & 16.1 & 34.3 \\ MS-Former [25] & 86.0 & 23.1 & 25.9 & 94.6 & **8.1** & 14.8 \\ \hline
**Our Method** & **88.0** & **20.4** & **22.0** & **94.7** & 8.8 & **13.1** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The performance of the proposed method is compared to the SOTA approaches on the PH\({}^{2}\) and Lung datasets.
\begin{table}
\begin{tabular}{c c c|c c c} \hline \hline \(\mathcal{L}_{\text{ee}}\) & \(\mathcal{L}_{\text{AT}}\) & \(\mathcal{L}_{\text{S}}\) & **DSC \(\uparrow\)** & **HM \(\downarrow\)** & **XOR \(\downarrow\)** \\ \hline \(\checkmark\) & ✗ & ✗ & 86.1 & 22.8 & 25.6 \\ \(\checkmark\) & ✓ & ✗ & 86.4 & 22.2 & 24.6 \\ \(\checkmark\) & ✗ & ✓ & 85.9 & 22.7 & 25.2 \\ \(\checkmark\) & ✓ & ✓ & **88.0** & **20.4** & **22.0** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Impact of individual loss functions on model performance. The experiments were conducted using the PH\({}^{2}\) dataset.
Figure 3: Visual comparison of different methods on the PH\({}^{2}\) skin lesion segmentation and Lung datasets.
conducted an experimental analysis by excluding each of these modules individually and replacing them with a simple convolution block. Our results on the PH\({}^{2}\) dataset showed a slight decrease of 0.6% in the DSC score when the inception module was removed. Similarly, the absence of the deformable convolution led to a 0.8% DSC reduction in performance. These results showcase the importance of both the inception module and the deformable convolution.
**Limitations:** Our proposed method has consistently outperformed SOTA approaches on both datasets, as evidenced by the experimental results. To gain deeper insights into the efficacy of our self-supervised segmentation strategy and to identify potential challenges, we have conducted additional visualizations. Figure 5 illustrates cases where our proposed method struggles to accurately predict regions of interest, particularly when there is a significant overlap between the object of interest and background regions. This limitation becomes more prominent when dealing with complex clustering scenarios, which poses challenges for the model to precisely locate the boundaries of the lesions. Additionally, the presence of noisy annotations in the ground truth masks further impedes the model's ability to generate accurate segmentation maps.
## 5 Conclusion
This paper introduces a novel SSL approach that combines the I-LKA module with deformable convolution to enable semantic segmentation directly from the image itself. Additionally, our network incorporates invariance to affine transformations and spatial consistency, providing a promising solution for pixel-wise image content clustering. Experimental results along with the ablation study demonstrate the remarkable performance of our method for skin lesion and organ segmentation tasks.
**Acknowledgments** This work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) - project number 455548460.
|
2309.09601 | On cyclicity in de Branges-Rovnyak spaces | We study the problem of characterizing the cyclic vectors in de
Branges-Rovnyak spaces. Based on a description of the invariant subspaces we
show that the difficulty lies entirely in understanding the subspace
$(aH^{2})^{\perp}$ and give a complete function theoretic description of the
cyclic vectors in the case $\dim (aH^{2})^{\perp} < \infty$. Incidentally, this
implies analogous results for certain generalized Dirichlet spaces
$\mathcal{D}(\mu)$. Most of our attention is directed to the infinite case
where we relate the cyclicity problem to describing the exposed points of
$H^{1}$ and provide several sufficient conditions. A necessary condition based
on the Aleksandrov-Clark measures of $b$ is also presented. | Alex Bergman | 2023-09-18T09:16:18Z | http://arxiv.org/abs/2309.09601v1 | # On cyclicity in de Branges-Rovnyak spaces
###### Abstract
We study the problem of characterizing the cyclic vectors in de Branges-Rovnyak spaces. Based on a description of the invariant subspaces we show that the difficulty lies entirely in understanding the subspace \((aH^{2})^{\perp}\) and give a complete function theoretic description of the cyclic vectors in the case \(\dim(aH^{2})^{\perp}<\infty\). Incidentally, this implies analogous results for certain generalized Dirichlet spaces \(\mathcal{D}(\mu)\). Most of our attention is directed to the infinite case where we relate the cyclicity problem to describing the exposed points of \(H^{1}\) and provide several sufficient conditions. A necessary condition based on the Aleksandrov-Clark measures of \(b\) is also presented.
## 1 Introduction
This article is concerned with cyclic vectors in de Branges-Rovnyak spaces \(\mathcal{H}(b)\). Let \(\mathbb{D}=\left\{z\in\mathbb{C}:\left|z\right|<1\right\}\) be the open unit disk in the complex plane and equip its boundary \(\mathbb{T}=\left\{\zeta\in\mathbb{C}:\left|\zeta\right|=1\right\}\) with the normalized Lebesgue measure, \(m\). For \(0<p<\infty\) the Hardy class \(H^{p}\) is the set of analytic functions on \(\mathbb{D}\) for which the norm (in the case \(1\leq p<\infty\))
\[\|f\|_{p}^{p}=\sup_{0<r<1}\int_{\mathbb{T}}\lvert f(r\zeta)\rvert^{p}dm(\zeta),\]
is finite. For \(p=\infty\) we let \(H^{\infty}\) be the class of bounded analytic functions on \(\mathbb{D}\). We can in the usual way identify \(H^{p}\) with a subspace of \(L^{p}=L^{p}(\mathbb{T})\) via non-tangential limits. In this setting \(H^{p}\) (with \(p\geq 1\)) consists of all \(L^{p}\) functions, whose Fourier spectrum is contained in the nonnegative integers. Denote by \(P_{+}\) the orthogonal projection from \(L^{2}\) onto \(H^{2}\). For a symbol \(U\in L^{\infty}\) we define the Toeplitz operator \(T_{U}f=P_{+}(Uf)\), \(f\in H^{2}\).
Let \(b\) be a nonconstant function in the unit ball of \(H^{\infty}\). The de Branges-Rovnyak space \(\mathcal{H}(b)\) is defined as the operator range of \((I-T_{b}T_{\overline{b}})^{1/2}\). Our main reference for
the basic theory of \({\cal H}(b)\) is [22], see also the recent two-part monograph [7, 8]. The space \({\cal H}(b)\) is contractively contained inside the usual Hardy space, \(H^{2}\). The theory of \({\cal H}(b)\) spaces splits into two cases depending on whether or not \(b\) is an extreme point of the unit ball of \(H^{\infty}\). De Branges-Rovnyak spaces have been studied intensively. In particular, the problem of smooth approximation has received attention in the case of extreme \(b\) in recent years, see for example [2, 3, 15, 16, 17]. In the non-extreme case polynomials form a dense subset and so the question of smooth approximation is trivial. However, non-extreme de Branges-Rovnyak spaces are forward shift invariant and hence the question of characterizing the cyclic vectors is meaningful in this case. For a function \(f\) in a Hilbert space of analytic functions \(H\) invariant under the forward shift operator, \(M_{z}f=zf\), the cyclic subspace generated by \(f\) is defined as the closure of the linear span of polynomial multiples of \(f\). It will be denoted \([f]\) and a function \(f\) is called cyclic (for \(H\)) if \([f]=H\). Before stating our goals we will need some definitions.
If \({\cal H}(b)\) is invariant under the forward shift operator defined by \(M_{z}f=zf\) it is known that \(b\) is a non-extreme point of the unit ball of \(H^{\infty}\), that is
\[\int_{\mathbb{T}}\log(1-|b(\zeta)|)dm(\zeta)>-\infty.\]
Thus the problem of classifying the cyclic vectors for \({\cal H}(b)\) is meaningful only in the non-extreme case.
For a non-extreme \(b\) we define the unique outer function \(a\) with \(a(0)>0\) satisfying \(|b|^{2}+|a|^{2}=1\) a.e. on \(\mathbb{T}\). The space \(aH^{2}=\{ah:h\in H^{2}\}\) is contractively contained inside \({\cal H}(b)\). The problem of classifying the cyclic vectors in de Branges-Rovnyak spaces was raised by Fricain in [4]. Also in [9] the cyclic vectors in the case \(b=(1+z)/2\) were determined. In Section 5 we generalize this considerably by giving a complete function theoretic characterization of the cyclic vectors in the case \(\dim(aH^{2})^{\perp}<\infty\) (the symbol \((aH^{2})^{\perp}\) denotes the orthogonal complement of \(aH^{2}\) in \({\cal H}(b)\)).
**Theorem 1**.: _Let \({\cal H}(b)\) be a non-extreme de Branges-Rovnyak space and suppose \(\dim(aH^{2})^{\perp}<\infty\). Denote by \(\overline{\lambda}_{1},\overline{\lambda}_{2},...,\overline{\lambda}_{s}\) the eigenvalues of \(M_{z}^{*}\) restricted to \((aH^{2})^{\perp}\). Then_
1. _each_ \(\overline{\lambda}_{j}\) _lies on_ \(\mathbb{T}\)_,_
2. _every function_ \(h\in{\cal H}(b)\) _extends non-tangentially to_ \(\lambda_{j}\)_,_
3. \(f\in{\cal H}(b)\) _is cyclic if and only if_ \(f\) _is outer and_ \(f(\lambda_{j})\neq 0\)_, for all_ \(j=1,2,...,s\)_._
_Remark_.: Sarason has shown that \(\dim(aH^{2})^{\perp}<\infty\) if and only if the function \(\phi=a/(1-b)\) is of the form \(\phi=Fp\), where \(F\in H^{2}\) is an outer function, such that \(F^{2}\) is an exposed point of \(H^{1}\) (exposed points will be defined shortly) and \(p\) is a polynomial with all of its zeros on the unit circle, see (X-17) in [22].
Thus Theorem 1 contains the case of rational \(b\) often considered in the literature as a special case.
Our main efforts will go towards the case of infinite codimension, \(\dim(aH^{2})^{\perp}=\infty\). This case seems drastically more difficult than the finite case. In light of this, we settle for providing a necessary and several sufficient conditions.
Using a recent description of the invariant subspaces of \(\mathcal{H}(b)\) in [3] we prove the following basic condition for cyclicity valid for any non-extreme de Branges-Rovnyak space.
For a Borel set \(E\subset\mathbb{T}\) denote by \(\mathbb{1}_{E}\) the function that is \(1\) on \(E\) and \(0\) on \(\mathbb{T}\setminus E\).
**Theorem 2**.: _Let \(\mathcal{H}(b)\) be a non-extreme de Branges-Rovnyak space and \(f\in\mathcal{H}(b)\) an outer function. Suppose there exists Borel sets \(E,F\subset\mathbb{T}\), such that_
1. \(E\cup F=\mathbb{T}\)_,_
2. \(a^{-1}\mathbb{1}_{E}\in L^{2}\) _and_ \(f^{-1}\mathbb{1}_{F}\in L^{\infty}\)_._
_Then \(f\) is cyclic._
More precise results require a detailed analysis of the space \((aH^{2})^{\perp}\). The case \(\dim(aH^{2})^{\perp}=\infty\) has appeared indirectly in a conjecture of Sarason on the exposed points of the unit ball of \(H^{1}\) (see Chapter X of [22]) and in the negative answer to that conjecture in [13, 14, 18]. As we shall see there is an intimate connection between the cyclic vectors in \(\mathcal{H}(b)\) and the exposed points of the unit ball of \(H^{1}\).
To motivate what is to come we briefly describe exposed points of the unit ball of \(H^{1}\). For a convex set \(K\) in a linear space \(X\) a point \(x\in K\) is called an exposed point of \(K\) if there exists a real linear functional \(\ell\), such that \(\ell(x)>\ell(k)\), for all \(k\in K\setminus\{x\}\). Exposed points of the unit ball of \(H^{1}\) will be called exposed points if no confusion can arise. Exposed points are extreme points and so if \(f\in H^{1}\) is an exposed point of the unit ball it is an outer function [6]. In \(H^{1}\) there is an alternative characterization in terms of the argument of the boundary function. An outer function \(f\in H^{1}\) of unit norm is an exposed point if and only if the only functions in \(H^{1}\) with the same argument a.e. on \(\mathbb{T}\) are positive multiples of \(f\), here the argument function is the principal branch \(\arg(z)\in[0,2\pi)\). Also, an outer function \(f\in H^{1}\) of unit norm is exposed if and only if the Toeplitz operator \(T_{\overline{f}/f}\) has trivial kernel, see [22]. There is an extensive literature on exposed points in \(H^{1}\), see for example [13, 14, 19, 20, 21]. Despite this, there is no characterization of exposed points based on the modulus of the function on the unit circle.
Exposed points are connected to cyclicity in \(\mathcal{H}(b)\) in the following way: the function \(\phi=a/(1-b)\) is an outer function and, after suitable normalization of \(b\), it is of unit norm. For such functions we shall consider a set \(\sigma(\phi)\subset\mathbb{T}\) measuring in some
sense how far away \(\phi^{2}\) is from being an exposed point (For the rigorous definition see Definition 1 in Section 4). A heuristic principle is that an outer function \(f\in\mathcal{H}(b)\) is cyclic if it is "not too small" on \(\sigma(\phi)\). In Section 5 we prove the following concrete manifestation of this principle.
**Theorem 3**.: _Let \(\mathcal{H}(b)\) be a non-extreme de Branges-Rovnyak space, \(b(0)=0\), and set \(\phi=a/(1-b)\). Without loss of generality we may assume \(\phi\) is of unit norm. Let, in addition, \(f\in\mathcal{H}(b)\) be an outer function. Suppose that for each point \(\zeta\in\sigma(\phi)\) there exists an open arc \(\zeta\in I_{\zeta}\subset\mathbb{T}\) and number \(\eta_{\zeta}>0\) with \(|f|>\eta_{\zeta}\) a.e. on \(I_{\zeta}\). Then \(f\) is cyclic._
We shall also prove a sharper version of the above theorem which depends only on the behavior of \(f\) at each point in \(\sigma(\phi)\) (and not in a neighborhood). Before stating the result we will comment on our method. For an outer function \(\phi\) consider the kernel of the Toeplitz operator \(T_{\overline{\phi}/\phi}\) and let \(J_{\phi}=\phi^{-1}\ker(T_{\overline{\phi}/\phi})\). Functions in \(J_{\phi}\) possess the remarkable property of analytic pseudocontinuation, that is: \(J_{\phi}\) consists of functions \(f\) analytic in \(\mathbb{C}_{\infty}\setminus\mathbb{T}\), such that their non-tangential limits from inside and outside the unit disk coincide a.e. We shall realize the space \((aH^{2})^{\perp}\) as a space of normalized Cauchy transforms of functions in \(J_{\phi}\). Thus the difference in the case \(\dim(aH^{2})^{\perp}<\infty\) and the infinite case reflects the difference between describing finite and infinite dimensional Toeplitz kernels. We proceed to show that the cyclicity problem in \(\mathcal{H}(b)\) is intimately related to the question of analytic continuation in \(J_{\phi}\). For the next theorem denote by \(V\) the operator
\[Vh(z)=(1-b(z))\int_{\mathbb{T}}\frac{h(\zeta)|\phi(\zeta)|^{2}dm(\zeta)}{1-z \overline{\zeta}},\,h\in L^{2}(|\phi|^{2}dm).\]
The following sharpening of Theorem 3 is proved in Section 5.
**Theorem 4**.: _Let \(\mathcal{H}(b)\) be a non-extreme de Branges-Rovnyak space, \(b(0)=0\), and set \(\phi=a/(1-b)\). Without loss of generality we may assume \(\phi\) is of unit norm. Let, in addition, \(g\in H^{\infty}\) and write \(f=Vg\). Denote by \(\theta\) the inner factor of \(f\) and set \(F=f/\theta\). If \(\sigma(F)\cap\sigma(\phi)=\varnothing\), then \(F\) is cyclic._
Note that it is a part of the conclusion of Theorem 4 that \(F=f/\theta\in\mathcal{H}(b)\). Theorem 4 is more precise than Theorem 3 since it requires information only at each point of \(\sigma(\phi)\) and not in a neighborhood of every point, however, it applies to fewer functions since it requires \(f=Vg\), with \(g\in H^{\infty}\) instead of merely \(g\in H^{2}/\phi\).
We end with a section on examples and applications. In particular, we use our results to give necessary and sufficient conditions for the cyclicity of \(b\) and the kernel elements \(k_{\lambda}^{b}\). We also describe the cyclic vectors in certain generalized Dirichlet spaces. Finally, we consider the case \(b(z)=(1+\theta)/2\), where \(\theta\) is a non-constant inner function. These give examples for which \(\dim(aH^{2})^{\perp}=\infty\), but we can still give a necessary and sufficient function theoretic condition for cyclicity.
### Acknowledgements
The author expresses his deep gratitude to Alexandru Aleman for having shared the problems investigated here and for several helpful discussions.
## 2 Preliminaries
### The space \(\mathcal{H}(b)\)
To each bounded analytic function \(b:\mathbb{D}\rightarrow\mathbb{D}\) we associate the Hilbert space of analytic functions, \(\mathcal{H}(b)\), defined as the range \((I-T_{b}T_{b}^{*})^{1/2}H^{2}\) with induced inner product
\[\langle(I-T_{b}T_{b}^{*})^{1/2}f,(I-T_{b}T_{b}^{*})^{1/2}g\rangle_{\mathcal{H}( b)}=\langle f,g\rangle_{2},\]
for \(f,g\perp\ker((I-T_{b}T_{b}^{*})^{1/2})\). Equivalently \(\mathcal{H}(b)\) can be seen as the reproducing kernel Hilbert space associated with the reproducing kernel \(k_{\lambda}^{b}=(1-\overline{b(\lambda)}b(z))/(1-\overline{\lambda}z)\), \(\lambda\in\mathbb{D}\). As we shall ultimately be interested in cyclic vectors of the forward shift, \(M_{z}f=zf\), we shall confine our attention to those \(b\) satisfying \(z\mathcal{H}(b)\subset\mathcal{H}(b)\). It can be shown (see [22]) that this happens if and only if \(b\) is a non-extreme point of the unit ball of \(H^{\infty}\), by [6], this happens if and only if \(b\) satisfies
\[\int_{\mathbb{T}}\log(1-|b|)dm>-\infty.\]
We shall also, in an attempt to simplify formulas, assume \(b(0)=0\). Since \(b\) is non-extreme we can introduce the unique outer function \(a\) satisfying \(|a|^{2}+|b|^{2}=1\), a.e. on \(\mathbb{T}\) and \(a(0)>0\). Now for \(\alpha\in\mathbb{T}\) the function \((1+\overline{\alpha}b)/(1-\overline{\alpha}b)\) has nonnegative real part in the unit disk and hence can be represented via the Herglotz integral formula
\[\frac{1+\overline{\alpha}b}{1-\overline{\alpha}b}=\int_{\mathbb{T}}\frac{ \zeta+z}{\zeta-z}d\mu_{\alpha}(\zeta), \tag{1}\]
for some Borel probability measure \(\mu_{\alpha}\). The measures \(\mu_{\alpha}\) are the Aleksandrov-Clark measures of \(b\). Taking real parts and using properties of the Poisson kernel we see that \((1-|b|^{2})/|\alpha-b|^{2}\) is the Radon-Nikodym derivative of the absolutely continuous part of \(\mu_{\alpha}\) with respect to normalized Lebesgue measure. It follows that the measure \(\mu_{\alpha}\) has the following decomposition in terms of its absolutely continuous and singular parts
\[d\mu_{\alpha}=|\phi_{\alpha}|^{2}dm+d(\mu_{\alpha})_{s},\]
where \(\phi_{\alpha}=a/(1-\overline{\alpha}b)\in H^{2}\) is an outer function. Let \(P^{2}(\mu_{\alpha})\) denote the closure of the analytic polynomials, \(\mathrm{Span}(\left\{z^{n}:n\geq 0\right\})\), in \(L^{2}(\mu_{\alpha})\). Then \(P^{2}(\mu_{\alpha})\) decomposes
as
\[P^{2}(\mu_{\alpha})=\frac{H^{2}}{\phi_{\alpha}}\oplus L^{2}((\mu_{\alpha})_{s}), \tag{2}\]
where \(H^{2}/\phi_{\alpha}=\{f/\phi_{\alpha}:f\in H^{2}\}\). Since \(|b(\zeta)|<1\) for almost every \(\zeta\in\mathbb{T}\) an application of Fubini's Theorem
\[\int_{\mathbb{T}}(\mu_{\alpha})_{a}(\mathbb{T})dm(\alpha)=\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{1-|b(\zeta)|^{2}}{|\alpha-b(\zeta)|^{2}}dm(\zeta)dm(\alpha)\] \[=\int_{\mathbb{T}}\int_{\mathbb{T}}\frac{1-|b(\zeta)|^{2}}{|\alpha-b(\zeta)|^{2}}dm(\alpha)dm(\zeta)=1,\]
shows that for non-extreme \(b\) the measures \(\mu_{\alpha}\) are absolutely continuous for a.e. \(\alpha\in\mathbb{T}\). By replacing \(b\) by \(\bar{\alpha}b\) if necessary we may assume \(\mu=\mu_{1}\) is absolutely continuous of the form \(d\mu=|\phi|^{2}dm\), with \(\phi=a/(1-b)\). In particular we are free to assume that equation (2) reduces to \(P^{2}(\mu_{1})=H^{2}/\phi\). For a Borel measure \(\nu\) on \(\mathbb{T}\) we introduce the Cauchy transform
\[C_{\nu}(z)=\int_{\mathbb{T}}\frac{d\nu(\zeta)}{1-z\overline{\zeta}},\]
and the operator
\[C_{\nu}h(z)=\int_{\mathbb{T}}\frac{h(\zeta)d\nu(\zeta)}{1-z\overline{\zeta}},h\in L^{2}(\nu).\]
Associated to the finite Borel measure, \(d\mu_{\alpha}=|\phi_{\alpha}|^{2}dm+d(\mu_{\alpha})_{s}\), we consider the normalized Cauchy transform
\[V_{\alpha}h(z)=\frac{C_{\mu_{\alpha}}h(z)}{C_{\mu_{\alpha}}(z)}=(1-\overline{ \alpha}b(z))\int_{\mathbb{T}}\frac{hd\mu_{\alpha}(\zeta)}{1-z\overline{\zeta} },h\in L^{2}(\mu_{\alpha}). \tag{3}\]
The next result is standard and can be found in [22]. We sketch a proof for the convenience of the reader.
**Proposition 1**.: _Let \(\mathcal{H}(b)\) be a de Branges-Rovnyak space and \(\mu_{\alpha}\) an Aleksandrov-Clark measure of \(b\). Then the map \(V_{\alpha}\) from (3) is a unitary operator \(V_{\alpha}:P^{2}(\mu_{\alpha})\rightarrow\mathcal{H}(b)\). Conversely, suppose \(\phi\in H^{2}\) is an outer function of unit norm. Then there exists a unique non-extreme \(b:\mathbb{D}\rightarrow\mathbb{D}\) satisfying \(b(0)=0\) and \(V_{1}(H^{2}/\phi)=\mathcal{H}(b)\)._
Proof.: Let \(k_{\lambda}=1/(1-\overline{\lambda}z)\) be the Cauchy kernel. A computation based on equation
(1) gives
\[(1-\alpha\overline{b(\lambda)})V_{\alpha}k_{\lambda}=(1-\alpha\overline{b(\lambda)})(1-\overline{\alpha}b(z))\int_{\mathbb{T}}\frac{1}{1-\overline{\lambda}\zeta}\frac{1}{1-z\overline{\zeta}}d\mu_{\alpha}(\zeta)\] \[=\frac{(1-\alpha\overline{b(\lambda)})(1-\overline{\alpha}b(z))}{2(1-\overline{\lambda}z)}\int_{\mathbb{T}}\overline{\left(\frac{\zeta+\lambda}{\zeta-\lambda}\right)}+\frac{\zeta+z}{\zeta-z}d\mu_{\alpha}(\zeta)\] \[=\frac{(1-\alpha\overline{b(\lambda)})(1-\overline{\alpha}b(z))}{2(1-\overline{\lambda}z)}\left(\frac{1+\alpha\overline{b(\lambda)}}{1-\alpha\overline{b(\lambda)}}+\frac{1+\overline{\alpha}b(z)}{1-\overline{\alpha}b(z)}\right)=k_{\lambda}^{b}(z).\]
Also,
\[\|k_{\lambda}^{b}\|_{b}^{2}=k_{\lambda}^{b}(\lambda)=\frac{1-|b(\lambda)|^{2} }{1-|\lambda|^{2}},\]
and by equation (1)
\[\|(1-\alpha\overline{b(\lambda)})k_{\lambda}\|_{L^{2}(\mu_{\alpha})}^{2}= \frac{|1-\alpha\overline{b(\lambda)}|^{2}}{1-|\lambda|^{2}}\frac{1-|b(\lambda )|^{2}}{|1-\overline{\alpha}b(\lambda)|^{2}}=\frac{1-|b(\lambda)|^{2}}{1-| \lambda|^{2}}.\]
Since \(\left\{(1-\alpha\overline{b(\lambda)})k_{\lambda}\right\}_{\lambda\in\mathbb{ D}}\) is complete (by complete we mean it has dense linear span) in \(P^{2}(\mu_{\alpha})\) and \(\left\{k_{\lambda}^{b}\right\}_{\lambda\in\mathbb{D}}\) is complete in \(\mathcal{H}(b)\) the above computations and a simple limiting argument show that \(V_{\alpha}\) is unitary. For the converse, it is sufficient to define \(b\) via the identity
\[\frac{1+b(z)}{1-b(z)}=\int_{\mathbb{T}}\frac{\zeta+z}{\zeta-z}|\phi|^{2}dm( \zeta).\]
The desired properties follow easily.
_Remark_.: We remark that the first part of Proposition 1 does not require \(b\) to be non-extreme, nor does it require \(b(0)=0\). In particular if \(\theta\) is an inner function with Aleksandrov-Clark measure \(\sigma\) at \(\alpha\in\mathbb{T}\), we have \(V_{\alpha}L^{2}(\sigma)=K_{\theta}\), where \(K_{\theta}=H^{2}\cap\theta\overline{H_{0}^{2}}\) is the usual model space (here \(\overline{H_{0}^{2}}\) is the orthogonal complement of \(H^{2}\) in \(L^{2}\)).
Thus there is a one-to-one correspondence between outer functions of unit norm in \(H^{2}\) and non-extreme \(\mathcal{H}(b)\) spaces (with \(b(0)=0\) and \(\mu_{1}\) absolutely continuous). Thus in the sequel, we may speak of the \(\mathcal{H}(b)\) space generated by an outer function \(\phi\in H^{2}\) of unit norm.
We end this section by recalling a Theorem of Poltoratski that we will need in the continuation, see Theorem 2.7. in [19].
**Theorem 5**.: _Let \(\sigma\) be a finite positive Borel measure on \(\mathbb{T}\) and denote its singular part by \((\sigma)_{s}\). Let \(V\) be the operator_
\[Vh(z)=C_{\sigma}(z)^{-1}\int_{\mathbb{T}}\frac{h(\zeta)d\sigma(\zeta)}{1-z \overline{\zeta}}\text{, }h\in L^{2}(\sigma).\]
_Suppose \(h\in L^{2}(\sigma)\), then \(Vh\) converges non-tangentially to \(h\)\((\sigma)_{s}\)-a.e._
### The role of \(aH^{2}\)
For a Hilbert space \(H\) and a bounded linear operator \(T:H\to H\), an element \(x\in H\) is called cyclic for \(T\) if \(\left\{T^{n}x:n\geq 0\right\}\) has dense linear span. For a Hilbert space \(X\) of analytic functions invariant under multiplication by the independent variable we shall denote by \([f]\) the closure of the linear span of \(\left\{z^{n}f:n\geq 0\right\}\). The linear subspace \([f]\) is called the cyclic subspace generated by \(f\). In particular \(f\) is a cyclic vector for \(M_{z}f=zf\) if and only if \([f]=X\). A cyclic vector for \(M_{z}\) will simply be called cyclic.
Since polynomials are dense in every non-extreme \(\mathcal{H}(b)\) space it is necessary and sufficient for cyclicity that \(1\in[f]\). From the contractive inclusion \(\mathcal{H}(b)\subset H^{2}\) we see that for any polynomial \(p\) and \(f\in\mathcal{H}(b)\)
\[\|1-pf\|_{b}\geq\|1-pf\|_{2},\]
and so any cyclic vector in \(\mathcal{H}(b)\) must be cyclic for \(H^{2}\) and hence an outer function. Returning now to the unique outer function \(a\) satisfying \(|a|^{2}+|b|^{2}=1\) a.e. on \(\mathbb{T}\) and \(a(0)>0\) we may consider the space \(aH^{2}\) with norm \(\|af\|_{a}=\|f\|_{2}\). It follows from the operator inequality \(T_{a}T_{a}^{*}\leq I-T_{b}T_{b}^{*}\) that \(aH^{2}\) is contained contractively in \(\mathcal{H}(b)\). Thus \(\mathcal{H}(b)\) splits as
\[\mathcal{H}(b)=\operatorname{clos}(aH^{2})\oplus(aH^{2})^{\perp},\]
where \(\operatorname{clos}(aH^{2})\) denotes the closure of \(aH^{2}\) in \(\mathcal{H}(b)\) and \((aH^{2})^{\perp}\) denotes the orthogonal complement of \(aH^{2}\) in \(\mathcal{H}(b)\). As we shall see in a moment \(\operatorname{clos}(aH^{2})\) is fairly tame and presents little difficulty in terms of the cyclicity problem. Indeed in a sense it behaves very much like the usual space \(H^{2}\). To see this we shall need the following result, see (IV-I) in [22].
**Proposition 2**.: _Let \(b\) be a non-extreme point of the unit ball of \(H^{\infty}\). A function \(f\in H^{2}\) lies in \(\mathcal{H}(b)\) if and only if there exists \(f_{1}\in H^{2}\) satisfying_
\[T_{\overline{b}}f+T_{\overline{a}}f_{1}=0,\]
_and in this case \(\|f\|_{b}^{2}=\|f\|_{2}^{2}+\|f_{1}\|_{2}^{2}\). We denote by \(J:\mathcal{H}(b)\to H^{2}\oplus H^{2}\) the isometry \(Jf=(f,f_{1})\)._
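As a sanity check (our illustration, a routine computation), take again \(b=z/2\), so that \(a=\sqrt{3}/2\) is constant. Then \(T_{\overline{b}}f=(f-f(0))/(2z)\) and \(T_{\overline{a}}f_{1}=(\sqrt{3}/2)f_{1}\), so the equation above forces

\[f_{1}=-\frac{1}{\sqrt{3}}\frac{f-f(0)}{z},\qquad\|f\|_{b}^{2}=\|f\|_{2}^{2}+\frac{1}{3}\sum_{n\geq 1}|\hat{f}(n)|^{2},\]

so in this example \(\mathcal{H}(b)=H^{2}\) with an equivalent norm.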
In [3] it was shown that any closed invariant subspace \(\mathcal{M}\) of \(\mathcal{H}(b)\) is of the form
\[\mathcal{M}=\left\{g\in\mathcal{H}(b):\frac{g}{\psi},\frac{g}{\psi}\psi_{1}\in H ^{2}\right\}, \tag{4}\]
for some \(\psi\in\mathcal{H}(b)\) and \(J\psi=(\psi,\psi_{1})\). Since the right-hand side is invariant under multipliers of \(\mathcal{H}(b)\) we have \(U[f]\subset[f]\) for all multipliers \(U\) of \(\mathcal{H}(b)\).
The next result is probably well-known to experts; however, we could not find it in the literature. It concerns the so-called \((F)\) property of \(\mathcal{H}(b)\). The proof is short and relies on a classical theorem of S.A. Vinogradov on division of Cauchy integrals by inner functions [23].
**Lemma 1**.: _Let \(f\in\mathcal{H}(b)\) and denote the inner factor of \(f\) by \(\theta\). Then \(f/\theta\in\mathcal{H}(b)\)._
Proof.: We may assume the Aleksandrov-Clark measure \(\mu_{1}\) is absolutely continuous. Let \(\phi=a/(1-b)\) and let \(g\in H^{2}/\phi\) be the unique function such that \(f=Vg\), where \(V=V_{1}\) is the unitary map from Proposition 1. Set \(\mu=|\phi|^{2}m\), and recall that for a Borel measure \(\nu\) we let \(C_{\nu}\) denote the Cauchy transform of \(\nu\) and, for \(h\in L^{1}(\nu)\),
\[C_{\nu}h(z)=\int_{\mathbb{T}}\frac{h(\zeta)d\nu(\zeta)}{1-z\overline{\zeta}}.\]
From the equality
\[f=Vg=(1-b)C_{\mu}g,\]
and using that \((1-b)\) has no inner factor we see that \(\theta^{-1}C_{\mu}g\in H^{p}\) for \(0<p<1\). Thus, applying the variant of Vinogradov's theorem contained in Theorem 3.4 of [19], we have
\[f/\theta=(1-b)C_{\overline{\theta}\mu}g=(1-b)C_{\mu}(\frac{T_{\theta}^{*}( \phi g)}{\phi}),\]
where \(T_{\theta}\) is the Toeplitz operator with symbol \(\theta\). Since \(T_{\theta}^{*}(\phi g)/\phi\in H^{2}/\phi\) we have \(f/\theta\in\mathcal{H}(b)\).
_Remark_.: The author was informed by Emmanuel Fricain that this appears in [8] as Theorem 18.16 and Corollary 18.17.
We are now ready to prove that cyclic subspaces of \(\mathcal{H}(b)\) preserve inner factors and that for outer functions \(aH^{2}\subset[f]\).
**Proposition 3**.: _Let \(\mathcal{H}(b)\) be a non-extreme de Branges-Rovnyak space and \(f\in\mathcal{H}(b)\). Denote by \(\theta\) the inner factor of \(f\). Then \(a\theta H^{2}\subset[f]\subset\theta\mathcal{H}(b)\). In particular if \(f\) is outer \(aH^{2}\subset[f]\)._
Proof.: Since \(\mathcal{H}(b)\subset H^{2}\) and \(aH^{2}\subset\mathcal{H}(b)\) the function \(a\) is a multiplier of \(\mathcal{H}(b)\). Thus by the discussion following equation (4) we see that \(a[f]\subset[f]\). For any \(h\in H^{2}\) and polynomials \(p\) and \(q\) we have
\[\|a\theta h-pf\|_{b}\leq\|a\theta h-aqf\|_{b}+\|aqf-pf\|_{b}\leq\|\theta(h-qf \theta^{-1})\|_{2}+\|aqf-pf\|_{b}.\]
For \(\epsilon>0\), using that \(f\theta^{-1}\) is an outer function, we can choose \(q\) such that \(\|h-qf\theta^{-1}\|_{2}<\epsilon\). Since \(aqf\in[f]\) we can choose \(p\) such that \(\|aqf-pf\|_{b}<\epsilon\). Hence \(a\theta h\in[f]\). Also, if \(p_{n}f\to g\in\mathcal{H}(b)\) for some sequence of polynomials \(p_{n}\), then \(p_{n}f\to g\) in \(H^{2}\) as well, and so \(g\theta^{-1}\in H^{2}\). An application of Lemma 1 gives \(g\theta^{-1}\in\mathcal{H}(b)\).
The above result implies the following classification of cyclic vectors in the case when \(aH^{2}\) is dense in \(\mathcal{H}(b)\).
**Corollary 1**.: _Let \(\mathcal{H}(b)\) be a non-extreme de Branges-Rovnyak space and suppose \(aH^{2}\subset\mathcal{H}(b)\) is dense. Then \(f\in\mathcal{H}(b)\) is cyclic if and only if \(f\) is outer._
We note that the density of \(aH^{2}\) in \(\mathcal{H}(b)\) occurs if and only if \(\phi^{2}=(a/(1-b))^{2}\) is an exposed point of the unit ball of \(H^{1}\). Note also that a precise description of the cyclic vectors in the case \(\dim(aH^{2})^{\perp}<\infty\) can also be obtained from the above result. We defer the proof of this to Section 5.
## 3 A model for \(M_{z}^{*}\) on \((aH^{2})^{\perp}\)
In the previous section we have seen that the problem of cyclicity relies entirely on understanding the subspace \((aH^{2})^{\perp}\subset\mathcal{H}(b)\). Motivated by this we introduce a normalized Cauchy transform model for \((aH^{2})^{\perp}\). We shall consider a collection of spaces that have appeared in [14] in connection with the problem of describing the exposed points of the unit ball of \(H^{1}\). For an outer function \(\phi\in H^{2}\) consider the weight \(w=|\phi|^{2}\geq 0\), \(w\in L^{1}\), and \(\log w\in L^{1}\). The closure of the analytic polynomials, \(\mathrm{Span}(\{z^{n}:n\geq 0\})\), in the space \(L^{2}(w)\) coincides with \(H^{2}/\phi\). Similarly for the anti-analytic polynomials, \(\mathrm{Span}(\{z^{n}:n<0\})\), the closure in \(L^{2}(w)\) is \(\overline{H_{0}^{2}}/\overline{\phi}\), where \(\overline{H_{0}^{2}}\) is the orthogonal complement of \(H^{2}\) in \(L^{2}\). We shall be interested in the space
\[J_{\phi}=\frac{H^{2}}{\phi}\cap\frac{\overline{H_{0}^{2}}}{\overline{\phi}}.\]
It can be shown that \(J_{\phi}=\{0\}\) is equivalent to \(\phi^{2}\) being an exposed point of the unit ball of \(H^{1}\). Indeed, from the current perspective this can be seen by observing that \(\ker(T_{\overline{\phi}/\phi})=\{h\in H^{2}:h/\phi\in J_{\phi}\}\), and so \(J_{\phi}=\{0\}\) is equivalent to triviality of the kernel of \(T_{\overline{\phi}/\phi}\). The result now follows from (X-2) in [22]. The space \(J_{\phi}\) consists of so-called pseudocontinuable functions. For \(f\in J_{\phi}\) we can identify \(f\) in the interior disk by its representation as an \(H^{2}/\phi\) function and in the exterior disk, \(\mathbb{D}^{e}=\left\{z\in\mathbb{C}_{\infty}:\left|z\right|>1\right\}\), via the representation as a function in \(\overline{H_{0}^{2}}/\overline{\phi}\). Thus \(J_{\phi}\) consists of functions analytic in \(\mathbb{C}_{\infty}\setminus\mathbb{T}\) whose non-tangential limits from outside and inside the unit disk coincide a.e. For definiteness we record the defining formulas for the extension of \(f\in J_{\phi}\) to \(\mathbb{C}_{\infty}\setminus\mathbb{T}\):
\[f(z)=\phi(z)^{-1}\int_{\mathbb{T}}P(z,\zeta)\phi(\zeta)f(\zeta)dm(\zeta),\, \text{for}\,\,z\in\mathbb{D},\]
and
\[f(z)=\overline{\phi}(1/\overline{z})^{-1}\int_{\mathbb{T}}P(1/\overline{z}, \zeta)\overline{\phi(\zeta)}f(\zeta)dm(\zeta),\,\text{for}\,\,z\in\mathbb{D}^ {e},\]
where \(P(z,\zeta)=(1-|z|^{2})/|z-\zeta|^{2}\) is the Poisson kernel. The next theorem identifies \((aH^{2})^{\perp}\) as the space of normalized Cauchy transforms of \(J_{\phi}\). We assume \(b(0)=0\) and \(\mu_{1}\) is absolutely continuous.
**Theorem 6**.: _Let \(V=V_{1}:H^{2}/\phi\to\mathcal{H}(b)\) be the unitary map from Proposition 1. The following holds:_
1. \(V\) _maps_ \(J_{\phi}\) _onto_ \((aH^{2})^{\perp}\)_,_
2. \(V^{-1}M_{z}^{*}V=L\)_, where_ \(Lf(z)=z^{-1}(f(z)-f(0))\)_._
Proof.: For \(\lambda\in\mathbb{D}\) we let \(k_{\lambda}(z)=(1-\overline{\lambda}z)^{-1}\). For \(f\in H^{2}\) there exists a unique function \(g\in H^{2}/\phi\) with \(af=Vg\). A simple computation based on Cauchy's formula gives \(af=V(f/\bar{\phi})\). Hence
\[0=\int_{\mathbb{T}}\frac{f/\overline{\phi}-g}{1-\lambda\overline{\zeta}}| \phi|^{2}dm(\zeta)=\langle\phi(f/\overline{\phi}-g),\phi k_{\lambda}\rangle_ {2},\]
for all \(\lambda\in\mathbb{D}\). Since \(\phi\) is outer the family \((\phi k_{\lambda})_{\lambda\in\mathbb{D}}\) has dense linear span in \(H^{2}\) and thus \(\phi f/\bar{\phi}-g\phi=v\in\overline{H_{0}^{2}}\). Rearranging we have \(g=f/\bar{\phi}-v/\phi\). Then for \(h\in H^{2}/\phi\) we have
\[\langle Vh,af\rangle_{b}=\langle h,g\rangle_{|\phi|^{2}dm}=\int_{\mathbb{T}}( h\bar{\phi})\bar{f}dm-\int_{\mathbb{T}}h\phi\bar{v}dm=\int_{\mathbb{T}}(h \bar{\phi})\bar{f}dm,\]
The last integral is \(0\) for all \(f\in H^{2}\) if and only if \(h\bar{\phi}\in\overline{H_{0}^{2}}\), which completes the proof of \((i)\). Part \((ii)\) is a special case of the identity \(V^{-1}M_{U}^{*}V=T_{U}^{*}\) valid for any multiplier \(U\) of \(\mathcal{H}(b)\). Indeed by the proof of Proposition 1 we have the following
\[M_{U}^{*}Vk_{\lambda}=(1-\overline{b(\lambda)})^{-1}M_{U}^{*}k_ {\lambda}^{b}=(1-\overline{b(\lambda)})^{-1}\overline{U(\lambda)}k_{\lambda}^ {b}\] \[=\overline{U(\lambda)}Vk_{\lambda}=V\overline{U(\lambda)}k_{ \lambda}=VT_{U}^{*}k_{\lambda},\,\text{for each}\,\,\lambda\in\mathbb{D}.\]
The result now follows from the completeness of the kernels.
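For completeness, we record the "simple computation based on Cauchy's formula" used in the proof above (with \(\mu=|\phi|^{2}m\), so that \(Vh=(1-b)C_{\mu}h\)): for \(f\in H^{2}\) we have \(f\phi\in H^{1}\), and hence

\[V(f/\overline{\phi})(z)=(1-b(z))\int_{\mathbb{T}}\frac{f(\zeta)\phi(\zeta)}{1-z\overline{\zeta}}dm(\zeta)=(1-b(z))f(z)\phi(z)=a(z)f(z),\]

since \((f/\overline{\phi})|\phi|^{2}=f\phi\) on \(\mathbb{T}\) and \((1-b)\phi=a\).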
## 4 The spectrum of the adjoint of the shift
In this section, we choose the normalization \(b(0)=0\) and \(\mu_{1}\) absolutely continuous, where \(\mu_{1}\) is the Aleksandrov-Clark measure of \(b\) at the point \(1\). By Theorem 6 in the previous section the spectrum of \(M_{z}^{*}\) restricted to \((aH^{2})^{\perp}\) is equal to the spectrum of the backwards shift \(L\) on \(J_{\phi}\), \(\phi=a/(1-b)\). It turns out that the spectrum of \(L\) on \(J_{\phi}\) is related to analytic continuation of functions in \(J_{\phi}\). This makes it easier to study the spectrum of \(M_{z}^{*}\) by considering \(L\) on \(J_{\phi}\) instead. For \(\lambda\) in the resolvent set of \(L\) and \(h\in J_{\phi}\) we have
\[(I-\lambda L)^{-1}h(z)=\frac{zh(z)-\lambda h(\lambda)}{z-\lambda}. \tag{5}\]
Also if \(k\) is the reproducing kernel at \(0\), i.e. \(h(0)=\langle h,k\rangle_{\phi}\), then
\[h(\lambda)=\langle(I-\lambda L)^{-1}h,k\rangle_{\phi}. \tag{6}\]
Actually, if all functions in \(J_{\phi}\) are analytic at a point \(\lambda\), then the resolvent is given by (5) (the uniform boundedness principle implies that point evaluation at \(\lambda\in\mathbb{T}\) is bounded); conversely, if \(\lambda\in\rho(L)^{-1}\), then every function in \(J_{\phi}\) is analytic at \(\lambda\) by (6). Thus \(\sigma(L)^{-1}\) coincides with the set of points at which at least one function in \(J_{\phi}\) does not extend analytically. Clearly \(\sigma(L)^{-1}\subset\mathbb{T}\), and so it equals \(\overline{\sigma(L)}\).
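The resolvent formula (5) can be checked directly (a one-line computation we include for the reader): if \(g(z)=\frac{zh(z)-\lambda h(\lambda)}{z-\lambda}\), then \(g(0)=h(\lambda)\), so

\[Lg(z)=\frac{g(z)-h(\lambda)}{z}=\frac{h(z)-h(\lambda)}{z-\lambda},\qquad(I-\lambda L)g(z)=\frac{zh(z)-\lambda h(z)}{z-\lambda}=h(z).\]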
**Definition 1**.: _For an outer function \(\phi\in H^{2}\) we denote by \(\sigma(\phi)\) the set \(\overline{\sigma(L)}\) constructed above. A point \(e^{i\theta}\in\sigma(\phi)\) is called a point of local non-exposure for \(\phi\)._
Clearly, for an outer function \(\phi\in H^{2}\) we have \(\sigma(\phi)=\varnothing\) if and only if \(\phi^{2}\) is exposed, that is if it has no points of local non-exposure. Unfortunately, it seems to be very difficult in general to determine if a point \(e^{i\theta}\in\mathbb{T}\) lies in \(\sigma(\phi)\). Indeed this is equivalent to the problem of describing the exposed points of \(H^{1}\). The remainder of this section is devoted to giving criteria for inclusion and exclusion in \(\sigma(\phi)\).
If \(\phi\) is "not too small" on an arc \(I\subset\mathbb{T}\), then every function in \(J_{\phi}\) extends analytically across \(I\). The next proposition makes this statement precise.
**Proposition 4**.: _Let \(\phi\) be an outer function in \(H^{2}\)._
1. _If_ \(\phi^{-1}\) _is square summable on an arc_ \(I\subset\mathbb{T}\)_, then_ \(\sigma(\phi)\cap I=\varnothing\)_._
2. _If_ \(\phi\in A=H^{\infty}\cap C(\mathbb{T})\)_, then_ \(\sigma(\phi)\subset Z(\phi)=\{\zeta\in\mathbb{D}\cup\mathbb{T}:\phi(\zeta)=0\}\)_._
Proof.: Let \(h\in J_{\phi}\); then \(h=\phi h\phi^{-1}\), and hence \(h\) is summable on \(I\), which by Morera's theorem (see Ex. 2.12 in [11]) implies that \(h\) is analytic across \(I\); this proves \((i)\). For \((ii)\) it suffices to notice that if \(\phi(\zeta)\neq 0\) for some \(\zeta\in\mathbb{T}\) then there exist \(\eta>0\) and an arc \(\zeta\in I\subset\mathbb{T}\) such that \(|\phi|>\eta\) on \(I\), and to apply Morera's theorem to \(h=\phi h\phi^{-1}\), \(h\in J_{\phi}\).
The inclusion \(\sigma(\phi)\subset Z(\phi)\) can be proper, as demonstrated by the function \(\phi^{2}=(1-z)\in H^{1}\), which has a zero at \(1\) but \(\sigma(\phi)=\varnothing\), since it is easy to verify that \(\phi^{2}\) is an exposed point.
We now turn to the relation between the Aleksandrov-Clark measures of \(b\) and \(\sigma(\phi)\). Recall for \(\alpha\in\mathbb{T}\) that the Aleksandrov-Clark measures are Borel probability measures (we are still assuming \(b(0)=0\)) given by
\[\frac{1+\overline{\alpha}b(z)}{1-\overline{\alpha}b(z)}=\int_{\mathbb{T}}\! \frac{\zeta+z}{\zeta-z}d\mu_{\alpha}(\zeta).\]
With this notation we have \(d\mu_{1}=|\phi|^{2}dm\). By Theorem 5 the function \(V_{\alpha}h\) converges non-tangentially to \(h\) at \((\mu_{\alpha})_{s}\)-almost every point and hence for an outer function \(f\in\mathcal{H}(b)\) it is a necessary condition for cyclicity that \(f\) be nonzero \((\mu_{\alpha})_{s}\)-a.e. In light of this, it is natural to ask if this necessary condition is also sufficient. The answer to this question is negative. Indeed Poltoratski (see [18]) has produced an example of an \(\mathcal{H}(b)\) space such that all Aleksandrov-Clark measures \(\mu_{\alpha}\) are absolutely continuous, but the function \(a\) is not cyclic. However, see Proposition 12 in Section 6. The next result clarifies the relationship between the support of the singular part of the Aleksandrov-Clark measure and the points of local non-exposure. We shall need the notion of Smirnov class and unbounded Toeplitz operators.
Let \(N^{+}\) denote the Smirnov class of quotients of bounded analytic functions in \(\mathbb{D}\) with outer denominator. Functions in \(N^{+}\) have non-tangential limits a.e. and considering the boundary function we have the Smirnov maximum principle: \(H^{p}=L^{p}\cap N^{+}\), for \(0<p\leq\infty\).
For a symbol \(U\in L^{2}\) we define the unbounded Toeplitz operator \(T_{U}\) by the rule
\[T_{U}h(z)=\int_{\mathbb{T}}\frac{U(\zeta)h(\zeta)dm(\zeta)}{1-z\overline{ \zeta}},\text{ for }h\in H^{2},\]
which clearly agrees with the usual definition if \(U\in L^{\infty}\).
The proof of the next result is based on ideas from [21].
**Proposition 5**.: _Let \(\phi\in H^{2}\) be an outer function of unit norm and \((\mu_{\alpha})_{\alpha\in\mathbb{T}}\) the associated Aleksandrov-Clark measures. Then \(\text{supp}(\mu_{\alpha})_{s}\subset\sigma(\phi)\)._
Proof.: Let \((\mu_{\alpha})_{s}\) be the singular part of the Aleksandrov-Clark measure \(\mu_{\alpha}\). Without loss of generality, we may assume \(\alpha\neq 1\) since \(\mu_{1}\) is absolutely continuous. Let \(\theta\) be the inner function defined by
\[\frac{1+\theta}{1-\theta}=\int_{\mathbb{T}}\frac{\zeta+z}{\zeta-z}d(\mu_{ \alpha})_{s}(\zeta).\]
The function \(f=i(1+\theta)/(1-\theta)\in N^{+}\) and is real-valued on \(\mathbb{T}\). Suppose for a moment that we can show \(\phi/(1-\theta)\in H^{2}\); then \(\phi f\in H^{2}\), and since \(f\) is real-valued on \(\mathbb{T}\) we also have \(f\overline{\phi}\in\overline{H^{2}}\). Thus, after subtracting an appropriate constant, \(f\in J_{\phi}\). Since \(f\) has a singularity at every point in the support of \((\mu_{\alpha})_{s}\) the conclusion follows.
Thus it remains to show \(\phi/(1-\theta)\in H^{2}\). It follows from the identity
\[\frac{1-\overline{\alpha}b}{1-\theta}=(1-\overline{\alpha}b)\int_{\mathbb{T}} \frac{1}{1-z\overline{\zeta}}d(\mu_{\alpha})_{s}(\zeta),\]
that \((1-\overline{\alpha}b)/(1-\theta)\in\mathcal{H}(b)\). Hence there exists \(h\in H^{2}/\phi\), such that \(Vh=(1-\overline{\alpha}b)/(1-\theta)\). From this we see
\[\frac{\phi}{1-\theta}=\frac{\phi Vh}{1-\overline{\alpha}b}=T_{\phi_{\alpha}}T _{\overline{\phi}}(\phi h),\]
where we recall that \(\phi_{\alpha}=a/(1-\overline{\alpha}b)\). Thus if we can show that the operator \(T_{\phi_{\alpha}}T_{\overline{\phi}}\) maps \(H^{2}\) into \(H^{2}\) we are done. Precisely this was shown in [21], see also (IV-16) in [22].
Poltoratski's example shows that the inclusion \(\cup_{\alpha\in\mathbb{T}}\mathrm{supp}(\mu_{\alpha})_{s}\subset\sigma(\phi)\) can be proper.
We end with a Proposition on the point spectrum of \(M_{z}^{*}\) on \((aH^{2})^{\perp}\), which is certainly well known. However, we give a new short proof based on Theorem 6 to keep the discussion more self-contained. For \(\gamma>1\) and \(\zeta\in\mathbb{T}\) we define the usual non-tangential cone
\[\Gamma_{\gamma}(\zeta)=\left\{z\in\mathbb{D}:|z-\zeta|<\gamma(1-|z|)\right\}.\]
**Proposition 6**.: _Suppose \(\zeta\in\mathbb{T}\) and \(\overline{\zeta}\) is an eigenvalue of \(M_{z}^{*}:(aH^{2})^{\perp}\to(aH^{2})^{\perp}\). Then_
1. _for each_ \(f\in\mathcal{H}(b)\) _the non-tangential limit_ \[\lim_{\begin{subarray}{c}z\to\zeta\\ z\in\Gamma_{\gamma}(\zeta)\end{subarray}}f(z),\] _exists and is finite._
2. _The function_ \(k_{\zeta}^{b}(z)=(1-\overline{b(\zeta)}b(z))/(1-\overline{\zeta}z)\) _belongs to_ \(\mathcal{H}(b)\) _and for each_ \(f\in\mathcal{H}(b)\) _we have_ \(f(\zeta)=\langle f,k_{\zeta}^{b}\rangle\)_, where_ \(f(\zeta)\) _is defined to be the non-tangential limit above._
Proof.: Let \(k_{\lambda}\) denote the Cauchy kernel. Since \(\overline{\zeta}\) is an eigenvalue of \(M_{z}^{*}:(aH^{2})^{\perp}\to(aH^{2})^{\perp}\) we deduce from Theorem 6 that \(\overline{\zeta}\) is an eigenvalue of \(L:J_{\phi}\to J_{\phi}\). A simple algebraic computation reveals that any eigenvector of \(L\) corresponding to the eigenvalue \(\overline{\zeta}\) must be a constant multiple of the Cauchy kernel \(k_{\zeta}\). Hence \(k_{\zeta}\in J_{\phi}\). For \(\lambda\in\Gamma_{\gamma}(\zeta)\) and \(z\in\mathbb{T}\) we have
\[\gamma|1-\overline{\lambda}z|\geq\gamma(1-|\lambda|)>|\lambda-\zeta|=|\lambda- z+z-\zeta|\geq|z-\zeta|-|\lambda-z|.\]
From this we deduce \((1+\gamma)|1-\overline{\lambda}z|\geq|1-\overline{\zeta}z|\), for \(\lambda\in\Gamma_{\gamma}(\zeta)\) and \(z\in\mathbb{T}\). Hence \(k_{\lambda}\) converges to \(k_{\zeta}\) non-tangentially in the norm of \(H^{2}/\phi\). We have previously seen that \(Vk_{\lambda}=(1-b(\lambda))^{-1}k_{\lambda}^{b}\), for \(\lambda\in\mathbb{D}\) and hence for \(z\in\mathbb{D}\)
\[\frac{f(z)}{1-b(z)}=\langle f,(1-\overline{b(z)})^{-1}k_{z}^{b}\rangle_{b}= \langle VV^{-1}f,Vk_{z}\rangle_{b}=\langle V^{-1}f,k_{z}\rangle_{H^{2}/\phi}.\]
Thus \((1-b(z))^{-1}f\) converges non-tangentially as \(z\to\zeta\) for each \(f\in\mathcal{H}(b)\). Letting \(f\equiv 1\) shows that \(b\) converges non-tangentially at \(\zeta\) and thus it follows that \(f\) does as well for each \(f\in\mathcal{H}(b)\).
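For the record, the algebraic computation invoked in the proof is the following: if \(Lh=\overline{\zeta}h\), then \(h(z)-h(0)=\overline{\zeta}zh(z)\), so

\[h(z)=\frac{h(0)}{1-\overline{\zeta}z}=h(0)k_{\zeta}(z),\]

that is, every eigenvector of \(L\) with eigenvalue \(\overline{\zeta}\) is a constant multiple of the Cauchy kernel \(k_{\zeta}\).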
## 5 Proofs of the main results
In this section, we establish the main results stated in the introduction. We begin by giving a complete function theoretic characterization of cyclicty in the case \(\dim(aH^{2})^{\perp}<\infty\) based on Proposition 3.
Proof of Theorem 1.: Recall that we have assumed \(\dim(aH^{2})^{\perp}<\infty\) and that \(\overline{\lambda_{1}}\), \(\overline{\lambda_{2}}\),..., \(\overline{\lambda_{s}}\) denote the eigenvalues of \(M_{z}^{*}\) restricted to \((aH^{2})^{\perp}\). Let \(\mu=\mu_{1}\) be the Aleksandrov-Clark measure of \(b\) associated to the point \(1\). By a simple algebraic computation, the eigenspaces of the backwards shift \(L\) on \(P^{2}(\mu)\) are easily seen to be of dimension \(1\). Since by Proposition 1 and part \((ii)\) of Theorem 6 the operator \(M_{z}^{*}\) is unitarily equivalent to the backwards shift on \(P^{2}(\mu)\), the eigenspaces of \(M_{z}^{*}\) are also of dimension \(1\).
To see \((i)\) it suffices to note that if \(\overline{\lambda}\in\mathbb{D}\) is an eigenvalue of \(M_{z}^{*}\) restricted to \((aH^{2})^{\perp}\), then since the reproducing kernel \(k_{\lambda}^{b}\) is an eigenvector it belongs to \((aH^{2})^{\perp}\). Thus we have
\[0=\langle a,k_{\lambda}^{b}\rangle_{b}=a(\lambda),\]
contradicting that \(a\) is outer. Statement \((ii)\) follows from Proposition 6. Also the kernel functions \(k_{\lambda_{j}}^{b}\) belong to \(\mathcal{H}(b)\) and \(f(\lambda_{j})=\langle f,k_{\lambda_{j}}^{b}\rangle\), for all \(f\in\mathcal{H}(b)\). This takes care of the necessity in statement \((iii)\).
We now turn to sufficiency of \((iii)\). Suppose \(f\in{\cal H}(b)\) is outer and \(f(\lambda_{j})\neq 0\) for all \(j=1,2,...,s\). Let \(h\in{\cal H}(b)\) be arbitrary. It will suffice to show that \(\langle z^{n}f,h\rangle=0\) for all \(n\geq 0\) implies \(h\equiv 0\). We claim that we can assume \(h\in(aH^{2})^{\perp}\); indeed, decompose \(h=h_{a}+\tilde{h}\), with \(h_{a}\in\mbox{clos}(aH^{2})\) and \(\tilde{h}\in(aH^{2})^{\perp}\). Proposition 3 implies that \(aH^{2}\subset[f]\), and so \(\langle ag,h_{a}\rangle=0\) for all \(g\in H^{2}\); hence \(h_{a}\equiv 0\). Let \(n=\dim(aH^{2})^{\perp}\) and let \(m_{1},m_{2},...,m_{s}\) be the algebraic multiplicities of the eigenvalues \(\overline{\lambda}_{j}\), \(j=1,2,...,s\). We have \(\sum_{j}m_{j}=n\). For each \(1\leq j\leq s\) and \(1\leq l\leq m_{j}\) choose an element \(k_{\lambda_{j}}^{l}\) of norm \(1\) such that
\[k_{\lambda_{j}}^{l}\in\ker(\overline{\lambda}_{j}-M_{z}^{*})^{l},\]
and \(k_{\lambda_{j}}^{l}\notin\ker(\overline{\lambda}_{j}-M_{z}^{*})^{k}\), for \(1\leq k<l\). This is possible since, as noted earlier, \(M_{z}^{*}\) has simple spectrum. Note that \(k_{\lambda_{j}}^{1}\) is a constant multiple of the kernel element \(k_{\lambda_{j}}^{b}\). Since \((aH^{2})^{\perp}\) is finite dimensional and \(M_{z}^{*}\) acts on \((aH^{2})^{\perp}\), its root vectors (generalized eigenvectors) \((k_{\lambda_{j}}^{l})_{j,l}\) form a basis for \((aH^{2})^{\perp}\) (this is the Jordan normal form). Thus \(h\) can be expressed as
\[h(z)=\sum_{j,l}c_{j,l}k_{\lambda_{j}}^{l}(z),\]
for some complex numbers \((c_{j,l})_{j,l}\). We will show \(c_{j,l}=0\) for all \(j\) and \(l\) which will complete the proof. For \(1\leq k\leq s\) let \(p_{k}\) be the polynomial
\[p_{k}(z)=(\lambda_{k}-z)^{m_{k}-1}\prod_{j\neq k}^{s}(\lambda_{j}-z)^{m_{j}}.\]
Then
\[0=\langle p_{k}f,h\rangle=\langle\prod_{j\neq k}(\lambda_{j}-z)^{m_{j}}f,( \overline{\lambda}_{k}-M_{z}^{*})^{m_{k}-1}c_{k,m_{k}}k_{\lambda_{k}}^{m_{k}}\rangle.\]
Since \((\overline{\lambda}_{k}-M_{z}^{*})^{m_{k}-1}k_{\lambda_{k}}^{m_{k}}\) is a nonzero element of \(\ker(\overline{\lambda}_{k}-M_{z}^{*})\) it is of the form \(a_{k}k_{\lambda_{k}}^{b}\), for some nonzero \(a_{k}\). Thus
\[0=\langle\prod_{j\neq k}(\lambda_{j}-z)^{m_{j}}f,a_{k}c_{k,m_{k}}k_{\lambda_{ k}}^{b}\rangle=\left(\prod_{j\neq k}(\lambda_{j}-\lambda_{k})^{m_{j}}\right)a_{k}c_ {k,m_{k}}f(\lambda_{k}),\]
and so \(c_{k,m_{k}}=0\), for all \(1\leq k\leq s\). Replacing \(m_{j}\) by \(m_{j}-1\) (if \(m_{j}>1\)) and iterating the above (finite) process shows \(c_{j,l}=0\) for all \(1\leq j\leq s\) and \(1\leq l\leq m_{j}\). Hence \(h\equiv 0\).
In the remainder of this section we focus on the case \(\dim(aH^{2})^{\perp}=\infty\). Let \(\phi\in H^{2}\) be an outer function of unit norm and consider the de Branges-Rovnyak space \({\cal H}(b)\) generated by \(\phi\). Recall the heuristic principle stated in the introduction: an outer
function \(f\in\mathcal{H}(b)\) is cyclic if it is "not too small" on the set \(\sigma(\phi)\). We turn to establishing the concrete realizations of this principle discussed in the introduction.
Recall that \(N^{+}\) denotes the Smirnov class of quotients of bounded analytic functions in \(\mathbb{D}\) with outer denominator. The proof of Theorem 2 is a direct consequence of the description of the invariant subspaces of \(\mathcal{H}(b)\) in equation (4).
Proof of Theorem 2.: Let \(J\) be the map from Proposition 2 and \(\psi\) the extremal function of \([f]\), that is,
\[[f]=\left\{g\in\mathcal{H}(b):\frac{g}{\psi},\frac{g}{\psi}\psi_{1}\in H^{2} \right\},\]
where \(J\psi=(\psi,\psi_{1})\). Recall that since \(f\) is outer \(aH^{2}\subset[f]\). Hence \(ag\psi^{-1}\in H^{2}\) for all \(g\in H^{2}\) which implies \(a\psi^{-1}\in H^{\infty}\) and hence \(\psi^{-1}\mathbb{1}_{E}\in L^{2}\). Similarly \(f\psi^{-1}\in L^{2}\) and so \(\psi^{-1}\mathbb{1}_{F}\in L^{2}\). Since \(E\cup F=\mathbb{T}\) we see \(\psi^{-1}\in L^{2}\) and hence since \(\psi^{-1}\in N^{+}\) we have \(\psi^{-1}\in H^{2}\). The same argument applies to \(\psi^{-1}\psi_{1}\) and so \(1\in[f]=\mathcal{H}(b)\).
Theorems 3 and 4 will require some preliminary lemmas.
**Lemma 2**.: _Let \(\phi\) be an outer function in \(H^{2}\), \(g\in H^{2}/\phi\) and \(h\in J_{\phi}\). Suppose \(\langle g,L^{n}h\rangle_{|\phi|^{2}dm}=0\) for all \(n\geq 0\). Then for \(|\lambda|<1\)_
\[\overline{h(1/\overline{\lambda})}\int_{\mathbb{T}}\frac{g|\phi|^{2}dm(\zeta )}{1-\lambda\overline{\zeta}}=\int_{\mathbb{T}}\frac{\overline{h}g|\phi|^{2}dm (\zeta)}{1-\lambda\overline{\zeta}}.\]
Proof.: Since \(h\in J_{\phi}\) it has an analytic pseudocontinuation to the exterior disk, and it is in this sense that \(h(1/\overline{\lambda})\) should be understood. For \(|w|>1\) we have, by the assumption,
\[0=\langle g,(I-wL)^{-1}Lh\rangle_{|\phi|^{2}dm}=\int_{\mathbb{T}}\overline{ \left(\frac{h(\zeta)-h(w)}{\zeta-w}\right)}g|\phi|^{2}dm(\zeta).\]
Rearranging and using \(\zeta-w=-w(1-\zeta/w)\) we have
\[\overline{h(w)}\int_{\mathbb{T}}\frac{1}{1-\overline{\zeta}/\overline{w}}g| \phi|^{2}dm(\zeta)=\int_{\mathbb{T}}\frac{1}{1-\overline{\zeta}/\overline{w}} \overline{h}g|\phi|^{2}dm(\zeta).\]
The result now follows by setting \(\lambda=1/\overline{w}\).
We denote by \(L_{0}^{1,\infty}\) the class of functions \(h\in L^{1,\infty}\) satisfying
\[m(\left\{e^{i\theta}\in\mathbb{T}:|h(e^{i\theta})|>t\right\})=o(1/t),t\to\infty.\]
The set \(L_{0}^{1,\infty}\) is a closed subspace of \(L^{1,\infty}\). We define its analytic subspace \(H_{0}^{1,\infty}=N^{+}\cap L_{0}^{1,\infty}\). It is a result of Kolmogorov that for \(h\in L^{1}\) the Cauchy integral \(Ch\) belongs to \(H_{0}^{1,\infty}\). The next lemma is essentially due to Aleksandrov; indeed, it is a direct corollary of Theorem 6 in [1], see also Lemma 5.13 and Lemma 5.22 in [10].
**Lemma 3**.: _Suppose \(f,\overline{f}\in H^{1,\infty}_{0}\). Then \(f\) is constant._
With this in hand, we are ready for the proof of Theorem 3.
Proof of Theorem 3.: Recall that \(f\in\mathcal{H}(b)\) is an outer function, such that for each point \(\zeta\in\sigma(\phi)\) there exists an open arc \(\zeta\in I_{\zeta}\subset\mathbb{T}\) and a number \(\eta_{\zeta}>0\), such that \(|f|>\eta_{\zeta}\) a.e. on \(I_{\zeta}\). Let \(V=V_{1}:P^{2}(\mu_{1})\to\mathcal{H}(b)\) be the unitary map from Proposition 1 and recall that we can without loss of generality assume that \(\phi=a/(1-b)\) is of unit norm and hence \(P^{2}(\mu_{1})=H^{2}/\phi\). Suppose \(\langle z^{n}f,Vh\rangle_{b}=0\) for all \(n\geq 0\) and some \(h\in H^{2}/\phi\). It will be sufficient to show \(h\equiv 0\). Since \(\operatorname{clos}(aH^{2})\subset[f]\) we can without loss of generality restrict to the case \(Vh\in(aH^{2})^{\perp}\) or equivalently \(h\in J_{\phi}\). Let \(g\in H^{2}/\phi\) the unique function satisfying \(Vg=f\). By Lemma 2 we have for \(|\lambda|<1\)
\[\overline{h(1/\overline{\lambda})}f(\lambda)=(1-b(\lambda))\int_{\mathbb{T}}\frac{\overline{h}g|\phi|^{2}}{1-\lambda\overline{\zeta}}dm(\zeta).\]
Since \(\overline{h}g|\phi|^{2}\in L^{1}\) we have \(\overline{h}f\in L^{1,\infty}_{0}\). The set \(\sigma(\phi)\) is compact, hence there exists a number \(\eta>0\) and a finite collection of \(I_{\zeta_{j}}\), \(j=1,2,...,N\) such that \(\sigma(\phi)\subset\cup_{j=1}^{N}I_{\zeta_{j}}=I\), \(|f|>\eta>0\) a.e. on \(I\) and \(h\) is analytic in a neighborhood of \(\mathbb{T}\setminus I\). From this we see \(\overline{h}\in L^{1,\infty}_{0}\). Recall that \(J_{\phi}\subset N^{+}\) and so \(h,\overline{h}\in N^{+}\). Hence \(h,\overline{h}\in H^{1,\infty}_{0}\). Since \(h(\infty)=0\) we have \(h\equiv 0\) by Lemma 3.
With the above tools in hand, we can prove the following necessity theorem for annihilators of the *-cyclic subspace generated by \(h\in J_{\phi}\).
**Proposition 7**.: _Let \(\mathcal{H}(b)\) be a non-extreme de Branges-Rovnyak space, with \(b(0)=0\). We can without loss of generality assume that \(\phi=a/(1-b)\) is of unit norm. Let \(g\in H^{\infty}\) and write \(f=Vg\), where \(V\) is the unitary map from Proposition 1. Denote by \(\theta\) the inner factor of \(f\) and set \(F=f/\theta\). Suppose \(h\in J_{\phi}\setminus\{0\}\) and \(g\) annihilates the *-cyclic subspace generated by \(h\), that is \(\langle g,L^{n}h\rangle_{J_{\phi}}=0\), for all \(n\geq 0\). Then \(h\) has a singularity on \(\sigma(F)\)._
Proof.: From Lemma 2 we have for \(|\lambda|<1\)
\[\overline{h(1/\overline{\lambda})}f(\lambda)=(1-b(\lambda))\int_{\mathbb{T}}\frac{\overline{h(\zeta)}g(\zeta)|\phi|^{2}dm(\zeta)}{1-\lambda\overline{\zeta}}.\]
Since \(g\) is bounded \(\overline{h}g\in H^{2}/\phi\) and hence \(\overline{h}f\in H^{2}\). Since \(\theta\) is the inner factor of \(f\) and \(\overline{h}(0)=0\) we have \(\overline{h}F=\overline{h}f/\theta\in H^{2}_{0}\). Since \(hF\in N^{+}\) is also square summable on \(\mathbb{T}\) we have
\[h\in\frac{H^{2}}{F}\cap\frac{\overline{H_{0}^{2}}}{\overline{F}}=J_{F}.\]
Thus \(h\) has a singularity on \(\sigma(F)\).
Theorem 4 follows immediately from the above result.
## 6 Examples
In this section, we apply the main results to give examples of cyclic vectors in certain \(\mathcal{H}(b)\) spaces. We begin with a result valid for all non-extreme de Branges-Rovnyak spaces.
**Proposition 8**.: _Let \(b\) be a non-extreme function in the unit ball of \(H^{\infty}\) and \(\mathcal{H}(b)\) the corresponding de Branges-Rovnyak space. Then_
1. _For each_ \(\lambda\in\mathbb{D}\) _the kernel_ \(k_{\lambda}^{b}(z)=(1-\overline{b(\lambda)}b(z))/(1-\overline{\lambda}z)\) _is a cyclic vector._
2. _The function_ \(b\) _is cyclic if and only if it is outer._
Proof.: Part \((i)\) can be seen by appealing to Theorem 4. Indeed, the Cauchy kernel \(k_{\lambda}\) belongs to \(H^{\infty}\) and \(k_{\lambda}^{b}=(1-\overline{b(\lambda)})Vk_{\lambda}\). Since \(|k_{\lambda}^{b}(\zeta)|\geq 2^{-1}(1-|b(\lambda)|)>0\) for almost every \(\zeta\in\mathbb{T}\) we have \(J_{k_{\lambda}^{b}}=\{0\}\), by Proposition 4.
To see part \((ii)\) it suffices to notice that the identity \(|a|^{2}+|b|^{2}=1\) valid a.e. on \(\mathbb{T}\) prevents \(a\) and \(b\) from being small simultaneously (up to a set of measure \(0\)) and hence by Theorem 2 the function \(b\) is a cyclic vector if it is outer.
Theorem 3 is useful for studying cyclic vectors in \(\mathcal{H}(b)\) spaces, where \(\phi=a/(1-b)\) can be written as \(\phi=F\phi_{1}\) and \(F^{2}\) is, after a possible normalization, an exposed point of the unit ball of \(H^{1}\). In this case one can often discard \(F\) as a "trivial factor". We give an example where the function \(\phi_{1}\) has only one point of local non-exposure.
We say that a measurable function \(f:\mathbb{T}\to\mathbb{C}\) is separated from \(0\) at a point \(\zeta\in\mathbb{T}\) if there exist a constant \(\epsilon>0\) and an open arc \(\zeta\in I\subset\mathbb{T}\) such that \(|f|>\epsilon\) a.e. on \(I\).
**Proposition 9**.: _Let \(\phi=F\phi_{1}\in H^{2}\) be an outer function of unit norm and \(\mathcal{H}(b)\) be the de Branges-Rovnyak space generated by \(\phi\). Let, in addition, \(\phi_{1},F\in H^{2}\) be outer functions and suppose \(F^{2}/\|F^{2}\|_{1}\) is an exposed point of the unit ball of \(H^{1}\). Suppose that for each open arc \(I\subset\mathbb{T}\) containing \(1\) there exists a constant \(\epsilon=\epsilon(I)>0\), such that \(|\phi_{1}|>\epsilon\) a.e. on \(\mathbb{T}\setminus I\). Then \(\sigma(\phi)\subset\{1\}\). Moreover, if \(f\in\mathcal{H}(b)\) is an outer function and \(f\) is separated from \(0\) at the point \(1\), then \(f\) is a cyclic vector in \(\mathcal{H}(b)\)._
Proof.: Let \(h\in J_{\phi}\setminus\{0\}\) and suppose for a contradiction that \(h\) is analytic across some open arc \(I\) containing \(1\), then
\[Fh=\mathbb{1}_{\mathbb{T}\setminus I}(F\phi_{1}h)\phi_{1}^{-1}+\mathbb{1}_{I}Fh.\]
Since \(\phi_{1}^{-1}\in L^{\infty}(\mathbb{T}\setminus I)\) and \(h\) is analytic across \(I\) we have \(Fh\in L^{2}\). Since also \(h\overline{F}=\overline{F\phi_{1}}h\overline{\phi_{1}}^{-1}=\overline{\phi}h \overline{\phi_{1}}^{-1}\in\overline{N^{+}}\) it follows that
\[h\in\frac{H^{2}}{F}\cap\frac{\overline{H_{0}^{2}}}{\overline{F}}=J_{F}.\]
Since \(F^{2}\) is an exposed point of \(H^{1}\) this implies \(h\equiv 0\) giving a contradiction. In particular, each function in \(J_{\phi}\setminus\{0\}\) must have a singularity at the point \(1\). We show that this implies \(\sigma(\phi)\subset\{1\}\). Suppose for a contradiction \(\zeta\neq 1\) and \(\zeta\in\sigma(\phi)\). By Theorem 3 in [14] there exists a function \(h\in J_{\phi}\), such that all of its singularities are contained inside an open arc \(\zeta\in U\subset\mathbb{T}\) and \(1\notin U\) contradicting that \(h\) must have a singularity at \(1\). The second part of the Proposition follows from Theorem 3 and the fact that \(\sigma(\phi)\subset\{1\}\).
Let \(A=C(\mathbb{T})\cap H^{\infty}\) be the disk algebra. For \(f\in A\) we let \(Z(f)=\{\zeta\in\mathbb{D}\cup\mathbb{T}:f(\zeta)=0\}\) denote its zero set.
**Proposition 10**.: _Let \(f\in\mathcal{H}(b)\cap A\) be an outer function. If \(Z(f)\cap\sigma(\phi)=\varnothing\), then \(f\) is cyclic._
Proof.: By Theorem 3 it suffices to show that \(f\) is separated from zero in a neighborhood of each point \(\zeta\in\sigma(\phi)\). By assumption \(f\) is continuous and non-zero on \(\sigma(\phi)\), so this follows immediately.
Let us now give an application of our results to generalized Dirichlet spaces. Let \(\mu\) be a positive finitely supported measure on \(\mathbb{T}\). The Dirichlet space \(\mathcal{D}(\mu)\) associated with \(\mu\) is the set of \(f\in H^{2}\) that have non-tangential limits \(\mu\)-a.e. and such that the generalized Dirichlet integral
\[\mathcal{D}_{\mu}(f)=\int_{\mathbb{T}}\int_{\mathbb{T}}\left|\frac{f(z)-f(w)}{z-w}\right|^{2}d\mu(w)dm(z),\]
is finite. The norm on \(\mathcal{D}(\mu)\) is given by \(\|f\|^{2}=\mathcal{D}_{\mu}(f)+\|f\|_{2}^{2}\). In [5] it was shown that \(\mathcal{D}(\mu)=\mathcal{H}(b)\), with equivalent norms, for some polynomials \(b\) and \(a\), where \(a\) has a simple zero at each point in the support of \(\mu\) and no other zeros. Combining this with Theorem 1 yields the following proposition, which was obtained directly in [12].
**Proposition 11**.: _Let \(\mu\) be a finitely supported measure on \(\mathbb{T}\) and \(\mathcal{D}(\mu)\) the associated Dirichlet space. Denote by \(\lambda_{1}\), \(\lambda_{2}\),..., \(\lambda_{s}\) the points of the support of \(\mu\). Then \(f\in\mathcal{D}(\mu)\) is cyclic if and only if \(f\) is outer and \(f(\lambda_{j})\neq 0\) for all \(j=1,2,...,s\)._
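As a concrete illustration (an example we add for emphasis): take \(\mu=\delta_{1}\). The polynomials \(f(z)=1-z\) and \(g(z)=1+z\) are both outer and belong to \(\mathcal{D}(\delta_{1})\); since \(f(1)=0\) while \(g(1)=2\neq 0\), Proposition 11 shows that \(g\) is cyclic in \(\mathcal{D}(\delta_{1})\) whereas \(f\) is not.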
We now consider a case with \(\dim(aH^{2})^{\perp}=\infty\) where we can give simple necessary and sufficient conditions for cyclicity. Let \(\theta\) be a non-constant inner function and \(b=(1+\theta)/2\). In the literature the choice \(\theta=z\) is common. We remark that in this case, one does not have \(b(0)=0\). One can consider an equivalent version with \(b(0)=0\), however, to stay consistent with the literature we prefer to consider the non-normalized version. Since \(b(0)\neq 0\) in the next Proposition it is important to
recall that the Aleksandrov-Clark measures will not be probability measures in this case.
For an inner function \(\theta\) we define the usual model space \(K_{\theta}=H^{2}\cap\theta\overline{H_{0}^{2}}\).
**Proposition 12**.: _Let \(\theta\) be a non-constant inner function and \(b=(1+\theta)/2\). Then_
1. \(\mathcal{H}(b)=\frac{1}{2}(1-\theta)H^{2}\oplus K_{\theta}\)_, where the orthogonal sum is taken with respect to the norm in_ \(\mathcal{H}(b)\)_._
2. _All functions in_ \(\mathcal{H}(b)\) _have non-tangential limits_ \(\sigma\)_-a.e., where_ \(\sigma\) _is the measure, such that_ \[\frac{1-|\theta|^{2}}{|1-\theta|^{2}}=\int_{\mathbb{T}}\frac{1-|z|^{2}}{|\zeta -z|^{2}}d\sigma(\zeta).\]
3. _Let_ \(f\in\mathcal{H}(b)\) _be an outer function. Then_ \(f\) _is cyclic if and only if_ \(f\) _is non-zero_ \(\sigma\)_-a.e._
Proof.: A simple computation gives
\[\frac{1+b}{1-b}=\frac{3+\theta}{1-\theta}=1+2\frac{1+\theta}{1-\theta}.\]
Taking real parts we see
\[\frac{1-|b|^{2}}{|1-b|^{2}}=\int_{\mathbb{T}}\frac{1-|z|^{2}}{|\zeta-z|^{2}}d (m+2\sigma)(\zeta).\]
Thus \(\mu=m+2\sigma\) is the Aleksandrov-Clark measure of \(b\) associated to the point \(1\). Note that \(\mu\) is not absolutely continuous and not of total mass \(1\). It is easy to see that \(a=(1-\theta)/2\), hence \(\phi=a/(1-b)=1\) and \(J_{\phi}=\{0\}\). It follows from the Remark following Proposition 1 that
\[\mathcal{H}(b)=VP^{2}(m+2\sigma)=VH^{2}\oplus VL^{2}(2\sigma)=\frac{1-\theta} {2}H^{2}\oplus K_{\theta}.\]
Thus part \((i)\) is proved. Part \((ii)\) follows from Poltoratski's Theorem on boundary convergence of normalized Cauchy transforms. Now let \(f\in\mathcal{H}(b)\) be an outer function which is non-zero \(\sigma\)-a.e. Let \(Vh\in\mathcal{H}(b)\) and suppose \(\langle z^{n}f,Vh\rangle=0\) for all \(n\geq 0\). We must show \(Vh\equiv 0\). We may suppose \(Vh\in(aH^{2})^{\perp}\). Since \(aH^{2}=\frac{1-\theta}{2}H^{2}\) it must be that \((aH^{2})^{\perp}=VL^{2}(2\sigma)=K_{\theta}\), and thus \(Vh\in(aH^{2})^{\perp}\) is equivalent to \(h\in L^{2}(\sigma)\). Since \(f\) converges to \(V^{-1}f\) non-tangentially \(\sigma\)-a.e. we see that \(z^{n}f\) converges non-tangentially to \(\zeta^{n}V^{-1}f\)\(\sigma\)-a.e. Thus
\[0=\langle z^{n}f,Vh\rangle_{b}=\int_{\mathbb{T}}\zeta^{n}V^{-1}f\overline{h}d \sigma(\zeta),\,\text{for all $n\geq 0$}.\]
Since polynomials are dense in \(L^{2}(\sigma)\) (the measure is singular) we have that \((V^{-1}f)\overline{h}=0\) \(\sigma\)-a.e. Since \(V^{-1}f\) is non-zero \(\sigma\)-a.e. we see \(h=0\) \(\sigma\)-a.e., and thus \(Vh\equiv 0\). Conversely, if \(V^{-1}f\) is not non-zero \(\sigma\)-a.e., it follows from the above computation that \(f\) cannot be cyclic, and hence the result is proved.
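To make Proposition 12 concrete, consider the common choice \(\theta=z\) (a computation we include for illustration). Then \((1-|\theta(z)|^{2})/|1-\theta(z)|^{2}=(1-|z|^{2})/|1-z|^{2}\) is the Poisson kernel at \(\zeta=1\), so \(\sigma=\delta_{1}\), and \(K_{z}=H^{2}\cap z\overline{H_{0}^{2}}\) consists of the constants. Part \((i)\) thus reads

\[\mathcal{H}(b)=\frac{1-z}{2}H^{2}\oplus\mathbb{C},\]

and part \((iii)\) says that an outer \(f\in\mathcal{H}(b)\) is cyclic if and only if its non-tangential limit at \(1\) is non-zero, in agreement with Proposition 11 for \(\mu=\delta_{1}\).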
|
2302.14244 | Dynamic Transition From Mach to Regular Reflection Over a Moving Wedge | The design of supersonic and hypersonic air-breathing vehicles is influenced
by the transition between the Mach Reflection (MR) and Regular Reflection (RR)
phenomena. The purpose of this study is to investigate the dynamic transition
of unsteady supersonic flow from MR to RR over a two-dimensional wedge
numerically. The trailing edge of the wedge moves downstream along the
$x$-direction with a velocity, $V(t)$ at a free-stream Mach number of $3$. An
unsteady compressible inviscid flow solver is used to simulate the phenomenon.
Further, the Arbitrary Lagrangian-Eulerian (ALE) technique is applied to deform
the mesh during the wedge motion. The dynamic transition from MR to RR is
defined by two criteria, the sonic and the Von-Neumann. Moreover, the lag in
the dynamic transition from the steady-state condition is studied using various
reduced frequencies, $\kappa$, in the range of [0.1-2]. The lag effect in the
shock system is remarkable at the high values of the reduced frequency,
$\kappa=1.5$ and $2.0$. Furthermore, because the shock is bent upstream during
the fast motion of the wedge, the transition from MR to RR happens below the
Dual Solution Domain (DSD). | Lubna Margha, Ahmed A. Hamada, Ahmed Eltaweel | 2023-02-28T01:58:51Z | http://arxiv.org/abs/2302.14244v1 | # Dynamic Transition From Mach to Regular Reflection Over a Moving Wedge
###### Abstract
The design of supersonic and hypersonic air-breathing vehicles is influenced by the transition between the Mach Reflection (MR) and Regular Reflection (RR) phenomena. The purpose of this study is to investigate the dynamic transition of unsteady supersonic flow from MR to RR over a two-dimensional wedge numerically. The trailing edge of the wedge moves downstream along the \(x\)-direction with a velocity, \(V(t)\) at a free-stream Mach number of 3. An unsteady compressible inviscid flow solver is used to simulate the phenomenon. Further, the Arbitrary Lagrangian-Eulerian (ALE) technique is applied to deform the mesh during the wedge motion. The dynamic transition from MR to RR is defined by two criteria, the sonic and the Von-Neumann. Moreover, the lag in the dynamic transition from the steady-state condition is studied using various non-dimensional angular velocities, \(\kappa\), in the range of [0.1-2]. The lag effect in the shock system is remarkable at the high values of the non-dimensional angular velocity, \(\kappa=1.5\) and 2.0. Furthermore, because the shock is bent upstream during the fast motion of the wedge, the transition from MR to RR happens below the Dual Solution Domain (DSD).
Unsteady; Regular reflection; Mach reflection; Moving wedge; Dynamic shock waves; Supersonic flow; Dual solution domain.
## 1 Introduction
The accurate prediction of the Regular and Mach reflection transition phenomena is indispensable in many engineering applications, such as supersonic and hypersonic vehicles, explosion gas dynamics, and shock wave focusing. When a supersonic flow impinges on a symmetric sharp wedge with a fixed small deflection angle, \(\theta\), an incident straight oblique shock wave is generated and reflects at the top symmetry plane, forming a regularly reflected shock wave (RR case, see Figure 1 (a)) for weak shock waves. A Mach stem forms instead when the incident shock wave is deflected by a sufficiently large fixed wedge angle, generating a three-shock configuration with a slip-line emanating from the triple point, which is known as the Mach reflection shock wave (MR case, see Figure 1 (b)). The structure of shock reflections over a symmetric wedge is sensitive to the incoming flow Mach number (\(M_{\infty}\)), the compression ramp angle (\(\theta\)), and the initial boundary conditions. Several scholars have investigated in detail the hysteresis of the transition between RR and MR over a stationary wedge, such as Ben-Dor [1, 2, 3], Ivanov et al. [4, 5], and Yan et al. [6, 7].
The unsteady dynamic transition between MR and RR has been investigated for different types of wedge motion. Naidoo and Skews [8, 9] studied numerically and experimentally the unsteady supersonic flow over an impulsively rotating symmetric wedge with different pivot-point locations, such as the leading edge and the trailing edge. They rotated the wedge rapidly about the leading/trailing edge at different rates and discussed the effect of the rotation rate on the unsteady flow features and the dynamics of the shock system. Their results showed that the transition point and the Mach stem height depend strongly on the rotation speed. Moreover, the pivot location affects the development of the flow field. The transition wave angle from MR to RR was delayed and occurred below the theoretical von-Neumann limit in the Dual Solution Domain (DSD). Additionally, they reported that the MR configuration can persist for a while at a zero wedge deflection angle for a sufficiently large rotation speed.
Figure 1: Schematic of the supersonic flow field over a wedge showing the RR and MR shock structures.
Furthermore, a new mechanism was proposed by Margha et al. [10] to control the transition between RR and MR by changing the wedge deflection angle, \(\theta\), at constant wedge height, \(h\). They numerically studied the RR-to-MR transition by fixing the wedge leading edge and moving the trailing edge point upstream with a velocity, \(V(t)\), at various rates, \(\kappa\). Their results showed the lag effect of \(\kappa\) on the transition flow parameters. Extending that work, this paper investigates in detail the dynamic effect of \(\kappa\) on the transition from MR to RR. A two-dimensional symmetric wedge with an initial inclination angle of \(\theta=23^{\circ}\) is exposed to a free-stream Mach number, \(M_{\infty}=3\), and the trailing edge point is moved horizontally downstream at rates in the range \(\kappa=\) [0.1-2].
## 2 Computational model
Two-dimensional unsteady Euler equations were used to set the initial steady-state solution at a wedge angle of \(\theta=23^{\circ}\). Then, the horizontal downstream wedge motion was modeled with different values of non-dimensional angular velocities, \(\kappa\). The motion was added to the solver by applying the Arbitrary Lagrangian-Eulerian (ALE) technique to compute the new wedge location at each time step. Details of the numerical method, its verification, and the grid generation were provided in the previous work [10].
### Model Description
The wedge's motion and the two possible flow structures are described in Figure 2. A wedge of fixed height, \(h\), whose length, \(L(t)\), and angle, \(\theta(t)\), vary with time, is exposed to a supersonic flow with a Mach number of 3. The half-height of the computational domain is \(H\), and the total length of the wedge and the following flat plate together is kept constant at \(L_{t}\) during the motion. The motion starts from a steady-state wedge at \(\theta_{i}=23^{\circ}\) with the Mach Reflection (MR) shock configuration. Then, the trailing edge is suddenly moved downstream, decreasing the wedge angle, with velocity \(V(t)\) at different constant rates, \(\kappa=[0.1,0.5,1.0,1.5,2.0]\). During the motion \(h\) and \(L_{t}\) are kept fixed, while \(L(t)\) increases and \(\theta(t)\) decreases with time. All important parameters are given in Table 1.
### Governing Equations
Two-dimensional unsteady Euler equations for supersonic flows are used to compute the flow over the wedge and are expressed as:
\[\frac{\partial Q}{\partial t}+\frac{\partial F}{\partial x}+\frac{\partial G}{ \partial y}=0, \tag{1}\]
where
\[Q=\begin{bmatrix}\rho\\ \rho u\\ \rho v\\ \rho e\end{bmatrix},\quad F=\begin{bmatrix}\rho u\\ \rho u^{2}+p\\ \rho uv\\ u(\rho e+p)\end{bmatrix},\quad G=\begin{bmatrix}\rho v\\ \rho uv\\ \rho v^{2}+p\\ v(\rho e+p)\end{bmatrix} \tag{2}\]
The static pressure is obtained from
\[p=(\gamma-1)\left(\rho e-\rho\frac{u^{2}+v^{2}}{2}\right) \tag{3}\]
where \(p\), \(\rho\), and \(e\) are the flow field pressure, density, and internal energy, respectively; \(u\) and \(v\) are the velocity components in the Cartesian coordinates \(x\) and \(y\), respectively; and \(\gamma\) is the specific heat ratio of the gas, set to 1.4 for a perfect gas.
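For reference, equation (3) in code form (a minimal sketch we add for the reader; the function and variable names are ours, not from the solver):

```python
import numpy as np

def pressure(rho, rho_u, rho_v, rho_e, gamma=1.4):
    """Static pressure from the conservative variables, eq. (3)."""
    # rho*(u^2 + v^2)/2 written in terms of the conserved momenta:
    kinetic = (rho_u**2 + rho_v**2) / (2.0 * rho)
    return (gamma - 1.0) * (rho_e - kinetic)
```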
| Parameter | Value |
| --- | --- |
| Initial wedge's chord, \(w(0)\) | \(0.833\,\mathrm{m}\) |
| Initial wedge angle, \(\theta(0)\) | \(23^{\circ}\) |
| Final wedge angle, \(\theta(t_{f})\) | \(10.5^{\circ}\) |
| Wedge height to half domain height, \(h/H\) | \(0.3617\) |
| Initial wedge length to half domain height, \(L(0)/H\) | \(0.85221\) |
| Total wedge length to half domain height, \(L_{t}/H\) | \(2\) |
| Free-stream Mach number, \(M_{\infty}\) | \(3\) |
| Reduced frequency, \(\kappa\) | \(0.1,\ 0.5,\ 1.0,\ 1.5,\ 2.0\) |

Table 1: System properties and parameters.
Figure 2: The flow configuration of the horizontal motion of two symmetrical wedges in a supersonic flow.
### Equations of Motion
The wedge's trailing edge point is moved horizontally downstream with velocity \(V(t)\) and a constant wedge angular velocity, \(\omega=d\theta/dt\) (\(sec^{-1}\)). Accordingly, the velocity of the wedge's trailing edge is expressed as:
\[V_{t}(t)=\omega\;h\sqrt{1+\cot^{2}(\theta(t))} \tag{4}\]
where \(h\) is the height of the wedge and it is kept fixed, and \(\theta(t)\) is the decreasing wedge angle.
Further, \(\kappa\) is the non-dimensional angular velocity, normalized using the free-stream velocity, \(U_{\infty}\), and the initial wedge stream-wise length, \(L(0)\):
\[\kappa=\frac{\omega\;L(0)}{U_{\infty}} \tag{5}\]
Additionally, the time-dependent wedge angle as a function of the non-dimensional time, \(\tau\), is defined (for the present downstream motion, in which \(\theta\) decreases) as:

\[\theta(\tau)=\theta(0)-\kappa\tau \tag{6}\]
where \(\theta(0)\) is the initial wedge angle at \(\tau=0\), and the non-dimensional time is defined as:
\[\tau=\frac{t\;U_{\infty}}{L(0)} \tag{7}\]
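A minimal sketch of the kinematics in equations (4)-(7) (our code, not part of the solver; we assume angles are handled in radians internally and use the sign convention of eq. (6), in which \(\theta\) decreases):

```python
import numpy as np

def wedge_kinematics(tau, theta0_deg, kappa, h, L0, U_inf):
    """Wedge angle theta(tau) and trailing-edge speed V_t, eqs. (4)-(7)."""
    omega = kappa * U_inf / L0                      # dimensional angular rate, from eq. (5)
    theta = np.deg2rad(theta0_deg) - kappa * tau    # eq. (6); kappa*tau = omega*t
    v_te = omega * h * np.sqrt(1.0 + 1.0 / np.tan(theta)**2)  # eq. (4)
    return np.rad2deg(theta), v_te
```

For instance, with \(\theta(0)=23^{\circ}\) and \(\kappa=2.0\), the angle reaches the final value of \(10.5^{\circ}\) at \(\tau=\mathrm{deg2rad}(23-10.5)/2.0\approx 0.109\).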
### Computational Domain
The unsteady supersonic flow with a free-stream Mach number of 3 over a moving wedge was simulated using the _rhoCentralDyMFoam_ solver, a density-based solver in the free open-source CFD toolbox OpenFOAM(r)-v2006. Kurganov and Tadmor's [11; 12; 13] semi-discrete, upwind-central, non-staggered techniques are implemented in the solver. A dynamic mesh was used during the computation via the Arbitrary Lagrangian-Eulerian (ALE) technique [14], provided by the _"DyM"_ variant of the solver. Due to the symmetry of the flow behavior and geometry, half of the computational domain was considered. Nine structured, curved blocks with quadrilateral cells were used to discretize the domain and to improve the orthogonality of the cells, as shown in Figure 3. Moreover, this figure shows the boundary conditions over the faces of the domain. The mesh was refined until the absolute percentage error of the Mach stem height, MS, reached 3.4%. This is an acceptable margin of error, especially since the percentage error in the tangent wave angle is 0.17%, as discussed in detail in our previous work [10]. The selected grid has \(2624\times 720\) cells. Further details about the solver, the mesh generation, and the verification were discussed by Margha et al. [10].
## 3 Results and Discussion
The dynamic transition from Mach to Regular reflection was investigated using a horizontal downstream moving wedge at a free-stream Mach number of \(M_{\infty}=3\). The problem was simulated by starting with MR in a steady flow at \(\theta=23^{\circ}\), then the deflection wedge angle was suddenly reduced to \(10.5^{\circ}\). The transition from MR to RR was determined using both the sonic and Von-Neumann criteria. The lag effect in the transition angles, \(\theta_{t}\) and \(\beta_{t}\), and the Mach stem height, MS, was studied using different non-dimensional angular velocities, \(\kappa=[0.1,0.5,1.0,1.5,2.0]\).
### Dynamic Transition from MR to RR
The impulsive motion of the trailing edge point was started from the steady state at \(\theta=23^{\circ}\), where the MR configuration exists. The tested non-dimensional angular velocities, \(\kappa\), affect the unsteady shock wave configuration, causing an obvious lag from the steady-state values. This lag results in a curvature of the incident shock wave, as shown in Figure 4. For this reason, the tangent wave angle, \(\beta_{tang}\), was also measured during the unsteady simulations and compared with the straight wave angle, \(\beta_{p}\), at the same \(\theta\) and \(\kappa\).
Figure 4: The pressure gradient at \(\kappa=2.0\) and \(\theta=15^{\circ}\), showing the curvature of the incident wave and the difference between \(\beta_{p}\) and \(\beta_{tang}\).
Figure 3: Schematic of the computational domain, the boundary, and initial conditions, for Mesh 1.
The Sonic and von-Neumann criteria were used to define the transition from MR to RR. The Sonic condition is reached when the flow behind the reflected shock becomes sonic, \(M=1\), while the von-Neumann condition occurs at the point when the slip-line starts to disappear and the flow beyond the Mach stem turns parallel to the mid-plane of symmetry. In the steady state, the von-Neumann condition is known as the physical limit for the steady Mach reflection. Figure 5 shows a close view of the two criteria used in measuring the transition from MR to RR at \(\kappa=0.5\). The Sonic and von-Neumann limits happened at wedge angles \(\theta_{t_{S}}=14.9878^{\circ}\) and \(\theta_{t_{vN}}=14.9607^{\circ}\), as shown in Figures 5 (b) and 5 (d), respectively.
The lag in the transition wedge and wave angles is summarized in Table 2 for the Sonic transition criterion and in Table 3 for the von-Neumann transition criterion. The results show that the difference between the dynamic transition angles obtained from the two criteria was within a degree. The steady-state von-Neumann transition for \(M_{\infty}=3\) occurs at a wedge angle of \(\theta_{vN}=19.66^{\circ}\) and an incident wave angle of \(\beta_{vN}=39.34^{\circ}\). Suddenly decreasing the wedge deflection angle with different non-dimensional angular velocities, \(\kappa\), resulted in a deviation of these angles from the von-Neumann steady-state transition angles. This deviation increased with the value of \(\kappa\), as indicated in Tables 2 and 3. Furthermore, moving the trailing edge of the wedge with a small value of \(\kappa=0.1\) caused a deviation in the transition wedge angle, \(\theta_{t}-\theta_{vN}\), within a degree, and in the transition wave angle, \(\beta_{t}-\beta_{vN}\), of \(2.73^{\circ}\). For a relatively higher \(\kappa=1.0\), the lag increased, reaching around \(8^{\circ}\) in the wedge angle and within \(8.9^{\circ}\) in the wave angle. At \(\kappa>1.0\), such as the tested values of \(\kappa=(1.5,\ 2.0)\), the transition would happen at a deflection wedge angle \(\theta<10.5^{\circ}\). This cannot be studied with the current geometry, as the minimum possible wedge angle with the fixed wedge height, \(h\), is \(10.25^{\circ}\), as shown in Figure 13 using the velocity gradient contours for \(\kappa=2.0\). As \(\theta\) decreased at the low value of \(\kappa=0.1\), the MS moved downstream, decreasing its height until it vanished and the transition to RR occurred, as shown in Figure 11. Further, Figure 12 shows the transition at the relatively high value of \(\kappa=1.0\): while the wedge incidence angle decreased, the MS height increased and then abruptly decreased until the transition happened at the end of the motion, \(\theta_{t}=10.7^{\circ}\).
The straight wave angle, \(\beta_{p}\), was obtained from the straight-line approximation from the wedge apex to the reflected/triple point. The results indicated that there was no variation in the wave angle, \(\beta_{p}=41.478^{\circ}\), during the initial period of the motion, from \(\theta=23^{\circ}\) down through the first few degrees. This is because the lag was developing during this period due to the sudden motion of the wedge at a given \(\kappa\). For example, at the high value of \(\kappa=2.0\), \(\beta_{p}\) stayed constant at \(41.4^{\circ}\) while the wedge angle decreased from \(23^{\circ}\) to \(20^{\circ}\). The effect of the lag decreased for lower values of \(\kappa\). Moreover, Figure 8 confirms the lag during the wedge's motion by comparing the straight and tangent wave angles at the reflection/triple point, \(\beta_{p}\) and \(\beta_{tang}\), respectively. The deviation between the two wave angles grew with increasing \(\kappa\), from within \(4^{\circ}\) in the case of \(\kappa=0.1\) to within \(9^{\circ}\) in the case of \(\kappa=2.0\), at the final time of the motion, \(\theta=10.5^{\circ}\). Further, Figure 7 shows that the variation in the value of \(\kappa\) only insignificantly affected the lagged tangent wave angle, as the curves were almost identical.
All cases start from the initial steady-state wedge angle of \(\theta_{i}=23^{\circ}\).
### Shock Reflection Domain
Analytically, when a sharp compression ramp opposes a supersonic flow, there are many possible shock wave systems, including the RR and the MR, as summarized by Mouton [15]. The wedge angle and the free-stream Mach number influence which configuration forms. Figure 10 shows a comparison between the dynamic von-Neumann transition wave angle at different motion speeds and the theoretical transition criteria on the Dual Solution Domain, DSD. As \(\theta\) decreased at a given value of \(\kappa\), the wave angle decreased, placing the dynamic transition tangent wave angle, \(\beta_{t_{tang}}\), below the theoretical von-Neumann condition. For low motion rates, such as \(\kappa=0.1\), \(\beta_{t_{tang}}\) was just below the analytical limit. Using higher non-dimensional angular velocities, such as \(\kappa=1.0\), increased the gap between the transition wave angle and the physical steady-state limit. This was because of the extreme lag of the dynamic shock system.
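To relate these observations to steady oblique-shock theory, the standard \(\theta\)-\(\beta\)-\(M\) relation can be used to recover the steady (attached, weak) wave angle for a given deflection. The sketch below is our own reference code, not part of the paper's solver; the bracketing values are our assumptions for \(M\approx 3\):

```python
import numpy as np
from scipy.optimize import brentq

def theta_from_beta(beta, M, gamma=1.4):
    """Steady oblique-shock deflection angle (radians) for wave angle beta."""
    return np.arctan(2.0 / np.tan(beta) * (M**2 * np.sin(beta)**2 - 1.0)
                     / (M**2 * (gamma + np.cos(2.0 * beta)) + 2.0))

def weak_beta(theta_deg, M, gamma=1.4):
    """Weak-solution wave angle (degrees) for a given deflection (degrees)."""
    target = np.deg2rad(theta_deg)
    mu = np.arcsin(1.0 / M)  # Mach angle: lower end of the weak branch
    # 64 deg is below the detachment wave angle for M = 3 (~65 deg)
    return np.rad2deg(brentq(lambda b: theta_from_beta(b, M, gamma) - target,
                             mu + 1e-6, np.deg2rad(64.0)))
```

For example, `weak_beta(23.0, 3.0)` returns approximately \(41.5^{\circ}\), consistent with the steady value \(\beta_{p}=41.478^{\circ}\) quoted earlier.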
## 4 Conclusion
The aim of the current paper is to numerically study the dynamic transition from MR to RR for an inviscid supersonic flow at \(M_{\infty}=3\) over a moving wedge. The motion was achieved by decreasing the wedge angle at different non-dimensional angular velocities, \(\kappa\), while keeping the wedge height fixed. The dynamic transition angles and the dynamic hysteresis were analyzed to investigate the phenomenon. This work concludes that the non-dimensional angular velocity of the motion lags the shock system, causing the transition angle to occur below the theoretical von-Neumann criterion in the Dual Solution Domain. Moreover, the strength of the lag is clearly observed in the Mach stem height, where the transition did not happen at \(\kappa=1.5\) and \(2.0\) within our current geometric limits. Further, the Sonic and von-Neumann criteria occurred very close to each other, the difference in the transition angles being within a degree.
|
2309.15228 | On plus-one generated conic-line arrangements with simple singularities | In this paper we study plus-one generated arrangements of conics and lines in
the complex projective plane with simple singularities. We provide several
degree-wise classification results that allow us to construct explicit examples
of such arrangements. | Anca Măcinic, Piotr Pokora | 2023-09-26T19:45:16Z | http://arxiv.org/abs/2309.15228v2 | # On Plus-One Generated Conic-Line Arrangements with Simple Singularities
###### Abstract
In this paper we study plus-one generated arrangements of conics and lines in the complex projective plane with simple singularities. We provide several degree-wise classification results that allow us to construct explicit examples of such arrangements.
## 1. Introduction
Recently introduced by Abe in [1], the class of plus-one generated (POG) arrangements of hyperplanes proved to be strongly connected to the class of free hyperplane arrangements, a connection that we will explain shortly.
Let us recall that an arrangement of hyperplanes is called _free_ if its associated module of derivations is a free module, over the coordinate ring (see [14] for a comprehensive survey on the freeness for hyperplane arrangements). For a POG arrangement, the associated module of derivations is no longer free, but it admits a very simple minimal free resolution, so it is, in a way, a natural step away from freeness. These definitions are easily rephrased for curves, via associated modules of derivations, see Definitions 2.1, 2.3 in Section 2 for details.
In the case of projective line arrangements, the POG property appears in close relation to freeness, in the sense that, if one deletes a line from a free arrangement, then the resulting arrangement is either free or POG, and the same goes for the addition of a line, see Theorem 2.6. When passing to higher dimensions, i.e., for arrangements of hyperplanes, the result of the deletion from a free arrangement is still either free or POG, but the addition of a hyperplane to a free arrangement does not produce free and POG arrangements only, see [1]. However, in [3], a 'dual' notion of POG is introduced, based on the algebra of logarithmic differential forms on a hyperplane arrangement, and the addition behaves well with respect to this dual notion.
One can wonder whether the same kind of behavior occurs for reduced plane curves, or at least for certain types of reduced curves, such as conic-line (CL) arrangements. Our examples so far seem to support the hypothesis that deleting a line from a CL-arrangement results in either a free or a POG CL-arrangement. Addition does not follow the same pattern, as illustrated by Example 2.8.
In Section 2 we make an inventory of notions and results in the field that we rely on, and present a set of examples relevant to the relation free-POG for CL-arrangements.
In Section 3 we present classification results for POG conic-line arrangements, under some restrictions on the types of singularities and the value of the defect (see Section 2 for the definition). One finds that such restrictions severely limit the cardinality of the arrangement, and we can compare these results with similar ones on free and nearly-free arrangements from [4, 13]. In Theorem 3.1, we consider the case of POG arrangements of conics with simple singularities and defect 2, which is the smallest possible value for a defect that does not produce a nearly-free curve, and prove that such an arrangement can contain at most 4 conics. In Theorem 3.4, we work with CL-arrangements, having at least one line and one conic, with simple singularities and defect 2, to reach the conclusion that the cardinality of such an arrangement can be at most 9. An added value of our work is the number of explicit examples of POG CL-arrangements that appear in our proofs.
## 2. Preliminaries
Let us start with a general introduction. Denote by \(f\in S=\mathbb{C}[x,y,z]\) the defining polynomial of a reduced plane curve \(\mathcal{C}\,:\,f=0\) in \(\mathbb{P}^{2}_{\mathbb{C}}\) such that \(f=f_{1}\cdots f_{k}\) and \(\mathrm{GCD}(f_{i},f_{j})=1\) for \(i\neq j\). It means that \(\mathcal{C}\) consists of \(k\) irreducible components \(\mathcal{C}=\{C_{1},...,C_{k}\}\) with \(C_{i}\,:\,f_{i}=0\). Let \(\mathrm{Der}(S)=S\cdot\partial_{x}\oplus S\cdot\partial_{y}\oplus S\cdot\partial_{z}\); we define \(D(\mathcal{C})\) to be the derivation module associated with \(\mathcal{C}\), namely
\[D(\mathcal{C})=\{\theta\in\mathrm{Der}(S)\,:\,\theta(f)\in\langle f\rangle\}.\]
Since for every element \(\theta\in\mathrm{Der}(S)\) we have
\[\theta(f_{1}\cdots f_{k})=f_{1}\theta(f_{2}\cdots f_{k})+f_{2}\cdots f_{k} \theta(f_{1}),\]
then by the inductive application of the above identity, we obtain also
\[D(\mathcal{C})=\{\theta\in\mathrm{Der}(S)\,:\,\theta(f_{i})\in\langle f_{i} \rangle,\,\,i=1,\ldots,k\}. \tag{1}\]
We have two isomorphic submodules of \(D(\mathcal{C})\) (the isomorphism can be defined just as in the case of arrangements of lines in \(\mathbb{P}^{2}_{\mathbb{C}}\), see for instance [2, Proposition 2.11]), namely
\[D_{0}(\mathcal{C}):=\{\theta\in\mathrm{Der}(S)\,:\,\,\theta(f)=0\}\]
and
\[D_{L}(\mathcal{C}):=\{\theta\in D(\mathcal{C})\,:\,\,\theta(\alpha_{L})=0\},\]
where \(\alpha_{L}\) is the linear form that defines \(L\).
It is well-known that
\[D(\mathcal{C})=S\cdot\delta_{E}\oplus D_{0}(\mathcal{C}),\]
where \(\delta_{E}\) denotes the Euler derivation.
**Definition 2.1**.: We say that a reduced plane curve \(\mathcal{C}\,:\,f=0\) with \(f\in S_{d}\) for \(d\geq 1\) is **free** if \(D_{0}(\mathcal{C})=S(-d_{1})\oplus S(-d_{2})\) with \(d_{1}\leq d_{2}\) and \(d_{1}+d_{2}=d-1\). The pair \(\exp(\mathcal{C})=(d_{1},d_{2})\) is called the exponents of \(\mathcal{C}\).
In order to introduce the second most important class of reduced curves in our investigations, we need the following general definition.
**Definition 2.2**.: We say that a reduced plane curve \(\mathcal{C}\) is an \(m\)-syzygy curve when the associated Milnor algebra \(M(f)\) has the following minimal graded free resolution:
\[0\rightarrow\bigoplus_{i=1}^{m-2}S(-e_{i})\rightarrow\bigoplus_{i=1}^{m}S(1- d-d_{i})\to S^{3}(1-d)\to S\to M(f)\to 0\]
with \(e_{1}\leq e_{2}\leq...\leq e_{m-2}\) and \(1\leq d_{1}\leq...\leq d_{m}\). The \(m\)-tuple \((d_{1},...,d_{m})\) is called the exponents of \(\mathcal{C}\).
**Definition 2.3**.: A reduced curve \(\mathcal{C}\) in \(\mathbb{P}^{2}_{\mathbb{C}}\) is called **plus-one generated** (POG) with the exponents \((d_{1},d_{2})\) and level \(d_{3}\) if \(D_{0}(\mathcal{C})\) admits a minimal resolution of the form:
\[0\to S(-d_{3}-1)\to S(-d_{3})\oplus S(-d_{2})\oplus S(-d_{1}) \to D_{0}(\mathcal{C})\to 0\]
**Remark 2.4**.:
1. A \(3\)-syzygy reduced curve \(\mathcal{C}\) in \(\mathbb{P}^{2}_{\mathbb{C}}\) of degree \(d\) such that \(d_{1}+d_{2}=d\) and \(d_{3}\geq d_{2}\) is precisely a plus-one generated curve of level \(d_{3}\) and the exponents \((d_{1},d_{2})\).
2. If \(\mathcal{C}\) is a plus-one generated curve with \(d_{2}=d_{3}\), then \(\mathcal{C}\) is called **nearly-free**.
We will need the following characterization of plus-one generated reduced plane curves that comes from [7]. Here by \(\tau(\mathcal{C})\) we denote the total Tjurina number of a given reduced curve \(\mathcal{C}\subset\mathbb{P}^{2}_{\mathbb{C}}\).
**Proposition 2.5** (Dimca-Sticlaru).: Let \(\mathcal{C}:f=0\) be a reduced \(3\)-syzygy curve of degree \(d\geq 3\) with the exponents \((d_{1},d_{2},d_{3})\). Then \(\mathcal{C}\) is plus-one generated if and only if
\[\tau(\mathcal{C})=(d-1)^{2}-d_{1}(d-d_{1}-1)-(d_{3}-d_{2}+1).\]
The number \(\nu(\mathcal{C}):=(d_{3}-d_{2}+1)\) is called the defect of \(\mathcal{C}\). It is clear from the definitions of free and nearly-free curves that if \(\mathcal{C}\) is plus-one generated with \(d_{3}>d_{2}\), then \(\nu(\mathcal{C})\geq 2\). In this paper we focus on the case when \(\nu(\mathcal{C})=2\).
Assume from now on that \(\mathcal{C}\) is a CL-arrangement consisting of \(k\) smooth conics and \(d\) lines. Moreover, we will work with the case where all singularities of \(\mathcal{C}\) are quasi-homogeneous, i.e., for every singular point \(p\in\mathrm{Sing}(\mathcal{C})\) one has \(\tau_{p}=\mu_{p}\), where \(\tau_{p}\) denotes the local Tjurina number and \(\mu_{p}\) denotes the local Milnor number.
In the theory of line arrangements there are natural techniques that allow one to construct new examples of either free or plus-one generated arrangements, namely addition-deletion techniques. More precisely, we have the following result proved by Abe.
**Theorem 2.6**.: _[_1_, Theorem 1.11]_ _Let \(\mathcal{A}\) be a free arrangement of lines in \(\mathbb{P}^{2}_{\mathbb{C}}\). Then:_
1. _For_ \(L\in\mathcal{A}\)_, the subarrangement_ \(\mathcal{A}\setminus\{L\}\) _is either free or POG._
2. _Let_ \(L\) _be a line in_ \(\mathbb{P}^{2}_{\mathbb{C}}\)_. Then the arrangement_ \(\mathcal{A}\cup\{L\}\) _is either free or POG._
In fact, a deletion type result as above holds in arbitrary dimension for hyperplane arrangements, see [1, Theorem 1.4]. However, this is no longer the case for addition, see for instance [1, Example 7.4].
In the world of conic-line arrangements in the plane, it is very natural to wonder whether we can use similar addition-deletion techniques. The main difference is based on the fact that we can add/delete either a conic or a line. After many numerical experiments we heuristically observed that if we have a free conic-line arrangement and we delete a line, then the resulting arrangement is either free or plus-one generated. However, we are not able to prove this result and we hope to come back to this problem as soon as possible. On the other hand, with the addition method, we can get the whole spectrum of possibilities. Let us illustrate this with the following two examples.
**Example 2.7**.: Let us consider the conic-line arrangement \(\mathcal{XR}\subset\mathbb{P}^{2}_{\mathbb{C}}\) given by the following defining polynomial:
\[Q(x,y,z)=(x^{2}+2xy+y^{2}-xz)(x^{2}+xy+2yz-z^{2})(x^{2}+xz+yz)(x^ {2}+xy+z^{2})\cdot\\ (x^{2}+2xy-xz+yz)(x^{2}-y^{2}+xz+2yz)y(x+z)(x+y-z)(x+y+z)(x-z) \bigg{(}x+\frac{1}{2}y\bigg{)}.\]
This arrangement has 9 ordinary sixfold and 12 nodal intersections. It is also worth noticing that all singularities are quasi-homogeneous. Using SINGULAR we can check that the arrangement \(\mathcal{XR}\) is free with the exponents \((d_{1},d_{2})=(4,13)\). Now we perform the addition trick. If we add to \(\mathcal{XR}\) the line \(\ell:x+2y+4z=0\), then the resulting arrangement is plus-one generated with \((d_{1},d_{2},d_{3})=(5,14,17)\). More precisely, adding the line \(\ell\) introduces 18 additional double points to the arrangement.
However, using the same addition trick, we can obtain a free arrangement of conics and lines, and this can be achieved by adding to \(\mathcal{XR}\), for instance, the line \(\ell^{\prime}:z=0\).
**Example 2.8**.: Let us consider the conic-line arrangement \(\mathcal{CL}\subset\mathbb{P}^{2}_{\mathbb{C}}\) given by the following defining polynomial:
\[Q(x,y,z)=(x^{2}+y^{2}-z^{2})(y-z)(x^{2}-z^{2}).\]
This arrangement is known to be free [5, Example 4.14], it has exactly 3 nodes and 3 tacnodes as singularities with \(\tau(\mathcal{CL})=12\). If we add the conic \(C:y^{2}-xz=0\) to \(\mathcal{CL}\), then using SINGULAR we can check that the resulting arrangement is only 4-syzygy.
As we will see in the next section, the addition-deletion techniques are not sufficient in our classification considerations, mainly due to the fact that there are not that many known examples of free CL-arrangements with simple singularities. For example, there are no free conic arrangements with only nodes and tacnodes as singularities [4, Proposition 1.5], but there are POG conic arrangements with nodes and tacnodes. This is the main reason why we are forced to use different techniques to obtain our results. On the
other hand, our combinatorial techniques can be applied more generally, so we hope that they will be useful in further research.
## 3. Plus-one generated conic arrangements with certain ADE singularities
Our aim here is to provide a degree-wise characterization of plus-one generated with some prescribed ADE singularities. We start with arrangements of conics in the plane and our result is motivated by a recent paper due to Dimca, Janasz, and the second author devoted to conic arrangements in the plane admitting nodes and tacnodes [4].
**Theorem 3.1**.: _Let \(\mathcal{C}\subset\mathbb{P}_{\mathbb{C}}^{2}\) be an arrangement of \(k\geq 2\) smooth conics such that they admit \(n_{2}\) nodes, \(t_{2}\) tacnodes, and \(n_{3}\) ordinary triple points. Assume that \(\mathcal{C}\) is plus-one generated with the defect \(\nu(\mathcal{C})=2\), then \(k\in\{2,3,4\}\)._
Proof.: Using Proposition 2.5, if \(\mathcal{C}\) is plus-one generated of degree \(d=2k\) with \(k\geq 2\) and \(\nu(\mathcal{C})=2\), then we have
\[d_{1}^{2}-d_{1}(2k-1)+(2k-1)^{2}=\tau(\mathcal{C})+\nu(\mathcal{C})=n_{2}+3t_ {2}+4n_{3}+2.\]
Recall that we have the following combinatorial count:
\[4\cdot\binom{k}{2}=n_{2}+2t_{2}+3n_{3}. \tag{2}\]
Combining these two equations we get
\[d_{1}^{2}-d_{1}(2k-1)+(2k-1)^{2}=t_{2}+n_{3}+2(k^{2}-k+1).\]
Simple manipulations lead us to
\[d_{1}^{2}-d_{1}(2k-1)+2k^{2}-2k-1-(t_{2}+n_{3})=0.\]
If \(\mathcal{C}\) is plus-one generated, then
\[\Delta_{d_{1}}=(2k-1)^{2}-4\bigg{(}2k^{2}-2k-1-(t_{2}+n_{3})\bigg{)}\geq 0.\]
This gives us
\[t_{2}+n_{3}\geq k^{2}-k-\frac{5}{4}.\]
Observe that
\[4\cdot\binom{k}{2}=2(k^{2}-k)=n_{2}+n_{3}+2(t_{2}+n_{3})\geq n_{2}+n_{3}+2(k^ {2}-k)-\frac{5}{2},\]
and we finally get
\[0\leq n_{2}+n_{3}\leq 2.\]
On the other hand,
\[4\cdot\binom{k}{2}=n_{2}+2t_{2}+3n_{3}\leq 2t_{2}+3(n_{2}+n_{3})\leq 2t_{2}+6,\]
so we have
\[t_{2}\geq k(k-1)-3. \tag{3}\]
Using these combinatorial constraints, we see that for \(k=2\) one has \(t_{2}\geq 0\) and \(n_{2}+n_{3}\leq 2\), and we will return to this case in a moment. Assume now that \(k\geq 3\). Recall that by Miyaoka's result [11], we have that
\[t_{2}\leq\frac{4}{9}k^{2}+\frac{4}{3}k, \tag{4}\]
and we arrive at the following chain of inequalities:
\[k(k-1)-3\leq t_{2}\leq\frac{4}{9}k^{2}+\frac{4}{3}k.\]
This gives us that \(k\leq 5\). Let us consider the case \(k=5\). Using our bound on the number of tacnodes, we obtain
\[t_{2}\geq k(k-1)-3=17.\]
By (4) we see that for \(k=5\) we have \(t_{2}\leq 17\), so from now on we will assume that \(\mathcal{C}\) is plus-one generated such that \(t_{2}=17\). Note that the following weak combinatorics can only occur:
\[(n_{2},t_{2},n_{3})\in\{(6,17,0),(3,17,1),(0,17,2)\},\]
Among these, only the last triple satisfies the discriminant condition \(\Delta_{d_{1}}\geq 0\) (which for \(k=5\) and \(t_{2}=17\) forces \(n_{3}\geq 2\)), so in order to get a plus-one generated example one needs to decide whether there exists an arrangement of \(k=5\) conics with \(t_{2}=17\) and \(n_{3}=2\). By [12, Theorem B], the following Hirzebruch-type inequality holds (when \(k\geq 3\)):
\[8k+n_{2}+\frac{3}{4}n_{3}\geq\frac{5}{2}t_{2}. \tag{5}\]
However, if we plug \((k;n_{2},t_{2},n_{3})=(5;0,17,2)\) into (5) then we get a contradiction, which means that such an arrangement cannot exist.
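The combinatorial part of the argument above can be double-checked by brute force. The following Python sketch (our verification aid, not part of the original proof) enumerates the weak combinatorics allowed by the count (2), the discriminant condition, and the bounds (4) and (5); the list is non-empty only for \(k\in\{2,3,4\}\).

```python
def candidate_weak_combinatorics(k):
    """Necessary conditions from Theorem 3.1: count (2), discriminant >= 0,
    the Miyaoka bound (4) and the Hirzebruch-type bound (5)."""
    total = 2 * k * (k - 1)                       # 4 * C(k, 2), eq. (2)
    out = []
    for t2 in range(total // 2 + 1):
        for n3 in range((total - 2 * t2) // 3 + 1):
            n2 = total - 2 * t2 - 3 * n3          # forced by eq. (2)
            disc = (2 * k - 1) ** 2 - 4 * (2 * k ** 2 - 2 * k - 1 - (t2 + n3))
            if disc < 0:
                continue                          # no real d1 can exist
            if k >= 3 and 9 * t2 > 4 * k ** 2 + 12 * k:
                continue                          # violates Miyaoka (4)
            if k >= 3 and 4 * (8 * k + n2) + 3 * n3 < 10 * t2:
                continue                          # violates Hirzebruch (5)
            out.append((n2, t2, n3))
    return out

for k in range(2, 7):
    print(k, candidate_weak_combinatorics(k))    # empty for k >= 5
```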
To complete our proof, we need to show that for each \(k\in\{2,3,4\}\) we have an example of a plus-one generated arrangement.
1. In this case we should have \(n_{2}+n_{3}\leq 2\) and \(t_{2}\geq 0\). If we assume that \(t_{2}=2\), then the arrangement is nearly-free by [4], so we can exclude this case. The next possible case is \(t_{2}=1\) and \(n_{2}=2\), and in this situation we have \(\tau(\mathcal{C})=5\). We show that this weak combinatorics leads to a plus-one generated example. Let us take \[Q(x,y,z)=(x^{2}+y^{2}-z^{2})\cdot\bigg{(}x^{2}-\frac{13}{10}xz+\frac{36}{10}y^{2}-\frac{23}{10}z^{2}\bigg{)}\] as the defining equation of our arrangement. We can check using SINGULAR that \((d_{1},d_{2},d_{3})=(2,2,3)\), and this implies that \(\mathcal{C}\) is plus-one generated.
2. Using [10, Proposition 5], if we have an arrangement of \(3\) conics with \(5\) tacnodes, then these three conics are projectively equivalent to the three conics given by the equations: \[x^{2}+y^{2}-z^{2}=0,\ \ell^{2}x^{2}+(\ell^{2}+1)y^{2}-2\ell yz=0,\ m^{2}x^{2}+(m^{2}+1)y^{2}-2myz=0,\] where \(\ell,m\in\mathbb{C}\setminus\{0,\pm 1\}\), \(\ell\neq m\), \(\ell m\neq 1\). Let us take \(\ell=2\), \(m=-2\), and denote by \(Q(x,y,z)\) the defining polynomial of these three conics. Then \(\tau(\mathcal{C})=17\), since \(t_{2}=5\) and \(n_{2}=2\), and \((d_{1},d_{2},d_{3})=(3,3,4)\), so \(\mathcal{C}\) is plus-one generated.
3. Using [10, Proposition 7 b)], if we have an arrangement of \(4\) conics with \(11\) tacnodes, then these four conics are projectively equivalent to the conics given by the following equations: \[x^{2}+y^{2}+z^{2}=0,\ (x^{2}/r^{2})+y^{2}-z^{2}=0,\ x^{2}+(r^{2}+1)y^{2}\pm 2ryz=0,\] where \(r\in\mathbb{C}\setminus\{0,\pm 1,\pm\imath\}\). Let us take \(r=2\) and denote by \(Q(x,y,z)\) the defining equation of our arrangement \(\mathcal{C}\). We have \(\tau(\mathcal{C})=35\), since \(t_{2}=11\) and \(n_{2}=2\). Using SINGULAR we can check that \((d_{1},d_{2},d_{3})=(4,4,5)\), which tells us that \(\mathcal{C}\) is plus-one generated.
This completes the proof.
If we allow arrangements of conics and lines, then we can find more examples of plus-one generated arrangements. Let us start with the case when we have only double intersection points.
**Proposition 3.2**.: Let \(\mathcal{CL}\subset\mathbb{P}^{2}_{\mathbb{C}}\) be an arrangement of \(k\geq 1\) conics and \(d\geq 1\) lines. Assume that \(\mathcal{CL}\) has only \(n_{2}\) double intersection points and is plus-one generated with \(d_{3}>d_{2}\). Then \((k,d;n_{2})=(1,2;5)\). In other words, there is exactly one weak combinatorics for conic-line arrangements with only double intersection points that leads to a plus-one generated arrangement.
Proof.: Denote by \(m=2k+d\) the degree of the arrangement. Since \(\mathcal{CL}\) is plus-one generated with the exponents \((d_{1},d_{2})\) and level \(d_{3}\) such that \(d_{3}>d_{2}\), \(d_{1}\leq d_{2}\leq d_{3}\), and \(d_{1}+d_{2}=m\), by [6, Theorem 2.1] one has
\[\frac{m}{2}\geq d_{1}\geq m-2,\]
and this follows from the fact that the log-canonical threshold for nodes is equal to \(1\). This implies that \(3\leq m\leq 4\). If \(m=3\), then we have \(k=1\) and \(d=1\), and an easy inspection shows us that the only possible case is to have \(n_{2}=2\). However, such an arrangement is nearly-free, i.e., \(d_{3}=d_{2}\), so we exclude this case. Let us pass to the case when \(m=4\). It means that we have \(k=1\) and \(d=2\). Using Bézout's theorem, we must have exactly five nodes. Now we are going to give a geometric realization of the weak combinatorics \((k,d;n_{2})=(1,2;5)\). Let us consider the arrangement \(\mathcal{C}\) defined by
the following polynomial:
\[Q(x,y,z)=xy(x^{2}+y^{2}-z^{2}).\]
We have exactly \(n_{2}=5\) nodes, and using SINGULAR we can check that
\[(d_{1},d_{2},d_{3})=(2,2,3),\]
so \(\mathcal{C}\) is plus-one generated.
Now we pass to arrangements of \(k\geq 1\) conics and \(d\geq 1\) lines such that these admit \(n_{2}\) nodes, \(t_{2}\) tacnodes, and \(n_{3}\) ordinary triple points. We will need the following general result.
**Proposition 3.3**.: Let \(C\,:f=0\) be a \(3\)-syzygy curve of degree \(m\) admitting only nodes, tacnodes, and ordinary triple points. Then
\[\frac{m}{2}\geq d_{1}\geq\frac{2}{3}m-2.\]
In particular, we have that \(m\leq 12\).
Proof.: It follows from [5, Proposition 4.7].
Using this result, we can provide a degree-wise classification of certain plus-one generated conic-line arrangements.
**Theorem 3.4**.: _Let \(\mathcal{CL}\subset\mathbb{P}_{\mathbb{C}}^{2}\) be an arrangement of \(k\geq 1\) conics and \(d\geq 1\) lines admitting only \(n_{2}\) nodes, \(t_{2}\) tacnodes, and \(n_{3}\) ordinary triple points. Assume furthermore that \(\mathcal{CL}\) is plus-one generated with \(\nu(\mathcal{CL})=2\). Then \(m:=2k+d\in\{4,5,6,7,8,9,10\}\), possibly except the cases \(m=9\) or \(m=10\)._
Proof.: Using Proposition 3.3 and Proposition 3.2 we see that \(m\in\{4,...,12\}\). We start with a degree-wise characterization. For \(m=4\) we have a plus-one generated arrangement described in Proposition 3.2, so we are going to present constructions of plus-one generated conic-line arrangements in degrees \(m\in\{5,6,7,8\}\) with types of singularities prescribed above.
1. Let \(\mathcal{CL}_{5}\) be defined by the following polynomial: \[Q(x,y,z)=xy(x-y)(x^{2}+y^{2}-z^{2}).\] Here we have \(n_{3}=1\) and \(n_{2}=6\), so \(\tau(\mathcal{CL}_{5})=10\). Then we can check directly, using SINGULAR, that \((d_{1},d_{2},d_{3})=(2,3,4)\), so \(\mathcal{CL}_{5}\) is plus-one generated.
2. Let \(\mathcal{CL}_{6}\) be defined by the following polynomial: \[Q(x,y,z)=x(y-x)(y+x)(x^{2}+y^{2}-z^{2})\bigg{(}y-\frac{\sqrt{2}}{2}z\bigg{)}.\] Here we have \(n_{3}=3\) and \(n_{2}=5\), so \(\tau(\mathcal{CL}_{6})=17\). Then we can check directly, using SINGULAR, that \((d_{1},d_{2},d_{3})=(3,3,4)\), so \(\mathcal{CL}_{6}\) is plus-one generated.
3. Let \(\mathcal{CL}_{7}\) be defined by the following polynomial: \[Q(x,y,z)=x(y-x)(y+x)(x^{2}+y^{2}-z^{2})\bigg{(}y-\frac{\sqrt{2}}{2}z\bigg{)}\bigg{(}y+\frac{\sqrt{2}}{2}z\bigg{)}.\] Here we have \(n_{3}=5\) and \(n_{2}=5\), so \(\tau(\mathcal{CL}_{7})=25\). Then we can check directly, using SINGULAR, that \((d_{1},d_{2},d_{3})=(3,4,5)\), so \(\mathcal{CL}_{7}\) is plus-one generated.
4. Let us consider the arrangement \(\mathcal{CL}_{8}\) that is given by the following defining polynomial: \[Q(x,y,z)=(x-y)(x+y)(x-z)(x+z)(y-z)(y+z)(x^{2}+y^{2}-z^{2}).\] It is easy to see that we have \(n_{2}=7\), \(t_{2}=4\), and \(n_{3}=4\), which gives us \(\tau(\mathcal{CL}_{8})=35\). Using SINGULAR we can check that \((d_{1},d_{2},d_{3})=(4,4,5)\), so \(\mathcal{CL}_{8}\) is plus-one generated.
Now we are going to exclude the existence of arrangements with \(m\in\{11,12\}\).
* Using Proposition 3.3, we see that \[\frac{11}{2}\geq d_{1}\geq\frac{22}{3}-2=\frac{16}{3},\] and since there is no integer in the interval \([16/3,\,11/2]\), we arrive at a contradiction.
* We are going to use two important combinatorial constraints. First of all, we have the naive combinatorial count: (6) \[\binom{12}{2}-k=\binom{m}{2}-k=n_{2}+2t_{2}+3n_{3}.\] Next, we can use the following Hirzebruch-type inequality [8, Proposition 4.4]: (7) \[8k+n_{2}+n_{3}\geq 8k+n_{2}+\frac{3}{4}n_{3}\geq d+\frac{5}{2}t_{2}.\] By assumption our arrangements are plus-one generated, so using Proposition 3.3 we see that \[6=\frac{m}{2}\geq d_{1}\geq\frac{2}{3}\cdot 12-2=6,\] so we arrive at the case \(d_{1}=d_{2}=6\) and \(d_{3}>6\). By Proposition 2.5, we have the following: (8) \[89=d_{1}^{2}-d_{1}(m-1)+(m-1)^{2}-\nu(\mathcal{CL})=n_{2}+3t_{2}+4n_{3}.\] Combining this with the (naive) combinatorial count, we arrive at (9) \[t_{2}+n_{3}=23+k,\quad n_{2}+n_{3}=20-3k.\] Then (10) \[d+\frac{5}{2}t_{2}\leq 8k+n_{2}+n_{3}=5k+20,\]
so we have found the following upper-bound on the number of tacnodes:
\[t_{2}\leq\frac{2}{5}\bigg{(}5k+20\bigg{)}-\frac{2}{5}d=2k+8-\frac{2}{5}d.\]
Using the above constraints, we have the following possibilities:
\[\begin{array}{|c|c|c|c|c|}\hline k&d&n_{3}\leq&t_{2}\leq&n_{3}+t_{2}\leq\\ \hline\hline 1&10&17&6&23\\ 2&8&14&8&22\\ 3&6&11&11&22\\ 4&4&8&14&22\\ 5&2&5&17&22\\ \hline\end{array}\]
Since \(t_{2}+n_{3}=23+k\geq 24\) exceeds the upper bounds \(n_{3}+t_{2}\leq 23\) listed in the table, we arrive at a contradiction; these bounds are double-checked in the short sketch below.
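The bounds in the table, and the resulting contradiction, can be reproduced with the following short sketch (a verification aid only):

```python
from fractions import Fraction

m = 12
for k in range(1, 6):                        # k conics, d = m - 2k >= 1 lines
    d = m - 2 * k
    n3_max = 20 - 3 * k                      # n3 <= n2 + n3, from (9)
    t2_max = 2 * k + 8 - Fraction(2 * d, 5)  # upper bound on t2 derived above
    required = 23 + k                        # t2 + n3 forced by (9)
    print(k, d, n3_max, int(t2_max), n3_max + int(t2_max), required)
    assert n3_max + int(t2_max) < required   # the contradiction, for every k
```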
In the case of \(m\in\{9,\,10\}\), our methods are not sufficient to decide on the existence/non-existence of such conic-line arrangements since, as usual, such boundary cases are very difficult to handle. For instance, if we assume that \(k=1\) and \(d=7\), then one has to decide whether the following weak combinatorics are realizable over the complex numbers:
\[(n_{2},t_{2},n_{3})\in\bigg{\{}(3,1,10),(4,2,9),(5,3,8),(6,4,7),(7,5,6)\bigg{\}}.\]
Here we show how to exclude the existence of the weak combinatorics \((k,d;n_{2},t_{2},n_{3})=(1,7;7,5,6)\). In order to do so, we need the following general result.
**Theorem 3.5**.: _Let \(C\subset\mathbb{P}^{2}_{\mathbb{C}}\) be a reduced curve of degree \(m\geq 9\) admitting \(n_{2}\) nodes, \(t_{2}\) tacnodes, and \(n_{3}\) ordinary triple points, then for a real number \(\alpha\in[1/3,2/3]\) one has_
\[n_{2}\cdot(6\alpha-3\alpha^{2})+t_{2}\cdot\bigg{(}-6\alpha^{2}+15\alpha-\frac {3}{8}\bigg{)}+n_{3}\cdot\bigg{(}-\frac{27}{4}\alpha^{2}+18\alpha\bigg{)}\leq (3\alpha-\alpha^{2})m^{2}-3\alpha m. \tag{11}\]
Proof.: We are going to use directly an orbifold version of the Bogomolov-Miyaoka inequality. We will work with the pair \(\bigg{(}\mathbb{P}^{2}_{\mathbb{C}},\alpha D\bigg{)}\), which must be log-canonical and effective. In order to be effective, one requires that \(\alpha\geq\frac{3}{\deg(C)}=\frac{3}{m}\), and in order to be log-canonical, \(\alpha\) should be less than or equal to the minimum of the log-canonical thresholds of our singular points, which means that \(\alpha\leq\min\bigg{\{}1,\frac{3}{4},\frac{2}{3}\bigg{\}}\). Summing up, based on the first part of our discussion, let \(\alpha\in[3/m,2/3]\). We are going to use Langer's inequality proved in [9], namely
\[\sum_{p\in\text{Sing}(C)}3\bigg{(}\alpha\bigg{(}\mu_{p}-1\bigg{)}+1-e_{orb} \bigg{(}p,\mathbb{P}^{2}_{\mathbb{C}},\alpha D\bigg{)}\bigg{)}\leq(3\alpha- \alpha^{2})m^{2}-3\alpha m, \tag{12}\]
where \(\mu_{p}\) is the local Milnor number of \(p\in\text{Sing}(C)\), and \(e_{\text{orb}}\bigg{(}p,\mathbb{P}^{2}_{\mathbb{C}},\alpha D\bigg{)}\) denotes the local orbifold Euler number of \(p\in\text{Sing}(C)\). In the case of our selection of singularities, we have the following values: for a node, \(\mu_{p}=1\) and \(e_{\text{orb}}=(1-\alpha)^{2}\); for a tacnode, \(\mu_{p}=3\) and \(e_{\text{orb}}=2\left(\alpha-\frac{3}{4}\right)^{2}\); for an ordinary triple point, \(\mu_{p}=4\) and \(e_{\text{orb}}=\left(1-\frac{3}{2}\alpha\right)^{2}\).
From now on we assume that \(\alpha\in[1/3,2/3]\); then our inequality follows from plugging the collected data above into (12).
**Corollary 3.6**.: _There does not exist a conic-line arrangement \(\mathcal{CL}\) in \(\mathbb{P}^{2}_{\mathbb{C}}\) having the weak combinatorics \((k,d;n_{2},t_{2},n_{3})=(1,7;7,5,6)\)._
Proof.: It follows from Theorem 3.5, namely we can take \(\alpha=\frac{4}{10}\) and then we can check that (11) does not hold for \((k,d;n_{2},t_{2},n_{3})=(1,7;7,5,6)\).
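The evaluation behind this proof can be reproduced exactly with rational arithmetic; the sketch below evaluates both sides of (11) at \(\alpha=2/5\) for the given weak combinatorics:

```python
from fractions import Fraction

a = Fraction(2, 5)                        # alpha = 4/10, inside [1/3, 2/3]
m, n2, t2, n3 = 9, 7, 5, 6                # degree and weak combinatorics
lhs = (n2 * (6 * a - 3 * a ** 2)
       + t2 * (-6 * a ** 2 + 15 * a - Fraction(3, 8))
       + n3 * (-Fraction(27, 4) * a ** 2 + 18 * a))
rhs = (3 * a - a ** 2) * m ** 2 - 3 * a * m
print(lhs, rhs, lhs <= rhs)               # 14697/200 > 1836/25, so (11) fails
```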
## Acknowledgments
We would like to thank Xavier Roulleau for sharing his example of a conic-line arrangement that was used in Example 2.7 and for his help with symbolic computations in MAGMA.
Piotr Pokora was partially supported by The Excellent Small Working Groups Programme **DNWZ.711/IDUB/ESWG/2023/01/00002** at the Pedagogical University of Cracow.
Anca Măcinic was partially supported by a grant of the Romanian Ministry of Education and Research, CNCS - UEFISCDI, project number **PN-III-P4-ID-PCE-2020-2798**, within PNCDI III.
|
2309.13881 | Skip-Connected Neural Networks with Layout Graphs for Floor Plan
Auto-Generation | With the advent of AI and computer vision techniques, the quest for automated
and efficient floor plan designs has gained momentum. This paper presents a
novel approach using skip-connected neural networks integrated with layout
graphs. The skip-connected layers capture multi-scale floor plan information,
and the encoder-decoder networks with GNN facilitate pixel-level
probability-based generation. Validated on the MSD dataset, our approach
achieved a 93.9 mIoU score in the 1st CVAAD workshop challenge. Code and
pre-trained models are publicly available at
https://github.com/yuntaeJ/SkipNet-FloorPlanGe. | Yuntae Jeon, Dai Quoc Tran, Seunghee Park | 2023-09-25T05:20:57Z | http://arxiv.org/abs/2309.13881v2 | # Skip-Connected Neural Networks with Layout Graphs for Floor Plan Auto-Generation
###### Abstract
With the advent of AI and computer vision techniques, the quest for automated and efficient floor plan designs has gained momentum. This paper presents a novel approach using skip-connected neural networks integrated with layout graphs. The skip-connected layers capture multi-scale floor plan information, and the encoder-decoder networks with GNN facilitate pixel-level probability-based generation. Validated on the MSD dataset, our approach achieved a 93.9 mIoU score in the 1st CVAAD workshop challenge. Code and pre-trained models are publicly available at [https://github.com/yuntaeJ/SkipNet-FloorPlanGen](https://github.com/yuntaeJ/SkipNet-FloorPlanGen).
## 1 Introduction
Floor Plan auto-generation refers to the use of computational algorithms and tools to automatically design and optimize the spatial layout of a building or structure. Traditional floor plan design often requires substantial time, expertise, and manual iteration to balance both functional needs and aesthetic considerations. The auto-generation of floor plans offers a solution to this challenge by providing rapid, objective-driven designs that can maximize space utilization, enhance occupant comfort, and reduce design overhead.
In recent years, numerous studies have been conducted on floor plan auto-generation based on computer vision and deep learning. RPLAN [2] suggests encoder-decoder networks for locating rooms and constructs an 80k floor-plan dataset from real residential buildings. Graph2Plan [1] suggests graph neural networks (GNN) and convolutional neural networks (CNN) for graph-based floor plan generation using the RPLAN dataset. There is also a GAN-based study [3] that takes a bubble diagram as input. However, there are still challenges that are hard to solve, such as: **1) Scalability Issue**: Most recent studies have been limited by exclusively using the RPLAN dataset, which is comprised of residential floor plans. This poses a limitation when attempting to apply them to buildings with different purposes, such as office buildings, and also proves challenging for larger-scale buildings. **2) Graph Utilization Issue**: In boundary-based approaches like Graph2Plan, nodes in the graph can only be used if they are placed correctly inside the boundary. On the other hand, studies utilizing the graph as a bubble diagram offer too much freedom, rendering the use of boundaries as input infeasible.
We suggest encoder-decoder networks with skip connections for floor plan auto-generation. Our model takes as input both a boundary image containing exterior information and a graph resembling a bubble diagram, as shown in Fig. 1. We tested on the Modified Swiss Dwellings (MSD) dataset [4] provided by the 1st Computer Vision Aided Architectural Design (CVAAD) workshop at ICCV 2023. Our main contributions can be summarized as follows:
1. We utilized skip-connected layers to better capture floor plan information at various scales and validated this approach on the MSD dataset, which contains a diverse range of scales.
2. We inferred bubble diagram-style graphs using GNN and concatenated the acquired graph features prior to the upsampling phase, enabling floor plan generation based on pixel-level probabilities.
Figure 1: **Visualization** of floor plan auto-generation. The input is a struct (boundary info) and a graph (room types and connections), and the output is a generated floor plan called full.
## 2 Method
### Boundary Image Pre-Processing
Our pre-processing of the boundary image begins by applying Mobile-SAM [5], a Segment Anything Model for mobile devices. We generate segmentation masks and keep the largest one to obtain the exterior part of the building structure. After that, we construct a processed image composed of three channels: 'in-wall-out', taking the value 1 for the interior, 0.5 for the boundary, and 0 for the exterior; 'in-out', which excludes the wall information from the previous channel; and 'raw-boundary'. This structure is inspired by the RPLAN [2] dataset.
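As a rough sketch of this channel construction (the mask names `interior` and `wall` are our placeholders, not identifiers from the released code):

```python
import numpy as np

def build_boundary_channels(interior: np.ndarray, wall: np.ndarray) -> np.ndarray:
    """Stack the three input channels described above.

    `interior` and `wall` are hypothetical boolean masks of shape (H, W):
    `interior` marks pixels inside the building (e.g. derived from the
    largest Mobile-SAM mask) and `wall` marks boundary-wall pixels.
    """
    in_wall_out = np.where(wall, 0.5, np.where(interior, 1.0, 0.0))
    in_out = interior.astype(np.float32)            # wall info removed
    raw_boundary = wall.astype(np.float32)
    return np.stack([in_wall_out, in_out, raw_boundary], axis=0)  # (3, H, W)
```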
### Skip-Connected Neural Networks
Our model employs a skip-connected architecture designed to preserve spatial details across various scales. The architecture comprises two central components: the encoder and the decoder, both supplemented with skip connections to ensure information flow across layers. The encoder component plays a role in extracting features from the input boundary image. Through a series of convolutional layers, it progressively down-samples the input while concurrently amplifying its feature dimensionality. This process enables the network to capture intricate patterns and semantics from the image at various scales. However, as the spatial dimensions are reduced, the risk of losing granular details increases.
The decoder acts as the counterbalance to the encoder. Tasked with the up-sampling of the condensed feature maps, the decoder employs skip-connections that bridge layers together. These connections reintroduce the lost spatial details from the encoding phase by directly linking the outputs of the encoder's layers to the decoder. In a strategic enhancement, our design also fuses the resized input boundary image at each decoding step. This novel integration ensures the generated floor plans are not just detailed but also strictly adhere to the input boundary constraints, ensuring the fidelity and accuracy of the generated outputs.
The combined effect of this encoder-decoder architecture, when fortified by the skip-connections, results in a more accurate and detail-preserving output. The network is equipped to understand and maintain the input boundary constraints efficiently across different scales, leading to enhanced consistency and fidelity in the generated floor plans.
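To make the description concrete, a minimal PyTorch sketch of such a skip-connected encoder-decoder is given below; the widths, depth, and number of room classes are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin: int, cout: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class SkipEncoderDecoder(nn.Module):
    """U-Net-style sketch: skip connections, plus the resized 3-channel
    boundary image re-fused at every decoding step."""

    def __init__(self, in_ch: int = 3, num_classes: int = 10, widths=(32, 64, 128)):
        super().__init__()
        self.enc = nn.ModuleList()
        c = in_ch
        for w in widths:                              # downsampling path
            self.enc.append(conv_block(c, w))
            c = w
        self.dec = nn.ModuleList()
        for w in reversed(widths[:-1]):               # upsampling path
            # inputs: upsampled features + skip features + resized boundary
            self.dec.append(conv_block(c + w + in_ch, w))
            c = w
        self.head = nn.Conv2d(c, num_classes, 1)      # per-pixel room logits

    def forward(self, boundary: torch.Tensor) -> torch.Tensor:
        skips, x = [], boundary                       # boundary: (B, 3, H, W)
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < len(self.enc) - 1:
                skips.append(x)
                x = F.max_pool2d(x, 2)
        for block, skip in zip(self.dec, reversed(skips)):
            x = F.interpolate(x, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            b = F.interpolate(boundary, size=skip.shape[-2:], mode="bilinear", align_corners=False)
            x = block(torch.cat([x, skip, b], dim=1))
        return self.head(x)

# usage sketch: logits = SkipEncoderDecoder()(torch.randn(1, 3, 128, 128))
```

Re-feeding the resized boundary image at every decoding step is the design choice that keeps the generated rooms anchored to the input footprint.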
### Graph Neural Networks
We capture layout graph constraints using a GNN to ensure functionally feasible floor plans. We employ GCNConv layers for node representation learning, refining and aggregating the features to produce a 2D feature map. These graph features are then concatenated with the deepest outputs of the encoder, intertwining spatial details with layout graph constraints. As this merged data proceeds through the decoding process, the model seamlessly integrates both the spatial and topological information, yielding a floor plan that effectively combines visual precision with architectural layout constraints.
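Continuing the sketch above, the graph branch can be written with `torch_geometric`; how the released code maps the pooled node features to a 2D map is our assumption.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class GraphBranch(nn.Module):
    """GCNConv layers refine room-node features; the pooled graph vector is
    broadcast to a 2D map matching the encoder bottleneck resolution."""

    def __init__(self, node_dim: int, hidden: int = 64, out_ch: int = 32):
        super().__init__()
        self.gcn1 = GCNConv(node_dim, hidden)
        self.gcn2 = GCNConv(hidden, hidden)
        self.proj = nn.Linear(hidden, out_ch)

    def forward(self, x, edge_index, batch, spatial_size):
        h = torch.relu(self.gcn1(x, edge_index))      # message passing, round 1
        h = torch.relu(self.gcn2(h, edge_index))      # message passing, round 2
        g = self.proj(global_mean_pool(h, batch))     # (B, out_ch) graph summary
        H, W = spatial_size
        return g[:, :, None, None].expand(-1, -1, H, W)   # (B, out_ch, H, W)

# fusion at the deepest encoder output (shapes illustrative):
#   bottleneck: (B, 128, h, w) from the encoder sketch above
#   fused = torch.cat([bottleneck, branch(x, edge_index, batch, (h, w))], dim=1)
```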
## 3 Results & Discussion
The 1st CVAAD workshop at ICCV 2023 provided the MSD dataset [4], which includes boundary images, layout graphs, and ground-truth floor plans of single- as well as multi-unit building complexes across Switzerland, with 4167 floor plans for training and 1390 for testing. We evaluate our model using Intersection over Union (IoU), which calculates the average intersection over union of predicted and ground truth segments across all classes. The training and inference processes were conducted on one NVIDIA A6000 GPU with PyTorch 2.0.0.
Figure 2: **Architecture** of our proposed SkipNet-FloorPlanGen
### Quantitative & Qualitative Results
Table 1 displays the competition leaderboard, demonstrating that the encoder-decoder model, enhanced with skip-connections and concatenation with resized boundary images, is a robust method. For qualitative evaluation, we separated a validation dataset from the train set; Fig. 3 illustrates the visualization results of our method on this validation set.
### Discussion
This paper presents a novel approach using skip-connected neural networks integrated with layout graphs. The skip-connected layers capture multi-scale floor plan information, and the encoder-decoder networks with GNN facilitate pixel-level probability-based generation with layout constraints. Our proposed method has been evaluated on the MSD dataset [4] of the 1st CVAAD workshop at ICCV 2023 and demonstrated its robustness.
In the future, we will focus on transforming boundary images into graph diagrams or vectorized forms for enhanced deep learning applications. This transition could mitigate the limitations of pixel-based boundary representations at scale. Additionally, we aim to construct hierarchical or probabilistic graphs that consider inter-room characteristics in layout graphs, aiming to pioneer a novel approach to handling spatial representations for more robust and scalable model architectures.
|
2309.13659 | A Novel Quantum Visual Secret Sharing Scheme | Inspired by Naor et al.'s visual secret sharing (VSS) scheme, a novel n out
of n quantum visual secret sharing (QVSS) scheme is proposed, which consists of
two phases: sharing process and recovering process. In the first process, the
color information of each pixel from the original secret image is encoded into
an n-qubit superposition state by using the strategy of quantum expansion
instead of classical pixel expansion, and then these n qubits are distributed
as shares to n participants, respectively. During the recovering process, all
participants cooperate to collect these n shares of each pixel together, then
perform the corresponding measurement on them, and execute the n-qubit XOR
operation to recover each pixel of the secret image. The proposed scheme has
the advantage of single-pixel parallel processing that is not available in the
existing analogous quantum schemes and perfectly solves the problem that in the
classic VSS schemes the recovered image has the loss in resolution. Moreover,
its experiment implementation with the IBM Q is conducted to demonstrate the
practical feasibility. | Wenjie Liu, Yinsong Xu, Maojun Zhang, Junxiu Chen, Ching-Nung Yang | 2023-09-24T14:55:44Z | http://arxiv.org/abs/2309.13659v1 | # A novel quantum visual secret sharing scheme
###### Abstract
Inspired by Naor _et al._'s visual secret sharing (VSS) scheme, a novel \(n\) out of \(n\) quantum visual secret sharing (QVSS) scheme is proposed, which consists of two phases: sharing process and recovering process. In the first process, the color information of each pixel from the original secret image is encoded into an \(n\)-qubit superposition state by using the strategy of quantum expansion instead of classical pixel expansion, and then these \(n\) qubits are distributed as shares to \(n\) participants, respectively. During the recovering process, all participants cooperate to collect these \(n\) shares of each pixel together, then perform the corresponding measurement on them, and execute the \(n\)-qubit _XOR_ operation to recover each pixel of the secret image. The proposed scheme has the advantage of single-pixel parallel processing that is not available in the existing analogous quantum schemes, and perfectly solves the problem that in the classic VSS schemes the recovered image suffers a loss of resolution. Moreover, its experimental implementation with IBM Q is conducted to demonstrate the practical feasibility.
**Keywords:**\(n\)-qubit superposition state, \(n\)-qubit _XOR_ operation, quantum expansion, quantum visual secret sharing, single-pixel parallel processing, visual cryptography
## 1 Introduction
In order to prevent the secret from being too concentrated and to achieve the purpose of spreading risk and tolerating intrusion, the idea of secret sharing [1, 2] has been proposed. Secret sharing refers to methods for distributing a secret amongst a group of participants, each of whom receives a share of the secret. The secret can be reconstructed only when a sufficient number of shares are combined together, but individual shares are of no use on their own. This
method provides an effective way for the security protection and fair use of secret keys. As an important branch of modern cryptography, secret sharing plays an important role in the preservation, transmission and utilization of data.
Inspired by secret sharing, many experts and scholars have devoted themselves to the study of visual cryptography, i.e., how to utilize the idea of secret sharing to solve the problem of image encryption and decryption. In 1995, the first visual secret sharing (VSS) scheme based on pixel expansion was proposed by Naor _et al._[3], where a binary secret image is shared to generate a plurality of noise-like shared images by dividing each of the secret pixels into pixel blocks composed of \(m\) sub-pixels, so that each shared image is \(m\) times the size of the secret image. Due to the presence of pixel expansion, the visual quality of the recovered secret image may not be ideal. Since then, most of the research has been carried out around reducing pixel expansion [4; 5; 6; 7; 8].
With the continuous development of quantum computing technology, many researchers have tried to apply quantum mechanics in many fields, such as quantum key distribution (QKD) [9; 10], quantum key agreement (QKA) [11; 12], quantum secret sharing (QSS) [13; 14], quantum secure direct communication (QSDC) [15; 16], quantum remote state preparation (QRSP) [17; 18], quantum steganography (QS) [19; 20; 21], delegating quantum computation (DQC) [22; 23], quantum private query (QPQ) [24; 25] and even quantum machine learning [26; 27; 28]. Among them, QSS is an important research area [13; 14; 29; 30; 31; 32; 33; 34], and it can be viewed as the generalization of classical secret sharing to the setting of quantum information. In 1999, Hillery _et al._ proposed the first QSS scheme [13] by using the Greenberger-Horne-Zeilinger (GHZ) state. In the scheme, a GHZ triplet is split and each of the other two participants gets a particle. Both participants are allowed to measure their particles in either the \(x\) or \(y\) direction, and their results are combined to determine the dealer's measurement result. This allows the dealer to establish a joint key with the two participants, which the dealer can then use to send messages. In the same year, a threshold QSS scheme [14] was proposed by adopting quantum error correcting code theory. Since then, various kinds of QSS schemes have constantly been proposed [29; 30; 31; 32; 33; 34]. All these works can be divided into three main categories: QSS of classical messages [29; 30], QSS of quantum information [31; 32] where the secret is an arbitrary unknown state, and QSS of both [33; 34].
As far as we know, studies on how to use quantum mechanisms for VSS are rare [35; 36]. In 2014, Song _et al._ proposed a flexible (\(2^{k}\), \(2^{k}\)) quantum image secret sharing (QISS) scheme [35], where the whole secret image is repeatedly encoded into a quantum state, and then split into sub-images as shares with multiple measurement operations. Although the size of each share (a part of the original quantum image) becomes smaller, it requires a large number of multi-qubit superposition states to produce shares by measurement, and also loses the characteristic of single-pixel parallel processing (i.e., one pixel as a unit for parallel processing) in VSS. In order to solve the problem, we propose a novel \(n\) out of \(n\) quantum visual secret sharing (QVSS) scheme based on Naor _et al._'s scheme. In our scheme, the color information of each pixel from the original secret image is encoded into an \(n\)-qubit quantum superposition state, so the advantage
of single-pixel parallel processing can be preserved. Besides, we use the quantum expansion strategy to perfectly solve the loss-of-resolution problem of classical VSS schemes, i.e., the recovered image is the same as the original secret image.
The rest of this paper is organized as follows. Sect. 2 provides some preliminary knowledge about quantum computation and Naor _et al._'s VSS scheme. The proposed (\(n\), \(n\)) QVSS scheme, which consists of the sharing process and the recovering process, is explicated in Sect. 3, and an example, the (3, 3) QVSS scheme, is illustrated in Sect. 4. Then, the correctness is verified in Sect. 5. We compare our scheme with other analogous schemes in Sect. 6. Moreover, its experimental implementation with IBM Q is presented in Sect. 7 to demonstrate the practical feasibility. Finally, Sect. 8 gives the discussion and conclusion of this paper.
## 2 Preliminaries
### Quantum computation
As we know, the bit is the fundamental concept of classical information, and has a state, either \(0\) or \(1\). Similar to the classical bit, the quantum bit (called qubit) [37] is the basic unit of quantum information; besides the two basis states \(\left|0\right\rangle\) and \(\left|1\right\rangle\), it can also be in a linear combination of them, often referred to as a quantum superposition state,
\[\left|\varphi\right\rangle=\alpha\left|0\right\rangle+\beta\left|1\right\rangle, \tag{1}\]
where \(\alpha\), \(\beta\) are complex numbers, and \(\left|\alpha\right|^{2}+\left|\beta\right|^{2}=1\). \(\left|0\right\rangle\) and \(\left|1\right\rangle\) can be represented by vectors,
\[\left|0\right\rangle=\left[\begin{array}{c}1\\ 0\end{array}\right],\ \ \ \ \ \left|1\right\rangle=\left[\begin{array}{c}0\\ 1\end{array}\right]. \tag{2}\]
Then, \(\left|\varphi\right\rangle\) can be expressed in vector form \(\left|\varphi\right\rangle=\left(\begin{smallmatrix}\alpha\\ \beta\end{smallmatrix}\right)\).
Analogous to the way that a classical computer is built from an electrical circuit containing wires and logic gates, a quantum computer is built from a quantum circuit containing wires and elementary quantum gates to carry around and manipulate the quantum information. Single-qubit gates, such as _Pauli-X_, _Pauli-Z_, and \(H\) (_Hadamard_), are the simplest form of quantum gates, and they can be described as \(2\times 2\) unitary matrices as below,
\[X=\left[\begin{array}{cc}0&1\\ 1&0\end{array}\right],\ \ Z=\left[\begin{array}{cc}1&0\\ 0&-1\end{array}\right],\ \ H=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}1&1\\ 1&-1\end{array}\right]. \tag{3}\]
Multi-qubit gates are also important units in a quantum circuit. The prototypical multi-qubit quantum logic gate is the _controlled-NOT_ (i.e., _CNOT_) gate (shown in Fig. 1), which has two input qubits, known as the control qubit and the target qubit, respectively. If the control qubit is set to \(0\), then the target qubit is left alone. If the control qubit is set to \(1\), then the target qubit is flipped.
Besides the _CNOT_ gate, the _Toffoli_ gate is another frequently used multi-qubit gate. As illustrated in Fig. 2, the _Toffoli_ gate has three input bits and three output bits: two of the bits are control bits that are unaffected by the action of the _Toffoli_ gate; the third bit is a target bit that is flipped if both control bits are set to 1, and otherwise is left alone.
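As an executable illustration, the gate actions described above can be checked by exact statevector simulation; the following Qiskit sketch (ours, not from the paper) verifies the _Toffoli_ truth table:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def apply_toffoli(bits):
    """bits = (control1, control2, target); returns the output distribution."""
    qc = QuantumCircuit(3)
    for i, b in enumerate(bits):
        if b:
            qc.x(i)              # prepare the computational-basis input
    qc.ccx(0, 1, 2)              # Toffoli: flip qubit 2 iff qubits 0 and 1 are 1
    return Statevector.from_instruction(qc).probabilities_dict()

for bits in [(0, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]:
    # keys use Qiskit's little-endian order |q2 q1 q0>; e.g. (1,1,0) -> {'111': 1.0}
    print(bits, apply_toffoli(bits))
```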
### Naor et al.'s Visual secret sharing scheme
The first visual secret sharing scheme was proposed by Naor _et al._[3], where the secret image consists of a collection of black and white pixels. As shown in Fig. 3, each original pixel generates \(n\) shares, and each share has a collection of \(m\) black and white subpixels (the process is named pixel expansion). These \(n\times m\) subpixels can be described by an \(n\times m\) Boolean matrix \(S=[s_{ij}]\), where \(i\in\{1,2,\cdots,n\}\), \(j\in\{1,2,\cdots,m\}\), and \(s_{ij}\in\{0,1\}\). If \(s_{ij}=1\), the \(j\)th subpixel in the \(i\)th share is black; otherwise, it is white.
In the (\(k\), \(n\)) VSS scheme, it mainly consists of two matrix sets \(C_{0}\) and \(C_{1}\) which are composed of \(n\times m\) Boolean matrices \(S\). To share a white pixel, the dealer randomly chooses one \(S\) in \(C_{0}\), and to share a black pixel, the dealer randomly chooses one \(S\) in \(C_{1}\). The chosen matrix \(S\) defines the colour of the \(m\) subpixels in each one of the \(n\) shares. The scheme is considered valid if the following two conditions are met:
(1) \(\forall S\in\ C_{0}\), \(\{i_{1},i_{2},\cdots,i_{p}\}\subseteq\{1,2,\cdots,n\}\,(p\geqslant k)\), the vector \(V=S[i_{1}]+S[i_{2}]+\cdots+S[i_{p}]\) satisfies \(H(V)\leqslant d-\alpha m\), where "+" means the "_OR_" logical operation, \(S[i]\) indicates the \(i\)th row of \(S\), \(d\) and \(\alpha\) are the threshold and relative difference
Figure 1: Matrix representation and quantum circuit of _CNOT_ gate.
Figure 2: Truth table and quantum circuit of _Toffoli_ gate.
respectively, and \(H(V)\) represents the Hamming weight of \(V\). \(\forall\,S\,\in\,C_{1}\), \(H(V)\geqslant d\).
(2) \(\forall\,\{i_{1},i_{2},\cdots,i_{q}\}\subseteq\{1,2,\cdots,n\}\,(q<k)\), the two collections of \(q\times m\) matrices \(D_{t}\) for \(t\in\{0,1\}\) obtained by restricting each \(n\times m\) matrix in \(C_{t}\) (\(t\in\{0,1\}\)) to rows \(i_{1},i_{2},\cdots,i_{q}\) are indistinguishable in the sense that they contain the same matrices with the same frequencies.
When the secret image needs to be recovered, any \(k\) participants just print their shares on transparencies and stack these transparencies to decrypt the secret information with the human visual system, but any \(k-1\) of them gain no information about the secret.
As mentioned above, the pixel is treated as a separate basic unit throughout the scheme, so this single-pixel based VSS scheme is well suited to parallel processing of images. In order to make an individual participant get no information, most VSS schemes use the strategy of pixel expansion to confuse the information of subpixels. However, it may cause a loss in resolution from the original image to the recovered one, i.e., the size of the shared image is \(m\) times that of the original secret image. Therefore, some researchers tried to reduce pixel expansion [4, 5], and even to implement non-expansion [6, 7, 8].
## 3 A novel (\(n\), \(n\)) quantum visual secret sharing scheme
Suppose the dealer Alice wants to share her secret image, which consists of \(s\) black or white pixels, to \(n\) participants \(\{Bob_{1},\,Bob_{2},\cdots,Bob_{n}\}\), where the black and white pixels are represented by \(|1\rangle\) and \(|0\rangle\), respectively. The secret image can be recovered by all \(n\) participants together, but not by fewer than \(n\) participants. The specific (\(n\), \(n\)) QVSS scheme is mainly composed of the sharing process and the recovering process.
Figure 3: The pixel expansion process in Naor _et al._’s VSS scheme
### Sharing process
During the sharing process of the \((n,\,n)\) QVSS scheme, the most important operation is to encode the color information of each pixel from the secret image, e.g., the \(l\)th pixel, into an \(n\)-qubit superposition state \(\left|C_{b}\right\rangle_{l}\),
\[\left|C_{b}\right\rangle_{l}=\sum_{i\,:\,\oplus_{j=1}^{n}x_{i}^{j}=b}\frac{\left|x_{i}^{1}x_{i}^{2}\cdots x_{i}^{n}\right\rangle}{\sqrt{2^{n-1}}}, \tag{4}\]
where \(l\in\{1,2,\cdots,s\}\), \(i\in\left\{1,2,\cdots,2^{n-1}\right\}\), \(j\in\{1,2,\cdots,n\}\), \(b\in\{0,1\}\) and \(x_{i}^{j}\in\{0,1\}\). Here \(i\) indexes the \(i\)th possible case in which \(\oplus_{j=1}^{n}x_{i}^{j}=b\), and \(j\) represents the \(j\)th qubit, which also corresponds to the \(j\)th participant \(Bob_{j}\). The specific process of encoding the color information of the \(l\)th pixel into \(\left|C_{b}\right\rangle_{l}\) is shown in Fig. 4. If the \(l\)th pixel is white, then \(b=0\) (\(\left|C_{0}\right\rangle_{l}\)); if the \(l\)th pixel is black, then \(b=1\) (\(\left|C_{1}\right\rangle_{l}\)). The process of encoding the color information of each pixel into a quantum superposition state can be viewed as quantum expansion, which is analogous to pixel expansion in most classical VSS schemes. Different from pixel expansion, quantum expansion not only makes the recovered image have no loss in resolution, but also confuses the color information in each share (it makes it impossible for an attacker to directly determine the color information of the pixel).
The specific steps of the \((n,\,n)\) QVSS sharing process are as follows (also shown in Fig. 5).
**Step 1**: According to the previous context, Alice encodes the color information of one pixel, e.g., the \(l\)th pixel, into an \(n\)-qubit superposition state \(\left|C_{b}\right\rangle_{l}\).
**Step 2**: Alice distributes \(n\) qubits \(q_{l}^{1}\), \(q_{l}^{2}\), \(\cdots\), \(q_{l}^{n}\) which compose the state \(\left|C_{b}\right\rangle_{l}\), as shares to \(Bob_{1}\), \(Bob_{2}\), \(\cdots\), \(Bob_{n}\), respectively.
**Step 3**: Alice repeats Step 1 and 2 to handle other \(s\)-1 pixels.
Figure 4: The process of encoding color information into quantum superposition state \(\left|C_{b}\right\rangle_{l}\) (\(b=0,1\))
### Recovering process
When all participants want to recover the secret image, we assume that \(Bob_{j}\) is primarily responsible for performing the specific operations. Then, he needs to follow the steps below, which are shown in Fig. 7.
**Step 1**: \(Bob_{j}\) collects \(n\) shares \(q_{l}^{1}\), \(q_{l}^{2}\), \(\cdots\), \(q_{l}^{n}\) which correspond to the \(l\)th pixel, from remaining \(n-1\) participants.
**Step 2**: \(Bob_{j}\) selects the measurement basis \(\left\{\left|0\right\rangle,\left|1\right\rangle\right\}\) to measure the \(n\)-qubit superposition state \(\left|C_{b}\right\rangle_{l}\) which consists of \(q_{l}^{1}\), \(q_{l}^{2}\), \(\cdots\), \(q_{l}^{n}\). \(\left|C_{b}\right\rangle_{l}\) collapses into the \(n\)-qubit basis state \(\left|x_{i}^{1}x_{i}^{2}\cdots x_{i}^{n}\right\rangle\).
**Step 3**: \(Bob_{j}\) performs the _XOR_ operation (its quantum circuit is illustrated in Fig. 6) on the input quantum state \(\left|x_{i}^{1}x_{i}^{2}\cdots x_{i}^{n}\right\rangle\) and then gets the result state \(\left|x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{n}\right\rangle\).
Figure 5: The whole sharing process of our proposed QVSS scheme
Figure 6: The \(n\)-qubit _XOR_ circuit which is composed of multiple _CNOT_ quantum gates
**Step 4**: \(Bob_{j}\) makes a judgment about the result state \(\left|x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{n}\right\rangle\): if \(\left|x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{n}\right\rangle=\left|0\right\rangle\), it means that the \(l\)th pixel is white; otherwise, the \(l\)th pixel is black.
**Step 5**: \(Bob_{j}\) repeats Step 1 to 4 until the original secret image is recovered.
In our scheme, different from pixel expansion, quantum expansion does not bring about the actual expansion of the shared image size, i.e., the shared image is the same size as the original secret image. So, our scheme does not have loss in resolution and can completely recover the original secret image.
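An end-to-end sketch of one pixel's round trip through the sharing and recovering processes is given below; for simplicity the _XOR_ chain of Fig. 6 is folded in before the measurement, which yields the same outcome distribution:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def share_and_recover(n: int, b: int) -> int:
    qc = QuantumCircuit(n)
    for j in range(n - 1):          # sharing: prepare |C_b>
        qc.h(j)
        qc.cx(j, n - 1)
    if b:
        qc.x(n - 1)
    for j in range(n - 1):          # recovering: n-qubit XOR (Fig. 6)
        qc.cx(j, n - 1)
    # after the CNOT chain, qubit n-1 is deterministically |b>
    outcome, _ = Statevector.from_instruction(qc).measure([n - 1])
    return int(outcome)

assert all(share_and_recover(n, b) == b for n in (2, 3, 4) for b in (0, 1))
```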
## 4 An example: (3, 3) QVSS scheme
Suppose Alice wants to share a secret image to \(Bob_{1}\), \(Bob_{2}\), \(Bob_{3}\) and the secret image consists of 4 pixels, i.e., \(n=3\) and \(s=4\), and the color information of each pixel is shown in Table 1.
\begin{table}
\begin{tabular}{c c} \hline Pixel & Color \\ \hline
1st pixel & white \\
2nd pixel & black \\
3rd pixel & black \\
4th pixel & white \\ \hline \end{tabular}
\end{table}
Table 1: The color information of each pixel
Figure 7: The whole recovering process of our proposed QVSS scheme
During the sharing process, Alice encodes the color information of each pixel into a 3-qubit quantum superposition state \(\left|C_{b}\right\rangle_{l}\). Then, Alice distributes 3 shares \(q_{l}^{1}\) (the first qubit), \(q_{l}^{2}\) (the second qubit), and \(q_{l}^{3}\) (the third one) which compose \(\left|C_{b}\right\rangle_{l}\), to \(Bob_{1}\), \(Bob_{2}\), \(Bob_{3}\), respectively, where \(l\in\{1,2,3,4\}\). The quantum superposition states and distributed shares of all pixels in the secret image are listed in Table 2.
Suppose \(Bob_{2}\) wants to recover the secret image in the recovering process. In order to recover the first pixel, he should collect the shares \(q_{1}^{1}\) and \(q_{1}^{3}\) from \(Bob_{1}\) and \(Bob_{3}\). Then, he selects the measurement basis \(\{\left|0\right\rangle,\left|1\right\rangle\}\) to measure the 3-qubit superposition state \(\left|C_{0}\right\rangle_{1}\) which is composed of \(q_{1}^{1}\), \(q_{1}^{2}\), \(q_{1}^{3}\). Suppose \(\left|C_{0}\right\rangle_{1}\) collapses into \(\left|0_{1}^{1}0_{1}^{2}0_{1}^{3}\right\rangle\). After that, \(Bob_{2}\) performs the _XOR_ operation on \(\left|0_{1}^{1}0_{1}^{2}0_{1}^{3}\right\rangle\) and gets the result state \(\left|0_{1}^{1}\oplus 0_{1}^{2}\oplus 0_{1}^{3}\right\rangle\). So, he can determine that the color of the first pixel is white. Similarly, \(Bob_{2}\) can recover the remaining pixels in the same way, and all the cases for \(Bob_{2}\) are listed in Table 3. By comparing the initial pixel information before the sharing process (see Table 1) and the final pixel information after the recovering process (see the last two columns in Table 3), it can be clearly found that our scheme can completely recover the secret image.
\begin{table}
\begin{tabular}{l c c} \hline Pixel & \(\left|C_{b}\right\rangle_{l}\) & Shares \\ \hline
1st pixel & \(\left|C_{0}\right\rangle_{1}=\frac{1}{2}(\left|0_{1}^{1}0_{1}^{2}0_{1}^{3}\right\rangle+\left|0_{2}^{1}1_{2}^{2}1_{2}^{3}\right\rangle+\left|1_{3}^{1}0_{3}^{2}1_{3}^{3}\right\rangle+\left|1_{4}^{1}1_{4}^{2}0_{4}^{3}\right\rangle)\) & \(q_{1}^{1}\), \(q_{1}^{2}\), \(q_{1}^{3}\) \\
2nd pixel & \(\left|C_{1}\right\rangle_{2}=\frac{1}{2}(\left|1_{1}^{1}1_{1}^{2}1_{1}^{3}\right\rangle+\left|0_{2}^{1}0_{2}^{2}1_{2}^{3}\right\rangle+\left|0_{3}^{1}1_{3}^{2}0_{3}^{3}\right\rangle+\left|1_{4}^{1}0_{4}^{2}0_{4}^{3}\right\rangle)\) & \(q_{2}^{1}\), \(q_{2}^{2}\), \(q_{2}^{3}\) \\
3rd pixel & \(\left|C_{1}\right\rangle_{3}=\frac{1}{2}(\left|1_{1}^{1}1_{1}^{2}1_{1}^{3}\right\rangle+\left|0_{2}^{1}0_{2}^{2}1_{2}^{3}\right\rangle+\left|0_{3}^{1}1_{3}^{2}0_{3}^{3}\right\rangle+\left|1_{4}^{1}0_{4}^{2}0_{4}^{3}\right\rangle)\) & \(q_{3}^{1}\), \(q_{3}^{2}\), \(q_{3}^{3}\) \\
4th pixel & \(\left|C_{0}\right\rangle_{4}=\frac{1}{2}(\left|0_{1}^{1}0_{1}^{2}0_{1}^{3}\right\rangle+\left|0_{2}^{1}1_{2}^{2}1_{2}^{3}\right\rangle+\left|1_{3}^{1}0_{3}^{2}1_{3}^{3}\right\rangle+\left|1_{4}^{1}1_{4}^{2}0_{4}^{3}\right\rangle)\) & \(q_{4}^{1}\), \(q_{4}^{2}\), \(q_{4}^{3}\) \\ \hline \end{tabular}
\end{table}
Table 2: The quantum superposition states and distributed shares of all pixels in the sharing process of the (3, 3) QVSS scheme
\begin{table}
\begin{tabular}{l c c c c c} \hline Shares & \(\left|C_{b}\right\rangle_{l}\) & Collapsed state & Result state & Color & Pixel \\ \hline
\(q_{1}^{1}\), \(q_{1}^{2}\), \(q_{1}^{3}\) & \(\left|C_{0}\right\rangle_{1}\) & \(\left|0_{1}^{1}0_{1}^{2}0_{1}^{3}\right\rangle\) & \(\left|0_{1}^{1}\oplus 0_{1}^{2}\oplus 0_{1}^{3}\right\rangle=\left|0\right\rangle\) & white & 1st pixel \\
\(q_{2}^{1}\), \(q_{2}^{2}\), \(q_{2}^{3}\) & \(\left|C_{1}\right\rangle_{2}\) & \(\left|1_{1}^{1}1_{1}^{2}1_{1}^{3}\right\rangle\) & \(\left|1_{1}^{1}\oplus 1_{1}^{2}\oplus 1_{1}^{3}\right\rangle=\left|1\right\rangle\) & black & 2nd pixel \\
\(q_{3}^{1}\), \(q_{3}^{2}\), \(q_{3}^{3}\) & \(\left|C_{1}\right\rangle_{3}\) & \(\left|1_{4}^{1}0_{4}^{2}0_{4}^{3}\right\rangle\) & \(\left|1_{4}^{1}\oplus 0_{4}^{2}\oplus 0_{4}^{3}\right\rangle=\left|1\right\rangle\) & black & 3rd pixel \\
\(q_{4}^{1}\), \(q_{4}^{2}\), \(q_{4}^{3}\) & \(\left|C_{0}\right\rangle_{4}\) & \(\left|0_{2}^{1}1_{2}^{2}1_{2}^{3}\right\rangle\) & \(\left|0_{2}^{1}\oplus 1_{2}^{2}\oplus 1_{2}^{3}\right\rangle=\left|0\right\rangle\) & white & 4th pixel \\ \hline \end{tabular}
\end{table}
Table 3: All the cases for \(Bob_{2}\) in the recovering process of (3, 3) QVSS scheme
## 5 Correctness analysis
Here, we prove the correctness of our scheme, which consists of two criteria: (1) the \(n\) participants can cooperate to recover the original secret image; (2) fewer than \(n\) participants cannot recover it.
Theorem 5.1: _The proposed (\(n\), \(n\)) QVSS scheme can recover the original secret image when all the participants work together._
_Proof_: Alice encodes each pixel's color information into the superposition state \(\left|C_{b}\right\rangle_{l}\), and distributes the \(n\) qubits which compose \(\left|C_{b}\right\rangle_{l}\) as shares to the participants. When all participants want to recover the original secret image, they need to collect all \(n\) shares corresponding to the same pixel and measure them in the \(Z\) basis. According to the definition of \(\left|C_{b}\right\rangle_{l}\), there are \(2^{n-1}\) possible collapsed states, and every one of them satisfies \(\left|x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{n}\right\rangle=\left|b\right\rangle\); that is, every possible case passing through Step 3 of the recovering process yields the same result state \(\left|b\right\rangle\). Hence the color of the recovered pixel coincides with the original one, and with all \(n\) participants involved in the recovering process the scheme recovers the original secret image.
Theorem 5.2: _The proposed QVSS scheme does not work when any \(k\) (\(k<n\)) participants cooperate to recover the original secret image._
_Proof_: When \(k\) participants want to recover the original secret image, assume that the first \(k\) participants collect their \(k\) shares to recover the \(l\)th pixel, and suppose first that \(k\) is even. After Step 2 of the recovering process, these \(k\) shares collapse into a certain \(k\)-qubit state \(\left|x_{i}^{1}x_{i}^{2}\cdots x_{i}^{k}\right\rangle\), which has \(2^{k}\) possible cases. Among them, there are \(C_{k}^{0}+C_{k}^{2}+\cdots+C_{k}^{k}\) cases in which the number of \(1\)'s in \(\left|x_{i}^{1}x_{i}^{2}\cdots x_{i}^{k}\right\rangle\) is even, and \(C_{k}^{1}+C_{k}^{3}+\cdots+C_{k}^{k-1}\) cases in which it is odd. Passing each of the \(2^{k}\) cases through Step 3 of the recovering process, there are \(C_{k}^{0}+C_{k}^{2}+\cdots+C_{k}^{k}\) cases with \(\left|x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{k}\right\rangle=\left|0\right\rangle\), and \(C_{k}^{1}+C_{k}^{3}+\cdots+C_{k}^{k-1}\) cases with \(\left|x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{k}\right\rangle=\left|1\right\rangle\). By the standard identity for binomial coefficients, \(C_{k}^{0}+C_{k}^{2}+\cdots+C_{k}^{k}=C_{k}^{1}+C_{k}^{3}+\cdots+C_{k}^{k-1}\), so the probabilities of obtaining \(\left|0\right\rangle\) and \(\left|1\right\rangle\) are both \(\frac{1}{2}\). The same computation applies when \(k\) is odd. Hence it is impossible to determine whether the \(l\)th pixel is white or black.
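The counting arguments in the two proofs above can also be verified by brute force. The short script below (an illustrative addition of ours) enumerates the \(2^{n-1}\) basis states of \(\left|C_{b}\right\rangle\) and checks that all \(n\) shares always decode to \(b\), while for every \(k<n\) the parity of the first \(k\) shares is \(0\) or \(1\) with probability exactly \(\frac{1}{2}\):

```python
from itertools import product
from collections import Counter

n, b = 6, 0  # n participants; b encodes the pixel color

# The 2^(n-1) basis states of |C_b>: all n-bit strings of parity b.
basis = [s for s in product((0, 1), repeat=n) if sum(s) % 2 == b]
assert len(basis) == 2 ** (n - 1)

# Theorem 5.1: the XOR of all n shares always returns b.
assert all(sum(s) % 2 == b for s in basis)

# Theorem 5.2: any k < n shares carry no information about b.
for k in range(1, n):
    parities = Counter(sum(s[:k]) % 2 for s in basis)
    assert parities[0] == parities[1] == 2 ** (n - 2)
print("k < n shares yield parity 0 or 1 with probability 1/2 each")
```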
Overall, since both criteria are satisfied, the proposed scheme is a well-defined (\(n\), \(n\)) QVSS scheme.
## 6 Comparison and Discussion
In order to evaluate our scheme, we choose a classical VSS scheme [3] and two quantum VSS schemes [35, 36] as references, and compare our QVSS scheme with them in the following aspects: single-pixel parallel processing, pixel expansion, and loss in resolution. For a more intuitive representation, the results of the comparison are shown in Table 4.
As mentioned earlier, the pixel is treated as a separate basic unit throughout Naor _et al._'s scheme, so this scheme has the advantage of single-pixel parallel processing. In order to prevent an individual participant from getting any information from a share, the scheme uses the strategy of pixel expansion to confuse the information of the subpixels in each share. However, the size of the shared image is then \(m\) times that of the original secret image, which causes a loss in resolution from the original image to the recovered one.
Then, we compare with two quantum VSS schemes, Song _et al._'s scheme [35] and Das _et al._'s scheme [36]. In Song _et al._'s scheme, the whole secret image is repeatedly encoded into a quantum state, so the scheme does not retain the advantage of single-pixel parallel processing, i.e., it is not suitable for parallel processing of images. On the other hand, that scheme splits the original secret image into sub-images as shares through multiple measurement operations; therefore it does not need the strategy of pixel expansion, and the restored image is exactly the same as the original secret image. Different from Song _et al._'s scheme, in Das _et al._'s scheme one pixel is treated as a unit for parallel processing. However, this scheme still uses the strategy of pixel expansion and applies the characteristics of quantum mechanics to determine the color of each sub-pixel in each share, where each share has multiple sub-pixels. So, in Das _et al._'s scheme the recovered image suffers a loss in resolution.
Like Naor _et al._'s scheme and Das _et al._'s scheme [36], our scheme retains the advantage of single-pixel parallel processing, i.e., the color information of each pixel is encoded into an \(n\)-qubit superposition state. Unlike them in the other aspects, we use the strategy of quantum expansion instead of pixel expansion, so the size of the shared image is the same as that of the original one. In the recovering process, all participants cooperate to measure the qubits they hold and execute the \(n\)-qubit _XOR_ operation to recover each pixel of the secret image. The recovered image is the same as the original secret image and has no loss in resolution.
\begin{table}
\begin{tabular}{c c c c c} \hline Schemes & Naor _et al._’s scheme [3] & Song _et al._’s scheme [35] & Das _et al._’s scheme [36] & Our scheme \\ \hline Single-pixel parallel processing & Yes & No & Yes & Yes \\ Pixel expansion & Yes & No & Yes & No \\ The loss in resolution & Yes & No & Yes & No \\ \hline \end{tabular}
\end{table}
Table 4: Comparison among Naor _et al._’s scheme [3], two analogous quantum schemes [35, 36], and our scheme
## 7 Experiment implementation with IBM Q
IBM Q [38] is an online platform that gives users in the general public access to a set of IBM's prototype quantum processors via the network. In this section, we use IBM Q to demonstrate the practical feasibility of this scheme.
For the sake of brevity, we take \(n=6\), i.e., the (6, 6) QVSS scheme is the experimental object. In Step 1 of the sharing process, to share a white pixel, we encode the color information of the pixel into the 6-qubit superposition state \(|C_{0}\rangle\). \(|C_{0}\rangle\) is composed of the base states in Table 5, and each base state in the quantum superposition state \(|C_{0}\rangle\) occurs with equal probability:
\[|C_{0}\rangle=\sum_{x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{6}=0}\frac{\left|x_{i}^{1}x_{i}^{2}\cdots x_{i}^{6}\right\rangle}{\sqrt{32}} \tag{5}\]
Since IBM Q is a single quantum computer and does not support transferring quantum information between multiple nodes, we omit the qubit-transmission steps of our scheme (i.e., Step 2 of the sharing process and Step 1 of the recovering process) in the quantum experiment, and directly implement the remainder on IBM Q. In the sharing process, we first design a quantum circuit (shown in Fig. 8) to construct \(|C_{0}\rangle\), and run it on the IBM Q platform. We measure the state \(|C_{0}\rangle\) and obtain the probability of each base state in \(|C_{0}\rangle\); the result is shown in Fig. 9. We find that the probabilities are almost equal, with the probability amplitudes of all base states floating slightly above and below \(\frac{1}{\sqrt{32}}\).
After the state \(|C_{0}\rangle\) is measured, the quantum superposition state may collapse into one certain state from 32 base states. We assume that \(|C_{0}\rangle\) collapses
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Number & Base state & Number & Base state & Number & Base state & Number & Base state \\ \hline
1 & \(|000000\rangle\) & 9 & \(|010001\rangle\) & 17 & \(|100001\rangle\) & 25 & \(|110000\rangle\) \\
2 & \(|000011\rangle\) & 10 & \(|010010\rangle\) & 18 & \(|100010\rangle\) & 26 & \(|110011\rangle\) \\
3 & \(|000101\rangle\) & 11 & \(|010100\rangle\) & 19 & \(|100100\rangle\) & 27 & \(|110101\rangle\) \\
4 & \(|000110\rangle\) & 12 & \(|010111\rangle\) & 20 & \(|100111\rangle\) & 28 & \(|110110\rangle\) \\
5 & \(|001001\rangle\) & 13 & \(|011000\rangle\) & 21 & \(|101000\rangle\) & 29 & \(|111001\rangle\) \\
6 & \(|001010\rangle\) & 14 & \(|011011\rangle\) & 22 & \(|101011\rangle\) & 30 & \(|111010\rangle\) \\
7 & \(|001100\rangle\) & 15 & \(|011101\rangle\) & 23 & \(|101101\rangle\) & 31 & \(|111100\rangle\) \\
8 & \(|001111\rangle\) & 16 & \(|011110\rangle\) & 24 & \(|101110\rangle\) & 32 & \(|111111\rangle\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: All base states of the quantum superposition state \(|C_{0}\rangle\)
Figure 8: Implementation circuit of preparing the quantum superposition state \(|C_{0}\rangle\)
Figure 9: The probability of each base state in the quantum superposition state \(|C_{0}\rangle\)
into one certain state \(|101000\rangle\). Then, we perform the _XOR_ operation on the state \(|101000\rangle\) on IBM Q to obtain a result state; the circuit is shown in Fig. 10 (a). The result state is \(|0\rangle\), and its probability is shown in Fig. 10 (b). Finally, according to Step 4 of the recovering process, we can determine that the pixel is white.
To share a black pixel, the experimental process is similar to the above. We encode the color information of the pixel into the quantum superposition state \(|C_{1}\rangle\), which is composed of the base states in Table 6. Each base state in the quantum superposition state \(|C_{1}\rangle\) also occurs with equal probability, so
\[|C_{1}\rangle=\sum_{x_{i}^{1}\oplus x_{i}^{2}\oplus\cdots\oplus x_{i}^{6}=1}\frac{\left|x_{i}^{1}x_{i}^{2}\cdots x_{i}^{6}\right\rangle}{\sqrt{32}}. \tag{6}\]
Figure 10: The process of performing _XOR_ operation on \(|101000\rangle\)
The process of constructing \(|C_{1}\rangle\) and the probability of each base state in \(|C_{1}\rangle\) are shown in Fig. 11 and Fig. 12, respectively. After the state \(|C_{1}\rangle\) is measured, we assume that \(|C_{1}\rangle\) collapses into one certain state \(|101010\rangle\). The process of _XOR_ operation and the probability of the result state are shown in Fig. 13 (a) and (b) respectively. Finally, according to Step 4 of the recovering process, we can determine that the pixel is black.
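For readers who wish to reproduce the experiment offline, the following NumPy sketch (our addition; the actual runs use the IBM Q circuits of Figs. 8 and 11) builds the statevectors of \(|C_{0}\rangle\) and \(|C_{1}\rangle\) directly and checks that all 32 base states occur with probability \(\frac{1}{32}\) (cf. Figs. 9 and 12) and that every possible collapse decodes to the correct color via XOR:

```python
import numpy as np
from itertools import product

n = 6

def C(b):
    """Statevector of |C_b>: amplitude 1/sqrt(2^(n-1)) on every n-bit
    basis state whose XOR equals b, and zero elsewhere."""
    psi = np.zeros(2 ** n)
    for bits in product((0, 1), repeat=n):
        if sum(bits) % 2 == b:
            psi[int("".join(map(str, bits)), 2)] = 1.0
    return psi / np.linalg.norm(psi)

for b in (0, 1):
    probs = C(b) ** 2
    support = np.flatnonzero(probs)
    assert len(support) == 32 and np.allclose(probs[support], 1 / 32)
    for idx in support:  # every collapse decodes to the color bit b
        assert format(idx, "06b").count("1") % 2 == b
print("|C_0> and |C_1> each have 32 equiprobable outcomes decoding to b")
```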
## 8 Conclusion

In this paper, we propose an (\(n\), \(n\)) QVSS scheme which encodes the color information of each pixel into an \(n\)-qubit superposition state, so as to preserve the advantage of single-pixel parallel processing. Moreover, our scheme uses the strategy of quantum expansion instead of classical pixel expansion, which solves the problem that the recovered image loses resolution because of pixel expansion. Compared with a classical VSS scheme and two quantum VSS schemes, our scheme preserves the advantage of single-pixel parallel processing while avoiding both pixel expansion and the loss in resolution. Besides, the proposed scheme meets the two correctness criteria (shown in Sect. 5) and is practically feasible. Of course, a \((t,\,n)\) threshold scheme is more application-oriented; constructing such a threshold scheme is one of our future works.
## Acknowledgment
The authors would like to express heartfelt gratitude to the anonymous reviewers and the editor for their comments, which improved the quality of this paper. The support of all the members of the quantum research group of NUIST is also gratefully acknowledged; their professional discussions and advice have helped us a lot.
|
2309.13895 | An existence theory for superposition operators of mixed order subject
to jumping nonlinearities | We consider a superposition operator of the form $$ \int_{[0, 1]} (-\Delta)^s
u\, d\mu(s),$$ for a signed measure $\mu$ on the interval of fractional
exponents $[0,1]$, joined to a nonlinearity whose term of homogeneity equal to
one is "jumping", i.e. it may present different coefficients in front of the
negative and positive parts. The signed measure is supposed to possess a
positive contribution coming from the higher exponents that overcomes its
negative contribution (if any). The problem taken into account is also of
"critical" type, though in this case the critical exponent needs to be
carefully selected in terms of the signed measure $\mu$. Not only the operator
and the nonlinearity considered here are very general, but our results are new
even in special cases of interest and include known results as particular
subcases. The possibility of considering operators "with the wrong sign" is
also a complete novelty in this setting. | Serena Dipierro, Kanishka Perera, Caterina Sportelli, Enrico Valdinoci | 2023-09-25T06:20:41Z | http://arxiv.org/abs/2309.13895v1 | # An existence theory for superposition operators of mixed order subject to jumping nonlinearities
###### Abstract.
We consider a superposition operator of the form
\[\int\limits_{[0,1]}(-\Delta)^{s}u\,d\mu(s),\]
for a signed measure \(\mu\) on the interval of fractional exponents \([0,1]\), joined to a nonlinearity whose term of homogeneity equal to one is "jumping", i.e. it may present different coefficients in front of the negative and positive parts.
The signed measure is supposed to possess a positive contribution coming from the higher exponents that overcomes its negative contribution (if any).
The problem taken into account is also of "critical" type, though in this case the critical exponent needs to be carefully selected in terms of the signed measure \(\mu\).
Not only the operator and the nonlinearity considered here are very general, but our results are new even in special cases of interest and include known results as particular subcases.
The possibility of considering operators "with the wrong sign" is also a complete novelty in this setting.
## 1. Introduction
The aim of this paper is to address the study of critical problems involving a nonlocal operator obtained through the linear superposition of fractional operators of different orders.
Specifically, we consider two nonnegative finite (Borel) measures \(\mu^{+}\) and \(\mu^{-}\) in \([0,1]\), as well as the corresponding signed measure \(\mu:=\mu^{+}-\mu^{-}\).
The main operator of interest for us takes the form
\[A_{\mu}u:=\int\limits_{[0,1]}(-\Delta)^{s}u\,d\mu(s). \tag{1.1}\]
As customary, the notation \((-\Delta)^{s}\) is reserved to the fractional Laplacian, defined, for all \(s\in(0,1)\) as
\[(-\Delta)^{s}\,u(x)=c_{N,s}\int\limits_{\mathbb{R}^{N}}\frac{2u(x)-u(x+y)-u(x- y)}{|y|^{N+2s}}\,dy. \tag{1.2}\]
The positive normalizing constant \(c_{N,s}\) is chosen in such a way that, for \(u\) smooth and rapidly decaying, the Fourier transform of \((-\Delta)^{s}u\) returns \((2\pi|\xi|)^{2s}\) times the Fourier transform of \(u\) and provides consistent limits as \(s\nearrow 1\) and as \(s\searrow 0\), namely
\[\lim_{s\nearrow 1}(-\Delta)^{s}u=(-\Delta)^{1}u=-\Delta u\qquad\text{and} \qquad\lim_{s\searrow 0}(-\Delta)^{s}u=(-\Delta)^{0}u=u.\]
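As a quick numerical illustration of this Fourier characterization (an aside that we add here, using an arbitrary Gaussian test function on a one-dimensional periodic grid; it plays no role in the theory below), one can evaluate \((-\Delta)^{s}u\) through the multiplier \((2\pi|\xi|)^{2s}\) and observe the limits as \(s\nearrow 1\) and \(s\searrow 0\):

```python
import numpy as np

# Periodic grid on [-L/2, L/2); the Gaussian decays so fast that
# periodization and truncation errors are negligible.
L, M = 40.0, 2048
x = (np.arange(M) - M // 2) * (L / M)
u = np.exp(-np.pi * x ** 2)
xi = np.fft.fftfreq(M, d=L / M)  # frequencies in cycles per unit length

def frac_lap(u, s):
    """Evaluate (-Delta)^s u via the Fourier multiplier (2*pi*|xi|)^(2s)."""
    return np.fft.ifft((2 * np.pi * np.abs(xi)) ** (2 * s) * np.fft.fft(u)).real

# s near 1: compare with the exact -u'' = (2*pi - (2*pi*x)**2) * u.
print(np.max(np.abs(frac_lap(u, 0.999) - (2 * np.pi - (2 * np.pi * x) ** 2) * u)))

# s near 0: on the periodic grid the multiplier annihilates the zero
# mode, so the limit is u minus its average.
print(np.max(np.abs(frac_lap(u, 0.001) - (u - u.mean()))))
```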
Particular cases for the operator in (1.1) are (minus) the Laplacian (corresponding to the choice of \(\mu\) being the Dirac measure concentrated at \(1\)), the fractional Laplacian \((-\Delta)^{s_{*}}\) (corresponding to the choice of \(\mu\) being the Dirac measure concentrated at some fractional power \(s_{*}\)), the "mixed order operator" \(-\Delta+(-\Delta)^{s_{*}}\) (when \(\mu\) is the sum of two Dirac measures), etc.
The "continuous" superposition of operators of different fractional orders has also been recently considered in the literature, see e.g. [10].
A list of interesting cases for this operator will be discussed in detail in Section 5.
For the moment, let us recall that operators arising from the superposition of local and nonlocal operators are a topic intensively studied in contemporary research, under different perspectives, including regularity theory (see [1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]), existence and nonexistence results (see [1]), and viscosity solution theory (see [14, 15, 16, 17, 18]).
The coefficients of the jumping nonlinearity in problem (1.7) form pairs \((a,b)\in\mathbb{R}^{2}\) that are conveniently related to the spectral properties of the operator \(A_{\mu}\) in (1.1). To describe this relation, we briefly recall the construction of the minimal and maximal curves of the Dancer-Fucik spectrum (see [13, Chapter 4] for a general setting).
One looks at the operator \(A_{\mu}\) in (1.1) (the setting can actually be generalized to include monotone, self-adjoint operators with compact inverse, coupled to potentials). The classical spectrum of \(A_{\mu}\) consists of isolated eigenvalues \(\lambda_{l}\), \(l\geqslant 1\), of finite multiplicity, satisfying \(0<\lambda_{1}<\dots<\lambda_{l}<\dots\).
Instead, the Dancer-Fucik spectrum of \(A_{\mu}\) consists of the couples \((a,b)\in\mathbb{R}^{2}\) for which the equation
\[A_{\mu}u=bu^{+}-au^{-} \tag{1.8}\]
has a nontrivial solution.
The Dancer-Fucik spectrum is a closed subset of \(\mathbb{R}^{2}\) (see [13, Proposition 4.4.3]). We also point out that equation (1.8) reduces to \(A_{\mu}u=\lambda u\) when \(a=b=\lambda\), and therefore the Dancer-Fucik spectrum of \(A_{\mu}\) contains points of the form \((\lambda_{l},\lambda_{l})\).
The Dancer-Fucik spectrum presents an interesting geometry, see [13, Theorem 4.7.9]. Namely, there exist two continuous and strictly decreasing functions \(\nu_{l-1}\) and \(\mu_{l}\), such that:
* for all \(a\in(\lambda_{l-1},\lambda_{l+1})\), we have that \(\nu_{l-1}(a)\leqslant\mu_{l}(a)\),
* \(\nu_{l-1}(\lambda_{l})=\lambda_{l}=\mu_{l}(\lambda_{l})\),
* for all \(a\in(\lambda_{l-1},\lambda_{l+1})\), we have that both \((a,\nu_{l-1}(a))\) and \((a,\mu_{l}(a))\) belong to the Dancer-Fucik spectrum,
* if \(a\in(\lambda_{l-1},\lambda_{l+1})\) and \(b\in(\lambda_{l-1},\lambda_{l+1})\), with either \(b<\nu_{l-1}(a)\) or \(b>\mu_{l}(a)\), then \((a,b)\) does not belong to the Dancer-Fucik spectrum.
In particular, setting, for any \(l\geqslant 2\),
\[Q_{l}:=(\lambda_{l-1},\lambda_{l+1})\times(\lambda_{l-1},\lambda_{l+1}), \tag{1.9}\]
we have that the graphs of \(\nu_{l-1}\) and \(\mu_{l}\) are strictly decreasing curves in \(Q_{l}\) that belong to the Dancer-Fucik spectrum. Also, both these curves pass through the point \((\lambda_{l},\lambda_{l})\), while the region \(\{(a,b)\in Q_{l}:b<\nu_{l-1}(a)\}\) below the lower curve and the region \(\{(a,b)\in Q_{l}:b>\mu_{l}(a)\}\) above the upper curve lie outside the Dancer-Fucik spectrum.
Points in the region \(\{(a,b)\in Q_{l}:\nu_{l-1}(a)<b<\mu_{l}(a)\}\) between these two graphs (when such region is nonempty) may or may not belong to the Dancer-Fucik spectrum.
The geometry related to the Dancer-Fucik spectrum is sketched in Figure 1.
For our purposes, for all \(l\geqslant 2\), the region in \(Q_{l}\) below the lower curve of the Dancer-Fucik spectrum is of particular importance, since a portion of this region contains the pairs \((a,b)\) allowing for nontrivial solutions of (1.7).
To describe this portion of the plane, we define
\[\begin{split}\mathcal{S}:=&\inf\Bigg{\{}\mu^{+}(0) \left\|u\right\|_{L^{2}(\Omega)}^{2}+\mu^{+}(1)\left\|\nabla u\right\|_{L^{2} (\Omega)}^{2}\\ &\qquad\qquad+\int\limits_{(0,1)}\left[c_{N,s}\iint\limits_{ \mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}\,dx\,dy\right]\,d\mu^{+}( s)\Bigg{\}}.\end{split} \tag{1.10}\]
The infimum above1
Footnote 1: From now on, with a slight abuse of notation, the quantity in brackets in formula (1.10) (and similar quantities) will be abbreviated into
\[\int\limits_{[0,1]}\left[c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{N+2s}}\,dx\,dy\right]\,d\mu^{+}(s),\]
with the understanding that the measure evaluation at \(s=0\) and \(s=1\) (if nonvoid) returns the classical expressions.
is taken over all the functions \(u\in C_{0}^{\infty}(\Omega)\) satisfying \(\left\|u\right\|_{L^{2^{*}_{s_{\sharp}}}(\mathbb{R}^{N})}=1\), with \(s_{\sharp}\) as in (1.6). Roughly speaking, one can consider \(\mathcal{S}\) as the analogue of the Sobolev constant for the operator \(A_{\mu}\) in (1.1).
For our purposes, this generalized Sobolev constant is useful to identify pairs \((a,b)\) allowing for a nontrivial solution of (1.7). Specifically, the pairs that we detect lie in \(Q_{l}\) below the lower curve of the Dancer-Fucik spectrum and satisfy
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{\mathcal{S}}{|\Omega|^{(2s_{\sharp})/N}}. \tag{1.11}\]
Here above and in the rest of this paper, \(|\Omega|\) stands for the Lebesgue measure of \(\Omega\). The corresponding region of interest is sketched in Figure 2.
The result that we obtain is thus as follows:
**Theorem 1.1**.: _Let \(\mu=\mu^{+}-\mu^{-}\) with \(\mu^{+}\) and \(\mu^{-}\) satisfying (1.3), (1.4) and (1.5)._
_Let \((a,b)\in Q_{l}\). Assume that \(b<\nu_{l-1}(a)\) and that (1.11) is satisfied._
_Then, there exists \(\gamma_{0}>0\), depending only on \(N\), \(\Omega\), \(s_{\sharp}\), \(a\) and \(b\), such that if \(\gamma\in[0,\gamma_{0}]\) then problem (1.7) admits a nontrivial solution._
We stress that Theorem 1.1 is not only new in its wide generality, but it also possesses many specific cases which are also new.
In particular:
Figure 1. Upper (in red) and lower (in green) curves of the Dancer-Fucik spectrum. The points of this spectrum can only lie within these two curves.
* If \(\mu:=\delta_{1}\), i.e. if \(A_{\mu}\) reduces to the classical Laplacian, then problem (1.7) has been recently studied in [10]. In particular, [10, Theorem 1.3] provided the existence of a nontrivial solution when \(N\geqslant 4\). Our result gives an existence result in a small region below the lower curve and holds for \(N\geqslant 3\) (hence improving the known condition on the dimension). A detailed discussion of this type of results will be given in Corollary 5.1.
* If \(\mu:=\delta_{s}\), i.e. if the operator is the fractional Laplacian \((-\Delta)^{s}\) for some \(s\in(0,1)\), our results are still new, to the best of our knowledge (in fact, they seem to be new even in the case \(a=b\) in which no jumping nonlinearity is present, but see [11] for related results). We treat this case in detail in Corollary 5.2.
* If \(\mu:=\delta_{1}+\delta_{s}\), i.e. if \(A_{\mu}=-\Delta+(-\Delta)^{s}\) for some \(s\in(0,1)\), then Theorem 1.1 is new. In this setting, the particular case \(a=b=\lambda\) has been recently studied in [1, Theorem 1.4], where a nontrivial solution was found for a suitable range of \(\lambda\). The application of Theorem 1.1 for this choice of \(\mu\) will be discussed in Corollary 5.3.
* The case of the superposition of two nonlocal operators with different orders, corresponding to the choice of the measure \(\mu:=\delta_{s_{1}}+\delta_{s_{2}}\) and to an operator of the type \((-\Delta)^{s_{1}}+(-\Delta)^{s_{2}}\) for some \(s_{1}\), \(s_{2}\in(0,1)\), is also new to our best knowledge.
* The case in which the measure \(\mu\) changes sign is also new to our best knowledge. This seems to be new even in the case \(\mu:=\delta_{1}-\alpha\delta_{s}\), corresponding to an operator of the form \(-\Delta-\alpha(-\Delta)^{s}\), where \(s\in(0,1)\) and \(\alpha\) is a small positive constant (notice the "wrong" sign in the second term of this operator). A simple example in this setting is provided in Corollary 5.4. Actually, we think that our strategy on how to deal with "wrong" sign contributions may be of general interest
Figure 2. The region (in light blue) below the lower curve of the Dancer-Fucik spectrum where the existence of a nontrivial solution is guaranteed by Theorem 1.1.
and lead to the study of a rather general class of operators with competing diffusive trends.
* The case of a convergent series \[\sum_{k=0}^{+\infty}c_{k}(-\Delta)^{s_{k}}u,\qquad\text{where }\ \sum_{k=0}^{+\infty}c_{k}\in(0,+\infty),\] with (i) either \(c_{k}\geqslant 0\) for all \(k\in\mathbb{N}\), (ii) or \[c_{k}>0\ \text{ for all }k\in\{1,\ldots,\overline{k}\}\text{ and }\sum_{k=\overline{k}+1}^{+\infty}c_{k}\leqslant\gamma\sum_{k=0}^{ \overline{k}}c_{k},\] for some \(\overline{k}\in\mathbb{N}\) and \(\gamma\geqslant 0\), are also new (see Corollaries 5.5 and 5.6).
* The continuous superposition of fractional operators of the form \[\int\limits_{0}^{1}(-\Delta)^{s}u\,f(s)\,ds,\] where \(f\) is a measurable and non identically zero function, is also new (see Corollary 5.7).
In the forthcoming paper [4], we will also consider the case of nonlinear fractional operators of mixed order of \(p\)-Laplacian type.
The rest of this paper is organized as follows. Section 2 gathers several estimates of Sobolev type which will constitute the functional analytic core of our study. Then, we present in Section 3 the variational framework in which we work and we complete the proof of the main result in Section 4.
In Section 5 we apply the general result in Theorem 1.1 to several specific cases of interest, which are also new in the literature.
An interesting technical aspect of the proofs presented is that our arguments are (for the first time in the literature, to our best knowledge) capable also of dealing with operators "with the wrong sign", i.e. the ones coming from the measure \(\mu^{-}\), which have to be "reabsorbed" through quantitative estimates into \(\mu^{+}\). We think that it is particularly remarkable that no extra assumption on the equation is needed for this. Given its interest also in practical situations (in which competing operators could participate in a complex model with opposite diffusion and concentration features), we believe that this novelty can open a new direction of research and apply to other problems as well.
## 2. Sobolev-type estimates
In this section, we consider a bounded open set \(\Omega\subset\mathbb{R}^{N}\) and we develop suitable energy estimates to deal with the operator in (1.1).
For this, for \(s\in(0,1)\), we let
\[[u]_{s}:=\left(c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{|u(x)-u(y)|^{2}}{|x-y |^{N+2s}}\,dx\,dy\right)^{\frac{1}{2}} \tag{2.1}\]
be the Gagliardo seminorm of a measurable function \(u:\mathbb{R}^{N}\to\mathbb{R}\), see e.g. [10].
The consistent choice of the normalizing constant \(c_{N,s}\) is such that
\[\underset{s\nearrow 1}{\lim}[u]_{s}=[u]_{1}:=\|\nabla u\|_{L^{2}(\mathbb{R}^{N})} \qquad\text{and}\qquad\underset{s\searrow 0}{\lim}[u]_{s}=[u]_{0}:=\|u\|_{L^{2}( \mathbb{R}^{N})}.\]
We now observe that higher exponents in fractional norms control lower exponents, with uniform constants, according to the next observation:
**Lemma 2.1**.: _Let \(0\leqslant s_{1}\leqslant s_{2}\leqslant 1\)._
_Then, for any measurable function \(u:\mathbb{R}^{N}\to\mathbb{R}\) with \(u=0\) a.e. in \(\mathbb{R}^{N}\setminus\Omega\) we have that_
\[[u]_{s_{1}}\leqslant c\,[u]_{s_{2}}, \tag{2.2}\]
_for a suitable positive constant \(c=c(N,\Omega)\)._
Proof.: First, we suppose that \(u\in C^{\infty}_{0}(\Omega)\). A rather delicate issue is that we can take the constant \(c(N,\Omega)\) independent of \(s_{1}\) and \(s_{2}\). To check this, it is convenient to write the Gagliardo seminorm in terms of the Fourier transform (see e.g. [10]) as
\[[u]_{s}=\left(\,\int\limits_{\mathbb{R}^{N}}|2\pi\xi|^{2s}\,|\widehat{u}(\xi) |^{2}\,d\xi\right)^{\frac{1}{2}}.\]
We stress that, by Plancherel Theorem, this is also valid when \(s=0\) and \(s=1\).
One can combine this with the fractional Sobolev constant, which can be explicitly computed in the Hilbert setting, see [12, formula (2)], according to which
\[[u]_{s}^{2}\geqslant C^{\star}(N,s)\,\|u\|_{L^{2^{*}_{s}}(\mathbb{R}^{N})}^{2},\]
with
\[C^{\star}(N,s):=2^{4s}\pi^{3s}\,\frac{\Gamma((N+2s)/2)}{\Gamma((N-2s)/2)}\, \left(\frac{\Gamma(N/2)}{\Gamma(N)}\right)^{2s/N},\]
where we used the standard notation for the Gamma Function.
We recall that the Gamma Function on the positive real line has a minimum at \(r_{\star}:=1.46...\), with \(\Gamma(r_{\star})>0.88\), and is increasing on either side of this minimum. Thus,
\[C^{\star}(N,s)\geqslant\frac{0.88}{\Gamma(N+10)}\,\left(\frac{0.88}{\Gamma(N )}\right)^{2s/N}\geqslant\frac{0.88}{\Gamma(N+10)}\,\left(\frac{0.88}{\Gamma( N)}\right)^{2/N}=:C^{\star}(N).\]
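As a numerical aside (not needed for the proof), the uniform lower bound just obtained can be sanity-checked in a few lines of Python, sampling the explicit constant over \(s\):

```python
import numpy as np
from scipy.special import gamma

def C_star(N, s):
    """The explicit fractional Sobolev constant used in the proof."""
    return (2 ** (4 * s) * np.pi ** (3 * s)
            * gamma((N + 2 * s) / 2) / gamma((N - 2 * s) / 2)
            * (gamma(N / 2) / gamma(N)) ** (2 * s / N))

def C_star_unif(N):
    """The uniform-in-s lower bound C*(N)."""
    return 0.88 / gamma(N + 10) * (0.88 / gamma(N)) ** (2 / N)

for N in (3, 4, 5, 10):
    assert all(C_star(N, s) >= C_star_unif(N)
               for s in np.linspace(0.01, 0.99, 99))
print("C*(N, s) >= C*(N) on all sampled values")
```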
Consequently, using the Sobolev and Holder inequalities,
\[\begin{split}[u]_{s_{1}}^{2}&\leqslant\int\limits_{B_{1/(2\pi)}}|2\pi\xi|^{2s_{1}}\,|\widehat{u}(\xi)|^{2}\,d\xi+\int\limits_{\mathbb{R}^{N}\setminus B_{1/(2\pi)}}|2\pi\xi|^{2s_{2}}\,|\widehat{u}(\xi)|^{2}\,d\xi\\ &\leqslant\int\limits_{\mathbb{R}^{N}}|\widehat{u}(\xi)|^{2}\,d\xi+\int\limits_{\mathbb{R}^{N}}|2\pi\xi|^{2s_{2}}\,|\widehat{u}(\xi)|^{2}\,d\xi=\|u\|_{L^{2}(\Omega)}^{2}+[u]_{s_{2}}^{2}\\ &\leqslant|\Omega|^{\frac{2s_{2}}{N}}\|u\|_{L^{2^{*}_{s_{2}}}(\Omega)}^{2}+[u]_{s_{2}}^{2}\leqslant\frac{(1+|\Omega|)^{\frac{2s_{2}}{N}}}{C^{\star}(N,s_{2})}\,[u]_{s_{2}}^{2}+[u]_{s_{2}}^{2}\\ &\leqslant\left(1+\frac{(1+|\Omega|)^{\frac{2}{N}}}{C^{\star}(N)}\right)[u]_{s_{2}}^{2}\end{split}\]
and this proves (2.2) when \(u\in C_{0}^{\infty}(\Omega)\).
Now we perform a density argument to establish (2.2) in the general case. To this end, let \(u:\mathbb{R}^{N}\to\mathbb{R}\) be a measurable function with \(u=0\) a.e. in \(\mathbb{R}^{N}\setminus\Omega\). We can assume that \([u]_{s_{2}}<+\infty\), otherwise there is nothing to prove. Then, by the density of the smooth functions in the (possibly fractional) Sobolev spaces, we find a sequence of functions \(u_{k}\in C_{0}^{\infty}(\Omega)\) such that \([u_{k}-u]_{s_{2}}\to 0\) as \(k\to+\infty\).
Thus, by (possibly fractional) Sobolev embeddings, up to a subsequence we can assume that \(u_{k}\to u\) in \(L^{2}(\Omega)\) and a.e. in \(\mathbb{R}^{N}\). Accordingly, we can use the already proved version of (2.2) to infer that \([u_{k}]_{s_{1}}\leqslant c\,[u_{k}]_{s_{2}}\) and, as a consequence,
\[\liminf_{k\to+\infty}[u_{k}]_{s_{1}}\leqslant c\liminf_{k\to+\infty}[u_{k}]_{ s_{2}}\leqslant c\left([u]_{s_{2}}+\liminf_{k\to+\infty}[u_{k}-u]_{s_{2}} \right)=c\,[u]_{s_{2}}.\]
Hence, the desired result in (2.2) follows from Fatou's Lemma.
We define the space \(\mathcal{X}(\Omega)\) as the set of measurable functions \(u:\mathbb{R}^{N}\to\mathbb{R}\) such that \(u=0\) in \(\mathbb{R}^{N}\setminus\Omega\) and
\[\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu^{+}(s)<+\infty.\]
**Lemma 2.2**.: _Suppose that \(\mu^{+}\) satisfies (1.3). Then, \(\mathcal{X}(\Omega)\) is a Hilbert space._
Proof.: Let \(u_{n}\) be a Cauchy sequence and take \(\epsilon>0\). Thus, there exists \(\overline{n}\in\mathbb{N}\) such that if \(m\), \(k\geqslant\overline{n}\) then
\[\int\limits_{[0,1]}[u_{m}-u_{k}]_{s}^{2}\,d\mu^{+}(s)\leqslant\epsilon. \tag{2.3}\]
Now, we distinguish two cases, either \(\mu^{+}(1)\neq 0\) or \(\mu^{+}(1)=0\).
If \(\mu^{+}(1)\neq 0\), then we obtain from (2.3) that
\[\epsilon\geqslant\mu^{+}(1)\int\limits_{\Omega}|\nabla u_{m}-\nabla u_{k}|^{2 }\,dx,\]
and therefore \(u_{n}\) is a Cauchy sequence in \(H_{0}^{1}(\Omega)\). Accordingly, there exists \(u\in H_{0}^{1}(\Omega)\) such that \(u_{n}\to u\) in \(H_{0}^{1}(\Omega)\) as \(n\to+\infty\).
Also, exploiting Lemma 2.1, we see that, for all \(s\in[0,1)\),
\[[u-u_{n}]_{s}\leqslant c(N,\Omega)\,[u-u_{n}]_{1}.\]
As a consequence,
\[\int\limits_{[0,1]}[u-u_{n}]_{s}^{2}\,d\mu^{+}(s)\leqslant c^{2}(N,\Omega) \mu^{+}([0,1])[u-u_{n}]_{1}^{2},\]
which gives that
\[\lim_{n\to+\infty}\int\limits_{[0,1]}[u-u_{n}]_{s}^{2}\,d\mu^{+}(s)=0,\]
as desired.
If instead \(\mu^{+}(1)=0\), then we deduce from (2.3) and Lemma 2.1 that
\[\epsilon\geqslant\int\limits_{[0,1)}[u_{m}-u_{k}]_{s}^{2}\,d\mu^{+}( s)\geqslant\int\limits_{[\overline{s},1)}[u_{m}-u_{k}]_{s}^{2}\,d\mu^{+}(s)\] \[\geqslant\frac{1}{c^{2}(N,\Omega)}\int\limits_{[\overline{s},1) }[u_{m}-u_{k}]_{\overline{s}}^{2}\,d\mu^{+}(s)=\frac{\mu^{+}([\overline{s},1) )}{c^{2}(N,\Omega)}\,[u_{m}-u_{k}]_{\overline{s}}^{2}.\]
We point out that \(\mu^{+}([\overline{s},1))>0\) in light of (1.3) and the fact that \(\mu^{+}(1)=0\).
Accordingly, \(u_{n}\) is a Cauchy sequence in \(H^{\overline{s}}_{0}(\Omega)\) and therefore it converges to some \(u\) in \(H^{\overline{s}}_{0}(\Omega)\). Hence, we can extract a subsequence \(u_{n_{j}}\) converging to \(u\) in \(L^{2}(\Omega)\) and a.e. in \(\mathbb{R}^{N}\). Then, if \(m\geqslant\overline{n}\), we have that
\[\epsilon \geqslant \lim\limits_{j\to+\infty}\int\limits_{[0,1)}[u_{m}-u_{n_{j}}]_{s }^{2}\,d\mu^{+}(s)\] \[\geqslant \lim\limits_{j\to+\infty}\left(\mu^{+}(0)\|u_{m}-u_{n_{j}}\|_{L^{ 2}(\Omega)}^{2}+\int\limits_{(0,1)}[u_{m}-u_{n_{j}}]_{s}^{2}\,d\mu^{+}(s)\right)\] \[= \mu^{+}(0)\|u_{m}-u\|_{L^{2}(\Omega)}^{2}\] \[\qquad+\lim\limits_{j\to+\infty}\int\limits_{(0,1)}\left(c_{N,s} \iint\limits_{\mathbb{R}^{2N}}\frac{|(u_{m}-u_{n_{j}})(x)-(u_{m}-u_{n_{j}})(y) |^{2}}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{+}(s).\]
As a result, by Fatou's Lemma,
\[\epsilon \geqslant \mu^{+}(0)\|u_{m}-u\|_{L^{2}(\Omega)}^{2}\] \[\qquad+\int\limits_{(0,1)}\left(c_{N,s}\iint\limits_{\mathbb{R}^ {2N}}\liminf\limits_{j\to+\infty}\frac{|(u_{m}-u_{n_{j}})(x)-(u_{m}-u_{n_{j}})( y)|^{2}}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{+}(s)\] \[= \mu^{+}(0)\|u_{m}-u\|_{L^{2}(\Omega)}^{2}\] \[\qquad+\int\limits_{(0,1)}\left(c_{N,s}\iint\limits_{\mathbb{R}^ {2N}}\frac{|(u_{m}-u)(x)-(u_{m}-u)(y)|^{2}}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{ +}(s)\] \[= \int\limits_{[0,1]}[u_{m}-u]_{s}^{2}\,d\mu^{+}(s),\]
which says that the sequence \(u_{n}\) converges to \(u\) in \(\mathcal{X}(\Omega)\), as desired.
In this setting, we can "reabsorb" the negative part of the signed measure \(\mu\), according to the following result:
**Proposition 2.3**.: _Assume (1.4) and (1.5)._
_Then, there exists \(c_{0}=c_{0}(N,\Omega)>0\) such that, for any \(u\in\mathfrak{X}(\Omega)\), we have_
\[\int\limits_{[0,\overline{s}]}[u]_{s}^{2}\,d\mu^{-}(s)\leqslant c_{0}\,\gamma \int\limits_{[\overline{s},1]}[u]_{s}^{2}\,d\mu(s).\]
Proof.: We notice that if \(\mu^{+}([\overline{s},1])=0\), then condition (1.5) would give that \(\mu^{-}([0,\overline{s}])=0\), and therefore Proposition 2.3 would be trivially satisfied.
Thus, from now on we suppose that \(\mu^{+}([\overline{s},1])>0\). By applying Lemma 2.1 with \(s_{1}:=\overline{s}\) and \(s_{2}:=s\) we infer that, for all \(s\in[\overline{s},1]\),
\[[u]_{\overline{s}}^{2}\leqslant c^{2}(N,\Omega)\,[u]_{s}^{2}.\]
Similarly, applying Lemma 2.1 with \(s_{1}:=s\) and \(s_{2}:=\overline{s}\), for all \(s\in[0,\overline{s}]\) we have that
\[[u]_{s}^{2}\leqslant c^{2}(N,\Omega)\,[u]_{\overline{s}}^{2}.\]
Consequently, recalling (1.4) and (1.5),
\[\int\limits_{[0,\overline{s}]} [u]_{s}^{2}\,d\mu^{-}(s)\leqslant c^{2}(N,\Omega)\,\int\limits_{[0,\overline{s}]}[u]_{\overline{s}}^{2}\,d\mu^{-}(s)=c^{2}(N,\Omega)\,[u]_{ \overline{s}}^{2}\,\mu^{-}\left([0,\overline{s}]\right)\] \[\leqslant c^{2}(N,\Omega)\,\gamma\,[u]_{\overline{s}}^{2}\,\mu^{+ }\big{(}[\overline{s},1]\big{)}=c^{2}(N,\Omega)\,\gamma\int\limits_{[ \overline{s},1]}[u]_{\overline{s}}^{2}\,d\mu^{+}(s)\] \[\leqslant c^{4}(N,\Omega)\,\gamma\int\limits_{[\overline{s},1]}[u ]_{s}^{2}\,d\mu^{+}(s).\]
This is the desired result, with \(c_{0}:=c^{4}(N,\Omega)\).
**Proposition 2.4**.: _Assume (1.3), (1.4) and (1.5). Let \(s_{\sharp}\in[\overline{s},1]\) be as in (1.6)._
_Then, there exists a positive constant \(\bar{c}=\bar{c}(N,\Omega,s_{\sharp})\) such that, for any \(u\in\mathcal{X}(\Omega)\),_
\[[u]_{s_{\sharp}}\leqslant\bar{c}\left(\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu^{+ }(s)\right)^{\frac{1}{2}}. \tag{2.4}\]
_In particular, the space \(\mathcal{X}(\Omega)\) is continuously embedded in \(L^{r}(\Omega)\) for any \(r\in[1,2_{s_{\sharp}}^{*}]\) and compactly embedded in \(L^{r}(\Omega)\) for any \(r\in[1,2_{s_{\sharp}}^{*})\)._
Proof.: By Lemma 2.1, used here with \(s_{1}:=s_{\sharp}\) and \(s_{2}:=s\), for all \(s\in[s_{\sharp},1]\) we have that \([u]_{s_{\sharp}}\leqslant c(N,\Omega)\,[u]_{s}\).
As a result,
\[\mu^{+}\big{(}[s_{\sharp},1]\big{)}\,[u]_{s_{\sharp}}^{2}\leqslant c^{2}(N, \Omega)\,\int\limits_{[s_{\sharp},1]}[u]_{s}^{2}\,d\mu^{+}(s)\leqslant c^{2}(N,\Omega)\,\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu^{+}(s).\]
This and (1.6) yield the desired result.
## 3. Variational setting
In this section, we cast problem (1.7) into a suitable variational setting. To start, we state the following definition.
**Definition 3.1**.: A weak solution of problem (1.7) is a function \(u\in\mathcal{X}(\Omega)\) such that, for all \(v\in\mathcal{X}(\Omega)\),
\[\int\limits_{[0,1]} \left(c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{\left(u(x)-u(y) \right)\left(v(x)-v(y)\right)}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{+}(s)\] \[-\int\limits_{[0,\overline{s}]} \left(c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{\left(u(x)-u(y) \right)\left(v(x)-v(y)\right)}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{-}(s)\] \[=\int\limits_{\Omega}(bu^{+}-au^{-})v\,dx+\int\limits_{\Omega}|u|^ {2_{s_{\sharp}}^{*}-2}\,uv\,dx.\]
The variational functional \(E:\mathcal{X}(\Omega)\to\mathbb{R}\) associated with problem (1.7) is defined by
\[E(u)=\frac{1}{2}\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu^{+}(s)-\frac{1}{2}\int\limits_{[0,\overline{s}]}[u]_{s}^{2}\,d\mu^{-}(s)-\frac{1}{2}\int\limits_{\Omega}\left[a\left(u^{-}\right)^{2}+b\left(u^{+}\right)^{2}\right]\,dx-\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}|u|^{2^{*}_{s_{\sharp}}}\,dx. \tag{3.1}\]
**Remark 3.2**.: Note that in the functional (3.1) the term arising from the negative part of the measure \(\mu\) can be absorbed in the norm. In fact, by Proposition 2.3 we have that
\[\int\limits_{[0,\overline{s}]}[u]_{s}^{2}\,d\mu^{-}(s)\leqslant c_{0}(N, \Omega)\,\gamma\int\limits_{[\overline{s},1]}[u]_{s}^{2}\,d\mu(s)\leqslant c _{0}(N,\Omega)\,\gamma\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu^{+}(s).\]
In particular, if \(\gamma\) is sufficiently small (possibly depending on \(N\) and \(\Omega\)) it follows that
\[E(u)\geqslant\frac{1}{4}\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu^{+}(s)-\frac{1}{2}\int\limits_{\Omega}\left[a\left(u^{-}\right)^{2}+b\left(u^{+}\right)^{2}\right]\,dx-\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}|u|^{2^{*}_{s_{\sharp}}}\,dx.\]
We now state a weak convergence result (to be used below in the analysis of Palais-Smale sequences in the forthcoming Proposition 3.4):
**Lemma 3.3**.: _Let \(u_{n}\) be a bounded sequence in \(\mathscr{X}(\Omega)\)._
_Then, there exists \(u:\mathbb{R}^{N}\to\mathbb{R}\) such that, up to a subsequence, for any \(v\in\mathscr{X}(\Omega)\),_
\[\begin{split}&\lim\limits_{n\to+\infty}\int\limits_{[0, \overline{s}]}\left(\,\iint\limits_{\mathbb{R}^{2N}}\frac{c_{N,s}(u_{n}(x)-u_{ n}(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,dx\,dy\right)\,d\mu^{-}(s)\\ &\qquad=\int\limits_{[0,\overline{s}]}\left(\,\iint\limits_{ \mathbb{R}^{2N}}\frac{c_{N,s}(u(x)-u(y))(v(x)-v(y))}{|x-y|^{N+2s}}\,dx\,dy \right)\,d\mu^{-}(s).\end{split} \tag{3.2}\]
_Also,_
\[u_{n}\text{ converges to }u\text{ in }L^{1}(\Omega)\text{ as }n\to+\infty. \tag{3.3}\]
Proof.: By (2.1) and Proposition 2.3,
\[\begin{split}\int\limits_{[0,\overline{s}]}\left(c_{N,s}\iint \limits_{\mathbb{R}^{2N}}\frac{|u_{n}(x)-u_{n}(y)|^{2}}{|x-y|^{N+2s}}\,dx\,dy \right)\,d\mu^{-}(s)=\int\limits_{[0,\overline{s}]}[u_{n}]_{s}^{2}\,d\mu^{-}(s )\\ \leqslant c_{0}\,\gamma\int\limits_{[\overline{s},1]}[u_{n}]_{s}^ {2}\,d\mu(s)\leqslant c_{0}\,\gamma\int\limits_{[0,1]}[u_{n}]_{s}^{2}\,d\mu^{+ }(s),\end{split}\]
which is bounded uniformly in \(n\).
The desired result in (3.2) now follows from the Banach-Alaoglu Theorem.
It remains to establish (3.3). To this aim, we consider three cases: \(\mu^{-}\) is the zero measure, \(\mu^{-}\) is a Dirac measure at \(0\) and, as a last possibility, \(\mu^{-}((0,\overline{s}])>0\).
In the first two cases, we could directly use Proposition 2.4, obtaining that, up to a subsequence, \(u_{n}\) converges to some \(u\) in \(L^{2}(\mathbb{R}^{N})\). This \(u\) would satisfy both (3.2) and (3.3) (because in the first case (3.2) is void, and in the second case, recalling footnote 1, it has to be interpreted as weak convergence in \(L^{2}(\mathbb{R}^{N})\)).
So, we can focus on the third case, namely we suppose that \(\mu^{-}((0,\overline{s}])>0\). Hence, by the Dominated Convergence Theorem,
\[\lim\limits_{\epsilon\searrow 0}\mu^{-}([\epsilon,\overline{s}])=\mu^{-}((0, \overline{s}])>0.\]
Accordingly, there exists \(\epsilon_{0}\in(0,\overline{s}]\) such that \(\mu^{-}([\epsilon_{0},\overline{s}])>0\).
From this and Lemma 2.1, we have that
\[\mu^{-}([\epsilon_{0},\overline{s}])[u_{n}]_{\epsilon_{0}}^{2}\leqslant c^{2}(N, \Omega)\,\int\limits_{[\epsilon_{0},\overline{s}]}[u_{n}]_{s}^{2}\,d\mu^{-}(s),\]
which is bounded uniformly in \(n\), thanks to Proposition 2.3. Thus, by the compactness result for fractional Sobolev spaces (see e.g. [10]), we obtain (3.3), as desired.
Now we address the convergence of the Palais-Smale sequences. For this, we first point out that, in view of (1.11), there exists \(\epsilon_{0}=\epsilon_{0}(N,\Omega,s_{\sharp},a,b)\in\left(0,\frac{\mathcal{S}}{|\Omega|^{(2s_{\sharp})/N}}\right)\) such that
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{\mathcal{S}}{|\Omega|^{(2s_{\sharp})/N}}+\epsilon_{0}.\]
Hence, we define
\[\theta_{0}=\theta_{0}(N,\Omega,s_{\sharp},a,b):=\frac{|\Omega|^{(2s_{\sharp})/N}}{\mathcal{S}}\,\epsilon_{0}\in(0,1)\]
and we see that
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{\mathcal{S}}{|\Omega|^{(2s_{\sharp})/N}}(1-\theta_{0}). \tag{3.4}\]
With this notation, we have:
**Proposition 3.4**.: _Let \(\mathcal{S}\) be as in (1.10) and_
\[c^{*}:=\frac{s_{\sharp}}{N}\left((1-\theta_{0})\mathcal{S}\right)^{\frac{N}{2 s_{\sharp}}}. \tag{3.5}\]
_Then, there exists \(\gamma_{0}>0\), depending on \(N\), \(\Omega\), \(s_{\sharp}\), \(a\) and \(b\), such that if \(\gamma\in[0,\gamma_{0}]\) and \(c\in(0,c^{*})\), then every \((\text{PS})_{c}\) sequence of the functional (3.1) has a subsequence that converges weakly to a nontrivial critical point of (3.1)._
Proof.: Let \(u_{n}\) be a \((\text{PS})_{c}\) sequence of the functional \(E\), i.e.
\[\begin{split}&\lim\limits_{n\to+\infty}E(u_{n})\\ =&\lim\limits_{n\to+\infty}\frac{1}{2}\int\limits_{[0,1]}[u_{n}]_{s}^{2}\,d\mu^{+}(s)-\frac{1}{2}\int\limits_{[0,\overline{s}]}[u_{n}]_{s}^{2}\,d\mu^{-}(s)\\ &\qquad-\frac{1}{2}\int\limits_{\Omega}\left[a\,(u_{n}^{-})^{2}+b\,(u_{n}^{+})^{2}\right]\,dx-\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}|u_{n}|^{2^{*}_{s_{\sharp}}}\,dx\\ =&c\end{split} \tag{3.6}\]
and \(dE(u_{n})\) converges to \(0\) in the dual of \(\mathcal{X}(\Omega)\), namely
\[\lim\limits_{n\to+\infty}\sup\limits_{v\in\mathcal{X}(\Omega)}\left|\langle dE (u_{n}),v\rangle\right|=0. \tag{3.7}\]
Since, for all \(v\in\mathcal{X}(\Omega)\),
\[\langle dE(u_{n}),v\rangle=\int\limits_{[0,1]}\left(c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{\left(u_{n}(x)-u_{n}(y)\right)\left(v(x)-v(y)\right)}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{+}(s)\] \[\qquad\qquad-\int\limits_{[0,\overline{s}]}\left(c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{\left(u_{n}(x)-u_{n}(y)\right)\left(v(x)-v(y)\right)}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{-}(s)\] \[\qquad\qquad-\int\limits_{\Omega}\left(b\,u_{n}^{+}-a\,u_{n}^{-}\right)v\,dx-\int\limits_{\Omega}|u_{n}|^{2^{*}_{s_{\sharp}}-2}\,u_{n}v\,dx,\]
choosing \(v:=u_{n}\) in (3.7), we obtain that
\[0= \lim_{n\rightarrow+\infty}\langle dE(u_{n}),u_{n}\rangle\] \[= \lim_{n\rightarrow+\infty}\int\limits_{[0,1]}\left(c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{|u_{n}(x)-u_{n}(y)|^{2}}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{+}(s)\] \[\qquad\quad-\int\limits_{[0,\overline{s}]}\left(c_{N,s}\iint\limits_{\mathbb{R}^{2N}}\frac{|u_{n}(x)-u_{n}(y)|^{2}}{|x-y|^{N+2s}}\,dx\,dy\right)d\mu^{-}(s)\] \[\qquad\quad-\int\limits_{\Omega}\left[b\left(u_{n}^{+}\right)^{2}+a\left(u_{n}^{-}\right)^{2}\right]dx-\int\limits_{\Omega}|u_{n}|^{2^{*}_{s_{\sharp}}}\,dx\] \[= \lim_{n\rightarrow+\infty}\int\limits_{[0,1]}[u_{n}]_{s}^{2}\,d\mu^{+}(s)-\int\limits_{[0,\overline{s}]}[u_{n}]_{s}^{2}\,d\mu^{-}(s)-\int\limits_{\Omega}\left[a\left(u_{n}^{-}\right)^{2}+b\left(u_{n}^{+}\right)^{2}\right]\,dx-\int\limits_{\Omega}|u_{n}|^{2^{*}_{s_{\sharp}}}\,dx\] \[= \lim_{n\rightarrow+\infty}2E(u_{n})+\left(\frac{2}{2^{*}_{s_{\sharp}}}-1\right)\int\limits_{\Omega}|u_{n}|^{2^{*}_{s_{\sharp}}}\,dx. \tag{3.8}\]
Combining this and (3.6), we infer that
\[0=2c+\left(\frac{2}{2^{*}_{s_{\sharp}}}-1\right)\lim_{n\rightarrow+\infty} \int\limits_{\Omega}|u_{n}|^{2^{*}_{s_{\sharp}}}\,dx, \tag{3.9}\]
yielding that, for large \(n\),
\[\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{\sharp}}}\right)\|u_{n}\|^{2^{*}_{s_{ \sharp}}}_{L^{2^{*}_{s_{\sharp}}}(\Omega)}\leqslant c+1. \tag{3.10}\]
Moreover, from this and the Holder inequality, it follows that
\[\int\limits_{\Omega}\left[a\left(u_{n}^{-}\right)^{2}+b\left(u_{n}^{+}\right) ^{2}\right]\,dx\leqslant\max\{a,b\}\|u_{n}\|^{2}_{L^{2}(\Omega)}\leqslant\max \{a,b\}|\Omega|^{(2s_{\sharp})/N}\|u_{n}\|^{2}_{L^{2^{*}_{s_{\sharp}}}(\Omega)} \tag{3.11}\]
Now, by (3.6) and Proposition 2.3, we have that, as soon as \(n\) is big enough,
\[\frac{1}{2}\int\limits_{[0,1]}[u_{n}]_{s}^{2}\,d\mu^{+}(s)\] \[\leqslant \frac{1}{2}\int\limits_{[0,\overline{s}]}[u_{n}]_{s}^{2}\,d\mu^{ -}(s)+\frac{1}{2}\int\limits_{\Omega}\left[a\left(u_{n}^{-}\right)^{2}+b\left( u_{n}^{+}\right)^{2}\right]\,dx+\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}|u_{n}| ^{2^{*}_{s_{\sharp}}}\,dx+c+1\] \[\leqslant \frac{c_{0}(N,\Omega)\gamma}{2}\int\limits_{[\overline{s},1]}[u_{ n}]_{s}^{2}\,d\mu(s)+\frac{1}{2}\int\limits_{\Omega}\left[a\left(u_{n}^{-}\right)^{2}+b \left(u_{n}^{+}\right)^{2}\right]\,dx+\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_ {\Omega}|u_{n}|^{2^{*}_{s_{\sharp}}}\,dx+c+1,\]
and therefore, if \(\gamma\) is sufficiently small (possibly depending on \(N\) and \(\Omega\)),
\[\frac{1}{4}\int\limits_{[0,1]}[u_{n}]_{s}^{2}\,d\mu^{+}(s)\leqslant\frac{1}{2 }\int\limits_{\Omega}\left[a\left(u_{n}^{-}\right)^{2}+b\left(u_{n}^{+}\right) ^{2}\right]\,dx+\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}|u_{n}|^{2^{* }_{s_{\sharp}}}\,dx+c+1.\]
From this, (3.10) and (3.11), we obtain that
\[\frac{1}{4}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)\leqslant\frac{1}{2}\left(\frac{(c+1)N}{s_{\sharp}}\right)^{\frac{2}{2^{*}_{s_{\sharp}}}}\max\{a,b\}|\Omega|^{(2s_{\sharp})/N}+\frac{N(c+1)}{2s_{\sharp}},\]
which says that \(\int_{[0,1]}[u_{n}]_{s}^{2}\,d\mu^{+}(s)\) is uniformly bounded in \(n\).
Hence, in view of Lemma 2.2 and Proposition 2.4, there exists \(u\in\mathcal{X}(\Omega)\) such that, up to subsequences,
\[\begin{split}& u_{n}\rightharpoonup u\text{ in }\mathcal{X}(\Omega),\\ & u_{n}\to u\text{ in }L^{r}(\Omega)\text{ for every }r\in[1,2^{*}_{s_{\sharp}}),\\ & u_{n}\to u\text{ a.e. in }\Omega.\end{split} \tag{3.12}\]
Furthermore, we observe that \(u\) is a weak solution of (1.7), according to Definition 3.1, thanks to the convergence statements in (3.12) and Lemma 3.3.
It remains to prove that
\[u\not\equiv 0. \tag{3.13}\]
To this end, suppose by contradiction that \(u\equiv 0\). We recall from (3.8) that
\[0 = \lim_{n\to+\infty}\langle dE(u_{n}),u_{n}\rangle\] \[= \lim_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d \mu^{+}(s)-\int\limits_{[0,\overline{s}]}\left[u_{n}\right]_{s}^{2}d\mu^{-}(s) -\int\limits_{\Omega}\left[a\left(u_{n}^{-}\right)^{2}+b\left(u_{n}^{+}\right) ^{2}\right]\,dx-\int\limits_{\Omega}\left|u_{n}\right|^{2^{*}_{s_{\sharp}}}dx\] \[= \lim_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d \mu^{+}(s)-\int\limits_{[0,\overline{s}]}\left[u_{n}\right]_{s}^{2}d\mu^{-}(s )-\int\limits_{\Omega}\left|u_{n}\right|^{2^{*}_{s_{\sharp}}}dx.\]
Thus, exploiting Proposition 2.3, we have that
\[0 \geqslant \lim_{n\to+\infty}\left(1-c_{0}(N,\Omega)\gamma\right)\int\limits _{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)-\int\limits_{\Omega}\left|u_{n} \right|^{2^{*}_{s_{\sharp}}}dx.\]
Accordingly, recalling the definition of \(\mathcal{S}\) in (1.10), we infer that
\[0 \geqslant \lim_{n\to+\infty}\left(1-c_{0}(N,\Omega)\gamma\right)\int \limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)-\mathcal{S}^{-\frac{2^{*} _{s_{\sharp}}}{2}}\left(\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}( s)\right)^{\frac{2^{*}_{s_{\sharp}}}{2}}\] \[= \left(1-c_{0}(N,\Omega)\gamma\right)\] \[\times\lim_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s} ^{2}d\mu^{+}(s)\left(1-\frac{\mathcal{S}^{-\frac{2^{*}_{s_{\sharp}}}{2}}}{ \left(1-c_{0}(N,\Omega)\gamma\right)}\left(\int\limits_{[0,1]}\left[u_{n} \right]_{s}^{2}d\mu^{+}(s)\right)^{\frac{2^{*}_{s_{\sharp}}}{2}-1}\right).\]
Now, choosing \(\gamma\) sufficiently small (possibly in dependence of \(N\) and \(\Omega\)) so that \(1-c_{0}(N,\Omega)\gamma>0\), we conclude that
\[0\geqslant\lim_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d \mu^{+}(s)\left(1-\frac{\mathcal{S}^{-\frac{2^{*}_{s_{\sharp}}}{2}}}{\left(1-c _{0}(N,\Omega)\gamma\right)}\left(\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2 }d\mu^{+}(s)\right)^{\frac{2^{*}_{s_{\sharp}}}{2}-1}\right). \tag{3.14}\]
We observe that
\[\liminf_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)>0. \tag{3.15}\]
Indeed, suppose by contradiction that
\[\liminf_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)=0.\]
Then, by Proposition 2.3 we would also have that
\[\liminf_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{-}(s)=0,\]
and therefore, by (3.6),
\[0<c=\liminf_{n\to+\infty}\left(-\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}\left|u_{n}\right|^{2^{*}_{s_{\sharp}}}dx\right)\leqslant 0,\]
which is a contradiction and thus establishes (3.15).
Thanks to (3.14) and (3.15), we conclude that
\[0\geqslant\limsup_{n\to+\infty}\left(1-\frac{\mathcal{S}^{-\frac{2^{*}_{s_{\sharp}}}{2}}}{\left(1-c_{0}(N,\Omega)\gamma\right)}\left(\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)\right)^{\frac{2^{*}_{s_{\sharp}}}{2}-1}\right),\]
which in turn gives that
\[\liminf_{n\to+\infty}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)\geqslant\left(1-c_{0}(N,\Omega)\gamma\right)^{\frac{2}{2^{*}_{s_{\sharp}}-2}}\mathcal{S}^{\frac{2^{*}_{s_{\sharp}}}{2^{*}_{s_{\sharp}}-2}}=\left(1-c_{0}(N,\Omega)\gamma\right)^{\frac{N-2s_{\sharp}}{2s_{\sharp}}}\mathcal{S}^{\frac{N}{2s_{\sharp}}}. \tag{3.16}\]
Additionally, using again (3.6), and recalling the strong convergence statement in (3.12),
\[c=\lim_{n\to+\infty}\frac{1}{2}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)-\frac{1}{2}\int\limits_{[0,\overline{s}]}\left[u_{n}\right]_{s}^{2}d\mu^{-}(s)-\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}\left|u_{n}\right|^{2^{*}_{s_{\sharp}}}dx.\]
Hence, exploiting Proposition 2.3, this gives that
\[c\geqslant\lim_{n\to+\infty}\frac{1-c_{0}(N,\Omega)\gamma}{2}\int\limits_{[ 0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)-\frac{1}{2_{s_{\sharp}}^{*}}\int \limits_{\Omega}\left|u_{n}\right|^{2_{\sharp}^{*}}dx.\]
From this and (3.9) it follows that
\[c\geqslant\lim_{n\to+\infty}\frac{1-c_{0}(N,\Omega)\gamma}{2}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s)-\frac{(N-2s_{\sharp})\,c}{2s_{\sharp}},\]
and therefore
\[\frac{Nc}{2s_{\sharp}}\geqslant\lim_{n\to+\infty}\frac{1-c_{0}(N,\Omega)\gamma}{2}\int\limits_{[0,1]}\left[u_{n}\right]_{s}^{2}d\mu^{+}(s).\]
This and (3.16) give that
\[\frac{Nc}{s_{\sharp}}\geqslant\left(1-c_{0}(N,\Omega)\gamma\right)^{\frac{N}{2s_{\sharp}}}\mathcal{S}^{\frac{N}{2s_{\sharp}}},\]
and thus, recalling the definition of \(c^{*}\) in (3.5),
\[c^{*}>c\geqslant\left(1-c_{0}(N,\Omega)\gamma\right)^{\frac{N}{2s_{\sharp}}}\frac{s_{\sharp}}{N}\,\mathcal{S}^{\frac{N}{2s_{\sharp}}}=\left(\frac{1-c_{0}(N,\Omega)\gamma}{1-\theta_{0}}\right)^{\frac{N}{2s_{\sharp}}}c^{*}.\]
We point out that this implies that
\[c_{0}(N,\Omega)\gamma\geqslant\theta_{0},\]
hence, choosing \(\gamma\) sufficiently small, possibly in dependence of \(N\), \(\Omega\), \(s_{\sharp}\), \(a\) and \(b\), we obtain the desired contradiction.
Then, the claim in (3.13) is established, completing the proof of Proposition 3.4.
## 4. Existence theory and proof of Theorem 1.1
With the preliminary work carried out so far, we are now in position of proving the existence result in Theorem 1.1.
Proof of Theorem 1.1.: The aim is to exploit [10, Theorem 4.1]. To this end, we set \(E_{j}\) to be the eigenspace associated with the eigenvalue \(\lambda_{j}\) and we remark that, for all \(u\in E_{j}\) with \(j\in\{1,\ldots,l\}\),
\[\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu(s)=\lambda_{j}\|u\|_{L^{2}(\Omega)}^{2} \leqslant\lambda_{l}\|u\|_{L^{2}(\Omega)}^{2}.\]
Now, we observe that, by the Hölder inequality,
\[\|u\|_{L^{2}(\Omega)}^{2^{*}_{s_{\sharp}}}\leqslant|\Omega|^{\frac{2s_{\sharp}}{N-2s_{\sharp}}}\|u\|_{L^{2^{*}_{s_{\sharp}}}(\Omega)}^{2^{*}_{s_{\sharp}}},\]
and thus
\[\int\limits_{\Omega}|u|^{2^{*}_{s_{\sharp}}}\,dx=\|u\|_{L^{2^{*}_{s_{\sharp}}}(\Omega)}^{2^{*}_{s_{\sharp}}}\geqslant|\Omega|^{-\frac{2s_{\sharp}}{N-2s_{\sharp}}}\|u\|_{L^{2}(\Omega)}^{2^{*}_{s_{\sharp}}}.\]
Consequently, recalling the definition of the functional in (3.1), we have that, for all \(u\in E_{j}\) with \(j\in\{1,\ldots,l\}\),
\[E(u)=\frac{1}{2}\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu^{+}(s)-\frac{1}{2}\int\limits_{[0,\overline{s}]}[u]_{s}^{2}\,d\mu^{-}(s)-\frac{1}{2}\int\limits_{\Omega}\left[a\left(u^{-}\right)^{2}+b\left(u^{+}\right)^{2}\right]\,dx-\frac{1}{2^{*}_{s_{\sharp}}}\int\limits_{\Omega}|u|^{2^{*}_{s_{\sharp}}}\,dx\]
\[\leqslant\frac{1}{2}\int\limits_{[0,1]}[u]_{s}^{2}\,d\mu(s)-\frac{\min\left\{a,b\right\}}{2}\|u\|_{L^{2}(\Omega)}^{2}-\frac{1}{2^{*}_{s_{\sharp}}}|\Omega|^{-\frac{2s_{\sharp}}{N-2s_{\sharp}}}\|u\|_{L^{2}(\Omega)}^{2^{*}_{s_{\sharp}}}\]
\[\leqslant\frac{\lambda_{l}}{2}\|u\|_{L^{2}(\Omega)}^{2}-\frac{\min\left\{a,b\right\}}{2}\|u\|_{L^{2}(\Omega)}^{2}-\frac{1}{2^{*}_{s_{\sharp}}}|\Omega|^{-\frac{2s_{\sharp}}{N-2s_{\sharp}}}\|u\|_{L^{2}(\Omega)}^{2^{*}_{s_{\sharp}}}. \tag{4.1}\]
Now we consider the function
\[h(t):=\frac{1}{2}\big(\lambda_{l}-\min\left\{a,b\right\}\big)t^{2}-\frac{1}{2^{*}_{s_{\sharp}}}|\Omega|^{-\frac{2s_{\sharp}}{N-2s_{\sharp}}}t^{2^{*}_{s_{\sharp}}}\]
and we observe that \(\lambda_{l}-\min\left\{a,b\right\}>0\). Accordingly, we obtain that
\[\max_{t\geqslant 0}h(t)=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{\sharp}}}\right)|\Omega|\big(\lambda_{l}-\min\left\{a,b\right\}\big)^{\frac{N}{2s_{\sharp}}}.\]
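Indeed, this value follows from an elementary computation: the maximum is attained at the critical point of \(h\), where
\[h'(t)=\big(\lambda_{l}-\min\{a,b\}\big)t-|\Omega|^{-\frac{2s_{\sharp}}{N-2s_{\sharp}}}t^{2^{*}_{s_{\sharp}}-1}=0\;\Longrightarrow\;t_{\max}^{2^{*}_{s_{\sharp}}-2}=\big(\lambda_{l}-\min\{a,b\}\big)|\Omega|^{\frac{2s_{\sharp}}{N-2s_{\sharp}}},\]
so that
\[h(t_{\max})=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{\sharp}}}\right)\big(\lambda_{l}-\min\{a,b\}\big)t_{\max}^{2}=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{\sharp}}}\right)|\Omega|\big(\lambda_{l}-\min\{a,b\}\big)^{\frac{N}{2s_{\sharp}}},\]
where we used \(\frac{2}{2^{*}_{s_{\sharp}}-2}=\frac{N-2s_{\sharp}}{2s_{\sharp}}\) and \(|\Omega|^{\frac{2s_{\sharp}}{N-2s_{\sharp}}\cdot\frac{N-2s_{\sharp}}{2s_{\sharp}}}=|\Omega|\).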
Plugging this information into (4.1), we conclude that
\[E(u)\leqslant\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{\sharp}}}\right)|\Omega|\big(\lambda_{l}-\min\left\{a,b\right\}\big)^{\frac{N}{2s_{\sharp}}}.\]
Therefore, exploiting the assumption in (3.4) and recalling the definition of \(c^{*}\) in (3.5), we obtain that, for all \(u\in E_{j}\) with \(j\in\{1,\ldots,l\}\),
\[E(u)<\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{\sharp}}}\right)|\Omega|\left(\frac{(1 -\theta_{0})\mathcal{S}}{|\Omega|^{(2s_{\sharp})/N}}\right)^{\frac{N}{2s_{ \sharp}}}=\left(\frac{1}{2}-\frac{1}{2^{*}_{s_{\sharp}}}\right)\left((1- \theta_{0})\mathcal{S}\right)^{\frac{N}{2s_{\sharp}}}=c^{*}.\]
Thus, Proposition 3.4 ensures the convergence of Palais-Smale sequences below the threshold \(c^{*}\), provided that \(\gamma\) is sufficiently small. This allows us to use [13, Theorem 4.1], from which the desired result follows.
## 5. Examples and applications
Note that the operator introduced in (1.7) is very general and we can employ it to produce a wide range of new existence results for critical problems, depending on the particular choice of the measure \(\mu\). We showcase some of these cases below.
We start by proving that, by choosing \(\mu\) in a proper way, our result can be compared to [13, Theorem 1.3] and [16, Theorem 1.2].
**Corollary 5.1**.: _Let \(\lambda_{l}\) be the sequence of Dirichlet eigenvalues of \(-\Delta\) and \(2^{*}=2N/(N-2)\) be the classical Sobolev exponent._
_If \((a,b)\in Q_{l}\), \(b<\nu_{l-1}(a)\) and_
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{S}{|\Omega|^{2/N}}\]
_where \(S\) denotes the classical best Sobolev constant, then problem_
\[\begin{cases}-\Delta\,u=bu^{+}-au^{-}+|u|^{2^{*}-2}\,u&\text{in }\Omega,\\ u=0&\text{on }\partial\Omega,\end{cases}\]
_possesses a nontrivial solution._
Proof.: Let \(\mu:=\delta_{1}\) be the Dirac measure centred at the point \(1\). In this case \(\mu^{-}=0\) as well and \(\mu\) satisfies (1.3), (1.4) and (1.5). Furthermore, we can take \(\overline{s}:=1\) and \(s_{\sharp}:=1\), so that \(\mathcal{S}\) in (1.10) reduces to the classical Sobolev constant. The desired result now follows from Theorem 1.1.
To the best of our knowledge, our main result is new even for the case of the fractional Laplacian, and even for the case \(a=b\) in which the jumping nonlinearity is not present. For the reader's convenience, we state this result below:
**Corollary 5.2**.: _Let \(s\in[0,1)\) and \(2^{*}_{s}=(2N)/(N-2s)\) be the critical fractional Sobolev exponent._
_Denote by \(\lambda_{l}\) the sequence of Dirichlet eigenvalues of \((-\Delta)^{s}\) and by \(S(s)\) the fractional Sobolev constant corresponding to \((-\Delta)^{s}\)._
_If \((a,b)\in Q_{l}\), \(b<\nu_{l-1}(a)\) and_
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{S(s)}{|\Omega|^{2s/N}},\]
_then problem_
\[\begin{cases}(-\Delta)^{s}\,u=bu^{+}-au^{-}+|u|^{2^{*}_{s}-2}\,u&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\]
_admits a nontrivial solution._
Proof.: Here, one takes \(\mu:=\delta_{s}\), \(\overline{s}:=s\) and \(s_{\sharp}:=s\), and the desired result is a consequence of Theorem 1.1.
Now we show how to relate our new results to [1, Theorem 1.4]:
**Corollary 5.3**.: _Let \(s\in[0,1)\). Denote by \(\lambda_{l}\) the sequence of Dirichlet eigenvalues of the mixed operator \(-\Delta+(-\Delta)^{s}\), by \(S\) the classical Sobolev constant and by \(2^{*}=2N/(N-2)\) the classical critical Sobolev exponent._
_If \((a,b)\in Q_{l}\), \(b<\nu_{l-1}(a)\) and_
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{S}{|\Omega|^{2/N}},\]
_then problem_
\[\left\{\begin{aligned} -\Delta\,u+(-\Delta)^{s}\,u& =bu^{+}-au^{-}+|u|^{2^{*}-2}\,u&\text{in }\Omega,\\ u&=0&\text{in }\mathbb{R}^{N}\setminus \Omega,\end{aligned}\right. \tag{5.1}\]
_admits a nontrivial solution._
Proof.: We set \(\mu:=\delta_{1}+\delta_{s}\), where \(\delta_{1}\) and \(\delta_{s}\) denote the Dirac measures centered at the points \(1\) and \(s\) respectively. As done in the proof of Corollary 5.1, we can take \(\overline{s}:=1\) and \(s_{\sharp}:=1\) and deduce the desired result from Theorem 1.1.
Interestingly, our setting is general enough to include also operators containing small terms with the "wrong" sign. As a paradigmatic example, we showcase the following result:
**Corollary 5.4**.: _Let \(s\in[0,1)\) and \(\alpha\in\mathbb{R}\). Denote by \(\lambda_{l}\) the sequence of Dirichlet eigenvalues of the mixed operator \(-\Delta-\alpha(-\Delta)^{s}\), by \(2^{*}=2N/(N-2)\) the classical critical Sobolev exponent, and by \(S\) the classical Sobolev constant._
_Let \((a,b)\in Q_{l}\), \(b<\nu_{l-1}(a)\), and suppose that_
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{S}{|\Omega|^{2/N}}.\]
_Then, there exists \(\alpha_{0}>0\), depending only on \(N\), \(\Omega\), \(a\) and \(b\), such that if \(\alpha\leqslant\alpha_{0}\), then problem_
\[\left\{\begin{aligned} -\Delta\,u-\alpha(-\Delta)^{s}\,u& =bu^{+}-au^{-}+|u|^{2^{*}-2}\,u&\text{in }\Omega,\\ u&=0&\text{in }\mathbb{R}^{N} \setminus\Omega,\end{aligned}\right.\]
_admits a nontrivial solution._
Proof.: We define \(\mu:=\delta_{1}-\alpha\delta_{s}\), \(\overline{s}:=1\) and \(s_{\sharp}:=1\). Once again, the desired result follows from Theorem 1.1.
One more interesting application arises taking \(\mu\) as a convergent series of Dirac measures. On this matter, we provide the next two results:
**Corollary 5.5**.: _Let \(1\geqslant s_{0}>s_{1}>s_{2}>\ldots\geqslant 0\). Denote by \(\lambda_{l}\) the sequence of Dirichlet eigenvalues of the operator_
\[\sum_{k=0}^{+\infty}c_{k}(-\Delta)^{s_{k}}\qquad\text{with }\,c_{k}\geqslant 0 \,\text{ and }\,\sum_{k=0}^{+\infty}c_{k}\in(0,+\infty),\]
_by \(S_{0}\) the best Sobolev constant corresponding to the exponent \(s_{0}\) and by \(2^{*}_{s_{0}}=2N/(N-2s_{0})\) the critical Sobolev exponent._
_If \((a,b)\in Q_{l}\), \(b<\nu_{l-1}(a)\) and_
\[\min\left\{a,b\right\}>\lambda_{l}-\frac{S_{0}}{|\Omega|^{2s_{0}/N}},\]
_then problem_
\[\left\{\begin{aligned} \sum_{k=0}^{+\infty}c_{k}(-\Delta)^{s_{k} }u&=bu^{+}-au^{-}+|u|^{2^{*}_{s_{0}}-2}\,u&\text{in }\Omega,\\ u&=0&\text{in }\mathbb{R}^{N} \setminus\Omega,\end{aligned}\right.\]
_admits a nontrivial solution._
Proof.: We set
\[\mu:=\sum_{k=0}^{+\infty}c_{k}\,\delta_{s_{k}},\]
where \(\delta_{s_{k}}\) denote the Dirac measures centered at each \(s_{k}\). In this case, we can take \(\overline{s}:=0\) and \(s_{\sharp}:=s_{0}\) and deduce the desired result from Theorem 1.1.
**Corollary 5.6**.: _Let \(1\geqslant s_{0}>s_{1}>s_{2}>\ldots\geqslant 0\) and \(c_{k}\in\mathbb{R}\) for all \(k\in\mathbb{N}\) be such that_
\[\sum_{k=0}^{+\infty}c_{k}\in(0,+\infty).\]
_Assume that there exists \(\gamma\geqslant 0\) and \(\overline{k}\in\mathbb{N}\) such that_
\[c_{k}>0\ \text{ for all }k\in\{1,\ldots,\overline{k}\}\quad\text{ and }\quad\sum_{k=\overline{k}+1}^{+\infty}c_{k}\leqslant\gamma\sum_{k=0}^{ \overline{k}}c_{k}. \tag{5.2}\]
_Denote by \(\lambda_{l}\) the sequence of Dirichlet eigenvalues of the operator_
\[\sum_{k=0}^{+\infty}c_{k}(-\Delta)^{s_{k}}\]
_by \(S_{0}\) the best Sobolev constant corresponding to the exponent \(s_{0}\) and by \(2_{s_{0}}^{*}=2N/(N-2s_{0})\) the critical Sobolev exponent._
_Let \((a,b)\in Q_{l}\), \(b<\nu_{l-1}(a)\), and suppose that_
\[\min\,\{a,b\}>\lambda_{l}-\frac{S_{0}}{|\Omega|^{2s_{0}/N}}.\]
_Then, there exists \(\gamma_{0}>0\), depending only on \(N\), \(\Omega\), \(s_{0}\), \(a\) and \(b\), such that if \(\gamma\in[0,\gamma_{0}]\) then problem_
\[\begin{cases}\sum_{k=0}^{+\infty}c_{k}(-\Delta)^{s_{k}}u=bu^{+}- au^{-}+|u|^{2_{s_{0}}^{*}-2}\,u&\text{in }\Omega,\\ u=0&\text{in }\mathbb{R}^{N}\setminus\Omega,\end{cases}\]
_admits a nontrivial solution._
Proof.: We set
\[\mu:=\sum_{k=0}^{+\infty}c_{k}\,\delta_{s_{k}}\]
where \(\delta_{s_{k}}\) denote the Dirac measures centered at each \(s_{k}\).
Notice that (5.2) guarantees that the assumptions on \(\mu\) in (1.3), (1.4) and (1.5) are satisfied. Thus, we can take \(\overline{s}:=s_{\overline{k}}\) and \(s_{\sharp}:=s_{0}\) and infer the desired result from Theorem 1.1.
It is worth noting that the wide generality of our setting enables us to address also the case of the continuous superposition of fractional operators. To be more precise, the following result holds true.
**Corollary 5.7**.: _Let \(s_{\sharp}\in[0,1)\), \(\gamma\geqslant 0\) and \(f\) be a measurable function, not identically zero, such that_
\[\begin{split}& f\geqslant 0\text{ in }(s_{\sharp},1)\text{,}\\ &\int\limits_{s_{\sharp}}^{1}f(s)\,ds>0\\ \text{and}&\int\limits_{0}^{s_{\sharp}}\max\{0,-f(s)\} \,ds\leqslant\gamma\int\limits_{s_{\sharp}}^{1}f(s)\,ds.\end{split} \tag{5.3}\]
_Denote by \(\lambda_{l}\) the sequence of Dirichlet eigenvalues of the operator_
\[\int\limits_{0}^{1}f(s)(-\Delta)^{s}\,u\,ds, \tag{5.4}\]
_by \(S_{\sharp}\) the best Sobolev constant corresponding to the exponent \(s_{\sharp}\) and by \(2^{*}_{s_{\sharp}}=2N/(N-2s_{\sharp})\) the fractional critical Sobolev exponent._
_Let \((a,b)\in Q_{l}\), \(b<\nu_{l-1}(a)\), and suppose that_
\[\min\,\{a,b\}>\lambda_{l}-\frac{S_{\sharp}}{|\Omega|^{2s_{\sharp}/N}}.\]
_Then, there exists \(\gamma_{0}>0\), depending only on \(N\), \(\Omega\), \(s_{\sharp}\), \(a\) and \(b\), such that if \(\gamma\in[0,\gamma_{0}]\) then problem_
\[\left\{\begin{split}\int\limits_{0}^{1}f(s)(-\Delta)^{s}\,u\,ds& =bu^{+}-au^{-}+|u|^{2^{*}_{s_{\sharp}}-2}\,u&\text{ in }\Omega,\\ u&=0&\text{ in }\mathbb{R}^{N}\setminus \Omega,\end{split}\right.\]
_admits a nontrivial solution._
Proof.: We observe that the operator in (5.4) is a particular case of \(A_{\mu}\) as defined in (1.1), where \(d\mu(s)\) boils down to \(f(s)\,ds\).
Additionally, (5.3) guarantees that the assumptions in (1.3), (1.4) and (1.5) are satisfied. Thus, we can take \(\overline{s}:=s_{\sharp}\), with \(s_{\sharp}\) playing the role of the parameter providing the fractional critical exponent. The desired result then follows from Theorem 1.1.
## Acknowledgements
SD and EV are members of the Australian Mathematical Society (AustMS). EV is supported by the Australian Laureate Fellowship FL190100081 "Minimal surfaces, free boundaries and partial differential equations".
CS is a member of INdAM-GNAMPA.
This work was partially completed while KP was visiting the Department of Mathematics and Statistics at the University of Western Australia, and he is grateful for the hospitality of the host department. His visit to the UWA was supported by the Simons Foundation Award 962241 "Local and nonlocal variational problems with lack of compactness".
|
2309.12553 | ICASSP 2023 Acoustic Echo Cancellation Challenge | The ICASSP 2023 Acoustic Echo Cancellation Challenge is intended to stimulate
research in acoustic echo cancellation (AEC), which is an important area of
speech enhancement and is still a top issue in audio communication. This is the
fourth AEC challenge and it is enhanced by adding a second track for
personalized acoustic echo cancellation, reducing the algorithmic + buffering
latency to 20ms, as well as including a full-band version of AECMOS. We open
source two large datasets to train AEC models under both single talk and double
talk scenarios. These datasets consist of recordings from more than 10,000 real
audio devices and human speakers in real environments, as well as a synthetic
dataset. We open source an online subjective test framework and provide an
objective metric for researchers to quickly test their results. The winners of
this challenge were selected based on the average mean opinion score (MOS)
achieved across all scenarios and the word accuracy (WAcc) rate. | Ross Cutler, Ando Saabas, Tanel Parnamaa, Marju Purin, Evgenii Indenbom, Nicolae-Catalin Ristea, Jegor Gužvin, Hannes Gamper, Sebastian Braun, Robert Aichner | 2023-09-22T00:51:19Z | http://arxiv.org/abs/2309.12553v1 | # ICASSP 2023 Acoustic Echo Cancellation Challenge
###### Abstract
The ICASSP 2023 Acoustic Echo Cancellation Challenge is intended to stimulate research in acoustic echo cancellation (AEC), which is an important area of speech enhancement and is still a top issue in audio communication. This is the fourth AEC challenge and it is enhanced by adding a second track for personalized acoustic echo cancellation, reducing the algorithmic + buffering latency to 20ms, as well as including a full-band version of AECMOS [1]. We open source two large datasets to train AEC models under both single talk and double talk scenarios. These datasets consist of recordings from more than 10,000 real audio devices and human speakers in real environments, as well as a synthetic dataset. We open source an online subjective test framework and provide an objective metric for researchers to quickly test their results. The winners of this challenge were selected based on the average mean opinion score (MOS) achieved across all scenarios and the word accuracy (WAcc) rate.
To compare AEC methods fairly, a common dataset and evaluation framework are needed that everyone in the research community can use, which we provide as part of the challenge.
This AEC challenge is designed to stimulate research in the AEC domain by open-sourcing a large training dataset, test set, and subjective evaluation framework. We provide two new open-source datasets for training AEC models. The first is a real dataset captured using a large-scale crowd-sourcing effort. This dataset consists of real recordings that have been collected from over 10,000 diverse audio devices and environments. The second dataset is synthesized from speech recordings, room impulse responses, and background noise derived from [10]. An initial test set was released for the researchers to use during development and a blind test set near the end, which has been used to decide the final competition winners. We believe these datasets are large enough to facilitate deep learning and representative enough for practical usage in shipping telecommunication products (e.g., see [11]).
This is the fourth AEC challenge we have conducted. The first challenge was held at ICASSP 2021 [12], the second at INTERSPEECH 2021 [13], and the third at ICASSP 2022 [14]. These challenges had 49 participants with entries ranging from pure deep models and hybrid linear AEC + deep echo suppression to DSP methods. While the submitted AECs have consistently been getting better, there is still significant room for improvement as shown in Table 2. The two largest areas for improvement are (1) Single Talk Near End quality, which is affected by background noise, reverberation, and capture device distortions, and (2) Double Talk Other Degradations, which include missing audio, distortions, and cut-outs. In addition, the overall challenge metric \(M\) was 0.883 out of 1.0 in the ICASSP 2022 challenge, which also shows significant room for improvement.
To improve the challenge and further stimulate research in this area we have made the following changes:
* We included a second track for personalized AEC. Based on the excellent results for personalized noise suppression in the ICASSP 2022 Deep Noise Suppression Challenge [15], we expected significant improvements for the double talk scenario.
* We reduced the algorithmic latency + buffering latency from 40ms to 20ms, which is necessary for use in real-time collaboration systems. This makes achieving the same speech quality as in previous challenges more difficult.
* We provided a full-band version of AECMOS so it can be better used for full-band training and testing. AECMOS is freely available at [https://github.com/microsoft/AEC-Challenge](https://github.com/microsoft/AEC-Challenge).
An overview of the four AEC challenges is given in Table 3.
Related work is reviewed in Section II. The challenge description is given in Section III. The training dataset is described in Section IV, and the test set in Section V. We describe a baseline deep neural network-based AEC method in Section VI. The online subjective evaluation framework is discussed in Section VII, and the objective function in Section VIII. The challenge metric is given in Section IX and the challenge rules are described in [https://aka.ms/aec-challenge](https://aka.ms/aec-challenge). The results and analysis are given in Section X, and conclusions are discussed in Section XI.
ITU-T Rec. P.808 [21] provides a crowdsourcing approach for subjective speech quality assessment based on the Absolute Category Rating method described in P.800. An open-source implementation of P.808 is described in [22]. ITU-T P.835 [23] provides a subjective evaluation framework that gives standalone quality scores of speech (SIG) and background noise (BAK) in addition to the overall quality (OVRL). An open-source implementation of P.835 is described in [24]. More recent multidimensional speech quality assessment standards are ITU-T P.863.2 [25] and P.804 [26] (listening phase), which measure noisiness, coloration, discontinuity, and loudness. An open-source implementation of P.804 using crowdsourcing is described in [27].
ITU-T Rec. P.831 [28] provides guidelines on how to conduct subjective tests for network echo cancellers in the laboratory. ITU-T Rec. P.832 [8] focuses on the hands-free terminals and covers a broader range of degradations. Cutler et al. [29] provide an open-source crowdsourcing tool extending P.831 and P.832 and include validation studies that show it is accurate compared to expert listeners and repeatable across multiple days and different raters. Purin et al. [1] created an objective metric, AECMOS, based on this tool's results on hundreds of different AEC models. AECMOS has a high correlation to subjective opinion.
While there have been hundreds of papers published on deep echo cancellation since the first AEC challenge, we feel the winners of each challenge are of special note since they have been tested and evaluated using realistic and challenging test sets and subjective evaluations. Table 4 provides the top three papers for each previous AEC challenge. Note that because the performance rankings and paper acceptances were decoupled in ICASSP 2021 and INTERSPEECH 2021, the challenge placement and performance rankings are not identical, and for INTERSPEECH 2021 not well correlated. For ICASSP 2022 and 2023, the top five papers based on the challenge performance were submitted for review, fixing the disparity between paper acceptance and model performance.
## III Challenge description
### _Tracks_
This challenge included two tracks:
* Non-personalized AEC. This is similar to the ICASSP 2022 AEC Challenge.
* Personalized AEC. This adds speaker enrollment for the near end speaker. A speaker enrollment is a 15-25 second recording of the near end speaker that can be used for adapting the AEC for personalized echo cancellation. For training and model evaluation, the datasets in [https://github.com/microsoft/AEC-Challenge](https://github.com/microsoft/AEC-Challenge) can be used, which include both echo and near end only clips from users. For the blind test set, the enrollment clips will be provided.
### _Latency and runtime requirements_
Algorithmic latency is defined by the offset introduced by the whole processing chain including short time Fourier transform (STFT), inverse STFT, overlap-add, additional lookahead frames, etc., compared to just passing the signal through without modification. It does not include buffering latency. Some examples are:
* A STFT-based algorithm with window length = 20 ms and hop length = 10 ms introduces an algorithmic latency of window length - hop length = 10 ms.
* A STFT-based algorithm with window length = 32 ms and hop length = 8 ms introduces an algorithmic latency of window length - hop length = 24 ms.
* An overlap-save-based processing algorithm introduces no additional algorithmic latency.
* A time-domain convolution with a kernel size of 16 samples introduces an algorithmic latency of kernel size - 1 = 15 samples. Using one-sided padding, the operation can be made fully "causal", i.e., a left-sided padding with kernel size - 1 samples would result in no algorithmic latency.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Challenge & 1st & Rank & 2nd & Rank & 3rd & Rank \\ \hline ICASSP 2021 & [30] & 1 & [31] & 2 & [32] & 5 \\ INTERSPEECH 2021 & [33] & 6 & [34] & 8 & [35] & 10 \\ ICASSP 2022 & [36] & 1 & [37] & 2 & [38] & 3 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: **AEC Challenge top 3 performers.**
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Challenge & Tracks & Datasets & Algorithmic + & Notes \\ & & & Buffering Latency & \\ \hline ICASSP 2021 & Real-time & 2,500 real environments & 40ms & Crowdsourced P.831 \\ & & Synthetic & & \\ INTERSPEECH 2021 & Real-time & 5,000 real environments & 40ms & Made test set more comprehensive \\ & Non-real-time & Synthetic & & Increased subjective test framework accuracy \\ & & & & Added AECMOS service \\ ICASSP 2022 & Real-time & 7,500 real environments & 40ms & Added mobile scenarios \\ & & Synthetic & & Added WAcc \\ & & & & Made datasets, test sets full band \\ ICASSP 2023 & Real-time & 10,000 real environments & 20ms & Added fullband AECMOS \\ & Personalized & Synthetic & & Split near end quality into BAK and SIG \\ \hline \hline \end{tabular}
\end{table} TABLE III: **Summary of AEC challenges. BAK and SIG are measurements of the background noise quality and speech signal quality.**
* A STFT-based algorithm with window length = 20 ms, hop length = 10 ms, and one frame of lookahead introduces an algorithmic latency of (window length - hop_length) + 2*hop_length = 30 ms.
Buffering latency is defined as the latency introduced by block-wise processing, often referred to as hop length, frameshift, or temporal stride. Some examples are:
* A STFT-based processing has a buffering latency corresponding to the hop size.
* An overlap-save processing has a buffering latency corresponding to the frame size.
* A time-domain convolution with stride 1 introduces a buffering latency of 1 sample.
Real-time factor (RTF) is defined as the fraction of time it takes to execute one processing step. For a STFT-based algorithm, one processing step is the hop size. For a time-domain convolution, one processing step is 1 sample. RTF = compute time / time step.
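To make these definitions concrete, the following minimal Python sketch (ours, for illustration only; it is not part of the official challenge tooling, and the example "model" is a bare FFT) computes the two latencies and measures the RTF:

```python
import time
import numpy as np

def stft_latencies_ms(window_ms: float, hop_ms: float, lookahead_frames: int = 0):
    """Latency bookkeeping for an STFT-based enhancement pipeline."""
    algorithmic = (window_ms - hop_ms) + 2 * hop_ms * lookahead_frames
    buffering = hop_ms  # block-wise processing delays the output by one hop
    return algorithmic, buffering

def real_time_factor(process_frame, frame: np.ndarray, hop_s: float, n_runs: int = 100):
    """RTF = compute time per processing step / duration of that step."""
    start = time.perf_counter()
    for _ in range(n_runs):
        process_frame(frame)
    elapsed = (time.perf_counter() - start) / n_runs
    return elapsed / hop_s

# 20 ms window, 10 ms hop, no lookahead -> 10 ms + 10 ms = 20 ms total,
# which meets the requirement algorithmic + buffering latency <= 20 ms.
alg, buf = stft_latencies_ms(window_ms=20.0, hop_ms=10.0)
rtf = real_time_factor(lambda x: np.fft.rfft(x), np.zeros(960), hop_s=0.010)
print(f"algorithmic={alg} ms, buffering={buf} ms, total={alg + buf} ms, RTF={rtf:.4f}")
```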
All models submitted to this challenge must meet all of the below requirements:
1. To be able to execute an algorithm in real-time, and to accommodate variance in compute time which occurs in practice, we require RTF \(\leq\) 0.5 in the challenge on an Intel Core i5 Quadcore clocked at 2.4 GHz using a single thread.
2. Algorithmic latency + buffering latency \(\leq\) 20ms.
3. No future information can be used during model inference.
## IV Training datasets
The challenge includes two open-source datasets, one real and one synthetic. The datasets are available at [https://github.com/microsoft/AEC-Challenge](https://github.com/microsoft/AEC-Challenge).
### _Real dataset_
The first dataset was captured using a large-scale crowdsourcing effort. This dataset consists of more than 50,000 recordings from over 10,000 different real environments, audio devices, and human speakers in the following scenarios:
1. Far end single talk, no echo path change
2. Far end single talk, echo path change
3. Near end single talk, no echo path change
4. Double talk, no echo path change
5. Double talk, echo path change
6. Sweep signal for RT60 estimation
RT60 is the time for an initial signal's sound pressure level to attenuate 60 dB from its original level. For the far end single talk case, there is only the loudspeaker signal (far end) played back to the users and users remain silent (no near end speech). For the near end single talk case, there is no far end signal and users are prompted to speak, capturing the near end signal. For double talk, both the far end and near end signals are active, where a loudspeaker signal is played and users talk at the same time. Echo path changes were incorporated by instructing the users to move their device around or bring themselves to move around the device. The RT60 distribution for 4387 desktop environments in the real dataset for which impulse response measurements were available is estimated using a method by Karjalainen et al. [39] and shown in Figure 2. For 1251 mobile environments the RT60 distribution shown was estimated blindly from speech recordings [40]. The RT60 estimates can be used to sample the dataset for training. The near end single talk speech quality is given in Figure 1.
We use _Amazon Mechanical Turk_ as the crowdsourcing platform and wrote a custom HIT application that includes a custom tool that users download and execute to record the six scenarios described above. The dataset includes Microsoft Windows and Android devices. Each scenario includes the microphone and loopback signal (see Figure 3). Even though our application uses the WASAPI raw audio mode to bypass built-in audio effects, the PC can still include Audio DSP on the receive signal (e.g., equalization and Dynamic Range Compression); it can also include Audio DSP on the send signal, such as AEC and noise suppression.
For far end signals, we use both clean speech and real-world recordings. For clean speech far end signals, we use the speech segments from the Edinburgh dataset [41]. This corpus consists of short single speaker speech segments (1 to \(3\) seconds). We used a long short term memory (LSTM) [42] based gender detector to select an equal number of male and female speaker segments. Further, we combined \(3\) to \(5\) of these short segments to create clips of length between \(9\) and \(15\) seconds in duration. Each clip consists of a single gender speaker. We create a gender-balanced far end signal source comprising of \(500\) male and \(500\) female clips. Recordings are saved at the maximum sampling rate supported by the device and in 32-bit floating point format; in the released dataset we down-sample to 48 kHz and 16-bit using automatic gain control to minimize clipping.
For noisy speech far end signals we use \(2000\) clips from the near end single talk scenario. Clips are gender balanced to include an equal number of male and female voices.
For the far end single talk scenario, the clip is played back twice. This way, the echo canceller can be evaluated both on the first segment, when it has had minimal time to converge, and on the second segment, when the echo canceller has converged and the result is more indicative of a real call scenario.
For the double talk scenario, the far end signal is similarly played back twice, but with an additional silent segment in the middle, when only near end single talk occurs.
For near end speech, the users were prompted to read sentences from a TIMIT [43] sentence list. Approximately 10 seconds of audio is recorded while the users are reading.
For track two (personalized AEC) we include 30 seconds of target speaker for each clip in the test set. In addition, the training and test set from the ICASSP 2022 Deep Noise Suppression Challenge track two [15] can be used.
### _Synthetic dataset_
The second dataset provides 10,000 synthetic scenarios, each including single talk, double talk, near end noise, far end noise, and various nonlinear distortion scenarios. Each scenario includes a far end speech, echo signal, near end speech, and near end microphone signal clip. We use 12,000 cases (100 hours of audio) from both the clean and noisy speech datasets derived in [10] from the LibriVox project1 as source clips to sample far end and near end signals. The LibriVox project is a collection of public-domain audiobooks read by volunteers. [10] used the online subjective test framework ITU-T P.808 to select audio recordings of good quality (4.3 \(\leq\) MOS \(\leq\) 5) from the LibriVox project. The noisy speech dataset was created by mixing clean speech with noise clips sampled from AudioSet [44], Freesound2 and DEMAND [45] databases at signal to noise ratios sampled uniformly from [0, 40] dB.
Footnote 1: [https://librivox.org](https://librivox.org)
Footnote 2: [https://freesound.org](https://freesound.org)
To simulate a far end signal, we pick a random speaker from a pool of 1,627 speakers, randomly choose one of the clips from the speaker, and sample 10 seconds of audio from the clip. For the near end signal, we randomly choose another speaker and take 3-7 seconds of audio which is then zero-padded to 10 seconds. The selected far end speakers were 71% male, and 67% of the near end speakers were male. To generate an echo, we convolve a randomly chosen room impulse response from a large Microsoft unreleased database with the far end signal. The room impulse responses are generated by using Project Acoustics technology3 and the RT60 ranges from 200 ms to 1200 ms. The distribution of RT60 is shown in Figure 4. In 80% of the cases, the far end signal is processed by a nonlinear function to mimic loudspeaker distortion (the linear-to-nonlinear ratio is 0.25). For example, the transformation can be clipping the maximum amplitude, using a sigmoidal function as in [46], or applying learned distortion functions, the details of which we will describe in a future paper. This signal gets mixed with the near end signal at a signal-to-echo ratio uniformly sampled from -10 dB to 10 dB. The signal-to-echo ratio is calculated based on the clean speech signal (i.e., a signal without near end noise). The far end and near end signals are taken from the noisy dataset in 50% of the cases. The first 500 clips can be used for validation as these have a separate list of speakers and room impulse responses. Detailed metadata information can be found in the repository.
Footnote 3: [https://www.aka.ms/acoustics](https://www.aka.ms/acoustics)
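As an illustration of this mixing step, the following sketch (ours; the signal arrays are random placeholders rather than actual speech and echo) scales the echo to achieve a target signal-to-echo ratio computed on the clean near end speech:

```python
import numpy as np

def mix_at_ser(near_clean: np.ndarray, near_noisy: np.ndarray,
               echo: np.ndarray, ser_db: float) -> np.ndarray:
    """Scale `echo` so that 10*log10(P_near_clean / P_echo) == ser_db,
    then add it to the (possibly noisy) near end signal."""
    p_near = np.mean(near_clean ** 2)
    p_echo = np.mean(echo ** 2) + 1e-12
    scale = np.sqrt(p_near / (p_echo * 10 ** (ser_db / 10)))
    return near_noisy + scale * echo

rng = np.random.default_rng(0)
near = rng.standard_normal(48000)   # placeholder near end speech (1 s @ 48 kHz)
echo = rng.standard_normal(48000)   # placeholder far end echo (after RIR + distortion)
mic = mix_at_ser(near, near, echo, ser_db=rng.uniform(-10, 10))
```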
## V Test set

The test set includes clips with the following characteristics:

* Long- or varying delays, i.e., files where the delay between loopback and mic-in is atypically long or varies during the recording
* Strong speaker and/or mic distortions
* Stationary near end noise
* Non-stationary near end noise
* Recordings with audio DSP processing from the device, such as AEC or noise reduction
* Glitches, i.e., files with "choppy" audio, for example, due to very high CPU usage
* Gain variations, i.e., recordings where the far end level changes during the recording
## VI Baseline AEC method
We adapt a noise suppression model developed in [47] to the task of echo cancellation. Specifically, a recurrent neural network with gated recurrent units takes concatenated log power spectral features of the microphone signal and far end signal as input and outputs a spectral suppression mask. The short-time Fourier transform is computed based on 20ms frames with a hop size of 10 ms, and a 320-point discrete Fourier transform. We use a stack of two gated recurrent unit layers, each of size 322 nodes, followed by a fully-connected layer with a sigmoid activation function. The model has 1.3 million parameters. The estimated mask is point-wise multiplied by the magnitude spectrogram of the microphone signal to suppress the far end signal. Finally, to resynthesize the enhanced signal, an inverse short-time Fourier transform is used on the phase of the microphone signal and the estimated magnitude spectrogram. We use a mean squared error loss between the clean and enhanced magnitude spectrograms. The Adam optimizer [48] with a learning rate of 0.0003 is used to train the model. The model and the inference code are available in the challenge repository.4
Footnote 4: [https://github.com/microsoft/AEC-Challenge/tree/main/baseline/icassp2022](https://github.com/microsoft/AEC-Challenge/tree/main/baseline/icassp2022)
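For illustration, a minimal PyTorch sketch of such a mask estimator is shown below; it follows the layer sizes given above (two GRU layers of 322 units, a 320-point DFT giving 161 frequency bins), but it is our own simplification rather than the released baseline code:

```python
import torch
import torch.nn as nn

class BaselineSuppressor(nn.Module):
    """GRU mask estimator: log-power spectra of mic and far end in, mask out."""
    def __init__(self, n_bins: int = 161, hidden: int = 322):
        super().__init__()
        # Input: concatenated log-power spectra of microphone and far end signals.
        self.gru = nn.GRU(input_size=2 * n_bins, hidden_size=hidden,
                          num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, n_bins)

    def forward(self, mic_logpow, far_logpow):
        x = torch.cat([mic_logpow, far_logpow], dim=-1)  # (batch, frames, 2*n_bins)
        h, _ = self.gru(x)
        mask = torch.sigmoid(self.fc(h))                 # suppression mask in [0, 1]
        return mask

model = BaselineSuppressor()
mic = torch.randn(1, 100, 161)   # 100 frames of log-power features
far = torch.randn(1, 100, 161)
mask = model(mic, far)           # multiply point-wise with |STFT(mic)| to enhance
```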
## VII Online subjective evaluation framework
We have extended the open source P.808 Toolkit [22] with methods for evaluating echo impairments in subjective tests. We followed the _Third-party Listening Test B_ from ITU-T Rec. P.831 [28] and ITU-T Rec. P.832 [8] and adapted them to our use case as well as for the crowdsourcing approach based on the ITU-T Rec. P.808 [21] guidance.
A third-party listening test differs from typical listening-only tests (according to ITU-T Rec. P.831) in that the listener hears the recordings from the _center_ of the connection, rather than being positioned at one end of the connection as in the latter [28] (see Figure 6). Thus, the speech material should be recorded with this concept in mind. During the test session, we use different combinations of single- and multi-scale Absolute Category Ratings depending on the speech sample under evaluation. We distinguish between single talk and double talk scenarios. For near end single talk, we ask for the overall quality. For the far end single talk and double talk scenarios, we ask about echo annoyance and about other degradations in two separate questions:
1. How would you judge the degradation from the echo?
2. How would you judge other degradations (noise, missing audio, distortions, cut-outs)?
Both impairments are rated on the degradation category scale (from 1: _Very annoying_, to 5: _Imperceptible_) to obtain degradation mean opinion scores (DMOS). Note that we do not use the Other degradation category for far end single talk for evaluating echo cancellation performance, since this metric mostly reflects the quality of the original far end signal. However, we have found that having this component in the questionnaire helps increase the accuracy of echo degradation ratings (when measured against expert raters). Without the Other category, raters can sometimes assign degradations due to noise to the Echo category [29].
The setup illustrated in Figure 5 is used to process all speech samples with all of the AECs under the study. To simplify the rating process for crowdworkers, we distinguished between near end and far end single talk as well as the double talk scenarios and tried to simulate them for the test participants. In the case of near end single talk we recorded the AEC output (\(S_{out}\)). For far end single talk, we added the output of the AEC (\(S_{out}\)) with a delay of 600ms to the loopback (\(R_{in}\)) signal, yielding \(R_{in}+\) delayed \(S_{out}\). For the listener, this simulates hearing the echo of their own speech (i.e., \(R_{in}\) as an acoustic sidetone). For double talk the process is similar, but due to there being more speakers, simply adding the delayed AEC output (\(S_{out}\)) would cause confusion for the test participants. To mitigate this issue, the signals are played in stereo instead, with the loopback signal (\(R_{in}\)) played in one ear (i.e., acoustic sidetone) and the delayed output of the AEC (\(S_{out}\)) played in the other. Figure 6 was used to illustrate the double talk scenario to crowdworkers.
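A minimal sketch of this stimulus construction (ours; aside from the 600ms delay, details such as the sampling rate are assumptions) could look as follows:

```python
import numpy as np

FS = 48000
DELAY_SAMPLES = int(0.6 * FS)  # 600 ms delay applied to the AEC output

def far_end_single_talk_stimulus(r_in: np.ndarray, s_out: np.ndarray) -> np.ndarray:
    """Mono stimulus: loopback plus delayed AEC output, simulating own-voice echo."""
    delayed = np.concatenate([np.zeros(DELAY_SAMPLES), s_out])
    n = max(len(r_in), len(delayed))
    mix = np.zeros(n)
    mix[:len(r_in)] += r_in
    mix[:len(delayed)] += delayed
    return mix

def double_talk_stimulus(r_in: np.ndarray, s_out: np.ndarray) -> np.ndarray:
    """Stereo stimulus: loopback in one ear, delayed AEC output in the other."""
    delayed = np.concatenate([np.zeros(DELAY_SAMPLES), s_out])
    n = max(len(r_in), len(delayed))
    left, right = np.zeros(n), np.zeros(n)
    left[:len(r_in)] = r_in
    right[:len(delayed)] = delayed
    return np.stack([left, right], axis=-1)
```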
For the far end single talk scenario, we evaluate the second half of each clip to avoid initial degradations from initialization, convergence periods, and initial delay estimation. For the double talk scenario, we evaluate the final third of the audio clip.
The subjective test framework is available at [https://github.com/microsoft/P.808](https://github.com/microsoft/P.808). A more detailed description of the test framework and its validation is given in [29].
Figure 5: Echo canceller test set-up for Third Party Listening Test B according to ITU-T Rec. P.831 (after [26]). \(S\) is send and \(R\) is receive.
## VIII Objective metric
We have developed an objective perceptual speech quality metric called AECMOS. It can be used to stack rank different AEC methods based on MOS estimates with high accuracy. It is a neural network-based model that is trained using the ground truth human ratings obtained using our online subjective evaluation framework. The audio data used to train the AECMOS model is gathered from the numerous subjective tests that we conducted in the process of improving the quality of our AECs as well as the first two AEC challenge results. The performance of AECMOS on AEC models is given in Table 5 compared with subjective human ratings on the 18 submitted models. A more detailed description of AECMOS is given in [1]. Sample code can be found on [https://aka.ms/aec-challenge](https://aka.ms/aec-challenge).
For the CRUSE [56] noise suppression model, which MS-1 is based on, changing the frame size from 20ms to 40ms increased DNSMOS OVRL by 0.1. In addition, changing the frame size of MS-1 from 20ms to 10ms decreased DNSMOS OVRL by 0.07. Therefore, we conclude that MS-1 should be significantly better than [36] if that model also had an algorithmic + buffering latency of 20ms.
## XI Conclusions
This latest AEC challenge introduced lower algorithmic latency + buffering latency requirements and added a personalized track. The performance of the top models is exceptional, though there is still a lot of headroom for improvement, especially in the double talk other degradation, near end single talk, and WAcc metrics (see Table 10). We are optimistic that the personalized enrollment data can improve these areas much more than was shown in this challenge, which is a good area for future research. In addition, even lower latency requirements are needed for a telecommunication system to achieve end-to-end latencies of less than 50ms, which is the just-noticeable difference when latency impacts conversations [57]. End-to-end latencies significantly above 50ms have been shown to be correlated with lower participation in group meetings [58]. To achieve this goal, the algorithmic latency + buffering latency should be less than 5ms, which is another good area for future work.
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline & ByteAudio-18 & KuaiShou-13 & NWPU-19 & NWPU-10 & Nanjing-16 & NWPU\_Elevoc-7 \\ \hline KuaiShou-13 & 0.00 & & & & & & & & \\ NWPU-19 & 0.00 & 0.34 & & & & & & & \\ NWPU-10 & 0.00 & 0.00 & 0.01 & & & & & & \\ Nanjing-16 & 0.00 & 0.00 & 0.00 & 0.00 & & & & & \\ NWPU\_Elevoc-7 & 0.00 & 0.00 & 0.00 & 0.00 & 0.96 & & & & \\ NHCUSpeechLh-b11 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 7: **ANOVA for the top challenge entries. The pair-wise p-values are shown for the lower triangular matrix only.**
\begin{table}
\begin{tabular}{l|l|l|l|l|l|l|l|l|l|l} \hline \hline & Personalized & ST FE \& Echo MOS & DT \& Echo MOS & DT \& MS & ST NE \& MOS & Overall MOS & CI MOS & WAcc ratio & final Score \\ \hline _Microsoft-1*_ & N & 4.688 & 4.703 & 4.289 & 4.265 & 4.412 & **4.473** & 0.018 & 0.797 & 0.856 \\ \hline _ByteAudio p-REC-18_ & Y & 4.703 & 4.736 & 4.357 & 4.062 & 4.361 & **4.444** & 0.018 & 0.822 & 0.854 \\ _Microsoft-2*_ & N & 4.695 & **4.707** & 4.295 & 4.155 & **4.415** & **4.453** & 0.019 & 0.807 & 0.854 \\ \hline _ByteAudio-18_ & N & 4.709 & **4.707** & **4.312** & 3.993 & 4.830 & **4.833** & 0.018 & 0.822 & 0.852 \\ KuaiShou-13 & N & 4.703 & 4.679 & 4.087 & 4.099 & 4.252 & **4.364** & 0.019 & 0.780 & 0.831 \\ \hline NWPU-19 & N & 4.704 & **4.725** & **4.160** & 3.918 & 4.202 & **4.344** & 0.019 & 0.795 & 0.829 \\ NWPU-10 & N & 4.702 & 4.664 & **4.124** & 3.912 & 4.192 & **4.320** & 0.019 & 0.790 & 0.823 \\ \hline Nanjing-16 & N & 4.619 & 4.640 & 3.926 & 3.920 & 4.149 & 4.251 & 0.020 & 0.755 & 0.803 \\ NWPU\_Elevoc-7 & N & 4.661 & 4.526 & 3.804 & 3.962 & 4.250 & 4.241 & 0.021 & 0.767 & 0.803 \\ \hline _NWPU\_pAC-20_ & Y & 4.664 & 4.599 & 3.756 & 3.914 & 4.115 & 4.210 & 0.021 & 0.750 & 0.794 \\ \hline _NHCUSpeechLh-b11_ & N & 4.640 & **4.622** & 3.929 & 3.736 & **4.244** & **4.232** & 0.020 & 0.690 & 0.788 \\ \hline _Harih-1_ & N & 4.655 & 4.504 & 3.735 & 3.754 & 4.119 & 4.153 & 0.022 & 0.762 & 0.784 \\ \hline _CIV-Tenet-pAC-21_ & Y & 4.003 & 4.427 & 4.010 & 3.825 & 4.192 & 4.091 & 0.023 & 0.732 & 0.766 \\ BJT-5 & N & 4.697 & 4.062 & 3.750 & 4.065 & 4.052 & 4.125 & 0.020 & 0.649 & 0.759 \\ EGO-9 & N & 4.661 & 4.253 & 2.583 & 3.726 & 3.871 & 4.019 & 0.023 & 0.718 & 0.749 \\ baseline & N & 4.535 & 4.283 & 3.479 & 3.883 & 3.887 & 4.013 & 0.023 & 0.649 & 0.736 \\ ZhongTele-2 & N & 4.567 & 4.112 & 3.269 & 3.828 & 4.050 & 3.965 & 0.024 & 0.608 & 0.719 \\ Whanthun-6 & N & 4.358 & 4.238 & 3.511 & 3.406 & 4.031 & 3.915 & 0.024 & 0.663 & 0.718 \\ \hline _COUP-14_ & N & 4.420 & 4.347 & 3.222 & 3.560 & 3.993 & 3.968 & 0.024 & 0.582 & 0.715 \\ \hline _Orange-17_ & N & 3.603 & 3.969 & 3.578 & 3.458 & 3.607 & 3.661 & 0.277 & 0.673 & 0.667 \\ \hline _CVA-8_ & N & 4.126 & 4.268 & 3.145 & 3.038 & 3.531 & 3.622 & 0.026 & 0.636 & 0.652 \\ \hline _Tonsel-24_ & N & 3.988 & 3.453 & 3.003 & 3.457 & 3.476 & 3.475 & 0.028 & 0.480 & 0.596 \\ \hline _NoiseWater-15_ & N & 2.661 & 2.692 & 3.726 & 3.999 & 3.556 & 3.268 & 0.030 & 0.422 & 0.546 \\ \hline _NoiseWater-25_ & N & 2.688 & 2.425 & 3.781 & 3.923 & 3.517 & 3.267 & 0.031 & 0.388 & 0.537 \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Final challenge results: MOS/DMOS per scenario, word accuracy ratio (WAcc), and final score for each entry.** |
2309.00157 | Information Fusion for Assistance Systems in Production Assessment | We propose a novel methodology to define assistance systems that rely on
information fusion to combine different sources of information while providing
an assessment. The main contribution of this paper is providing a general
framework for the fusion of n number of information sources using the evidence
theory. The fusion provides a more robust prediction and an associated
uncertainty that can be used to assess the prediction likeliness. Moreover, we
provide a methodology for the information fusion of two primary sources: an
ensemble classifier based on machine data and an expert-centered model. We
demonstrate the information fusion approach using data from an industrial
setup, which rounds up the application part of this research. Furthermore, we
address the problem of data drift by proposing a methodology to update the
data-based models using an evidence theory approach. We validate the approach
using the Benchmark Tennessee Eastman while doing an ablation study of the
model update parameters. | Fernando Arévalo, Christian Alison M. Piolo, M. Tahasanul Ibrahim, Andreas Schwung | 2023-08-31T22:08:01Z | http://arxiv.org/abs/2309.00157v1 | # Information Fusion for Assistance Systems in Production Assessment
###### Abstract
Assistance systems are becoming a frequent companion for machine operators because they often summarize vital information for the production and machine condition. This information supports the operator during decision-making when an (unknown) fault occurs. Moreover, assistance systems often provide a procedure to handle the (faulty) condition. Typical components of an assistance system are a detection system, a knowledge base, and an (interactive) user interface. Data-based models are a common option for a detection system. However, systems that rely purely on data-based models are normally trained with a specific set of data, which does not necessarily guard against data drift. Thus, an anomaly or unknown condition detection mechanism is required to handle data with new fault cases. Besides, the model's capability to adapt to the unknown condition is equally important as anomaly detection--in other words, its capability to update itself automatically. Alternatively, expert-centered models are powered by the knowledge of operators, which provides the models with production context and expert domain knowledge. The challenge lies in how to combine both systems and in which framework such a fusion can be achieved. We propose a novel methodology to define assistance systems that rely on information fusion to combine different sources of information while providing an assessment. The main contribution of this paper is providing a general framework for the fusion of \(n\) information sources using the evidence theory. The fusion provides a more robust prediction and an associated uncertainty that can be used to assess the prediction likeliness. Moreover, we provide a methodology for the information fusion of two primary sources: an ensemble classifier based on machine data and an expert-centered model. We demonstrate the information fusion approach using data from an industrial setup, which rounds up the application part of this research. Furthermore, we address the problem of data drift by proposing a methodology to update the data-based models using an evidence theory approach. We validate the approach using the Benchmark Tennessee Eastman while doing an ablation study of the model update parameters.
Data drift, ensemble classification, knowledge model, model update, information fusion, Dempster-Shafer evidence theory, assistance system, anomaly detection
## I Introduction
Assistance systems accompany the operators during machinery operation by providing assessment during decision-making. These systems support the operators with (real-time) information on the process in terms of production, machine condition, and recommendations to handle faults or to improve the machine's performance. Assistance systems have typical components such as a (real-time) data collection system, a (fault) detection system, a knowledge base, a computing engine, and an (interactive) user interface [1][2][3]. Due to their high performance, data-based models are a popular choice for the detection system, with reported applications in medicine [4], industry [1][3], road infrastructure [5], and agriculture [2]. Usually, the data-based models are trained using a specific dataset and present good results. However, not all data-based models can handle new upcoming faults in the data. Hence, an anomaly detection system must have a mechanism to recognize an upcoming anomaly and the capability to learn upcoming data that differs from the original training data. Equally important is the system's capability to adapt or retrain the data-based models automatically. The retraining or automatic update of the models must consider a minimum size of training data that assures that the models capture the essential patterns to be learned.
Systems composed of a combination or fusion of several individual models often present better results and robustness than individual models (e.g., bagging and boosting). Although data-based models attain high performance, expert-centered knowledge-based models provide complementary features: production context and expert domain knowledge. The challenge here lies in how to combine a data-based model and a knowledge-based model. Thus, a common framework is required to perform a fusion of both systems. Such a framework must provide not only a way to combine the models' outputs but also a way to quantify the uncertainty. The uncertainty indicates how reliable the combined system output is.
We propose a novel methodology for assistance systems that rely on information fusion in production assessment, in which several information sources can be combined into a more robust system output. The novelty of this paper is presenting a common framework that allows the fusion of several information sources on the decision level using the evidence theory. Besides, we quantify the uncertainty of the system output to provide a better assessment of system output reliability. An essential contribution of this paper is the ability of the data-based model to handle unknown fault cases in the data, which allows the system to update its models automatically.
The individual contributions of this paper are:
* A methodology for the automatic model update of ECs when fed data with unknown fault cases. The methodology includes an uncertainty monitoring strategy that improves the anomaly detection of the EC, stores the data of the unknown condition, and retrains the pool of classifiers of the EC. We present the parameters of the automatic update module: threshold size, window size, and detection patience. The automatic update methodology is rounded off with experiments using the benchmark dataset Tennessee Eastman. The EC is tested using different fault class scenarios, in which we test the impact of a window during anomaly detection. Moreover, we present a detailed analysis of the automatic update parameters regarding retrained EC performance.
* A general framework to combine \(n\) number of information sources on the decision level to generate a robust system prediction. The framework uses the Dempster-Shafer evidence theory. Besides, the framework quantifies the uncertainty of the prediction, which can be used to assess the reliability of the system prediction.
* A methodology to combine a multiclass EC with an expert-centered knowledge-based model, in which we apply the general framework of the information fusion. The system architecture shows the components of each model, namely, the inference model and model update module. The application of the information fusion system is tested with the data of an industrial setup using a small-scaled bulk good system. The performance of the individual models (EC and knowledge-based) is compared with the combined system.
This paper is structured as follows: Section II presents a literature survey on the main topics of this paper. The theoretical background is described in Section III. Our proposed approach is detailed in Section IV; Sections IV-C and IV-D present the methodology for information fusion and model update, respectively. Section V portrays a use case for retraining the EC using the benchmark Tennessee Eastman, whereas Section VI presents a use case for information fusion using the data of a bulk good system laboratory plant. Finally, Section VII summarizes the conclusions and future work.
## II Related Work
This section reviews the literature related to information fusion, update of data-based models, and assistance systems.
### _Assistance Systems_
Assistance systems provide valuable information for the users. They can be either non-invasive or have direct control of the process. The assistance can range from recommendation systems [6][7] to interactive systems [8], or even systems that prevent user actions. Architectures of assistance systems commonly comprise the following modules: data collection, a condition detection engine, a knowledge base, and an (interactive) user interface [9]. The (fault) condition detection engine is vital to identify the current state of the machinery or process. The engine is usually powered either by a knowledge-centered model [9] or a data-based model [10]. The knowledge base plays a crucial role in the assistance system because it provides the information that supports the user when a (faulty) condition is active [9]. There are different ways to build a knowledge base, namely using ontologies [9][11], knowledge graphs [8][12], or static databases. The proposed architectures of assistance systems contain the primary modules to support the users. However, there are factors to be considered, such as the update of the condition detection engine and the knowledge base, and the quantification of the system uncertainty. The challenge lies in a holistic architecture that addresses these factors and proposes the interactions of the primary systems. This research differs from the state of the art in that we propose a holistic methodology using information fusion for assistance systems with a special focus on production assessment. In this sense, the methodology addresses the major components of the assistance system architecture. We propose a novel architecture based on the evidence theory that can combine \(n\) information sources while quantifying the uncertainty of the resulting system prediction. For this purpose, we provide a detailed description of the architecture in terms of components and their relationships, with a special focus on the role of uncertainty.
### _Information Fusion_
Information fusion is a popular approach to combining several sources of information because the combined system often yields better performance and robustness. Information fusion on the decision level is a common practice using data-based models (e.g., supervised classifiers in the case of bagging) [13]. The use of information fusion and data-based models is reported in [14][15], in which evidence theory combines models at the decision level. Information fusion using evidence theory provides an additional feature: the uncertainty quantification [10]. The uncertainty serves to assess the output reliability of the combined system [16].
\begin{table}
\begin{tabular}{l l} \hline \hline
**Symbol** & **Description** \\ \hline
DSET & Dempster-Shafer evidence theory \\
ECET & Ensemble classification using evidence theory \\
KLAFATE & Knowledge transfer framework using evidence theory \\
DSRC & Dempster-Shafer rule of combination \\
YRC & Yager rule of combination \\
EC & Ensemble classifier \\
\(m\) & Mass function \\
\(U\) & Uncertainty \\
\(w\) & Confidence weight \\
\(p\) & Prediction \\
\(D^{Tr}\) & Training data \\
\(D^{Va}\) & Validation data \\
\(D^{Te}\) & Testing data \\
\(k\) & Sensitivity-to-zero factor \\
\(F_{D}\) & Fusion using the Dempster-Shafer rule of combination \\
\(F_{Y}\) & Fusion using the Yager rule of combination \\
\(Ws\) & Window size \\
\(Th\) & Threshold size \\
\(Pt\) & Detection patience \\ \hline \hline
\end{tabular}
\end{table}
Table I: List of symbols and abbreviations.
Alternatively, knowledge-based models are expert-centered approaches containing valuable expert-domain knowledge and environment context [17]. Different knowledge-based approaches can be found in the literature using case-based reasoning (CBR) and natural language processing (NLP) [3], ontologies, and assistance systems [9]. Though combining the strengths of data-based and knowledge-based models might be considered a logical next step, finding a common framework to perform the fusion is challenging. Besides, knowledge-based models often have a low number of input features in comparison with data-based models. The latter aspect requires special attention when running the inference of the primary systems before performing an information fusion. Current research methodologies cover the information fusion of data-based models [14][15]. However, the existing literature does not report the fusion of data-based and knowledge-based models, though the heterogeneity of the sources could improve the overall result. We propose a methodology for the information fusion of a data-based model with an expert-centered model, in which we use the Dempster-Shafer evidence theory as a general framework for the fusion. Besides, we test the feasibility of the methodology using data from an industrial setup.
### _Update of Data-based Models_
The ability of data-based models to handle data with unknown fault cases has attracted growing interest in the research community [18][19]. A primary step is identifying the unknown fault case or anomaly from the upcoming data. There are different approaches reported in the literature to detect anomalies, which propose the use of evidence theory [20] and unsupervised learning [21][22]. After identifying the anomaly from the data, the next step is updating the model. In this sense, some methodologies focus on concept drift detection [23][24], incremental learning [25][26], emerging classes or labels [27][28][29], and incremental class [28]. Thus, detecting an anomaly is followed by an update or retraining of the data-based model. However, there are challenges associated with the retraining or updating of models, such as ensuring the training data is large enough to capture the essence of the upcoming fault. An essential factor to consider is the performance evaluation of the retrained models. A careful study of the parameters is required because not all upcoming faults can be handled with the same set of retraining parameters. Existing literature addresses anomaly detection [20][21][22], and even the identification of emerging classes (or unknown conditions) [27][28][29]. However, the model update using uncertainty remains unexplored. To this end, we propose a methodology for updating data-based models using DSET, in which we monitor the uncertainty of the fusion to trigger a model update. We focus on the model update of data-based models, specifically for ensemble classification using evidence theory. Besides, we perform an ablation study of the retraining parameters while showing their impact on the model performance. We demonstrate the robustness of the model update using the benchmark Tennessee Eastman.
## III Theoretical Background
This section presents the basic theory for performing information fusion and the evidential treatment of model predictions. The equations in this section are applied in sections IV-C and IV-D.
### _Evidence Theory_
Dempster-Shafer [30] defined a frame of discernment \(\Theta=\{A,B\}\) for the focal elements A and B. The power set \(2^{\Theta}\) is defined by \(2^{\Theta}=\{\phi,\{A\},\{B\},\Theta\}\). A basic probability assignment (BPA) is a function m: \(2^{\Theta}\rightarrow[0,1]\) that must comply with \(m(\phi)=0\) and \(\sum_{A\subseteq\Theta}m(A)=1\). The latter equation represents the sum of BPAs. The focal elements of \(\Theta\) are mutually exclusive: \(A\cap B=\phi\).
The _Dempster-Shafer rule of combination_ (DSRC) defines how to perform the fusion of two mass functions (e.g., sources of information) using the equation:
\[\begin{split} m_{DS}(A)&=(m_{1}\oplus m_{2})(A)\\ &=\frac{\sum_{B\cap C=A\neq\phi}m_{1}(B)m_{2}(C)}{1-\sum_{B\cap C =\phi}m_{1}(B)m_{2}(C)}\end{split} \tag{1}\]
where \(m_{DS}(A)\) is the fusion of the mass functions \(m_{1}\) and \(m_{2}\). The conflicting evidence \(b_{k}\) is defined by:
\[b_{k}=\sum_{B\cap C=\phi}m_{1}(B)m_{2}(C) \tag{2}\]
It is important to remark that, when using DSRC, the conflicting evidence is redistributed among the focal elements.
Yager [31] defined an alternative rule of combination, which, in contrast to DSRC, assigns the conflicting evidence to the focal element \(\Theta\). The _Yager rule of combination_ (YRC) is defined by the equation:

\[m_{Y}(A)=\sum_{B\cap C=A\neq\phi}m_{1}(B)m_{2}(C) \tag{3}\]

where \(m_{Y}(A)\) is the fusion of the mass functions \(m_{1}(B)\) and \(m_{2}(C)\). The focal element \(\Theta\) of the mass function \(m_{Y}\) is defined by \(m_{Y}(\Theta)=q(\Theta)+q(\phi)\), where \(q(\phi)\) represents the conflicting evidence. As in DSRC, the conflicting evidence \(q(\phi)\) is given by:

\[q(\phi)=\sum_{B\cap C=\phi}m_{1}(B)m_{2}(C) \tag{4}\]
In the case of multiple fusion operations, the mass functions are combined using the following equation:
\[m(A)=\bigg{(}\Big{(}m_{1}\oplus m_{2}\Big{)}...\oplus m_{N}\bigg{)}(A) \tag{5}\]
where \(m(A)\) is the fusion of the \(N\) mass functions, and \(N\in\mathbb{N}\).
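To make the rules of combination concrete, the following Python sketch implements Eqs. (1)-(4) for the mass-vector representation used throughout this paper (\(N\) mutually exclusive singletons plus \(\Theta\) as the last entry); the function name `combine` and the vector layout are our own illustrative choices.

```python
import numpy as np

def combine(m1, m2, rule="dempster"):
    """Fuse two mass vectors [m(C_1), ..., m(C_N), m(Theta)] defined over
    N mutually exclusive singletons plus the full set Theta.

    rule="dempster" implements Eq. (1): the conflict is renormalized away.
    rule="yager" implements Eq. (3): the conflict is assigned to Theta.
    Returns the fused mass vector and the conflict of Eq. (2)/(4).
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    n = len(m1) - 1                      # number of singleton hypotheses
    fused = np.zeros(n + 1)
    # Surviving intersections: A∩A = A, A∩Theta = A, Theta∩Theta = Theta
    for i in range(n):
        fused[i] = m1[i]*m2[i] + m1[i]*m2[n] + m1[n]*m2[i]
    fused[n] = m1[n] * m2[n]
    conflict = 1.0 - fused.sum()         # mass assigned to empty intersections
    if rule == "dempster":
        fused /= (1.0 - conflict)        # Eq. (1): renormalize
    else:
        fused[n] += conflict             # Eq. (3): conflict moves to Theta
    return fused, conflict

# Two sources over Theta = {A, B}; the last entry is the mass on Theta.
m1 = [0.8, 0.1, 0.1]
m2 = [0.6, 0.3, 0.1]
print(combine(m1, m2, "dempster"))
print(combine(m1, m2, "yager"))
```

For the two sources above, the conflict is \(0.8\cdot 0.3+0.1\cdot 0.6=0.30\); DSRC renormalizes it across the focal elements, whereas YRC moves it to \(\Theta\).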
### _Evidential Treatment of Model Predictions_
We consider models with a common frame of discernment \(\Theta=\{L_{1},L_{2},...,L_{N}\}\), where \(N\) represents the number of labels or classes, and \(N\in\mathbb{N}\). For \(N=2\), the power set is represented by \(2^{\Theta}=\{\phi,\{L_{1}\},\{L_{2}\},\{L_{1},L_{2}\}\}\), in which the last term (\(\Theta\) itself) carries the overall uncertainty \(U\). Each model (e.g., a classifier or a rule-based system) provides a prediction in the form of a unique label \(p=L_{1}\) or as an array \(p=[L_{1},L_{2},...,L_{N}]\). In section III-A, the sum of BPAs is defined as \(\sum_{A\subseteq\Theta}m(A)=1\). In [10], we proposed a strategy to transform a prediction into a mass function. This operation plays an essential role in the fusion of different information sources. We presented a sum of BPAs that considers the weight \(w_{m}\) of each focal element and the quantification of the overall uncertainty \(U\): \(S_{wbpa}=\sum_{j=1}^{N}m_{j}\cdot w_{m_{j}}+U=1\), where \(w_{m}\) is the weight of the evidence \(m\). The following conditions must be fulfilled: \(\forall m_{j}\quad m_{j}>0\) and \(w_{m_{j}}\in[0,1]\). The overall uncertainty is defined as \(U=1-\sum_{j=1}^{N}m_{j}\cdot w_{m_{j}}\), in which a high value of \(U\) represents a high uncertainty on the body of evidence (e.g., lack of evidence). We consider that the focal elements are mutually exclusive, which means that only one label is active at a time; this transforms \(S_{wbpa}\) into \(S_{wbpa}=m_{R_{j}}\cdot w_{m_{R_{j}}}+U=1\). To avoid assigning a mass of exactly zero to the inactive labels, we adapted the _sensitivity to zero_ approach of Cheng et al. [32], using the equation [33]: \(k=1-10^{-F}\), where \(k\in\mathbb{R}\), \(F\in\mathbb{N}\), and \(F\gg 1\). Thus, we transform \(S_{wbpa}\) into:
\[S_{awbpa}=\sum_{j=1}^{N}m^{\prime}_{p_{j}}\cdot w_{m_{p_{j}}}+U=1 \tag{6}\]
where \(m^{\prime}_{p_{j}}\) represents the \(j\)-th focal element, and is defined using:
\[m^{\prime}_{p_{j}}=\begin{cases}k&\text{if }p_{j}=True\\ \dfrac{1-k}{N-1}&\text{otherwise}\end{cases} \tag{7}\]
where \(k\) is the approximation factor, \(N\) is the number of focal elements of \(\Theta\), and \(N\in\mathbb{N}\). The active prediction \(p\) can be transformed into a mass function \(m\) using: \(\mathbf{m}=\mathbf{m}^{\prime}_{\mathbf{p}}\cdot\mathbf{w}_{\mathbf{p}}\). The mass function can be represented as a row vector using the following equation:
\[m=[m^{\prime}_{p_{1}}\cdot w_{p_{1}}\quad...\quad m^{\prime}_{p_{N}}\cdot w_{p_{N}}\quad U] \tag{8}\]
and the uncertainty \(U\) is defined as:
\[U=1-\sum_{j=1}^{N}m^{\prime}_{p_{j}}\cdot w_{m_{p_{j}}} \tag{9}\]
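As a minimal sketch of Eqs. (6)-(9), the following Python function turns a crisp model prediction into a weighted mass vector; the function and variable names are hypothetical, and the class weights are illustrative.

```python
import numpy as np

def prediction_to_mass(pred, labels, weights, F=4):
    """Transform a crisp prediction into a mass vector per Eqs. (6)-(9).

    pred    : the predicted label (one element of `labels`)
    labels  : the frame of discernment Theta = [L_1, ..., L_N]
    weights : per-class confidence weights w in [0, 1]
    F       : sensitivity-to-zero exponent, so that k = 1 - 10^(-F)
    Returns the row vector [m'_1 w_1, ..., m'_N w_N, U] of Eq. (8).
    """
    N = len(labels)
    k = 1.0 - 10.0 ** (-F)                       # approximation factor
    m_prime = np.full(N, (1.0 - k) / (N - 1))    # Eq. (7): inactive labels
    m_prime[labels.index(pred)] = k              # Eq. (7): active label
    m = m_prime * np.asarray(weights, float)     # weighted BPAs
    U = 1.0 - m.sum()                            # Eq. (9): overall uncertainty
    return np.append(m, U)                       # row vector of Eq. (8)

labels = ["normal", "fault_1", "fault_2"]
print(prediction_to_mass("fault_1", labels, weights=[0.9, 0.8, 0.7]))
```

Note that a low confidence weight on the active label directly increases \(U\), which is what later allows the uncertainty to flag unreliable predictions.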
## IV INFUSION: Information Fusion for Assistance Systems in Production Assessment
This research proposes an INformation FUsion approach for asSIstance systems in productiON assessment (INFUSION). This section covers the topics: theoretical background, prediction systems, information fusion, model update of the prediction system, and the assistance system.
As a first insight into this theme, we present a general system overview in Fig. 1.
The general system is composed of \(n\) systems used as information sources; the motivation behind this is the creation of a more robust system. The general system overview comprises the following blocks:
* The batch data is the numerical representation of the physical behavior of a machine. The data is split into three categories: training data \(D^{Tr}\), validation data \(D^{Va}\), and testing data \(D^{Te}\). The data is used during the training and inference processes of the models.
* The modules that form the production assessment system:
* \(n\) Systems, in which each system has a model and a model update module. For instance, model 1 has two outputs: the model prediction \(\hat{y}_{Sys_{1}}\) and its associated uncertainty \(U_{Sys_{1}}\).
* The fusion module, which combines the predictions \(\hat{y}_{Sys_{1}}\)-\(\hat{y}_{Sys_{n}}\) of the information sources _model 1_ to _model n_ into the ensemble prediction \(\hat{y}_{Sys}^{E}\).
* The model update module is triggered either by each system uncertainty (e.g., \(U_{Sys_{1}}\)) or by the ensemble uncertainty \(U_{Sys}^{E}\).
* The assessment module matches each ensemble prediction with its corresponding assessment.
* The knowledge base has the assessment for each ensemble prediction.
* The assessment is presented to the user (operator) through a user interface.
A primary motivation of this paper is the integration of data-based and knowledge-based models because the combined outcome profits from the strengths of both models. Therefore, the \(n\) systems of Fig. 1 are instantiated as two major systems: an ensemble classifier (EC) that groups different data-based models and a knowledge-based model. Section IV-A details both systems.
### _Prediction Systems_
As presented in Fig. 1, a (prediction) system consists of an inference model and an update module. The trained model represents the physical system and is used to predict the system's response when data is fed to it. The inference model can be a data-based model (e.g., a supervised classifier), an ensemble classifier (EC) formed by several models, a model built on equations representing the physical system, an ontology, or a knowledge-based model. The model update module adapts the system when the initial conditions have changed (or unknown events occur). The update is performed automatically or manually, depending on the module strategy.
A model \(M_{i}\) is trained using a training dataset \(D^{Tr}\) (in the case of data-based models), or is modeled using the relationships between the process variables and thresholds (in the case of a knowledge-based model). A training dataset \(D^{Tr}\) contains \(N_{o_{Tr}}\) observations, \(N_{f_{Tr}}\) features, and \(N_{c_{Tr}}\) classes. A frame of discernment \(\Theta\) is formed by all the labels (or classes) that the model can predict: \(\Theta=\{C_{1},...,C_{N}\}\), where \(N\in\mathbb{N}\).
Thus, a model \(M_{i}\) outputs the prediction \(\hat{y_{i}}\) when fed the testing data \(D^{Te}\):
\[\hat{y_{i}}=M_{i}(D^{Te}) \tag{10}\]
where \(\hat{y_{i}}\in\Theta\). The prediction \(\hat{y_{i}}\) is transformed into the mass function \(m_{i}\) using equations (6)-(9):
\[m_{i}=f_{m}(\hat{y_{i}},w_{M_{i}}) \tag{11}\]
where \(w_{M_{i}}\) represents the (confidence) weights for each class predicted by the model \(M_{i}\).
We focus this research on a prediction system using an EC and rule-based knowledge models. Previous research addressed these two topics separately [20][10]. Fig. 2 details the INFUSION system, where the prediction systems are instantiated as a data-based and a knowledge-based model. Thus, the data-based model is represented by the EC using the ensemble classification and evidence theory (ECET) approach [20], and the knowledge-based model is built using the knowledge transfer framework and evidence theory (KLAFATE) methodology [10]. Each system has an inference model and a model update module. It is important to note that ECET is an EC formed by \(n\) systems, specifically \(n\) supervised classifiers; ECET thus follows a structure similar to Fig. 1 for the system's prediction, except for the model update module.
The model update module of KLAFATE is manual because it relies on the expertise of the expert team. The methodology is explained in detail in [10]. The automatic model update module of ECET is introduced in this research and is explored in detail in section IV-D. The main blocks of this module are:
* The pool of classifiers and the list of hyperparameters reported in [20].
* The (re)-training pool of classifiers module, which is formed by the blocks:
* model training using either the prior training data \(D^{Tr}\) or the re-training data \(D^{Tr^{\prime}}\).
* model validation using either the prior validation data \(D^{Va}\) or the new validation data \(D^{Va^{\prime}}\).
* uncertainty quantification.
* The anomaly detection module, which monitors the ensemble uncertainty \(U_{E}\) and the anomaly prediction \(\hat{y}_{AN}\) of ECET, as well as the system uncertainty \(U_{Sys}\) and the system prediction \(\hat{y}_{Sys}\).
#### IV-A1 ECET Prediction System
In [20], we presented an approach of _ensemble classification using evidence theory_ (ECET), in which we proposed the use of information fusion to combine the predictions of \(n\) classifiers. In this paper, we extend the contribution of [20] by formalizing the approach theoretically. This theoretical formalization plays a crucial role in section IV-C and section IV-D, which correspond to the methodologies of information fusion and model update, respectively. Thus, given \(n\) classifiers, each classifier produces an output \(\hat{y_{i}}\) using equation (10), where \(\hat{y_{i}}\in\Theta\). The output is subsequently transformed into a mass function \(m_{i}\) using equations (6)-(8). The ensemble classifier (EC) is obtained by combining all the classifiers, specifically by applying the DSRC to the mass function of each classifier prediction. As described in equation (5), the DSRC can be used for multiple fusion operations; however, the fusion is performed in pairs. For instance, in the case of three classifiers, the fusion of \(m_{1}\) (corresponding to the output \(\hat{y_{1}}\) of model \(C_{1}\)) and \(m_{2}\) is performed first, and the result of this fusion, \(m_{1}\oplus m_{2}\), is then combined with \(m_{3}\). The fusion of the pair of mass functions \(m_{i}\) and \(m_{D_{i-1}}\) is represented using:
\[F_{D_{i}}=\begin{cases}m_{i}\oplus m_{D_{i-1}}&\text{if }i>1\\ 0&\text{otherwise}\end{cases} \tag{12}\]
where \(i\in\mathbb{N}\), \(m_{i}\) is the mass function of the current classifier, and \(m_{D_{i-1}}\) is the fusion of the previous mass functions.
Figure 1: General system overview.
After each fusion, \(m_{D_{i-1}}\) is updated:
\[m_{D_{i-1}}=\begin{cases}F_{D_{i}}&\text{if }i>1\\ m_{i}&\text{otherwise}\end{cases} \tag{13}\]
where \(i\in\mathbb{N}\). The last element of the fusion \(F_{D_{i}}\), which is a row vector, corresponds to the uncertainty \(U_{D_{i}}\):
\[U_{D_{i}}=\begin{cases}F_{D_{i}}[N]&\text{if }i>1\\ 0&\text{otherwise}\end{cases} \tag{14}\]
where \(N\) is the cardinality of the frame of discernment \(\Theta\), and \(N\in\mathbb{N}\). After performing the last fusion, the system prediction \(\hat{y}_{EC}\) is calculated using:
\[\hat{y}_{EC}=\operatorname*{arg\,max}_{\Theta}F_{D_{i}} \tag{15}\]
where \(\hat{y}_{EC}\in\Theta\). The system uncertainty is calculated using \(U_{D}=U_{D_{i}}\). A similar procedure is performed when using the YRC to calculate the fusion \(F_{Y_{i}}\), the previous mass function \(m_{Y_{i-1}}\), and the uncertainty \(U_{Y_{i}}\). It is important to remark that the current mass function \(m_{i}\) is used for both DSRC and YRC.
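The sequential pairwise fusion of Eqs. (12)-(15) can be sketched as follows; `combine` repeats the pairwise DSRC/YRC fusion from the sketch in section III-A, and all function names are our own.

```python
import numpy as np

def combine(m1, m2, rule="dempster"):
    # Pairwise fusion of mass vectors [m(C_1), ..., m(C_N), m(Theta)];
    # see the DSRC/YRC sketch in section III-A.
    n = len(m1) - 1
    fused = np.zeros(n + 1)
    for i in range(n):
        fused[i] = m1[i]*m2[i] + m1[i]*m2[n] + m1[n]*m2[i]
    fused[n] = m1[n] * m2[n]
    conflict = 1.0 - fused.sum()
    if rule == "dempster":
        fused /= (1.0 - conflict)
    else:
        fused[n] += conflict
    return fused

def ecet_fuse(masses, rule="dempster"):
    """Sequentially fuse n classifier mass vectors (Eqs. (12)-(13)) and
    return the EC prediction index (Eq. (15)) and the ensemble
    uncertainty, i.e. the last element of the fusion (Eq. (14))."""
    m_prev = np.asarray(masses[0], float)
    for m_i in masses[1:]:
        m_prev = combine(np.asarray(m_i, float), m_prev, rule)
    y_ec = int(np.argmax(m_prev[:-1]))   # arg max over the singletons of Theta
    U = float(m_prev[-1])
    return y_ec, U

# Three classifiers over Theta = {C_1, C_2}; the last entry is m(Theta).
masses = [[0.7, 0.1, 0.2], [0.6, 0.2, 0.2], [0.2, 0.5, 0.3]]
print(ecet_fuse(masses, "dempster"))
print(ecet_fuse(masses, "yager"))
```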
#### IV-A2 KLAFATE Prediction System
In [10], we presented a knowledge-based model using the _knowledge transfer framework using evidence theory_ (KLAFATE). The knowledge was extracted from a failure mode and effects analysis (FMEA) and modeled as rules. Thus, a knowledge rule \(R_{i}\) is defined as the function \(R_{i}=f(V_{1},...,V_{N_{V}},T_{1},...,T_{N_{T}})\), where \(V_{1}\) represents a process variable, \(T_{1}\) is a threshold or limit value of the process variable, \(N_{V}\) is the number of process variables, \(N_{T}\) is the number of thresholds, and \(N_{V},N_{T}\in\mathbb{N}\). The knowledge rules are mutually exclusive: \(R_{i}\cap R_{i+1}=\phi\). The knowledge model is represented as a set of rules [10]:
\[L_{T_{R}}=\begin{cases}L_{T_{R_{1}}}&\text{if }\quad R_{1}\\...\\ L_{T_{R_{m}}}&\text{if }\quad R_{m}\\ L_{T_{R_{m+1}}}&\text{otherwise}\end{cases} \tag{16}\]
where \(L_{T_{R_{i}}}\) represents the approximated rule \(R_{i}\), \(m\) is the number of knowledge rules, \(m\in\mathbb{N}\), and \(L_{T_{R}}\), \(R_{i}\in\Theta\). The active rule is obtained using equations (6)-(9):
\[L_{T_{R_{i}}}=\begin{cases}k&\text{if }R_{i}=\text{True}\\ \dfrac{1-k}{N-1}&\text{otherwise}\end{cases}\]
Figure 2: INFUSION overview.
where \(k\) is the approximation factor, \(N\) is the cardinality of \(\Theta\), \(k\in\mathbb{R}\), and \(N\in\mathbb{N}\). Thus, the mass function is defined using equation (8):
\[m=[L_{T_{R_{1}}}\cdot w_{R_{1}}\quad...\quad L_{T_{R_{N}}}\cdot w_{R_{N}}\quad U] \tag{17}\]
where \(w_{R_{1}}\) is the (confidence) weight of the rule \(R_{1}\), and \(U\) is the overall uncertainty. The uncertainty \(U\) is calculated using the equation (9):
\[U=1-\sum_{j=1}^{N}L_{T_{R_{j}}}\cdot w_{R_{j}} \tag{18}\]
The (confidence) weight \(w_{R_{j}}\) is defined using the equation [10]:
\[w_{R_{j}}=\frac{1}{N_{R}}\sum_{i=1}^{N_{R}}w_{R_{C_{i}}}(V,T)\]
The mass function \(m\) is transformed into the prediction \(\hat{y}_{KE}\) using:

\[\hat{y}_{KE}=\operatorname*{arg\,max}_{\Theta}m \tag{19}\]
where \(\hat{y}_{KE}\in\Theta\).
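A minimal sketch of the KLAFATE inference of Eqs. (16)-(19) follows, assuming mutually exclusive rules implemented as Python predicates over the process variables; the rule bodies, thresholds, and names are purely illustrative.

```python
import numpy as np

def klafate_predict(variables, rules, weights, F=4):
    """Evaluate the mutually exclusive knowledge rules of Eq. (16) and
    build the mass vector of Eqs. (17)-(18); `weights` holds the rule
    confidences w_R, including one entry for the "otherwise" rule.
    Returns the prediction of Eq. (19) and the mass vector with U."""
    N = len(rules) + 1                    # rules plus the "otherwise" case
    k = 1.0 - 10.0 ** (-F)
    active = next((j for j, rule in enumerate(rules) if rule(variables)), N - 1)
    L = np.full(N, (1.0 - k) / (N - 1))   # approximated rules L_{T_R}
    L[active] = k
    m = L * np.asarray(weights, float)    # Eq. (17)
    U = 1.0 - m.sum()                     # Eq. (18)
    y_ke = int(np.argmax(m))              # Eq. (19)
    return y_ke, np.append(m, U)

# Two illustrative rules over a tank process: high level and high temperature.
rules = [lambda v: v["level"] > 0.9, lambda v: v["temp"] > 80.0]
weights = [0.9, 0.85, 0.5]                # confidences, incl. "otherwise"
print(klafate_predict({"level": 0.95, "temp": 60.0}, rules, weights))
```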
### _Assistance System_
The assistance system provides an interactive source of assessment for the user while receiving the process data. It provides the current status of the system (e.g., system prediction and uncertainty), the assessment (e.g., troubleshooting through the FMEA knowledge base) in the case of a fault, and a notification in the case of an unknown condition for the consequent model update.
The knowledge of the FMEA is stored as a knowledge tuple \(TU_{i}\)[10]:
\[TU_{i}=(P,SP,FM,\mathbf{C},\mathbf{E},\mathbf{RE},R,w_{R}) \tag{20}\]
where \(FM\) represents a failure mode, \(P\) is a process, \(SP\) a subprocess, \(\mathbf{C}\) a set of causes, \(\mathbf{E}\) a set of effects, \(\mathbf{RE}\) a set of recommendations, and \(i\in\mathbb{N}\). A set of recommendations is represented as \(\mathbf{RE}=[RE_{1},...,RE_{N_{RE}}]\), where \(N_{RE}\in\mathbb{N}\); the same representation applies to the sets of effects and causes.
In the assessment context, the rule \(R\) corresponds to the system prediction \(\hat{y}_{Sys}\), and the confidence weight \(w_{R}\) to the system weight \(w_{\hat{y}_{Sys}}\), where \(R,\hat{y}_{Sys}\in\Theta_{Sys}\), and \(w_{Sys}=1\). It is important to remark that each system prediction \(\hat{y}_{Sys}\) is linked to a knowledge tuple \(TU_{i}\), a failure mode \(FM\), and a weight \(w_{Sys}\): \(\hat{y}_{Sys}\iff TU_{i}\), \(\hat{y}_{Sys}\iff FM\), and \(\hat{y}_{Sys}\iff w_{\hat{y}_{Sys}}\). In addition, a system prediction \(\hat{y}_{Sys}\) can be associated with a set of causes \(\mathbf{C}\), effects \(\mathbf{E}\), and recommendations \(\mathbf{RE}\). The assessment module is modeled through a matching function that associates a system prediction \(\hat{y}_{Sys}\) with the rest of the knowledge of the tuple \(TU_{i}\):
\[P,SP,FM,\mathbf{C},\mathbf{E},\mathbf{RE}=f_{Ma}(\hat{y}_{Sys},TU) \tag{21}\]
where \(i\in\mathbb{N}\). The matching function \(f_{Ma}\) provides the assessment while feeding the system prediction \(\hat{y}_{Sys}\), specifically returning the troubleshooting information associated with the failure mode: the process \(P\), the subprocess \(SP\), the set of causes \(\mathbf{C}\), the set of effects \(\mathbf{E}\), and the set of recommendations \(\mathbf{RE}\). The assistance system was described in detail in a previous work [10].
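The knowledge tuple of Eq. (20) and the matching function \(f_{Ma}\) of Eq. (21) can be sketched as a simple lookup; the field names and the example tuple below are hypothetical.

```python
from typing import List, NamedTuple

class KnowledgeTuple(NamedTuple):
    # Knowledge tuple TU_i of Eq. (20); field names are illustrative.
    process: str
    subprocess: str
    failure_mode: str
    causes: List[str]
    effects: List[str]
    recommendations: List[str]
    rule: int            # rule R, identified with the system prediction
    weight: float        # confidence weight w_R

def match_assessment(y_sys: int, knowledge_base: List[KnowledgeTuple]):
    """Matching function f_Ma of Eq. (21): return the troubleshooting
    information of the tuple whose rule equals the system prediction."""
    tu = next(t for t in knowledge_base if t.rule == y_sys)
    return (tu.process, tu.subprocess, tu.failure_mode,
            tu.causes, tu.effects, tu.recommendations)

kb = [KnowledgeTuple("dosing", "conveyor", "belt slippage",
                     ["worn belt"], ["reduced throughput"],
                     ["inspect and re-tension the belt"], rule=1, weight=1.0)]
print(match_assessment(1, kb))
```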
### _Information Fusion_
Information fusion is of growing research interest because combining different models improves robustness. To this end, we propose a novel framework for combining \(n\) models using DSET. Moreover, this framework is used for the fusion of a data-based model and a knowledge-based model.
Thus, as presented in Fig. 1, the system is formed by \(n\) subsystems. The system mass function \(m_{Sys}\) is obtained after applying the information fusion to all subsystems:
\[m_{Sys}(A)=\bigg{(}\Big{(}m_{Sys_{1}}\oplus m_{Sys_{2}}\Big{)}...\oplus m_{Sys_ {n}}\bigg{)}(A) \tag{22}\]
where \(n\in\mathbb{N}\), and \(m_{Sys}(A)\in\Theta_{Sys}\). The system mass function \(m_{Sys}\) is also referred to as \(F_{Sys}\). It is important to remark that all the systems share the same frame of discernment: \(\Theta_{KE}=\Theta_{EC}=\Theta_{Sys}\), and
\[\Theta_{Sys}=\{C_{1},...,C_{N_{Sys}}\} \tag{23}\]
where \(C_{1}\) represents the first class (or fault case), \(N_{Sys}\) is the number of classes (or fault cases), and \(N_{Sys}\in\mathbb{N}\).
The equation (22) can also be represented as:
\[m_{Sys}(A)=\begin{cases}(\bigoplus_{i}^{N_{Sys}}m_{Sys_{i}})(A)&\text{if }i>1\\ 0&\text{otherwise}\end{cases} \tag{24}\]
where \(i,N_{Sys}\in\mathbb{N}\).
This paper adapts the system to two main subsystems: a data-based model \(M_{EC}\) and a knowledge-based model \(M_{KE}\).
As a first step, we obtain the outputs \(\hat{y}_{EC}\) and \(\hat{y}_{KE}\) by feeding data to the models \(M_{EC}\) and \(M_{KE}\):

\[\hat{y}_{EC}=M_{EC}(D^{Te}) \tag{25}\]

and

\[\hat{y}_{KE}=M_{KE}(D^{Te}) \tag{26}\]

where \(D^{Te}\) is the testing data.

The predictions \(\hat{y}_{EC}\) and \(\hat{y}_{KE}\) are transformed into the mass functions \(m_{EC}\) and \(m_{KE}\), respectively, using equations (6)-(9):

\[m_{EC}=f_{m}(\hat{y}_{EC},w_{M_{i}}) \tag{27}\]

and

\[m_{KE}=f_{m}(\hat{y}_{KE},w_{M_{i}}) \tag{28}\]

where \(w_{M_{i}}=1\)\(\forall i\), and \(i\in\mathbb{N}\).
The next step is to obtain the system fusion \(F_{Sys}\) by applying either DSRC or YRC.
Thus, the system fusion \(F_{D_{Sys}}\) is calculated using DSRC and applying the equations (1), (2), (22), (24):
\[\begin{split} F_{D_{Sys}}(A)&=(\bigoplus_{i}^{N_{ Sys}}m_{Sys_{i}})(A)\\ &=(m_{Sys_{1}}\oplus m_{Sys_{2}})(A)\\ &=(m_{EC}\oplus m_{KE})(A)\end{split} \tag{29}\]
Likewise, the system fusion \(F_{Y_{Sys}}\) is calculated using YRC and applying the equations (3), (4), (22), (24):
\[F_{Y_{Sys}}=(m_{EC}\oplus m_{KE})(A) \tag{30}\]

The system uncertainty \(U_{D}\) is calculated from the last DSRC fusion \(F_{D_{i}}\) using:
\[U_{D_{i}}=F_{D_{i}}[|\Theta_{Sys}|] \tag{31}\]
where \(F_{D_{i}}[|\Theta_{Sys}|]\) corresponds to the overall uncertainty of the system fusion \(F_{D_{i}}\). Likewise, the system uncertainty \(U_{Y}\) is calculated using the last YRC fusion \(F_{Y_{i}}\):
\[U_{Y_{i}}=F_{Y_{i}}[|\Theta_{Sys}|] \tag{32}\]
where \(F_{Y_{i}}[|\Theta_{Sys}|]\) corresponds to the overall uncertainty of the system fusion \(F_{Y_{i}}\).
The last step is the calculation of the system mass function \(m_{Sys}\) and the system uncertainties \(U_{D}\) (using DSRC) and \(U_{Y}\) (using YRC). The system mass function \(m_{Sys}\) is obtained from the last DSRC system fusion: \(m_{Sys}=F_{D_{i}}\). The mass function \(m_{Sys}\) is then transformed into the prediction \(\hat{y}_{Sys}\) using:
\[\hat{y}_{Sys}=\operatorname*{arg\,max}_{\Theta}m_{Sys} \tag{33}\]
where \(\hat{y}_{Sys}\in\Theta_{Sys}\). Algorithm 1 describes the steps for the information fusion of \(N_{Sys}\) subsystems while feeding the testing data \(D^{Te}\), where \(N_{Sys}\in\mathbb{N}\). Algorithm 1 is an updated version of the algorithm presented in [20].
```
1:procedure Information Fusion
2:for \(j=1\) to \(N_{Sys}\) do \(\triangleright\)\(N_{Sys}\) Subsystems
3:for \(i=1\) to \(N_{D^{Te}}\) do \(\triangleright\)\(N_{D^{Te}}\) Samples
4:\(\hat{y}_{i}\gets M_{j}(S_{i})\) \(\triangleright\) by Eq. (25)
5:\(m_{i}\gets f_{m}(\hat{y}_{i},w_{i}^{M_{j}})\) \(\triangleright\) by Eq. (6)-(9), (27)
6:if \(i=1\) then
7:\(F_{D_{i-1}}=F_{Y_{i-1}}=0\)
8:\(m_{D_{i-1}}=m_{Y_{i-1}}=m_{i}\)
9:\(U_{D_{i-1}}=U_{Y_{i-1}}=0\)
10:else
11:\(F_{D_{i}}=m_{i}\oplus m_{D_{i-1}}\) \(\triangleright\) by Eq. (29)
12:\(F_{Y_{i}}=m_{i}\oplus m_{Y_{i-1}}\) \(\triangleright\) by Eq. (30)
13:\(m_{D_{i-1}}=F_{D_{i}}\)
14:\(m_{Y_{i-1}}=F_{Y_{i}}\)
15:\(U_{D_{i}}=F_{D_{i}}[|\Theta_{Sys}|]\) \(\triangleright\) by Eq. (31)
16:\(U_{Y_{i}}=F_{Y_{i}}[|\Theta_{Sys}|]\) \(\triangleright\) by Eq. (32)
17:\(m_{Sys}=F_{D_{i}}\)
18:\(\hat{y}_{Sys}=\operatorname*{arg\,max}_{\Theta}m_{Sys}\) \(\triangleright\) by Eq. (33)
19:\(U_{D}\gets U_{D_{i}}\)
20:\(U_{Y}\gets U_{Y_{i}}\)
21:return \(\hat{y}_{Sys}\), \(U_{D}\), \(U_{Y}\)
```
**Algorithm 1** Information Fusion of \(N_{Sys}\) Systems [20]
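As a runnable companion to Algorithm 1, the following sketch fuses one EC prediction with one knowledge-model prediction under DSRC (Eqs. (25)-(33)); `to_mass` and `dsrc` compress the earlier sketches, and all names are our own.

```python
import numpy as np

def to_mass(pred, N, w=1.0, F=4):
    # Crisp prediction index -> mass vector (Eqs. (6)-(9)), uniform weight w.
    k = 1.0 - 10.0 ** (-F)
    m = np.full(N, (1.0 - k) / (N - 1)) * w
    m[pred] = k * w
    return np.append(m, 1.0 - m.sum())

def dsrc(m1, m2):
    # Dempster-Shafer fusion over singletons plus Theta (Eq. (29)).
    n = len(m1) - 1
    f = np.zeros(n + 1)
    for i in range(n):
        f[i] = m1[i]*m2[i] + m1[i]*m2[n] + m1[n]*m2[i]
    f[n] = m1[n] * m2[n]
    return f / f.sum()                   # renormalize the conflict away

# EC and knowledge model agree on fault 1 over N = 3 classes.
m_ec, m_ke = to_mass(1, 3, w=0.9), to_mass(1, 3, w=0.8)
F_sys = dsrc(m_ec, m_ke)
print(np.argmax(F_sys[:-1]), F_sys[-1])  # y_Sys (Eq. (33)) and U_D (Eq. (31))
```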
### _Model Update_
The anomaly detection functionality is crucial in the model update because it identifies when an unknown condition is present. We present an (automatic) model update for ECET based on uncertainty monitoring; the (manual) model update of KLAFATE was proposed in [10]. The model update is a sequence of five steps: anomaly detection, collection of unknown data, data isolation using a window, retraining, and inference.
#### IV-D1 Model Update for ECET
Well-performing ECs are usually the result of a suitable dataset that captures the patterns of the existing data. However, the occurrence of new unknown fault cases might undermine the performance of the ECs, leading to a retraining procedure of the models. To this end, our methodology provides the theoretical basis for updating the data-based models using DSET, in which we monitor the uncertainty of the fusion to trigger a model update. The _model update of ECET_ is performed automatically using an anomaly detection strategy, in which the uncertainty is monitored. However, the model update can be set as semi-automatic (e.g., the user receives a notification before executing the model update module) in case the unknown condition needs to be analyzed in detail first. Algorithm 2 describes the sequence of the model update.
```
1:procedure Model Update
2:\(\hat{y}_{EC}\gets M_{EC}(S_{j})\)
3:\(m_{EC}\gets f_{m}(\hat{y}_{EC},w^{EC})\) \(\triangleright\) by Eq. (6)-(9)
4:if \(C_{A}=True\) then \(\triangleright\) by Eq. (36)
5:\(\hat{y}_{A}=A_{K}\)
6:\(D_{Temp_{j}}\gets collect\_data(X_{A},\hat{y}_{A})\)
7:\(i_{A}\gets i_{A}+1\)
8:if \(C_{S}=True\) then \(\triangleright\) by Eq. (39)
9:\(D_{A}^{Tr},D_{A}^{Va},D_{A}^{Te}\gets split\_data(D_{A})\)
10:\(D^{Tr}\gets D_{Old}^{Tr}\cup D_{A}^{Tr}\) \(\triangleright\) by Eq. (43)
11:\(D^{Va}\gets D_{Old}^{Va}\cup D_{A}^{Va}\) \(\triangleright\) by Eq. (44)
12:\(\hat{\textbf{M}}_{Tr}\leftarrow retrain(\textbf{M},D^{Tr})\)
13:\(\textbf{M}\leftarrow\hat{\textbf{M}}_{Tr}\) \(\triangleright\) Replace old models
14:else
15:\(\hat{y}_{A}=\operatorname*{arg\,max}_{\Theta}m_{EC}\) \(\triangleright\) by Eq. (33)
16:\(i_{A}\gets 0\)
17:return \(\hat{\textbf{M}}_{Tr}\)
```
**Algorithm 2** Model Update of ECET.
We proposed an _anomaly detection_ strategy using ECET in
[20], in which an unknown condition \(A_{K}\) was detected:
\[\hat{y}_{A}=\begin{cases}A_{K}&\text{if }C_{A}=True\\ \hat{y}_{EC}&\text{otherwise}\end{cases} \tag{34}\]
where \(\hat{y}_{A}\) is a parallel prediction to the EC prediction \(\hat{y}_{EC}\), \(A_{K}\in\mathbb{Z}\), and \(K\in\mathbb{N}\). The condition for anomalies \(C_{A}\) is defined as:
\[C_{A}=(U_{D}>Tr_{D_{Mx}})\text{ and }(U_{Y}>Tr_{Y_{Mx}}) \tag{35}\]
where \(U_{D}=b_{k}\), \(U_{Y}=q(\phi)\), \(Tr_{D_{Mx}}\) represents the maximum threshold for \(U_{D}\), \(Tr_{Y_{Mx}}\) is the maximum threshold for \(U_{Y}\). The terms \(b_{k}\) and \(q(\phi)\) are calculated using the equations (1)-(2), and (3)-(4), respectively.
In this paper, we propose the monitoring of the EC uncertainties \(U_{D_{EC}}\) and \(U_{Y_{EC}}\), as well as the system uncertainties \(U_{D_{Sys}}\) and \(U_{Y_{Sys}}\). The condition for anomalies from equation (35) is transformed into:
\[C_{A}=C_{A_{EC}}\ or\ C_{A_{Sys}} \tag{36}\]
where \(C_{A_{EC}}\) and \(C_{A_{Sys}}\) represent the condition for anomalies of EC and system, respectively. Thus, the anomaly detection of the system is defined as:
\[\hat{y}_{A_{Sys}}=\begin{cases}A_{K}&\text{if }C_{A}=True\\ \hat{y}_{Sys}&\text{otherwise}\end{cases} \tag{37}\]
The _data collection of (unknown) conditions_ needs to satisfy the condition \(C_{D}\):
\[C_{D}=C_{A}\ and\ C_{S} \tag{38}\]
where \(C_{S}\) is the condition requiring a minimum number of consecutive anomalous data samples. The condition \(C_{S}\) is defined as:
\[C_{S}=i_{A}>S_{Mn} \tag{39}\]
where \(i_{A}\) is the number of consecutive data samples, \(S_{Mn}\) is the minimum number of consecutive data samples, and \(i_{A},S_{Mn}\in\mathbb{N}\).
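A minimal monitoring-step sketch of Eqs. (35)-(39) follows; the function name and the illustrative uncertainty stream are our own.

```python
def anomaly_step(U_D, U_Y, i_A, Tr_D_mx, Tr_Y_mx, S_mn):
    """One monitoring step: C_A flags an anomaly when both fused
    uncertainties exceed their thresholds (Eq. (35)); C_S becomes true
    after S_mn consecutive anomalous samples (Eq. (39)), which enables
    the data collection condition C_D (Eq. (38))."""
    C_A = (U_D > Tr_D_mx) and (U_Y > Tr_Y_mx)   # Eq. (35)
    i_A = i_A + 1 if C_A else 0                 # consecutive-sample counter
    C_S = i_A > S_mn                            # Eq. (39)
    C_D = C_A and C_S                           # Eq. (38)
    return C_A, C_S, C_D, i_A

# Illustrative stream of (U_D, U_Y) uncertainty pairs.
i_A = 0
for U_D, U_Y in [(0.1, 0.2), (0.6, 0.7), (0.7, 0.8), (0.8, 0.9)]:
    C_A, C_S, C_D, i_A = anomaly_step(U_D, U_Y, i_A, 0.5, 0.5, 2)
    print(C_A, C_S, C_D, i_A)
```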
The collected data of the unknown condition \(D_{A}\) has the same features \(f_{Tr}\) as the (old) original data \(D\), such that \(f_{A}=f_{Tr}\). In contrast, the number of observations \(o_{A}\) might differ from that of the original data \(o_{Tr}\). Thus, the data \(D_{A}\) is represented by a number of observations \(N_{o_{A}}\), in which each observation is composed of the features \(X_{A}=f_{A}\) and the associated label (or class) \(\hat{y}_{A}\).
The data \(D_{A}\) is represented as:
\[X_{A_{S_{Mn}\times N_{f_{A}}}}\times Y_{A_{S_{Mn}\times 1}} \tag{40}\]
where \(S_{Mn}\) is the minimum number of consecutive samples of the unknown condition, \(N_{f_{A}}\) is the number of features, \(S_{Mn},N_{f_{A}}\in\mathbb{N}\), \(X_{A}\in\mathbb{R}\), and \(Y_{A}\in\mathbb{Z}\).
The data \(D_{A}\) is split into training \(D_{A}^{Tr}\) and testing data \(D_{A}^{Te}\):
\[D_{A}=\begin{cases}D_{A}^{Tr}\cup D_{A}^{Te}&\text{if }C_{D}=True\\ 0&\text{otherwise}\end{cases} \tag{41}\]
The training data \(D_{A}^{Tr}\) is further split into training data \(D_{A}^{Tr}\) and validation data \(D_{A}^{Va}\):
\[D_{A}^{Tr}=D_{A}^{Tr}\cup D_{A}^{Va} \tag{42}\]
The next step is to integrate the existing data \(D\) with the collected data \(D_{A}\) using the following equations:
\[D^{\prime Tr}=D_{Old}^{Tr}\cup D_{A}^{Tr} \tag{43}\] \[D^{Va}=D_{Old}^{Va}\cup D_{A}^{Va}\] (44) \[D^{Te}=D_{Old}^{Te}\cup D_{A}^{Te} \tag{45}\]
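The splitting and merging of Eqs. (41)-(45) can be sketched with scikit-learn; the 70/30 split ratio and the function name are assumptions for illustration, and the label 30 follows the convention used for unknown faults in section V.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def merge_anomaly_data(X_tr_old, y_tr_old, X_va_old, y_va_old,
                       X_A, anomaly_label=30):
    """Label the collected anomaly data D_A, split it (Eqs. (41)-(42)),
    and merge it with the existing data as in Eqs. (43)-(44); the test
    split of Eq. (45) is handled analogously."""
    y_A = np.full(len(X_A), anomaly_label)
    X_A_tr, X_A_va, y_A_tr, y_A_va = train_test_split(
        X_A, y_A, test_size=0.3, shuffle=False)
    X_tr = np.vstack([X_tr_old, X_A_tr])         # Eq. (43)
    y_tr = np.concatenate([y_tr_old, y_A_tr])
    X_va = np.vstack([X_va_old, X_A_va])         # Eq. (44)
    y_va = np.concatenate([y_va_old, y_A_va])
    return X_tr, y_tr, X_va, y_va

# Illustrative call with random stand-in data (5 features).
X_A = np.random.rand(10, 5)
X_tr, y_tr, X_va, y_va = merge_anomaly_data(
    np.random.rand(20, 5), np.zeros(20), np.random.rand(8, 5), np.zeros(8), X_A)
print(X_tr.shape, X_va.shape)
```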
The EC prediction \(\hat{y}_{EC}\) usually does not have a constant, steady value because of the diversity of the classifiers' predictions. For this reason, we propose a _window_ on the EC prediction \(\hat{y}_{EC}\) that eases the data isolation of the unknown condition. The window smoothes the EC output because it considers the last \(N_{w}\) samples in the calculation of the windowed EC output \(\hat{y}^{w}_{EC}\):
\[\hat{y}_{EC_{i}}^{w}=\frac{1}{N_{w}+1}\sum_{k=i-N_{w}}^{i}\hat{y}_{EC_{k}} \tag{46}\]
where \(\hat{y}_{EC_{i}}^{w}\in\Theta_{Sys}\).
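A minimal sketch of the windowed output of Eq. (46) follows; rounding the running average back to the nearest integer label is our assumption, since Eq. (46) itself returns a real-valued average.

```python
import numpy as np

def windowed_prediction(y_ec, N_w):
    """Smooth the raw EC label stream per Eq. (46): each output is the
    average of the current and the previous N_w predictions (fewer at
    the start of the stream)."""
    y_ec = np.asarray(y_ec, float)
    out = np.empty_like(y_ec)
    for i in range(len(y_ec)):
        lo = max(0, i - N_w)
        out[i] = y_ec[lo:i + 1].mean()
    return np.rint(out).astype(int)      # snap back to the nearest label

# A noisy label stream settling on fault case 7.
print(windowed_prediction([0, 0, 7, 0, 7, 7, 7, 7], N_w=3))
```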
A graphical representation of the window procedure is exemplified in Fig. 3.
Having the data and the frame of discernment updated, we can proceed with the _retraining of the pool of classifiers_. The retraining is performed using the training methodology presented in [20].
The last step is _to test the EC_ using the testing data \(D^{Te}\). For this purpose, we first update the frame of discernment \(\Theta_{Sys}\):
\[\Theta_{Sys}=\Theta_{Sys_{Old}}\cup A_{K} \tag{47}\]
where \(\Theta_{Sys_{Old}}\) is the old frame of discernment, \(A_{K}\) is the new focal element, and \(N,K\in\mathbb{N}\).
Thus, the updated \(\Theta_{Sys}\) is transformed into:
\[\Theta_{Sys}=\{F_{1},...,F_{N},A_{K}\} \tag{48}\]
Figure 3: EC using a window.
#### IV-D2 Model Update for KLAFATE
Though knowledge-based models contain valuable expert-domain knowledge, the modeling process is time-consuming and requires frequent updates to avoid knowledge obsolescence. To this end, our methodology provides the theoretical framework for uncertainty monitoring using DSET, which can be used to trigger the update of the knowledge model by the team of experts. The _model update of KLAFATE_ is triggered by an uncertainty rise, either on the system or the knowledge model. The expert team then gathers to analyze the possibility of an unknown condition. Consequently, the expert team recommends adding information sources by including signals, process variables, or hardware to capture new physical signals. The purpose of the latter is to ease the identification of unknown conditions in order to create new knowledge rules in the FMEA. Once the expert team analyzes the acquired knowledge, the knowledge rules are validated using key performance indicators (KPI) in the short and long term. The process to create a rule-based system is described in [10].
## V Use Case: Model Update for Ensemble Classification using Tennessee Eastman Dataset
As described in section IV-D, the novelty of the approach is a methodology for updating data-based models when unknown fault cases appear in the data. The methodology primarily uses an uncertainty monitoring approach based on DSET. This section presents the results of the improved anomaly detection approach and the model update methodology. The robustness of the approaches is tested using the benchmark Tennessee Eastman. We first describe the dataset and the experiment design, including the defined scenarios and the performance metrics; the results subsection then reports the performance of the experiments, and a discussion subsection closes the section with the findings and limitations of the approach. The model update modules for the data-based model (ECET) and the knowledge-based model (KLAFATE) are highlighted in green in Fig. 2.
### _Description of the Tennessee Eastman Dataset_
The benchmark Tennessee Eastman (TE) was created by Downs and Vogel with the motivation to provide an industrial-like dataset based on the Tennessee Eastman chemical plant [34]. The TE chemical plant has five principal process components: condenser, reactor, compressor, separator, and stripper. The dataset is widely used in the literature to compare the performance of data-based models. The dataset models a chemical process considering 21 fault cases and a normal operation case. The dataset is divided into training sets and testing sets. The training set consists of 480 rows of data containing 52 features for each fault, whereas the training set of the normal condition contains 500 rows of data. The testing set consists of 960 rows of data, in which the first 160 rows belong to the normal condition and the remaining 800 rows belong to the fault case. Given the prediction difficulty, the fault cases are usually grouped into three categories: easy cases (1, 2, 4, 5, 6, 7, 12, 14, 18), medium cases (8, 10, 11, 13, 16, 17, 19, 20), and hard cases (3, 9, 15, 21) [35]. A detailed dataset description can be found in [34][20].
### _Experiment Design_
We followed the procedure proposed in [20], in which we used the benchmark TE to test the performance of the proposed approaches. Besides, we considered a pool of ten classifiers (e.g., five NN-based models and five non-NN-based models) as the basis of the ECs. We considered only experiments using ML-based ECs and Hybrid ECs (a combination of non-NN-based classifiers and NN-based classifiers). The procedure is documented in detail in [20]. We trained the classifiers of the ECs using the fault cases (0,1,2,6,12) as the basis of the experiments. We defined two experiment scenarios: data isolation using a window and an update of ECs. We developed the approach using the Anaconda IDE and the libraries Scikit-learn and PyTorch [36][37][38]. We performed the experiments on an Ubuntu 20.04.3 LTS environment using a CPU i7-7700 @3.60GHz x 8, 32GB RAM, and a GPU NVIDIA GeForce GTX 1660 SUPER.
#### V-B1 Data isolation using a window
We selected the MC ECs M3 and H5-2 from the previous work [20] that showed the best performance. The EC M3 consists of non-NN classifiers, whereas the EC H5-2 is hybrid. We compared the results obtained while varying the window size. The hyperparameters of the base classifiers and ECs are detailed in [20].
#### V-B2 Update of ECs
We selected the ML-based ECs M3, M4, and M5 to perform the experiments and comparisons. Given the constraint of limited retraining data, we discarded NN-based and Hybrid ECs. The procedure uses two data batches for each experiment. The first batch contains the known fault cases (0,1,2,6,12) and one anomaly case (e.g., fault case 7). The EC identifies the anomaly through uncertainty monitoring, collects the anomalous data, and retrains the EC if the data is sufficient. We assign the arbitrary label 30 to the anomaly data. The second batch contains testing data of the fault cases (0,1,2,6,12) and the anomaly (e.g., fault case 7). For comparison purposes, the original label 7 is replaced by the new label 30. We defined three main experiments, namely, the retraining of the ECs using all the fault cases (1,...,21), the study of the retraining parameters (e.g., threshold size, window size, and detection patience) using the fault cases (7,8,15), and the fine-tuned retrained ECs using all the faults (1,...,21). We selected the fault cases (7,8,15) as anomalies to have a case for each primary data group (easy, medium, and hard).
#### V-B3 Performance Metrics
We use the performance metrics F1-score (F1) and fault detection rate (FDR, also known as recall). F1 and FDR are detailed in [39].
### _Results_
This subsection presents the experiment results of the model update approach. For this purpose, the experiments are divided into two parts: data isolation using a window and a model update of EC.
#### V-C1 Data Isolation using a Window
We perform experiments using different window sizes to study their impact on the EC performance. We compare the effects of using no-window (\(w=0\)) and a window (\(w=20\), \(w=50\)).
Table II presents the F1-scores of the BIN EC M5 and the MC EC H5-2. The hyperparameters of the base classifiers and ECs were reported in detail in [20]. The BIN EC M5 presents comparable results while varying the window size, with average F1-scores of 0.60, 0.64, and 0.65 for the window sizes (0, 20, 50), respectively. In contrast, the MC EC H5-2 presented higher results using a window (20, 50) compared to no window (\(w=0\)), with average F1-scores of 0.63, 0.81, and 0.88 for the window sizes (0, 20, 50), respectively.
Fig. 4 presents the plots of the MC EC H5-2 trained with fault cases (0,1,2,6,12) and using the anomaly fault case (7) while varying the window size (0, 20, 50). Figures 4a, 4b and 4c show the confusion matrices for the window sizes \(w=0\), \(w=20\), and \(w=50\), respectively. The confusion matrices for the window sizes \(w=20\) and \(w=50\) present better results than the confusion matrix with window size \(w=0\). The prediction plots of figures 4d-4f confirm the results of the confusion matrices, in which the predictions (blue) are closer to the ground truth (red) for the EC using the window sizes \(w=20\) and \(w=50\). The anomaly case (7) is represented as the label (-1) in the prediction plots. It is important to remark that the approach using a window smooths the EC predictions.
#### V-C2 Model Update of EC
We perform three different experiments in this subsubsection: the model update of the EC (retraining), the study of the variation of the retraining parameters, and, finally, the selection of a fine-tuned retrained EC.
We test the _model update of the EC_ using all the fault cases of the TE dataset. For this purpose, we selected the MC ECs M3, M4, and M5. The hyperparameters of the base classifiers and ECs were reported in detail in [20]. Table III presents the F1-scores of the MC ECs M3, M4, and M5 trained with the fault cases (0,1,2,6,12). The MC ECs M3, M4, and M5 present comparable results with average F1-scores of 0.39, 0.36, and 0.37, respectively. The MC EC M3 detected the anomalies (7,17) with F1-scores greater than or equal to 0.43, and the anomalies (13,14) with F1-scores between 0.33 and 0.43. The MC EC M4 detected the anomalies (8,14,17) with F1-scores greater than or equal to 0.67, and the anomalies (7,10,11,15) with F1-scores between 0.38 and 0.54. In turn, the EC M5 detected the anomalies (14,18,20) with F1-scores greater than or equal to 0.54, and the anomalies (8,17) with F1-scores between 0.43 and 0.54.
Fig. 5 presents the plots of the MC ECs M3, M4, and M5 trained with fault cases (0,1,2,6,12) and using the anomaly fault 7. Figures 5a, 5b and 5c show the confusion matrices for the ECs M3, M4, and M5, respectively. The confusion matrix of the MC EC M5 presents better results than the confusion matrices of the other ECs. Alternatively, the prediction plots of figures 5d, 5e and 5f present mixed results, in which M3 identifies the anomaly better, but the case (12) is confused with the anomaly. In addition, M5 presents a better prediction of the known fault cases but has a lower anomaly detection. The uncertainty quantification (UQ) using DSET is presented in figures 5g, 5h and 5i for the MC ECs M3, M4, and M5, respectively. The MC EC M5 presents steadier values than the MC ECs M3 and M4, which confirms the prediction pattern. This can be summarized as: the lower the uncertainty, the better (more likely) the classification performance.
\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline \multirow{2}{*}{**Fault**} & \multicolumn{3}{c|}{**BIN EC M5**} & \multicolumn{3}{c}{**MC EC H5-2**} \\ \cline{2-7} & **w=0** & **w=20** & **w=50** & **w=0** & **w=20** & **w=50** \\ \hline
Avg F1-score & 0.60 & 0.64 & 0.65 & 0.63 & 0.81 & 0.88 \\ \hline \hline \end{tabular}
\end{table}
Table II: Anomaly detection results of selected ensemble multiclass classifiers using all the fault cases, and F1-score.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Fault**} & \multicolumn{3}{c}{**RT MC EC (0,1,2,6,12)**} \\ \cline{2-4} & **M3** & **M4** & **M5** \\ \hline
**Avg F1-score** & **0.39** & **0.36** & **0.37** \\ \hline \hline \end{tabular}
\end{table}
Table III: Classification results of the ECs after retraining using all the fault cases, and F1-score. The retraining parameters are threshold size \(th=100\), window size \(ws=20\), and detection patience \(pt=15\).
The next step is the _study of the retraining parameters_. For this purpose, we test the effects of the threshold size, window size, and detection patience. We chose the MC EC M3 to perform the experiments and selected the threshold sizes (150,250,350) and anomalies (7,8,15).
_Effects of the threshold size._ Table IV presents the F1-scores of the MC EC M3 trained with the fault cases (0,1,2,6,12). The retraining parameters window size and detection patience are fixed with values of \(ws=20\) and \(pt=15\), respectively. The MC EC M3 presented higher results using a threshold size \(th=150\) with an average F1-score of 0.81 for the anomaly (7), compared with the values of 0.57 and 0.50, corresponding to the threshold sizes (250, 350). The MC EC M3 presents comparable results for the anomaly (8), with average F1-scores of 0.81, 0.82, and 0.82 for the threshold sizes (150, 250, 350), respectively. In contrast, the MC EC M3 presented higher results using a threshold size \(th=350\) with an average F1-score of 0.74 for the anomaly (15), in comparison with the values of 0.54 and 0.55, which correspond to the threshold sizes (150, 250), respectively.
Fig. 6 displays the EC M3 performance for each class while varying the threshold size (150,250,350) for the anomalies (7,8,15). The best performance corresponds to the anomaly (8), in which the EC M3 detects the fault cases (0,1,2,6,12) mostly correctly and has a limited anomaly detection. In contrast, the EC M3 presents a lower performance while applying the anomalies (7,15).
_Effects of the window size._ Table V presents the F1-scores of the MC EC M3 trained with the fault cases (0,1,2,6,12). The retraining parameters threshold size and detection patience are fixed, with values of \(th=250\) and \(pt=15\), respectively. The MC EC M3 presented average F1-scores of at least 0.84 using the window sizes (10, 50) for the anomaly (7), and average F1-scores of at least 0.72 for the anomaly (8) using the window sizes (20, 50). In contrast, the MC EC M3 presented higher results using a window size \(ws=50\) with an average F1-score of 0.74 for the anomaly (15), in comparison with the values of 0.50 and 0.55, which correspond to the window sizes (10, 20), respectively.
Fig. 7 displays the EC M3 performance for each class while varying the window size (10,20,50) for the anomalies (7,8,15). The best performance corresponds to the anomaly (8) using a window size \(ws=20\), in which the EC M3 detects the fault cases (0,1,2,6,12) mostly correctly and has a limited anomaly detection. In contrast, the EC M3 presents a lower performance while applying the anomalies (7,15).
_Effects of the detection patience._ Table VI presents the F1-scores of the MC EC M3 trained with the fault cases (0,1,2,6,12). The retraining parameters threshold size and window size are fixed with values of \(th=250\) and \(ws=20\), respectively. For the anomaly (7), the MC EC M3 presented an average F1-score of 0.84 using detection patience \(pt=5\) and \(pt=30\), compared to the average F1-score of 0.57 for \(pt=15\). In the case of anomaly (8), the MC EC M3 presented higher results using detection patience \(pt=15\) with an average F1-score of 0.82, in comparison with the values of 0.78 and 0.58, which correspond to the detection patience values (5, 30), respectively. For the anomaly (15), the MC EC M3 presented average F1-scores of at least 0.73 for the detection patience values (5, 30), while the average F1-score of 0.55 is obtained with the detection patience \(pt=15\).
Figure 4: Anomaly detection using different window sizes for the MC EC H5-2 trained with the known cases 0,1,2,6,12, and using the fault case (7) as an anomaly. The confusion matrices of H5-2 are displayed in (a)-(c), and the predictions in (d)-(f).
Fig. 8 displays the EC M3 performance for each class while varying the detection patience (5,15,30) for the anomalies (7,8,15). The best performance corresponds to the anomaly (8) using detection patience \(pt=15\), in which the EC M3 detects the fault cases (0,1,2,6,12) mostly correctly and has a limited anomaly detection. In contrast, the EC M3 presents a lower performance while applying the anomalies (7,15).
Finally, we present the performance of the ECs with the _tuned retraining parameters_. Table VII presents the F1-scores of the MC ECs M3, M4, and M5 retrained with the fault cases (0,1,2,6,12) and the respective anomaly. In this case, the anomaly cases are all fault cases except for the original training cases. The retraining dataset contains the original fault cases and the detected data from the anomaly (unknown fault case from the data). The retraining parameters are threshold size \(th=250\), window size \(ws=20\), and detection patience \(pt=15\).
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \multirow{2}{*}{**Fault**} & \multicolumn{9}{c|}{**MC EC M3**} \\ \cline{2-10} & \multicolumn{3}{c|}{**A7**} & \multicolumn{3}{c|}{**A8**} & \multicolumn{3}{c}{**A15**} \\ \hline
0 & 0.92 & 0.38 & 0.11 & 0.9 & 0.91 & 0.91 & 0.35 & 0.51 & 0.76 \\
1 & 0.99 & 0.95 & 0.95 & 1 & 0.95 & 0.94 & 0.91 & 0.91 & 0.97 \\
2 & 0.96 & 0.98 & 0.96 & 0.9 & 0.91 & 0.91 & 0.98 & 0.98 & 0.97 \\
6 & 0.99 & 0.22 & 0.21 & 1 & 0.99 & 0.99 & 0.22 & 0.28 & 1.00 \\
12 & 0.47 & 0.44 & 0.35 & 0.7 & 0.73 & 0.74 & 0.5 & 0.42 & 0.71 \\
30 & 0.53 & 0.42 & 0.43 & 0.4 & 0.42 & 0.42 & 0.26 & 0.19 & 0.00 \\ \hline
**Avg F1-score** & **0.81** & **0.57** & **0.50** & **0.81** & **0.82** & **0.82** & **0.54** & **0.55** & **0.74** \\ \hline \end{tabular}
\end{table}
Table IV: Anomaly detection results of MC EC M3 using the fault cases (0,1,2,6,12), the anomalies (7,8,15), thresholds variations (150,250,350), window size (20), patience (15), and F1-score.
Figure 5: Anomaly Detection and UQ results for MC ECs M3, M4 and M5 trained with the fault cases (0,1,2,6,12): Confusion matrices (a)-(c), classification results (d)-(f), and DSET UQ (g)-(i) while injecting anomaly 7.
The MC ECs M3, M4, and M5 present comparable results with average F1-scores of 0.39, 0.42, and 0.42, respectively. The MC EC M3 detected the anomalies (7,11) with F1-scores greater than or equal to 0.55, and the anomalies (9,13,17) with F1-scores between 0.34 and 0.42. The MC EC M4 detected the anomalies (8,14,17) with F1-scores greater than or equal to 0.67, and the anomalies (7,10,11,15) with F1-scores between 0.38 and 0.54. In turn, the EC M5 detected the anomalies (14,18) with F1-scores greater than or equal to 0.68, and the anomalies (7,11,15,17,20) with F1-scores between 0.31 and 0.54.
### _Comparison with Literature_
Though the current approach can automatically update the models while detecting unknown fault cases from the data, the stored data to retrain the models might be insufficient for some fault cases. Thus, the stored data for some fault cases might not capture the essential patterns to identify the
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \multirow{2}{*}{**Fault**} & \multicolumn{9}{c|}{**MC EC M3**} \\ \cline{2-10} & \multicolumn{3}{c|}{**A7**} & \multicolumn{3}{c|}{**A8**} & \multicolumn{3}{c}{**A15**} \\ \hline
0 & 0.97 & 0.38 & 0.97 & 0.8 & 0.91 & 0.85 & 0.54 & 0.51 & 0.76 \\
1 & 0.99 & 0.95 & 0.99 & 0.9 & 0.95 & 0.93 & 0.86 & 0.91 & 0.98 \\
2 & 0.98 & 0.98 & 0.97 & 1 & 0.91 & 0.95 & 0.78 & 0.98 & 0.98 \\
6 & 0.99 & 0.22 & 0.99 & 0.2 & 0.99 & 0.99 & 0.24 & 0.28 & 0.99 \\
12 & 0.43 & 0.44 & 0.42 & 0.4 & 0.73 & 0 & 0.51 & 0.42 & 0.56 \\
30 & 0.72 & 0.42 & 0.72 & 0.3 & 0.42 & 0.61 & 0.09 & 0.19 & 0.14 \\ \hline
**Avg F1-score** & **0.85** & **0.57** & **0.84** & **0.62** & **0.82** & **0.72** & **0.50** & **0.55** & **0.74** \\ \hline \end{tabular}
\end{table}
Table V: Anomaly detection results of MC EC M3 using the fault cases (0,1,2,6,12), the anomalies (7,8,15), window size variations (10,20,50), threshold (250), patience (15), and F1-score.
Figure 6: F1-score results after retraining for the ECs BIN M4, MC M3, and MC M5: (a)-(c) Bar plots for the known cases (0,1,2,6,12) and the new case (30, corresponding to the injected anomaly 7). The plots represent the ECs results using memory size 20 and patience 15, while varying the threshold (150,250,350).
Figure 7: F1-score results after retraining for the ECs BIN M4, MC M3, and MC M5: (a)-(c) Bar plots for the known cases (0,1,2,6,12) and the new case (30, corresponding to the injected anomaly 7). The plots represent the ECs results using threshold 250 and patience 15, while varying the memory size (10,20,50).
condition. In contrast, the literature contributions included in the comparison use the full extent of the testing data.
Table VIII compares the anomaly detection results between the proposed approach and the literature. The multiclass ECs M3, M4, and M5 are originally trained using the fault cases (0,1,2,6,12). The testing data consists of the fault cases (3,9,15,21), which represent unknown conditions to the ECs. For this purpose, each EC is retrained with one fault case at a time. We use the F1-score as a performance metric to compare the proposed approach with other literature contributions. It is essential to mention that the MC EC H5-2 from a previous work [20] uses the full extent of the testing data, as does Top-K DCCA [21]. The ECs M3, M4, and M5 present lower results with average F1-scores of 20.36%, 3.50%, and 2.59%, respectively. H5-2 and Top-K DCCA present general scores of 63.69% and 50.04%, respectively. Only M3 presents a score of 31.07% for the fault case 21, which still lies below the better performance of H5-2 and Top-K DCCA, with scores of 63.1% and 50.05%, respectively.
Table IX compares the anomaly detection results between our approach and the literature. We use the FDR to compare our results with the literature results. The retrained MC ECs M3, M4, and M5 present lower results with average FDR scores of 53.02%, 41.68%, and 35.04%, respectively. The MC ECs M3 and H3-4 present FDR scores of 87.97% and 73.76%, respectively. The approaches DPCA-DR, AAE, and MOD-PLS have FDR scores of 83.51%, 78.55%, and 83.83%,
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{**Fault**} & \multicolumn{9}{c}{**MC EC M3**} \\ \cline{2-10} & \multicolumn{3}{c|}{**A7**} & \multicolumn{3}{c|}{**A8**} & \multicolumn{3}{c}{**A15**} \\ \hline
**Avg F1-score** & **0.84** & **0.57** & **0.84** & **0.78** & **0.82** & **0.58** & **0.74** & **0.55** & **0.73** \\ \hline \hline \end{tabular}
\end{table}
Table VI: Anomaly detection results of MC EC M3 using the fault cases (0,1,2,6,12), the anomalies (7,8,15), patience variations (5,15,30), threshold (250), memory size (20), and F1-score
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{**Fault**} & \multicolumn{2}{c}{**RT MC EC (0,1,2,6,12)**} \\ \cline{2-4} & **M3** & **M4** & **M5** \\ \hline \(1\) & 0.98 & 0.99 & 0.99 \\ \(2\) & 0.99 & 0.99 & 0.98 \\ \(3\) & 0.00 & 0.02 & 0.01 \\ \(4\) & 0.00 & 0.03 & 0.01 \\ \(5\) & 0.00 & 0.18 & 0.13 \\ \(6\) & 1.00 & 1.00 & 1.00 \\ \(7\) & 0.72 & 0.54 & 0.31 \\ \(8\) & 0.29 & 0.71 & 0.44 \\ \(9\) & 0.34 & 0.00 & 0.00 \\ \(10\) & 0.27 & 0.38 & 0.22 \\ \(11\) & 0.55 & 0.51 & 0.54 \\ \(12\) & 0.95 & 0.95 & 0.95 \\ \(13\) & 0.35 & 0.20 & 0.15 \\ \(14\) & 0.26 & 0.77 & 0.76 \\ \(15\) & 0.13 & 0.39 & 0.31 \\ \(16\) & 0.26 & 0.09 & 0.20 \\ \(17\) & 0.42 & 0.67 & 0.51 \\ \(18\) & 0.06 & 0.02 & 0.68 \\ \(19\) & 0.20 & 0.07 & 0.05 \\ \(20\) & 0.28 & 0.23 & 0.52 \\ \(21\) & 0.07 & 0.13 & 0.01 \\ \hline
**Avg F1-score** & **0.39** & **0.42** & **0.42** \\ \hline \hline \end{tabular}
\end{table}
Table VII: Classification results of the RT ECs after retraining using all the fault cases, and F1-score. The retraining parameters are threshold size \(th=250\), window size \(ws=20\), and detection patience \(pt=15\).
Figure 8: F1-score results after retraining for the ECs BIN M4, MC M3, and MC M5: (a)-(c) Bar plots for the known cases (0,1,2,6,12) and the new case (30, corresponding to the injected anomaly 7). The plots represent the ECs results using threshold \(th=250\) and window size \(me=20\), while varying the patience (5,15,30).
### _Discussion_
The ECs improved the anomaly detection capability after implementing the _window size_. In the case of the MC EC M5, the general F1-score improved from 0.6 to 0.65 using a window of \(w=50\) for the latest score. In the case of H5-2, the results are remarkable: the general F1-score improved from 0.63 to 0.88 using a window of \(w=50\) for the latest score. However, a side effect of the window is a delay in the ensemble prediction, which is reflected when comparing the prediction subfigures (d) and (f).
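The smoothing step can be pictured with the following minimal sketch; it assumes the EC exposes one score per class and per sample, which stands in for the actual scoring interface of the ensembles.

```
import numpy as np

def smooth_scores(scores, window=50):
    # scores: (n_samples, n_classes) array of per-sample EC scores.
    # Averaging over the trailing window suppresses isolated
    # misclassifications at the cost of a reaction delay of up to
    # `window` samples, the side effect discussed above.
    smoothed = np.empty_like(scores, dtype=float)
    for t in range(len(scores)):
        lo = max(0, t - window + 1)
        smoothed[t] = scores[lo:t + 1].mean(axis=0)
    return smoothed

# predicted class per time step after smoothing:
# y_hat = smooth_scores(scores).argmax(axis=1)
```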
There are remarkable effects on the EC M3 performance when varying the retraining parameters, namely, threshold size, window size, and detection patience. The results are mixed, and the average performance depends on the studied anomaly. Nevertheless, the results show that a _threshold_ of \(Th=150\) presented the best average results for anomaly 7, whereas a threshold of \(Th=350\) presented the best results for anomaly 8. Alternatively, the plots of Fig. 6 visualize the performance of each class while varying the threshold. The MC EC M3 presents an overall good performance when applying anomaly 8, in which the EC classifies the known cases mostly correctly and has a limited detection of the anomaly. In contrast, the anomaly detection feature decreases the performance on some of the known fault cases, which is visually represented in Fig. 6(a) when applying anomaly 7. _Variation of the window size_ reported favorable average performance for a window of \(me=50\) when considering all the anomalies (7,8,15). In contrast, the plots of Fig. 7 show that the best results correspond to the window size \(me=20\) when applying anomaly 8, in which the EC classifies known cases properly and has a limited detection of the anomaly. As in the threshold experiments, a similar decrease in the classification performance of the known cases is detected. Generally, a _patience_ of \(pt=5\) presented the best average results for all the anomalies (7,8,15). In contrast, the plots of Fig. 8(b) show that the best results correspond to the patience \(pt=15\) when applying anomaly 8, in which, as in the window size experiment, the EC classifies the known cases mostly correctly and has a limited detection of the anomaly. As in the threshold and window size experiments, the performance of the EC on some faults is affected by the anomaly detection approach.
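To make the interplay of the three parameters concrete, the following sketch shows one plausible retraining loop; the trigger logic and the `ec.retrain` call are assumptions for illustration, not the exact implementation.

```
def retraining_monitor(stream, ec, is_anomalous, th=250, pt=15):
    # `is_anomalous(x)` is assumed to flag samples whose ensemble
    # uncertainty is high. After `pt` consecutive flags (detection
    # patience) the monitor starts buffering samples, and once `th`
    # samples are collected (threshold size) the EC is retrained
    # with the buffer labelled as a new fault class.
    consecutive, buffer = 0, []
    for x in stream:
        consecutive = consecutive + 1 if is_anomalous(x) else 0
        if consecutive >= pt:
            buffer.append(x)
        if len(buffer) >= th:
            ec.retrain(new_class_samples=buffer)  # hypothetical EC API
            consecutive, buffer = 0, []
```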
The retrained MC ECs M3, M4, and M5 presented mixed results using the same retraining parameters: threshold size \(th=250\), window size \(me=20\), and patience \(pt=15\). The average F1-scores of M3, M4, and M5 are 0.67, 0.44, and 0.42, respectively. For this configuration, M3 presented the best results; however, it is important to remark that the anomalies (14-19) are not detected. In contrast, M4 and M5 detected the faults (14,17,18), though their average scores are lower than the M3 scores.
The performance of the retrained MC ECs presented mixed results. For instance, the EC M3 detected the anomaly cases (4,5,7,11,13) with FDR scores higher than 77% and the anomalies (10,20,21) with FDR scores higher than 53%. However, the retrained ECs presented a lower performance than other literature contributions. The average FDR scores of M3, M4, and M5 are 50.18%, 43.60%, and 51.44%, respectively. It is important to remark that the retrained models only use 250 samples as training data (only 52% of the available data), into which other fault cases might be included as a side effect of the patience parameter.
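For reference, the FDR values above can be reproduced with a short helper; this sketch assumes the usual TE convention that class 0 is the normal condition and that any non-normal prediction on a faulty sample counts as a detection.

```
import numpy as np

def fault_detection_rate(y_true, y_pred, fault_class):
    # Fraction of the samples of `fault_class` that are flagged as
    # any fault (prediction != 0), expressed as a percentage.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mask = y_true == fault_class
    if not mask.any():
        return float("nan")
    return 100.0 * (y_pred[mask] != 0).mean()
```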
## VI Use Case: Production Assessment using INFUSION on a Bulk Good System
As described in section IV-C, the approach's novelty is a methodology for the information fusion of data-based and knowledge-based models. The methodology primarily uses a novel framework for combining \(n\) models using DSET.
This section presents the results of the information fusion approach and an ablation study considering the different system configurations. The configurations consist of the detection system using the data-based model alone, the knowledge-based model alone, or a hybrid model (the data-based model together with the knowledge-based model) combined via information fusion. We test the approach using a dataset from an industrial setup, namely, a bulk good system laboratory plant. We describe the testbed and the dataset, and then present the results and a discussion of the findings. Fig. 2 displays the main blocks of this section: the data-based model (ECET), the knowledge-based model (KLAFATE), and the outer module for the information fusion of both models.
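The decision-level fusion itself relies on Dempster's rule of combination. The following is a minimal sketch of that rule over discrete mass functions; the failure-mode labels and masses are illustrative, not taken from the experiments.

```
from itertools import product

def dempster_combine(m1, m2):
    # Mass functions map frozensets of hypotheses to belief mass.
    # Masses of intersecting focal elements are multiplied and the
    # conflict K is renormalised away; the mass left on the full
    # frame can be read as the fused uncertainty.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

frame = frozenset({"fm1", "fm2", "fm3"})
m_ec = {frozenset({"fm1"}): 0.7, frame: 0.3}    # data-based opinion
m_kext = {frozenset({"fm1"}): 0.5, frame: 0.5}  # knowledge opinion
fused = dempster_combine(m_ec, m_kext)
```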
### _Description of the Bulk Good System Laboratory Plant and Dataset_
The bulk good system (BGS) laboratory plant is an industrial setup used for testing production and fault detection experiments. The BGS consists of four stations that represent standard modules of a bulk good handling system on a small scale: loading, storing, filling, and weighing stations. A detailed description of the BGS and its applications can be found in [10], [33].
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline \multirow{2}{*}{**Fault**} & \multicolumn{3}{c|}{**RT MC EC (0,1,2,6,12)**} & \multicolumn{1}{c|}{**MC EC (0,1,2,6,12)** [20]} & \multirow{2}{*}{**Top-K DCCA**[21]} \\ \cline{2-5} & **M3** & **M4** & **M5** & **H5-2** & \\ \hline
3 & 0.00 & 7.43 & 0.00 & 64.3 & 53.82 \\
9 & 28.87 & 6.21 & 5.81 & 63.01 & 52.31 \\
15 & 21.49 & 0.00 & 0.45 & 64.35 & 43.98 \\
21 & 31.07 & 0.35 & 4.09 & 63.1 & 50.05 \\ \hline
**Avg F1-score** & **20.36** & **3.50** & **2.59** & **63.69** & **50.04** \\ \hline \end{tabular}
\end{table} TABLE VIII: Anomaly detection results of the retrained ECs compared with the literature for the unknown fault cases (3,9,15,21), and F1-score. The retraining parameters are threshold size \(th=250\), window size \(ws=20\), and detection patience \(pt=15\).
The stations are built using state-of-the-art industrial hardware with respect to controllers, communication protocols, sensors, and actuators. The BGS dataset contains 14055 rows of data, each with 133 features and one of three class labels. The features represent information about sensors, actuators, and controllers. The classes represent the different machine conditions, namely, low quality (LQ), low production (LP), and normal production (NP, the normal condition). Each class is associated with a failure mode (fm), which translates into LQ (fm1), LP (fm2), and NP (fm3). The class NP does not represent a failure mode but is handled in the same framework for consistency with the knowledge model.
### _Experiment Design_
This subsection presents the methodology followed for the ECET and INFUSION experiments using the BGS dataset. Besides, we describe the performance metric used to compare the experiments.
#### VI-B1 ECET using the BGS Data
We followed the same methodology as [20] for the creation of MC ECs using the BGS data, which includes the pool of base classifiers, the grid of hyperparameters of each classifier, and the grid of hyperparameters for each EC. We used the data-based models decision tree (DTR), K-nearest neighbors (KNN), AdaBoost (ADB), support vector machine (SVM), and naive Bayes (NBY). For this purpose, we first trained the pool of classifiers, which implies searching for the proper hyperparameters of each model; a sketch of this step follows. The second step is creating the ECs using the EC hyperparameters. The last step presents the inference results of the ECs when feeding the BGS data.
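The first step can be sketched with scikit-learn as follows; the grid below mirrors the KNN entries of Table X, while the value ranges and the 5-fold split are assumptions for illustration.

```
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Hyperparameter search for one pool member (KNN); DTR, ADB, SVM and
# NBY are tuned the same way with their respective grids.
grid = {"metric": ["manhattan", "euclidean"],
        "n_neighbors": [3, 5, 7],
        "weights": ["uniform", "distance"]}
search = GridSearchCV(KNeighborsClassifier(), grid,
                      scoring="f1_macro", cv=5)
# search.fit(X_train, y_train); best_knn = search.best_estimator_
```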
#### VI-B2 INFUSION using the BGS data
The knowledge-based model KEXT was presented in [10], in which we describe the knowledge rules. We only use the failure modes fm1, fm2, and fm3 for the INFUSION experiments. We present a comparison table covering knowledge-based models, data-fusion models, and combined knowledge-and-data fusion models. The KEXT model represents the knowledge-based model. The data-fusion models are represented by the ECET EC models and by a fusion of two data-based models. Lastly, the knowledge-and-data fusion models are represented by the combination of the SVM-KNN-KEXT models and by the INFUSION models composed of an MC EC and the KEXT model.
#### VI-B3 Performance Metrics
We use the F1-score as the main performance metric to compare the different experiments. Panda et al. [39] present a detailed description of the F1-score calculation.
### _Results_
This subsection presents the results using the BGS data for the ECET and the INFUSION architectures. For this purpose, we present the F1-score results of the models or ECs. Besides, we display the confusion matrix, classification predictions, and uncertainty for the different architectures.
#### VI-C1 ECET using the BGS Data
The first step is to train the pool of base classifiers, which we performed using the grid-search module of scikit-learn. Table X presents the hyperparameters of the base classifiers trained with the cases (1,2,3), which correspond to the failure modes (fm1, fm2, fm3), respectively.
The next step is applying the ECET methodology to find the best-performing MC ECs. We obtained the ML-based MC ECs shown in Table XI. The hyperparameters expert (Exp), diversity (Div), version of diversity (Ver), and pre-cut (PC) are set to False.
Table XII presents the F1-scores of the MC ECs M3, M4, and M5 and the base MC classifiers DTR, KNN, and ADB. The MC ECs M3, M4, and M5 present the same average F1-score of 1.00, whereas the base classifiers DTR, KNN, and ADB have values of 1.0, 1.0, and 0.96, respectively.
\begin{table}
\begin{tabular}{c|c c c|c c|c|c|c} \hline \multirow{2}{*}{**Fault**} & \multicolumn{3}{c|}{**RT MC EC (0,1,2,6,12)**} & \multicolumn{2}{c|}{**MC EC (0,1,2,6,12)** [20]} & \multirow{2}{*}{**DPCA-DR**} & \multirow{2}{*}{**AAE**} & \multirow{2}{*}{**MOD-PLS**} \\ \cline{2-6} & **M3** & **M4** & **M5** & **M3** & **H3-4** & & & \\ \hline
[MISSING_PAGE_POST]
**Avg FDR** & **53.02** & **41.08** & **35.04** & **87.97** & **73.76** & **83.51** & **78.55** & **83.83** \\ \hline \end{tabular}
\end{table} TABLE IX: Classification results of the ECs after retraining using all the fault cases, and FDR. The retraining parameters are threshold size \(th=250\), window size \(ws=20\), and detection patience \(pt=15\).
Fig. 9 presents the plots of MC ECs M3, M4, and M5 trained using the cases (1,2,3), which correspond to the failure modes (fm1, fm2, fm3), respectively. Fig. 9a, 9b, 9c show the confusion matrices for the MC ECs M3, M4, and M5, respectively. The confusion matrices present the same performance for the MC ECs M3, M4, and M5. Fig. 9d, Fig. 9e, Fig. 9f display the predictions in blue color compared with the ground truth in red color for the MC ECs M3, M4, and M5, respectively. As in the previous case, the prediction plots are identical for the MC ECs M3, M4, and M5. Fig. 9g, Fig. 9h, Fig. 9i present the DSET UQ for MC ECs M3, M4, and M5, respectively. In contrast to the previous plots, the uncertainty is reduced as the ensemble size increases. In the case of the MC EC M5, the model presents the clearest plot, except for fm3, which has a noisy behavior.
#### VI-C2 INFUSION using the BGS data
Table XIII presents the F1-scores of the knowledge-based model, the fusion of data-based models, and the fusion of data-based and knowledge-based models. The knowledge-based model is represented by the model using the KEXT methodology. The fusion of data-based models is represented by the models using the ECET methodology (M3, M4, M5) and an additional case performing a DSET fusion of the data-based models KNN and SVM (without the ECET methodology). The fusion of data-based models and the knowledge-based model is represented by the models using the INFUSION methodology (IFS3, IFS4, IFS5) and an additional case performing a fusion of the models KNN, SVM, and KEXT. The KEXT model presents an average F1-score of 0.75, whereas the individual cases (1,2,3) presented values of 0.95, 0.79, and 0.52, respectively. The ECET and INFUSION models (IFS3, IFS4, IFS5) present the best average F1-score with a value of 1.00. The fusion of SVM and KNN presents an average F1-score of 0.96, whereas the fusion of KEXT, SVM, and KNN presents an improved average F1-score with a value of 0.98.
Fig. 10 presents the plots of the main models: the KEXT knowledge-based model, the ECET data-based model (M3), and the INFUSION model (fusion of KEXT and ECET). Fig. 10a, 10b, 10c show the confusion matrices for the models KEXT, ECET (M3), and INFUSION (IFS3), respectively. The confusion matrices with the best performance correspond to the models ECET and INFUSION. In contrast, KEXT presents a poor performance in detecting fm3. Fig. 10d, Fig. 10e, Fig. 10f display the predictions in blue color compared with the ground truth in red color for the models KEXT, ECET, and INFUSION, respectively. The clearest plots correspond to the ECET and INFUSION models, whereas the KEXT model presents a noisy plot. Fig. 10g, Fig. 10h, Fig. 10i present the DSET UQ for the models KEXT, ECET, and INFUSION, respectively. In the case of KEXT, the plot presents a continuous line, since the uncertainty is a fixed value that only the expert team can change. In contrast, ECET presents an extremely noisy plot for fm3. In the case of INFUSION, the plot presents a steadier uncertainty.
It is important to remark on the robustness of INFUSION, in which we fuse a high-performing ECET with a low-performing KEXT. The low performance of KEXT on some fault cases did not affect INFUSION's performance, which remains consistently high, as shown in Table XIII and in the confusion matrix of Fig. 10c. Alternatively, a detailed examination of the uncertainty provides an additional perspective on INFUSION's performance, in which the uncertainty presents areas with high values. Thus, uncertainty monitoring can be used to evaluate ECET and KEXT and determine the causes of low performance.
### _Discussion_
The knowledge-based model KEXT presented mixed results, in which some faults are well identified or predicted. However, the strength of this approach relies on how well the rules represent a machine condition. Representing knowledge rules is a challenging and often time-consuming task. An additional positive characteristic of the knowledge-based model is its explainability: an expert user can directly observe the logic and transform the rules.
Alternatively, the data-based models using ECET outperformed the knowledge-based model, which is clearly reflected in the F1-scores of Table XIII. However, the relationships between the features and the outputs are often hidden (except for data-based models such as DTR, where the rules can be observed). It is important to remark on the number of features the models use: the knowledge-based models are built using fewer than ten features, whereas the ECET models are built using 133 features.
The fusion of data-based and knowledge-based models slightly improved the overall system's performance. The fusion model SVM-KNN-KEXT presented an improvement on fault 3 over the fusion model SVM-KNN, with scores of 0.95 and 0.92, respectively. In the case of INFUSION, the ECET results were already outstanding, resulting in a predominant effect on the fusion. The poor performance of KEXT on some fault cases did not affect the system performance.
The INFUSION methodology performed a fusion of the KEXT knowledge-based model and the ECET data-based models. No performance changes were reported since the ECET data-based models (M3, M4, and M5) already presented outstanding performance, and the INFUSION models (IFS3, IFS4, and IFS5) presented the same performance.
\begin{table}
\begin{tabular}{l|l|c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Hyperparameters**} & **MC** \\ \cline{3-3} & & **1,2,3** \\ \hline ADB & learning\_rate & 0.01 \\ & n\_estimators & 10 \\ \hline DTR & criterion & entropy \\ & max\_depth & 10 \\ \hline KNN & metric & manhattan \\ & n\_neighbors & 7 \\ & weights & distance \\ \hline NBY & - & NP \\ \hline SVM & C & 1000 \\ & gamma & 0.01 \\ & kernel & rbf \\ \hline \hline \end{tabular}
\end{table}
Table X: Grid of hyperparameters for base classifiers using the _BGS_ dataset and the cases (1,2,3)
## VII Conclusion
We presented a novel approach for assistance systems using information fusion in production assessment. We focused on two main topics of the assistance system: improving anomaly detection and information fusion. The anomaly detection system was improved by adding the capability of automatically retraining the models when unknown fault cases appear in the data. For this purpose, we presented an EC retraining strategy based on uncertainty monitoring of the EC predictions. The retraining results of the use case validated the approach, in which the benchmark TE dataset was used to test different anomalies. Different experiments were performed to analyze the impact of the main parameters of the retraining approach, namely, threshold size, window size, and detection patience. Though the results were not entirely comparable with the literature, the approach's claim was validated, in which the EC updated the models when unknown fault cases were fed into the data. Furthermore, we proposed an information fusion approach to combine an EC and a knowledge-based model at the decision level.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{**Fault**} & \multicolumn{3}{c|}{**MC EC**} & \multicolumn{3}{c}{**INDIV**} \\ \cline{2-7} & **M3** & **M4** & **M5** & **DTR** & **KNN** & **ADB** \\ \hline
1 & 1 & 1 & 1 & 1 & 1 & 1 \\
2 & 1 & 1 & 1 & 1 & 1 & 0.97 \\
3 & 0.99 & 0.99 & 0.99 & 1 & 0.99 & 0.91 \\ \hline Avg F1-score & 1 & 1 & 1 & 1 & 1 & 0.96 \\ \hline \end{tabular}
\end{table}
Table XII: Inference results of selected MC ECs using the cases (1,2,3) of the _BGS_ dataset, and F1-score.
Figure 9: Results using different models KEXT, ECET (MC EC M3), and SYS using cases (1,2,3): Confusion matrices (a)-(c), classification results (d)-(f), and DSET UQ (g)-(i).
The approach was tested using the data of an industrial setup. We performed an ablation study to compare the performance of the systems, namely, the EC, the knowledge-based model, and the fusion of both systems. The system performance reported better results when using an information fusion of the EC and the knowledge-based model, thus confirming the approach's claim.
Future research includes a semi-supervised approach in which the EC results are confronted with an unsupervised model. The purpose of this approach is to validate the data samples of the detected anomaly by examining the location of the samples in the input space. Furthermore, we will test other rules of combination to improve the anomaly detection results, thus increasing the amount of anomalous data that can be collected.
|
2309.10344 | A First Look at SVCB and HTTPS DNS Resource Records in the Wild | The Internet Engineering Task Force is standardizing new DNS resource
records, namely SVCB and HTTPS. Both records inform clients about endpoint and
service properties such as supported application layer protocols, IP address
hints or Encrypted Client Hello (ECH) information. Therefore, they allow
clients to reduce required DNS queries and potential retries during connection
establishment and thus help to improve the quality of experience and privacy of
the client. The latter is achieved by reducing visible meta-data, which is
further improved with encrypted DNS and ECH.
The standardization is in its final stages and companies announced support,
e.g., Cloudflare and Apple. Therefore, we provide the first large-scale
overview of actual record deployment by analyzing more than 400 M domains. We
find 3.96 k SVCB and 10.5 M HTTPS records. As of March 2023, Cloudflare hosts
and serves most domains, and most records only contain Application-Layer
Protocol Negotiation (ALPN) and IP address hints. Besides Cloudflare, we see
adoption by a variety of authoritative name servers and hosting providers
indicating increased adoption in the near future. Lastly, we can verify the
correctness of records for more than 93 % of domains based on three application
layer scans. | Johannes Zirngibl, Patrick Sattler, Georg Carle | 2023-09-19T06:10:21Z | http://arxiv.org/abs/2309.10344v1 | # A First Look at SVCB and HTTPS DNS Resource Records in the Wild
###### Abstract
The Internet Engineering Task Force is standardizing new DNS resource records, namely SVCB and HTTPS. Both records inform clients about endpoint and service properties such as supported application layer protocols, IP address hints or Encrypted Client Hello (ECH) information. Therefore, they allow clients to reduce required DNS queries and potential retries during connection establishment and thus help to improve the quality of experience and privacy of the client. The latter is achieved by reducing visible metadata, which is further improved with encrypted DNS and ECH.
The standardization is in its final stages and companies announced support, _e.g._, Cloudflare and Apple. Therefore, we provide the first large-scale overview of actual record deployment by analyzing more than 400 M domains. We find 3.96 k SVCB and 10.5 M HTTPS records. As of March 2023, Cloudflare hosts and serves most domains, and most records only contain Application-Layer Protocol Negotiation (ALPN) and IP address hints. Besides Cloudflare, we see adoption by a variety of authoritative name servers and hosting providers indicating increased adoption in the near future. Lastly, we can verify the correctness of records for more than 93 % of domains based on three application layer scans.
## 1 Introduction
As the Internet and its protocols continue to evolve, one general requirement is becoming increasingly important: _information about the application layer protocols, versions and properties supported by individual endpoints_. This information can be exchanged during a handshake or a first communication (_e.g._, Alternative Service (ALT-SVC) Headers in Hypertext Transfer Protocol (HTTP)). However, missing knowledge increases the handshake duration, and information from existing solutions can only be used in subsequent connections. Each connection attempt and the potential use of insecure protocols reveals further meta-data related to a client and its desired connection, thus impacting its privacy and security.
To circumvent this problem, the Internet Engineering Task Force (IETF) works on a new general Domain Name System Resource Record (DNS RR) named SVCB ("SerViCe Binding") that provides service bindings for a domain [23]. This record accomplishes two major goals, directing a client _(i)_ to another alias or _(ii)_ to an endpoint including service information. As a first subtype, the HTTPS DNS RR is specified with a focus on Hypertext Transfer Protocol Secure (HTTPS) endpoints. The records allow a client to receive all required information, namely supported protocols, used ports and IP addresses, using a _single_, recursive DNS query. Provided information can be used to directly establish a secure communication channel using a protocol both endpoints support. Information about available application protocols and their explicit version can also reduce the risk of on-path or downgrade attacks, _e.g._, make HTTP Strict Transport Security (HSTS) obsolete. Furthermore, the new HTTPS record is supposed to be extended to provide ECH information to the client in the future. Once specified and deployed, ECH [21] further reduces the visibility of connection-related meta-data, _e.g._, the Server Name Indication (SNI).
Quick and widespread deployment of these new records can drastically improve the privacy of clients on the Internet. Different operators including Cloudflare [3] and Akamai [2] but also client software, _e.g._, Apple iOS [25] and Google Chromium [8] have already announced support for the new records.
Therefore, we set out to evaluate actual deployments and availability of the new records based on a large-scale measurement. Our contributions in this paper are:
_(i)_ We evaluate the support of new records for more than 400 M domains. We show that the deployment is mostly driven by Cloudflare. However, other operators show initial deployment as well.
_(ii)_ We evaluate the properties of received records and their implication for a client and established connections. We show that most domains have records with service information, mainly Application-Layer Protocol Negotiation (ALPN) values and _ipv4-_ and _ipv6hints_. Further parameters are rarely visible.
_(iii)_ We verify the correctness of received information with application layer scans. We were able to connect to 96 % of targets extracted from HTTPS records.
## 2 Background
The SVCB DNS RR represents a more general record to be used with different service types, while the HTTPS DNS RR is specifically designed to be used with HTTPS. These DNS RRs allow clients to select the correct service properties directly. To indicate the desired service, domains for SVCB records should be prefixed with Attrleaf labels [10] (_e.g._, _dns_). Using HTTPS records implies HTTP as the service. Table I shows two example records. The IETF designs both records to be flexible and expandable. The first SVCB record is in alias mode, indicated by the priority of \(0\), and redirects the domain to another target name. In comparison to canonical name (CNAME) records, this is also possible at the apex of a zone [23].
The second HTTPS record is in service mode and provides further information about the endpoint. In service
mode, a target name can be set to indicate another name. The target name is "." if the actual domain should be used. Additional record data is organized as key-value data, so-called _SvcParams_. Each parameter has to have a specified format to allow interoperability. As of March 2023, the draft specifies six different parameter keys and their value formats. By default, an HTTPS record indicates HTTP/1.1 support. The _alpn_ parameter can indicate additional protocols. If an endpoint does not support HTTP/1.1 but other ALPNs, the _no-default-alpn_ parameter has to be added. The _port_ parameter allows indicating alternative ports, while _ipv4-_ and _ipv6hint_ allow informing about IP addresses. Finally, the _mandatory_ parameter can be used to indicate a set of parameters that must be used for the service to function correctly.
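As an illustration of the record layout, the following sketch queries and prints an HTTPS record; it assumes dnspython ≥ 2.1, which added SVCB/HTTPS support, and the queried domain is just an example.

```
import dns.resolver

# Query the HTTPS RR (type 65) and print priority, target name and
# the SvcParams as rendered by the library, e.g.
# '1 . alpn="h3,h2" ipv4hint=...'
for rr in dns.resolver.resolve("cloudflare.com", "HTTPS"):
    print(rr.priority, rr.target, rr.to_text())
```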
The initially drafted but now reserved _ech_ parameter relies on a different draft [21]. However, it lacks deployment (see Section 4) and its final publication is delayed. Therefore, after a discussion [28], the parameter and references were removed from the SVCB and HTTPS draft [23] to allow an RFC publication. We evaluate the presence of this parameter in Section 4.
For SVCB records prefixed with _.dns_, the respective draft additionally adds the _dohpath_ parameter, which allows specifying a Uniform Resource Identifier (URI) template for DNS over HTTPS [22].
## 3 Data Collection
This work relies on active measurements to collect DNS data and verify the usefulness of collected records using HTTP scans. This section explains all scans conducted between February 22nd and March 9th, 2023, and covers ethical considerations.
**DNS Scans**: We used MassDNS1 with a local Unbound resolver to resolve more than 400 M domains to their SVCB and HTTPS, but also A and NS records. We further resolved the name server domains from the latter NS records to their respective A records. This allows us to analyze who serves the new record and which operators are involved. We combined domains from the following sources as input for our measurement:
Footnote 1: [https://github.com/blechschmidt/massdns](https://github.com/blechschmidt/massdns)
_(i)_ Names on the Majestic [17], Alexa2[4], and Umbrella [9] Top 1M lists;
_(ii)_ More than 1 k available zone files from the Centralized Zone Data Service, _e.g._, _._com_, _.net_ and _.org_;
_(iii)_ A static collection of 98 M domains from 52 country-code TLDs (partial zones, _e.g._, 13 M _de_ domains);
_(iv)_ _www._ domains extracted from Certificate Transparency logs between August 2022 and January 2023.
Footnote 2: We use the last published list before deprecation from February 1st, 2023. [https://toplists.net.in.tum.de/archive/alexa/](https://toplists.net.in.tum.de/archive/alexa/)
We additionally prefixed domains with the Attrleaf label _.dns_[10]. As of March 2023, it was the only available label based on an IETF draft [22]. We excluded _www._ domains for this measurement but included domains from NS record names.
**Protocol Scans**: We used the QScanner introduced by Zirngibl _et al._[29] and the Goscanner [13] to test whether received ALPN information is valid for the given domain. The QScanner supports QUIC handshakes and HTTP/3 requests while the Goscanner supports Transport Layer Security (TLS)/TCP handshakes and HTTP/1.1 and HTTP/2 requests.
For each domain with an HTTPS record in service mode, we extracted the supported ALPNs, port and IP addresses from the _ipv4hint_ in the records. If no _ipv4hint_ is available, we rely on each domain's additionally requested A records. We use these tuples of domain, IP address, port, and ALPN to seed our protocol scans.
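A sketch of this seeding step is shown below; the attribute names on the parsed record (`alpns`, `port`, `ipv4hints`) are assumptions standing in for our parser's actual interface.

```
def scan_targets(domain, https_rr, a_records):
    # Build (domain, ip, port, alpn) tuples for one service-mode
    # HTTPS record; A records are the fallback when no ipv4hint is
    # present, and port 443 / HTTP/1.1 are the record defaults.
    ips = https_rr.ipv4hints or a_records
    port = https_rr.port or 443
    alpns = https_rr.alpns or ["http/1.1"]
    return [(domain, ip, port, alpn) for ip in ips for alpn in alpns]
```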
**Ethics**: During all our scans, we strictly followed a set of ethical measures, _i.e._, informed consent [11] and community best practices [19]. Our scans are conducted with a limited rate and use a request-based blocklist. Furthermore, our measurement vantage point is clearly identified based on reverse DNS, WHOIS information, and a hosted website. We did not receive any inquiries related to our scans during this work.
## 4 Analysis
We analyze the current deployment of SVCB and HTTPS records based on our measurements described in Section 3. Resolving more than 400 M domains, we received SVCB records for 3.96 k domains but HTTPS records for 10.56 M domains. SVCB records should be available for domains with Attrleaf labels [10]. Therefore, we additionally resolved domains prefixed with the first specified label (_.dns_) but only received records for 27 domains.
### _General Record Analysis_
Table II shows which modes (alias vs service) are used and which keys are commonly present in available records. Regarding SVCB records, 3.9 k (98.4 %) domains use the record in alias mode, aliasing the service to a different domain. Only 62 domains use the service mode and mostly advertise ALPN values or IPv4 and IPv6 addresses as hints. 27 domains prefixed with _.dns_ result in SVCB records. All records are in service mode advertising different ALPN values (4\(\times\)_h2_ for DNS over HTTPS and 26\(\times\)_dot_ for DNS over TLS). The DoH path advertised by a single domain is /dns-query?dns. The SVCB record in both scenarios is only deployed by a few domains, and we focus on HTTPS records for the remainder of this paper.
Regarding HTTPS records, only 2.6 k (0.02 %) domains use the alias mode, while a majority advertises endpoint information using the service mode. Similarly, most domains advertise ALPN values and IPv4 and IPv6 addresses as hints.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Domain & TTL & CLASS & TYPE & Priority & Target Name & SvcParams \\ \hline coffeebike.no. & 3600 & IN & SVCB & 0 & barmobile.no. & \\ cloudflare.com. & 30 & IN & HTTPS & 1 & - & alpn="h3,h2" ipv4hint=104.16.132.229,104.16.133.229 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Example SVCB and HTTPS DNS RRs
The HTTPS record implies support of HTTP/1.1 by default if the _no-default-alpn_ parameter is not present. In our results, no domain with an HTTPS record in service mode has the flag set. Table III shows the Top-5 advertised ALPN parameters. A majority of domains advertise HTTP version 2 but also 3, indicating QUIC support, while 834.4 k only advertise HTTP/2. 3.2 k domains do not advertise additional ALPN values but only rely on the default. A client can still use the record information and only establish a connection if it supports HTTP/1.1.
While IPv4 hints are available for 10.55 M (99.9 %) domains, 10.23 M (96.9 %) additionally advertise IPv6 addresses. Most hints contain two addresses respectively, but up to eight different addresses are visible, as shown in Figure 1. This allows a client to select from a set of different addresses and fall back to alternatives if necessary. All other keys are only visible for a few domains. The advertised ports in HTTPS records are 80 (2\(\times\)), 443 (10\(\times\)) and 8920 (1\(\times\)). Furthermore, we only receive 20 ECH configurations. This supports the discussion that the respective ECH draft [21] still lacks deployment while the DNS RRs are already deployed for many domains, and that both drafts should be decoupled [28]. 146.5 k domains from the Alexa [4], 169 k from Majestic [17] and 80.8 k domains from the Umbrella [9] Top 1M lists have an HTTPS record. The most prominent candidates are google.com with a service mode record and an ALPN parameter _h2,h3_ and youtube.com with a service mode record without additional data.
_Key take-away: The SVCB record in both scenarios is only deployed by a few domains. In contrast, more than 10 M domains make use of HTTPS records, mostly serving address hints and ALPN values. The alias mode and the remaining parameters are rarely used and should be reevaluated in the future._
### _Involved Operators_
For the following analysis, we focus on domains with HTTPS records in service mode (10.55 M) due to their advanced deployment. To get a better understanding of involved operators, we analyze where domains are hosted and which name servers are used. If available, we use _ipv4hints_ and map addresses to the Autonomous System (AS) announcing the respective prefix. For all domains without this parameter, we use queried A records for IPv4 addresses.
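The address-to-AS step can be sketched with pyasn as follows; the RIB snapshot file name is a placeholder for a dump matching the scan date.

```
import pyasn

# Longest-prefix match of each (hinted or resolved) IPv4 address to
# the AS announcing the covering prefix at scan time.
asndb = pyasn.pyasn("ipasn_20230301.dat")

def origin_as(ip):
    asn, prefix = asndb.lookup(ip)
    return asn
```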
Domains with HTTPS records are hosted in 2.3 k ASes. However, Table IV shows that a majority of domains (98.8 %) resolve to ASes operated by Cloudflare (AS13335 and AS209242). Domenshop, a Norwegian web hoster, hosts the second-highest number of domains and accounts for a large share of domains indicating support for HTTP/2 but not HTTP/3. Beyond the top 3, a more even distribution of the remaining 72 k domains across 2.3 k ASes is visible.
To analyze responsible name servers, we rely on NS records for domains exactly matching domains in our input. We do not follow CNAME records or extract information from SOA records. During our scan, we received NS records for 7.8 M domains with an HTTPS record. Domains without NS records in our data are either resulting in SOAs only (mostly _www._ domains) or resolve to canonical names and would require further resolution steps. In general, we are able to identify name servers supporting HTTPS records hosted in 661 different ASes. This shows a widespread deployment of name servers that support the new record in general.
Similar to web hosting, most HTTPS records are served by name servers hosted within Cloudflare followed by Domenshop. The latter appears as three different ASes (AS1921, AS12996, AS208045). Each AS hosts a name server authoritative for a similar amount of domains respectively. Most domains have one NS record for each of the three name servers for resilience.
_Key take-away: Domains with HTTPS records are hosted in more than 2.3 k ASes and name servers serving the records are in more than 1.6 k ASes. However, most records are hosted in and served by Cloudflare (98 %)._
\begin{table}
\begin{tabular}{l r r r r r r r r r r r} \hline \hline \multirow{2}{*}{Record} & \multicolumn{3}{c}{Mode} & \multicolumn{6}{c}{Keys} \\ \cline{3-13} & Total & Alias & Service & Mandatory & ALPN & No Default & Port & ECH & IPv4 Hint & IPv6 Hint & DoH Path \\ \hline SVCB & 3.96 k & 3.9 k & 62 & 0 & 53 & 0 & 2 & 0 & 25 & 15 & - \\ HTTPS & 10.56 M & 2.6 k & 10.55 M & 0 & 10.55 M & 0 & 13 & 20 & 10.55 M & 10.23 M & - \\ SVCB + _dns_ & 27 & 0 & 27 & 0 & 26 & 0 & 12 & 0 & 1 & 1 & 1 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Number of domains with each property and parameter in their SVCB and HTTPS DNS resource records.
Figure 1: Addresses in _ipv4-_ and _ipv6hints_. Note the logarithmic y-axis.
### _Validity of Records_
We conducted HTTP scans to check the validity of the collected records and whether clients can use the received information for an HTTP request. The general scan approach is described in Section 3. We focus on HTTP/1.1, HTTP/2 and HTTP/3, and select targets for each scan based on the ALPN and IP address hints. Table V provides an overview of the results. TLS/TCP handshakes are successful for more than 96.6 % of the evaluated targets for each HTTP version, while QUIC handshakes are successful for more than 93.6 % of the HTTP/3 targets. For 90 %, we are further able to conduct an HTTP HEAD request. Most unsuccessful connection attempts either result in a timeout (1.1: 6.4 k, 2: 9.1 k, 3: 50.8 k) or a generic TLS handshake failure (1.1: 708.9 k, 2: 692.1 k, 3: 1.2 M).
Successful scans for HTTP/1.1 and HTTP/2 still cover 1.8 k ASes, while HTTP/3 and thus QUIC scans only cover 416 ASes out of 2.3 k candidates. Analyzing the failed scans reveals that a major origin of errors during QUIC scans, and of timeouts during the HTTP request, is an attack prevention mechanism by Cloudflare [18]. It is an automated challenge mechanism that delays the page load, which results in errors with both the Goscanner and the QScanner.
Furthermore, we find 23.0 k domains with HTTPS records served by Cloudflare name servers but hosted in different ASes that only result in timeouts, at least during QUIC scans. For those domains, scan results (timeouts) are reproducible. Interestingly, those domains are hosted in more than 1.3 k ASes, and no relation is visible besides the Cloudflare name servers. Furthermore, all HTTPS records contain the same ALPN set (_h3, h3-29, h2_). We assume a misconfiguration and informed Cloudflare.
_Key take-away: A majority of available HTTPS records contains valid, usable information especially if used by clients able to pass Cloudflare's attack prevention. However, we identify a set of records with incorrect ALPN values. For those domains requests for some announced ALPN values time out consistently (mostly HTTP/3)._
## 5 Related Work
SVCB and HTTPS records have seen little attention from other research so far. In 2021, Zirngibl _et al._[29] used HTTPS records to identify QUIC deployments. They found records for 2.9 M domains indicating QUIC support, hosted in 1.2 k ASes. However, they do not analyze the records further. In contrast, they find HTTP ALT-SVC headers for more than 20 M domains. While the latter is an alternative approach to distribute endpoint information, it requires a previous HTTP communication. Two years later, we find 4\(\times\) more HTTPS records hosted in twice as many ASes. Similarly, Trevisan _et al._[26] use alternative service information to identify QUIC deployments, but only HTTP ALT-SVC headers from additional HTTP requests. Both Zirngibl _et al._ and Trevisan _et al._ implied that HTTP ALT-SVC headers are widely deployed. We show that fewer HTTPS records are deployed so far, but growth is visible.
In 2019, Chai _et al._[7] evaluated Encrypted SNI, an older version of ECH that relied on the TXT DNS RR to distribute key information. They identified more than 100 k domains within the Alexa Top 1M. Similar results have been reported by Tsiatsikas _et al._[27] in 2022. In 2022, Hoang _et al._[14] find 1.5 % to 2.25 % of domains with a respective TXT record out of 300 M domains from TLD zone files. We show that no transition to ECH and HTTPS records is visible yet. Weber [20] reported on the visibility of HTTPS queries from a network (Akamai) perspective. While many queries initially failed with incorrect behavior, the correctness of the seen responses improved quickly. Additionally, they only observed records for 126.4 k domains and no alias mode. Aguilar-Melchor _et al._[1] evaluate a potential positive effect of HTTPS records but do not evaluate their current deployment state.
Furthermore, the security and impact of ECH has been analyzed [5, 24] and related work has evaluated the state of DNS over TCP, HTTP or QUIC [6, 12, 15, 16], and shows increased deployment and in general good performance. Thus, the fundamentals for a successful deployment of SVCB and HTTPS records are given.
## 6 Conclusion
In this work, we provide the first large-scale overview of the deployment of the new SVCB and HTTPS DNS resource records. While we find only very few domains with SVCB records (3.96 k without and 26 with an Attrleaf label), we show that more than 10 M domains already resolve to HTTPS records. These records mainly provide ALPN values and _ipv4-_ and _ipv6hints_. We find only 20 domains with an ECH parameter, which indicates lacking deployment. However, we show that most domains are hosted within Cloudflare, and Cloudflare-operated name servers are authoritative.
Nevertheless, the information contained in most available records is correct, and handshakes followed by HTTP requests with the indicated versions are possible. Therefore, clients already querying the records (e.g., Apple devices [25]) can effectively make use of HTTPS records for more than 10 M domains and reduce DNS requests and visible meta-data during connection establishment while reducing handshake cost.
\begin{table}
\begin{tabular}{l l r|l l l} \hline \hline \multicolumn{3}{c|}{Hosting} & \multicolumn{3}{c}{Name server} \\ ASN & Name & \#Doms & ASN & Name & \#Doms \\ \hline
13335 & Cloudflare & 10.4 M & 13335 & Cloudflare & 7.7 M \\
12996 & Domenshop & 61.6 k & 12996\({}^{1}\) & Domenshop & 24.0 k \\
209242 & Cloudflare & 49.7 k & 16509 & Amazon & 3.2 k \\
397273 & Render & 4.9 k & 397226 & Neustar & 3.1 k \\
14061 & DigitalOcean & 4.6 k & 44273 & GoDaddy & 2.5 k \\ \hline \hline \multicolumn{6}{l}{\({}^{1}\)Domenshop uses three different name servers for most domains located in three different ASes (AS1921, AS12996, AS208045)} \\ \end{tabular}
\end{table} TABLE IV: Top 5 web hosters (out of 2.3 k) and name server providers (out of 661) of domains with HTTPS records.
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline & & \multicolumn{4}{c}{Successful} \\ HTTP & Targets & \multicolumn{2}{c}{TLS Handshake} & \multicolumn{2}{c}{HTTP Requests} \\ \hline
1.1 & 21.44 M & 20.72 M & 96.63 \% & 19.48 M & 90.84 \% \\
2 & 21.43 M & 20.73 M & 96.69 \% & 19.47 M & 90.84 \% \\
3 & 19.59 M & 18.34 M & 93.64 \% & 17.04 M & 87.01 \% \\ \hline \hline \end{tabular}
\end{table} TABLE V: Protocol scan results based on HTTPS records. Targets are a combination of domain and IP address pairs.
## Acknowledgment
The authors would like to thank the anonymous reviewers for their valuable feedback. This work was partially funded by the German Federal Ministry of Education and Research under the project PRIMEnet (16KIS1370), 6G-life (16KISK002) and 6G-ANNA (16KISK107) as well as the German Research Foundation (HyperNIC, grant no. CA595/13-1). Additionally, we received funding by the Bavarian Ministry of Economic Affairs, Regional Development and Energy as part of the project 6G Future Lab Bavaria and the European Union's Horizon 2020 research and innovation program (grant agreement no. 101008468 and 101079774).
|
2309.07638 | WASM-MUTATE: Fast and Effective Binary Diversification for WebAssembly | WebAssembly is the fourth officially endorsed Web language. It is recognized
because of its efficiency and design, focused on security. Yet, its swiftly
expanding ecosystem lacks robust software diversification systems. We introduce
WASM-MUTATE, a diversification engine specifically designed for WebAssembly.
Our engine meets several essential criteria: 1) To quickly generate
functionally identical, yet behaviorally diverse, WebAssembly variants, 2) To
be universally applicable to any WebAssembly program, irrespective of the
source programming language, and 3) Generated variants should counter
side-channels. By leveraging an e-graph data structure, WASM-MUTATE is
implemented to meet both speed and efficacy. We evaluate WASM-MUTATE by
conducting experiments on 404 programs, which include real-world applications.
Our results highlight that WASM-MUTATE can produce tens of thousands of unique
and efficient WebAssembly variants within minutes. Significantly, WASM-MUTATE
can safeguard WebAssembly binaries against timing side-channel
attacks, especially those of the Spectre type. | Javier Cabrera-Arteaga, Nicholas Fitzgerald, Martin Monperrus, Benoit Baudry | 2023-09-14T12:03:17Z | http://arxiv.org/abs/2309.07638v2 | # Wasm-Mutate: Fast and Effective Binary Diversification for WebAssembly
###### Abstract
WebAssembly is renowned for its efficiency and security in browser environments and servers alike. The burgeoning ecosystem of WebAssembly compilers and tools lacks robust software diversification systems. We introduce Wasm-Mutate, a compiler-agnostic WebAssembly diversification engine. It is engineered to fulfill the following key criteria: 1) the rapid generation of semantically equivalent yet behaviorally diverse WebAssembly variants, 2) universal applicability to any WebAssembly programs regardless of the source programming language, and 3) the capability to counter high-risk security threats. Utilizing an e-graph data structure, Wasm-Mutate is both fast and effective. Our experiments reveal that Wasm-Mutate can efficiently generate tens of thousands of unique WebAssembly variants in a matter of minutes. Notably, Wasm-Mutate can protect WebAssembly binaries against timing side-channel attacks, specifically, Spectre.
## 1 Introduction
WebAssembly is the fourth official language of the web, complementing HTML, CSS and JavaScript as a fast, platform-independent binary format [21, 40]. Since its introduction in 2015, it has seen rapid adoption, with support from all major browsers, including Firefox, Safari and Chrome. WebAssembly has also been adopted outside of browsers, with world-leading execution platforms like Fastly using it as a foundational technology for their content delivery network [17]. In addition to major ones like LLVM, more and more compilers and tools can output WebAssembly binaries [23, 45, 26]. With this prevalence, it is of utmost importance to design software protection techniques for WebAssembly [27].
Software diversification is a well-known software protection technique [12, 4, 19], consisting of producing numerous variants of an original program, each retaining equivalent functionality. Software diversification in WebAssembly has many important application domains, such as optimization [5] and malware evasion [8]. It has also been used for fuzzing; a salient example of this was the discovery of a CVE in Fastly in 2021 [18], achieved through automated transformations of a WebAssembly binary.
To develop an effective WebAssembly diversification engine, several key requirements must be met. First, the engine should be language-agnostic, enabling diversification of any WebAssembly code, regardless of the source programming language and compiler toolchain. Second, it must have the capability to swiftly generate semantically equivalent variants of the original code. The speed at which this diversification occurs holds potential for real-time applications, including moving target defense [6]. The engine should also possess the ability to counter attackers by producing sufficiently distinct code variants. This paper presents an original system, Wasm-Mutate, that addresses all these requirements.
Wasm-Mutate is a tool that automatically transforms a WebAssembly binary program into a variant binary program that preserves the original functionality. The core of the diversification engine relies on an e-graph data structure [48]. To the best of our knowledge, this work is the first to use an e-graph for software diversification in WebAssembly. An e-graph offers one essential property for diversification: every path through the e-graph represents a functionally equivalent variant of the input program [48, 37]. A random e-graph traversal can also be very efficient, supporting the generation of tens of thousands of equivalent variants from a single seed program in minutes [29]. Consequently, the choice of e-graphs is the key to building a diversification tool that is both effective and fast. We have designed 135 rewriting rules in Wasm-Mutate, which can transform the e-graph at fine- to coarse-grained levels.
We assess the effectiveness of Wasm-Mutate with respect to its capacity to generate variants whose code is different from the original and whose executions exhibit diverse instruction and memory traces. Our empirical evaluation reuses an existing corpus from the diversification literature [7]. We also measure the speed at which Wasm-Mutate is able to generate the first variant that exhibits a trace different from the original. Our security assessment of Wasm-Mutate consists of evaluating the degree to which diversification can mitigate Spectre attacks. This assessment is made with WebAssembly programs that have been previously identified as vulnerable to Spectre attacks [36].
Our results demonstrate that Wasm-Mutate can generate thousands of variants in minutes. These variants have unique machine code after compilation with Cranelift (static diversity) and the variants exhibit different traces at runtime
(dynamic diversity). Our experiments also provide evidence that the generated variants are hardened against Spectre attacks. To sum up, the contributions of this work are:
* The design and implementation of a WebAssembly diversification pipeline, based on semantic-preserving binary rewriting rules.
* Empirical evidence of the diversity of variants created by Wasm-Mutate, both in terms of static binaries and execution traces.
* A clearcut demonstration that Wasm-Mutate can protect WebAssembly binaries against timing side-channel attacks, specifically, Spectre.
* An open-source repository, where Wasm-Mutate is publicly available for future research [https://github.com/bytecodealliance/wasm-tools/tree/main/crates/wasm-mutate](https://github.com/bytecodealliance/wasm-tools/tree/main/crates/wasm-mutate).
This paper is structured as follows. In section 2, we introduce WebAssembly, the concept of semantic equivalence, and what we define as a rewriting rule. In section 3, we explain and detail the architecture and implementation of Wasm-Mutate. We formulate our research questions in section 4 and answer them in section 5. We discuss open challenges related to our research in section 6, in order to help future research projects on similar topics. In section 7, we highlight works related to our research on software diversification. We conclude in section 8.
## 2 Background
In this section, we define and formulate the foundations of this work: WebAssembly and its runtime structure, semantic equivalence modulo input, rewriting rules, and e-graphs. Throughout the paper, we use the terms, metrics and concepts defined here.
### WebAssembly
WebAssembly (Wasm) is a binary instruction set initially meant for the web, and now also used in the backend. It was adopted as a standardized language by the W3C in 2017, building upon the work of Haas et al. [21]. One of Wasm's primary advantages is that it defines its own platform-independent Instruction Set Architecture (ISA). As a result, a Wasm binary can execute on virtually any platform, including web browsers and server-side environments. WebAssembly programs are compiled ahead-of-time from source languages such as C/C++, Rust, and Go, utilizing compilation pipelines like LLVM.
```
fn main() {
    // Variable assignment
    let mut arr = [1, 2, 3, 4, 5];
    let mut sum = 0;
    // Loop and memory access
    for i in 0..arr.len() {
        sum += arr[i];
    }
    // Use of external function
    println!("Sum of array elements: {}", sum);
}
```
Listing 1: A Rust program containing function declaration, loop, conditional and memory access.
WebAssembly programs operate on a virtual stack that holds primitive data types. Additionally, a WebAssembly program might include several custom sections. For example, binary producers such as compilers use custom sections to store metadata, such as the name of the compiler that generated the Wasm code. A WebAssembly program also declares memory sections and globals, which are used to store,
manipulate and share data during program execution, e.g. to share data with the host engine of the WebAssembly binary.
WebAssembly is designed with isolation as a primary consideration. For instance, a WebAssembly binary cannot access the memory of other binaries and cannot interact directly with browser APIs, such as the DOM or the network. Instead, communication with these features is constrained to functions imported from the host engine, ensuring a secure and safe Wasm environment. Moreover, control flow in WebAssembly is managed through explicit labels and well-defined blocks, which means that jumps in the program can only occur inside blocks, unlike in regular assembly code [22]. In Listing 1, we provide an example of a Rust program that contains a function declaration, a loop, a loop conditional, and a memory access. When the Rust code is compiled to WebAssembly, it produces the code shown in Listing 2. The stack operations are folded with parentheses. The module in the example contains the components described previously.
The WebAssembly runtime structure is described in the WebAssembly specification and it includes 10 key elements: the Store, Stack, Locals, Module Instances, Function Instances, Table Instances, Memory Instances, Global Instances, Export Instances, and Import Instances. These components interact during the execution of a WebAssembly program, collectively defining the state of a program during its runtime.
Two of these elements, the Stack and Memory instances, are particularly significant in maintaining the state of a WebAssembly program during its execution. The Stack holds both values and control frames, with control frames handling block instructions, loops, and function calls. Meanwhile, Memory Instances represent the linear memory of a WebAssembly program, consisting of a contiguous array of bytes. In this paper, we highlight the aforementioned two components to define, compare and validate the state of two Wasm programs during their execution.
### Rewriting rules
Our definition of a rewriting rule draws from the one proposed by Sasnauskas et al. [41], and integrates a predicate to specify the replacement condition. Concretely, a rewriting rule is defined as a tuple, denoted as (LHS, RHS, Cond). Here, LHS refers to the code segment slated for replacement, RHS is the proposed replacement, and Cond stipulates the conditions under which the replacement is acceptable. Importantly, LHS and RHS are meant to be semantically equivalent, per the definition of semantic equivalence given below.
For example, the rewriting rule (x, x i32.or x, ()) implies that the LHS 'x' is to be replaced by an idempotent bitwise i32.or operation of x with itself, absent any specific conditions. Notice that, for this specific rule, LHS and RHS are interchangeable, symbolized as (LHS, RHS) = (RHS, LHS). Besides, the Cond element can be an arbitrary criterion. For instance, the condition for applying the aforementioned rewriting rule could be to ensure that the newly created binary file does not exceed a threshold binary size.
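To make the tuple concrete, the following Python sketch applies this rule to a toy instruction sequence; it is illustrative only and assumes the rewritten instruction is side-effect free, so duplicating it on the stack is safe.

```
def apply_idempotent_or(instrs, i, max_size=None):
    # (LHS, RHS, Cond) = (x, x x i32.or, size check): replace the
    # value produced at position i by or-ing it with itself.
    rhs = [instrs[i], instrs[i], "i32.or"]
    cond = max_size is None or len(instrs) + 2 <= max_size
    return instrs[:i] + rhs + instrs[i + 1:] if cond else instrs

prog = ["local.get 0", "i32.const 7", "i32.add"]
variant = apply_idempotent_or(prog, 1)
# ['local.get 0', 'i32.const 7', 'i32.const 7', 'i32.or', 'i32.add']
```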
Based on our understanding, our research is one of the first to apply the concept of rewriting rules to WebAssembly. This will expand the potential use cases of wasm-mutate. Beyond its role as a diversification tool, it can also be used as a standard tool for conducting program transformations in WebAssembly.
We focus on rewriting rules that guarantee semantic equivalence. Semantic equivalence refers to the notion that two programs or functions are considered equivalent if, for a given specified input domain, they produce the same output values or have the same observable behavior [30]. In other words, the semantics of the two programs are equivalent when they have the same input-output relationship (possibly up to some abstraction), even if the internal implementation details or the structure of the programs differ.
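In practice, semantic equivalence over a finite input domain can be checked differentially. The sketch below does so with the wasmtime Python bindings; it assumes both binaries export a pure function under the given name, and equal outputs are evidence of equivalence, not a proof.

```
from wasmtime import Engine, Module, Store, Instance

def equivalent_modulo_inputs(wasm_a, wasm_b, func, inputs):
    # Instantiate both binaries and compare the outputs of the
    # exported function `func` over every input in the domain.
    engine = Engine()
    outputs = []
    for wasm in (wasm_a, wasm_b):
        store = Store(engine)
        inst = Instance(store, Module(engine, wasm), [])
        f = inst.exports(store)[func]
        outputs.append([f(store, *args) for args in inputs])
    return outputs[0] == outputs[1]
```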
## 3 Design of Wasm-Mutate
In this section we present Wasm-Mutate, a tool to diversify WebAssembly binaries and produce semantically equivalent variants.
### Overview
The primary objective of Wasm-Mutate is to perform diversification, i.e., generate semantically equivalent variants from a given WebAssembly binary input. Wasm-Mutate's central approach involves synthesizing these variants by substituting parts of the original binary using rewriting rules. It leverages a comprehensive set of rewriting rules, boosted by a traversal of the diversification space using e-graphs (see subsection 3.3).
In Figure 1 we illustrate the workflow of Wasm-Mutate: it starts with a WebAssembly binary as input 1. It parses the original binary 2, turning the input program into appropriate abstractions; in particular, Wasm-Mutate builds the control flow graph and the data flow graph. Using the defined rewriting rules, Wasm-Mutate builds an e-graph 3 for the original program. An e-graph packages every possible equivalent code derivable from the given rewriting rules [48, 37]. Thus, at this stage, Wasm-Mutate exploits a key property of e-graphs: any path traversal through the e-graph results in a semantically equivalent code. Then, the diversification process starts, with parts of the original program being randomly replaced through traversals of the e-graph 4. The outcome of Wasm-Mutate is a semantically equivalent variant of the original binary 5. The tool guarantees semantically equivalent variants because each individual rewriting rule is semantics-preserving.
### WebAssembly Rewriting Rules
In total, there are 135 possible rewriting rules implemented in Wasm-Mutate; these rules are grouped into several categories, hereafter called meta-rules. For example, 125 rewriting rules are implemented as part of the peephole meta-rule. There are 7 meta-rules, which we present next.
**Add type:** In WebAssembly, the type section wraps the definitions of the signatures of the binary's functions. Wasm-Mutate implements two rewriting rules for this section, one of which is illustrated below.
LHS  (module (type (;0;) (func (param i32) (result i64))))
RHS  (module (type (;0;) (func (param i32) (result i64)))
             (type (;1;) (func (param i64) (result i32 i64))))
This transformation generates random function signatures with a random number of parameters and results. This rewriting rule does not affect the runtime behavior of the variant. It also guarantees that the indices of the already defined types remain consistent after the addition of a new type. This is because Wasm programs cannot access or use a type definition during runtime; types are only used to validate the signature of a function during compilation and validation by the host engine. From the security perspective, this transformation hinders static binary analysis, for example, malware detection based on signature sets [8].
**Add function:** The function and code sections of a Wasm binary contain function declarations and the code bodies of the declared functions, respectively. Wasm-Mutate adds new functions through mutations in these two sections. To add a new function, Wasm-Mutate creates a random type signature. Then, the random function body is created. The body of the function consists of returning the default value of the result type. The following example illustrates this rewriting rule.
LHS  (module (type (;0;) (func (param i32 f32) (result i64))))
RHS  (module (type (;0;) (func (param i32 f32) (result i64)))
             (func (;0;) (type 0) (param i32 f32) (result i64)
                   (i64.const 0)))
Wasm-Mutate never adds a call instruction to this function, so in practice the new function is never executed. Therefore, executing both the original binary and the mutated one with the same input leads to the same final state. This strategy follows the work of Cohen [12], advocating the insertion of harmless 'garbage' code into a program. These transformations do not impact the program's functionality; they only increase its static complexity.
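For illustration, the default-value bodies of newly added functions can be derived with a helper like the following Rust sketch; the function name and the textual instruction encoding are our own illustrative choices, not Wasm-Mutate's internals.

```
// Sketch: a freshly added function's body just returns the default value of
// its result type, so the new function is harmless even if it were called.
fn default_body(result_type: &str) -> &'static str {
    match result_type {
        "i32" => "i32.const 0",
        "i64" => "i64.const 0",
        "f32" => "f32.const 0",
        "f64" => "f64.const 0",
        other => panic!("unsupported WebAssembly number type: {other}"),
    }
}

fn main() {
    // Matches the listing above: a (result i64) function body is `i64.const 0`.
    assert_eq!(default_body("i64"), "i64.const 0");
}
```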
**Remove dead code:** Wasm-Mutate can randomly remove dead code. In particular, Wasm-Mutate removes _functions, types, custom sections, imports, tables, memories, globals, data segments and elements_ that can be validated as dead code with guarantees. For instance, to delete a memory declaration, the binary code must not contain any memory access operation. Separate mutators are included within Wasm-Mutate for each of the aforementioned elements. As a more concrete example, the following listing illustrates the case of a function removal.
LHS  (module (type (func)) (import (func)))
RHS  (module (type (func)))
Cond The removed function is not called, it is not exported, and it is not in the binary's table.
When removing a function, Wasm-Mutate ensures that the resulting binary remains valid and semantically identical to the original binary: it checks that the deleted function was neither called within the binary code nor exported in the binary external interface. As exemplified above, Wasm-Mutate might also eliminate a function import while removing the function.
Eliminating dead code serves a dual purpose: it minimizes the attack surface available to potential malicious actors [1] and strengthens the resilience of security protocols. For instance, it can obstruct signature-based identification [8]. With Narayan and colleagues having demonstrated the feasibility of Return-Oriented Programming (ROP) attacks
Figure 1: Wasm-Mutate high-level architecture. It generates semantically equivalent variants from a given WebAssembly binary input. Its central approach involves synthesizing these variants by substituting parts of the original binary using rewriting rules, boosted by a traversal of the diversification space using e-graphs (see subsection 3.3).
[36], the removal of dead code is able to stop jumps to harmful behaviors within the binary. On the other hand, the act of removing dead code reduces the binary's size, improving its non-functional properties, in particular under bandwidth constraints.
**Edit custom sections:** The custom section in WebAssembly is used to store metadata, such as the name of the compiler that produced the binary or symbol information for debugging. Thus, this section does not affect the execution of the Wasm program. Wasm-Mutate includes one mutator to edit custom sections.
The _Edit Custom Section_ transformation operates by randomly modifying either the content or the name of the custom section. As illustrated by Cabrera-Arteaga et al. [8], such a rewriting strategy also acts as a potent deterrent against compiler identification techniques. Furthermore, it can be employed in an innovative manner to emulate the characteristics of a different compiler, _masquerading_ as another compilation source. This strategy ultimately shrinks the identification and fingerprinting surface accessible to potential adversaries, hence enhancing overall system security or enabling a moving target.
**If swapping:** In WebAssembly, an if-construction consists of a consequence and an alternative. The branching condition is executed right before the _if_ instruction; if the value at the top of the stack is different from 0, then the consequence code is executed, otherwise the alternative code is run. The _if swapping_ rewriting swaps the consequence and alternative codes of an if-construction.
To swap an if-construction in WebAssembly, Wasm-Mutate inserts a negation of the value at the top of the stack right before the _if_ instruction.
**Loop unrolling:** The loop in the LHS part features a single first-order break, indicating that its execution will cause the program to continue iterating through the loop. The loop body concludes right before the \(\mathtt{end}\) instruction, which highlights the point at which the original loop breaks and resumes program execution. Upon selecting the loop for unrolling, its instructions are divided into two groups, labeled \(\mathtt{A}\) and \(\mathtt{B}\). As illustrated in the RHS part, the unrolling process entails creating two new Wasm blocks. The outer block encompasses both the original loop structure and the duplicated loop body, while the inner blocks, denoted as \(\mathtt{A}\)' and \(\mathtt{B}\)', represent modifications of the jump instructions in groups \(\mathtt{A}\) and \(\mathtt{B}\), respectively. Notice that any jump instructions within \(\mathtt{A}\)' and \(\mathtt{B}\)' that originally leaped outside the loop must have their jump indices incremented by one. This adjustment accounts for the new block scope introduced around the loop body during the unrolling process. Furthermore, an unconditional branch is placed at the end of the unrolled loop iteration's body. This ensures that if the loop body does not continue, the tool breaks out of the scope instead of proceeding to the non-unrolled loop.
Loop unrolling enhances resistance to static analysis while maintaining the original performance [38]. In particular, Crane et al. [13] have validated the effectiveness of adding and modifying jump instructions against function-reuse attacks. Our rewriting rule has the same advantages: it unrolls loops while 1) incorporating new jumps and 2) editing existing jumps, as can be observed with the addition of the br_if, end, and br instructions.
**Peephole:** This transformation category rewrites instruction sequences within function bodies, signifying the most granular level of rewriting. We implement 125 rewriting rules for this group in Wasm-Mutate. We include rewriting rules that affect the memory of the binary. For example, we include rewriting rules that create random assignments to newly created global variables. For these rules, we incorporate several conditions, denoted by Cond, to ensure successful replacement. These conditions can be utilized interchangeably and combined to constrain transformations (see subsection 3.3).
For instance, Wasm-Mutate is designed to guarantee that instructions marked for replacement are deterministic. We specifically exclude instructions that could potentially cause undefined behavior, such as function calls, from being mutated. For this rewriting type, Wasm-Mutate only alters stack and memory operations, leaving the control frame labels unaffected.
The peephole category rewriting rules are meticulously designed and manually verified. An instance of such a transformation is illustrated in subsection 2.2: (x, x i32.or x, ()) implies that the LHS x is to be replaced by an idempotent bitwise i32.or operation with itself, in the absence of any specific conditions. Therefore, this category continues to uphold the benefits previously discussed under the _Remove dead code_ category.
### E-graphs for WebAssembly
We build Wasm-Mutate on top of e-graphs [9]. An e-graph is a graph data structure used to represent rewriting rules and their chaining. In an e-graph, there are two types of nodes: e-nodes and e-classes. An e-node represents either an operator or an operand involved in the rewriting rule, while an e-class denotes the equivalence classes among e-nodes by grouping them, i.e., an e-class is a virtual node composed of a collection of e-nodes. Thus, e-classes contain at least one e-node. Edges within the graph establish operator-operand equivalence relations between e-nodes and e-classes.
In Wasm-Mutate, the e-graph is automatically built from a WebAssembly program by analyzing its expressions and operations through its data flow graph. Then, each unique expression, operator, and operand is transformed into an e-node. Based on the input rewriting rules, equivalent expressions are detected, grouping equivalent e-nodes into e-classes. During the detection of equivalent expressions, new operators can be added to the graph as e-nodes. Finally, e-nodes within an e-class are connected with edges to represent their equivalence relationships.
For example, let us consider a program with a single instruction that returns an integer constant, i64.const 0. Let us also assume a single rewriting rule, (x, x i64.or x, x instanceof i64). In this example, the program's control flow graph contains just one node, representing the unique instruction. The rewriting rule represents the equivalence of performing an or operation with two equal operands. Figure 2 displays the final e-graph data structure constructed out of this single program and rewriting rule. We start by adding the unique program instruction i64.const 0 as an e-node (depicted by the leftmost solid rectangle node in the figure). Next, we generate e-nodes from the rewriting rule (the rightmost solid rectangle) by introducing a new e-node, i64.or, and creating edges to the x e-node. Following this, we establish equivalence: the rewriting rule combines the two e-nodes into a single e-class (indicated by the dashed rectangle node in the figure). As a result, we update the edges to point to the x symbol e-class.
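The construction above can be reproduced with the egg e-graph crate of Willsey et al. [48]. The following self-contained Rust sketch is our own illustration rather than Wasm-Mutate's actual code: it saturates an e-graph for the single-instruction program under the idempotent-or rule and then extracts the smallest equivalent expression.

```
// Illustrative sketch with the `egg` crate (not Wasm-Mutate's actual code).
use egg::{rewrite as rw, AstSize, Extractor, RecExpr, Rewrite, Runner, SymbolLang};

fn main() {
    // (x, x i64.or x): any expression is equivalent to or-ing it with itself.
    // The rule is expansive, so saturation stops at the Runner's default
    // limits, mirroring the infinite chain of or-operations in Figure 2.
    let rules: Vec<Rewrite<SymbolLang, ()>> =
        vec![rw!("or-idempotent"; "?x" => "(i64.or ?x ?x)")];

    // The original program: a single `i64.const 0` instruction.
    let program: RecExpr<SymbolLang> = "(i64.const 0)".parse().unwrap();

    // Build and saturate the e-graph.
    let runner = Runner::default().with_expr(&program).run(&rules);

    // With a cost function (AstSize), extraction recovers the smallest variant.
    let extractor = Extractor::new(&runner.egraph, AstSize);
    let (_cost, best) = extractor.find_best(runner.roots[0]);
    println!("{best}"); // prints (i64.const 0)
}
```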
Willsey et al. illustrate that the extraction of code fragments from e-graphs can achieve a high level of flexibility, especially when the extraction process is recursively defined through a cost function applied to e-nodes and their operands. This approach guarantees the semantic equivalence of the extracted code [48]. For example, to obtain the smallest code from an e-graph, one could initiate the extraction process at an e-node and then choose the AST with the smallest size from among the operands of its associated e-class [35]. When the cost function is omitted from the extraction methodology, the following property emerges: _Any path traversed through the e-graph will result in a semantically equivalent code variant_. This concept is illustrated in Figure 2, where it is possible to construct an infinite sequence of "or" operations. In the current study, we leverage this inherent flexibility to generate mutated variants of an
Figure 2: e-graph for idempotent bitwise-or rewriting rule. Solid lines represent operand-operator relations, and dashed lines represent equivalent class inclusion.
original program. The e-graph offers the option for random traversal, allowing for the random selection of an e-node within each e-class visited, thereby yielding an equivalent expression.
```
1:  procedure TRAVERSE(egraph, eclass, depth)
2:    if depth = 0 then
3:      return smallest_tree_from(egraph, eclass)
4:    else
5:      nodes ← egraph[eclass]
6:      node ← random_choice(nodes)
7:      expr ← (node, operands = [])
8:      for each child ∈ node.children do
9:        subexpr ← TRAVERSE(egraph, child, depth - 1)
10:       expr.operands ← expr.operands ∪ {subexpr}
11:    return expr
```
**Algorithm 1** e-graph traversal algorithm.
We propose and implement the following algorithm to randomly traverse an e-graph and generate semantically equivalent program variants; see Algorithm 1. It receives an e-graph, an e-class node (initially the root's e-class), and the maximum depth of the expression to extract. The depth parameter ensures that the algorithm does not get stuck in an infinite recursion. We select a random e-node from the e-class (lines 5 and 6), and the process recursively continues with the children of the selected e-node (line 8) with a decreasing depth. As soon as the depth becomes zero, the algorithm returns the smallest expression out of the current e-class (line 3). The subexpressions are composed together (line 10) for each child, and then the entire expression is returned (line 11). To the best of our knowledge, Wasm-Mutate is the first practical implementation of random e-graph traversal for WebAssembly.
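To make the traversal concrete, the following self-contained Rust sketch implements Algorithm 1. The `EGraph`, `ENode`, and `Expr` types and the fallback cost function are simplified stand-ins for Wasm-Mutate's internal structures; in particular, the fallback assumes leaf e-nodes exist and performs no cycle-breaking cost analysis, unlike a full extractor [48].

```
use rand::seq::SliceRandom; // assumes the `rand` crate
use rand::Rng;

// Simplified stand-ins for the e-graph abstractions used in Wasm-Mutate.
struct ENode { op: String, children: Vec<usize> } // children are e-class ids
struct EGraph { classes: Vec<Vec<ENode>> }        // e-class id -> its e-nodes
#[derive(Debug)]
struct Expr { op: String, operands: Vec<Expr> }

// Depth-zero fallback: greedily pick the e-node with the fewest children.
fn smallest_tree_from(egraph: &EGraph, eclass: usize) -> Expr {
    let node = egraph.classes[eclass].iter().min_by_key(|n| n.children.len()).unwrap();
    let operands = node.children.iter().map(|&c| smallest_tree_from(egraph, c)).collect();
    Expr { op: node.op.clone(), operands }
}

// Algorithm 1: random e-graph traversal, bounded by `depth`.
fn traverse(egraph: &EGraph, eclass: usize, depth: usize, rng: &mut impl Rng) -> Expr {
    if depth == 0 {
        smallest_tree_from(egraph, eclass) // line 3
    } else {
        // Lines 5-6: select a random e-node from the e-class.
        let node = egraph.classes[eclass].choose(rng).expect("non-empty e-class");
        // Lines 8-10: recurse into each child with a decreasing depth.
        let operands = node
            .children
            .iter()
            .map(|&child| traverse(egraph, child, depth - 1, rng))
            .collect();
        Expr { op: node.op.clone(), operands } // line 11
    }
}

fn main() {
    // The e-graph of Figure 2: one e-class with { i64.const 0, i64.or(self, self) }.
    let egraph = EGraph { classes: vec![vec![
        ENode { op: "i64.const 0".into(), children: vec![] },
        ENode { op: "i64.or".into(), children: vec![0, 0] },
    ]] };
    let expr = traverse(&egraph, 0, 2, &mut rand::thread_rng());
    println!("{expr:?}");
}
```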
Let us demonstrate how the proposed traversal algorithm can generate program variants with an example. We illustrate Algorithm 1 using a maximum depth of 1. Listing 3 presents a hypothetical original Wasm binary to mutate. In this example, the developer has established two rewriting rules: (x, x i64.or x, x instanceof i64) and (x, x i64.add 0, x instanceof i64). The first rewriting rule represents the equivalence of performing an or operation with two equal operands, while the second rule signifies the equivalence of adding 0 to any numeric value. By employing the code and the rewriting rules, we can construct the e-graph depicted in Figure 3. The figure demonstrates the operator-operand relationship using arrows between the corresponding nodes.
```
(module
  (type (;0;) (func (param i32 f32) (result i64)))
  (func (;0;) (type 0) (param i32 f32) (result i64)
    i64.const 1)
)
```
Listing 3: Wasm function.
In Figure 3, we annotate the various steps of Algorithm 1 for the scenario described above. Algorithm 1 begins at the e-class containing the single instruction i64.const 1 from Listing 3 (step 1). It then selects an equivalent node in the e-class (step 2), in this case the i64.or node, resulting in: expr = i64.or _ _. The traversal proceeds with the left operand of the selected node (step 3), choosing the i64.add node within the e-class: expr = i64.or (i64.add _ _) _. The left operand of the i64.add node is the original node (step 4): expr = i64.or (i64.add i64.const 1 _) _. The right operand of the i64.add node belongs to another e-class (step 5), where the node i64.const 0 is selected (steps 6 and 7): expr = i64.or (i64.add i64.const 1 i64.const 0) _. In the final step (step 8), the right operand of the i64.or is selected, corresponding to the initial instruction e-node, returning: expr = i64.or (i64.add i64.const 1 i64.const 0) i64.const 1. The traversal result applied to the original Wasm code can be observed in Listing 4.
### Wasm-Mutate in practice
In practice, Wasm-Mutate serves as a module within a broader process. This process starts from a WebAssembly
Figure 3: e-graph built for rewriting the first instruction of Listing 3.
binary as input and iterates over the variants generated by Wasm-Mutate in order to provide guarantees. In particular, it ensures that the output variant exhibits different machine code, as produced by the JIT engine that executes it, and unique execution traces when running. This process is explicitly laid out in Algorithm 2. One of the key elements in this algorithm is line 8, which activates Wasm-Mutate's diversification engine.
The algorithm starts by running the original WebAssembly program and recording its original execution traces, as denoted in line 5. These initial traces act as a reference for evaluating subsequent variants. A budget-based loop then starts, as marked by lines 8 and 9, aiming to apply a series of code transformations. Upon the successful creation of a unique variant, line 11 triggers a JIT compilation within the WebAssembly engine. This step compiles the variant into machine code. The algorithm next assesses whether this machine code diverges from the original, thus confirming actual diversity. If this condition is satisfied, the algorithm executes the variant to collect its low-level execution traces. The loop ends when a variant is found with traces that are distinct from the original, as validated in line 15. The algorithm then returns the generated variant, which guarantees that both the diversified machine code and the traces are different from the original.
```
1:  procedure DIVERSIFY(originalWasm, engine)
2:  Input: ▷ A WebAssembly binary to diversify and a WebAssembly engine.
3:  Output: ▷ A statically unique and behaviourally different WebAssembly variant.
4:
5:    originalTrace ← engine.execute(originalWasm)
6:    wasm ← originalWasm
7:    while true do
8:      variantWasm ← WASM-MUTATE(wasm)
9:      wasm ← variantWasm            ▷ we stack the transformation
10:     if variantWasm is unique then
11:       variantJIT ← engine.compile(variantWasm)
12:       if variantJIT is unique then
13:
14:         trace ← engine.execute(variantJIT)
15:         if trace ≠ originalTrace then
16:           return variantWasm
```
**Algorithm 2** Wasm-Mutate in practice.
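For reference, the outer loop of Algorithm 2 can be sketched in Rust as follows. The `Engine` trait, the `mutate` closure, and the hash-set uniqueness checks are illustrative stand-ins rather than the wasmtime API.

```
use std::collections::HashSet;

// Illustrative stand-in for a WebAssembly engine (not the wasmtime API).
trait Engine {
    fn compile(&self, wasm: &[u8]) -> Vec<u8>;         // JIT to machine code
    fn execute(&self, machine_code: &[u8]) -> Vec<u8>; // low-level execution trace
}

// Sketch of Algorithm 2: stack transformations until the variant is statically
// unique, JIT-unique, and behaviourally different from the original.
fn diversify(original: Vec<u8>, engine: &impl Engine, mutate: impl Fn(&[u8]) -> Vec<u8>) -> Vec<u8> {
    let original_trace = engine.execute(&engine.compile(&original)); // line 5
    let (mut seen_wasm, mut seen_jit) = (HashSet::new(), HashSet::new());
    let mut wasm = original;
    loop {
        let variant = mutate(&wasm);  // line 8
        wasm = variant.clone();       // line 9: we stack the transformation
        if seen_wasm.insert(variant.clone()) {  // line 10: statically unique?
            let jit = engine.compile(&variant); // line 11
            if seen_jit.insert(jit.clone()) {   // line 12: unique machine code?
                let trace = engine.execute(&jit); // line 14
                if trace != original_trace {      // line 15
                    return variant;               // line 16
                }
            }
        }
    }
}
```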
### Implementation
Wasm-Mutate is implemented in Rust, comprising approximately 10 thousand lines of Rust code. We leverage the capabilities of the wasm-tools project of the Bytecode Alliance for parsing and transforming WebAssembly binary code. Specifically, we utilize the wasmparser and wasm-encoder modules for parsing and encoding Wasm binaries, respectively. The implementation of Wasm-Mutate is publicly available for future research and can be found at [https://github.com/bytecodealliance/wasm-tools/tree/main/crates/wasm-mutate](https://github.com/bytecodealliance/wasm-tools/tree/main/crates/wasm-mutate).
## 4 Evaluation
In this section, we outline our methodology for evaluating Wasm-Mutate. Initially, we introduce our research questions and the corpus of programs that we utilize for the assessment of Wasm-Mutate. Next, we elaborate on the methodology for each research question. For the sake of reproducibility, our data and experimentation pipeline are publicly available at [https://github.com/ASSERT-KTH/tawasco](https://github.com/ASSERT-KTH/tawasco). Our experiments are conducted on Standard F4s-v2 (Skylake) Azure machines with 4 virtual CPUs and 8 GiB of memory per instance.
* **To what extent are the program variants generated by Wasm-Mutate statically different from the original programs?** We check whether the WebAssembly binary variants rapidly produced by Wasm-Mutate are different from the original WebAssembly binary. Then, we assess whether the x86 machine code produced by the wasmtime engine is also different.
* **How fast can Wasm-Mutate generate program variants that exhibit different execution traces?** To assess the versatility of Wasm-Mutate, we also examine the presence of different behaviors in the generated variants. Specifically, we measure the speed at which Wasm-Mutate generates variants with distinct machine code instruction traces and memory access patterns.
* **To what extent does Wasm-Mutate prevent side-channel attacks on WebAssembly programs?** Diversification being an option to prevent security issues, we assess the impact of Wasm-Mutate in preventing one class of attacks: cache attacks (Spectre).
### Corpora
We answer our research questions with a corpus of 307 programs (303 + 4). These programs are summarized in Ta
\begin{table}
\begin{tabular}{l|l|l|l|l|l} \hline \hline
Source & Program & RQ & \#F & \#Ins. & Attack \\ \hline \hline
CROW [7] & 303 & RQ1, RQ2 & 7-103 & 170-36023 & N/A \\ \hline
Swivel [36] & btb\_breakout & RQ3 & 16 & 743 & Spectre branch target buffer (btb) \\ \hline
Swivel [36] & btb\_leakage & RQ3 & 16 & 297 & Spectre branch target buffer (btb) \\ \hline
Safeside [36, 20] & ret2spec & RQ3 & 2977 & 37894 & Spectre return stack buffer (rsb) \\ \hline
Safeside [36, 20] & pht & RQ3 & 2978 & 379056 & Spectre pattern history table (pht) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: The dataset we use to evaluate Wasm-Mutate. Each row corresponds to a set of programs, with the columns providing: where the programs are sourced from, the program name or count, the research question addressed, the function count, the total number of instructions in the original WebAssembly program, and the type of attack the original program was subjected to.
ble 1. Each row in the table corresponds to a set of programs, with the columns providing: where the programs are sourced from, the number of programs, the research question addressed, the function count, the total number of instructions in the original WebAssembly program, and the type of attack that the original program was subjected to.
We answer RQ1 and RQ2 with the corpus of programs from Cabrera et al. [7], shown in the first row of Table 1. The corpus contains 303 programs for a range of tasks, from simple ones, such as sorting, to complex algorithms like a compiler lexer. The number of functions per program ranges from 7 to 103, and the total number of instructions ranges from 170 to 36023. All programs in the corpus: 1) do not require input from the user, i.e., do not call functions like scanf; 2) terminate; 3) are deterministic, i.e., given the same input, they provide the same output; and 4) compile to WebAssembly using wasi-clang.
We answer RQ3 with four WebAssembly programs and three Spectre attack scenarios from the Swivel project [36]. These programs are summarized in the final four rows of our corpus table. The first two programs are manually crafted and contain 16 functions each, with instruction counts of 743 and 297, respectively. These binaries are specifically designed to perform the Spectre branch target buffer attack. The third and fourth programs, documented in rows four and five, come from the Safeside project [20]. Unlike the first two, these binaries are significantly larger, each containing nearly 3000 functions and more than 30000 instructions. They are utilized for conducting the Spectre Return Stack Buffer (RSB) and Spectre Pattern History Table (PHT) attacks [28].
There is a notable difference in the number of functions and instructions between the first pair of Swivel binaries and the latter pair. This disparity can be attributed to the varying compilation processes applied to these WebAssembly binaries. The three attack scenarios are described in detail in subsection 4.4.
### Protocol for RQ1
With RQ1, we assess the ability of Wasm-Mutate to generate WebAssembly binaries that are different from the original program, including after their compilation to x86 machine code. In Figure 4 we show the steps we follow to answer RQ1. We run Wasm-Mutate on our corpus of 303 original C programs (step 1 in the figure). To generate the variants: 1) we start with one original program and pass it to Wasm-Mutate to generate a variant; 2) the variant and the original program form a population of programs; 3) we randomly select a program from this population and pass it to Wasm-Mutate to generate a variant, which we add to the population; 4) we then restart the process at the previous step, to stack more mutations. This procedure is carried out for a duration of 1 hour. The final outcome (step 2 in the figure) is a population with a number of stacked transformations, all starting from an original WebAssembly program. We then count the number of unique variants in the population. We compute the sha256 hash of each variant bytestream to define the population size metric:
**Metric 1**: _Population_size(P): Given an original WebAssembly program \(P\), a generated corpus of WebAssembly programs \(V=\{v_{1},v_{2},...,v_{N}\}\) where \(v_{i}\) is a variant of \(P\), the population size is defined as:_
\[\left|set(\{sha256(v_{1}),\ldots,sha256(v_{N})\})\right|\quad\forall v_{i}\in V\]
Since WebAssembly binaries may be further transformed into machine code before they execute, we also check that these additional transformations preserve the differences introduced by Wasm-Mutate in the WebAssembly binary. We use the wasmtime JIT compiler, cranelift, with all available optimizations, to generate the x86 binaries for each WebAssembly program and its variants (step 3 in the figure). Then, we calculate the number of unique machine code representations of the variants for wasmtime. Counting the number of unique machine codes, we compute the diversification preservation ratio:
**Metric 2**: _Ratio of preserved variants: Given an original WebAssembly program \(P\), its population size as defined in Metric 1, and the JIT compiler \(C\), we define the ratio of preserved variants as:_
\[\frac{\left|set(\{sha256(C(v_{1})),\ldots,sha256(C(v_{N}))\})\right|}{Population\_size(P)}\quad\forall v_{i}\in V\]
If \(sha256(P_{1})\neq sha256(P_{2})\) and \(sha256(C(P_{1}))\neq sha256(C(P_{2}))\), both programs are still different after being compiled to machine code, which means that the cranelift compiler has not removed the transformations made by Wasm-Mutate.
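Both metrics reduce to counting unique sha256 digests. A minimal Rust sketch, assuming the sha2 crate and modeling the JIT compiler \(C\) as a function from Wasm bytes to machine-code bytes, could look as follows; the helper names are illustrative.

```
use sha2::{Digest, Sha256}; // assumes the `sha2` crate
use std::collections::HashSet;

fn sha256(bytes: &[u8]) -> Vec<u8> {
    Sha256::digest(bytes).to_vec()
}

// Metric 1: number of unique variant bytestreams.
fn population_size(variants: &[Vec<u8>]) -> usize {
    variants.iter().map(|v| sha256(v)).collect::<HashSet<_>>().len()
}

// Metric 2: ratio of variants whose machine code, produced by the JIT
// compiler `c`, is still unique after compilation.
fn preservation_ratio(variants: &[Vec<u8>], c: impl Fn(&[u8]) -> Vec<u8>) -> f64 {
    let unique_jit = variants.iter().map(|v| sha256(&c(v))).collect::<HashSet<_>>().len();
    unique_jit as f64 / population_size(variants) as f64
}
```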
Note that the protocol described earlier can be mapped to Algorithm 2. For instance, to measure population size for each tested program, one could measure how often the execution of Algorithm 2 reaches line 11. Similarly, to assess the level of preservation, one could track the frequency with which the algorithm arrives at line 13.
Figure 4: Protocol to answer RQ1 and RQ2
### Protocol for RQ2
For RQ2, we evaluate how fast Wasm-Mutate can generate variants that offer distinct traces compared to the original program. We start by collecting the traces of the original program when executed in wasmtime. While continuously generating variants with random stacked transformations, we collect the execution traces of the variants as well. We record the time elapsed until we generate a variant that offers different execution traces, according to two types of traces: machine code instructions and memory accesses. This process can be seen in the enclosed square of Figure 4, annotated with RQ2.
We gather the instruction and memory traces utilizing Intel PIN [33, 16] (step 4 in the figure). To only collect the traces of the WebAssembly execution within the wasmtime engine, we pause and resume the collection as the execution leaves and re-enters the WebAssembly code, respectively. We implement this filtering with the built-in hooks of wasmtime. In addition, we disable ASLR on the machine where the variants are executed. This ensures that the placement of the instructions in memory is deterministic. Examples of the traces we collect can be seen in Listing 5 and Listing 6 for memory and instruction traces, respectively.
```
[Write] @ 0x555555ed1570 size=4 value=0x10dd0b
[Read]  @ 0x555555ed1570 size=4 value=0x10dd0b
```
Listing 5: Memory trace with two events from Intel PIN for the execution of a WebAssembly program with wasmtime. Trace events record: the type of the operation (read or write), the memory address, the number of bytes affected, and the value read or written.
```
[I] mov rdx, qword ptr [r11+0x68]
[I] mov dword ptr [rdx+0x64], eax
```
Listing 6: Instruction trace with two events from Intel PIN for the execution of a WebAssembly program with wasmtime. Each event records the corresponding machine code instruction that executes.
In the text below, we outline the metric used to assess how fast Wasm-Mutate can generate variants that provide different execution traces.
**Metric 3**: _Time until different trace: Given an original WebAssembly program \(P\) and its execution trace \(T_{1}\), the time until different trace is defined as the time between the start of the diversification process and the generation of a variant \(V\) with execution trace \(T_{2}\) such that \(T_{1}\neq T_{2}\)._
_Notice that the previously defined metric is instantiated twice, for instruction and memory events._
Referring to Algorithm 2, we quantify the elapsed time between line 6 and line 16 to obtain the time it takes for Wasm-Mutate to generate a unique WebAssembly variant producing different execution traces.
### Protocol for RQ3
To answer RQ3, we apply Wasm-Mutate to the same security-sensitive WebAssembly programs used by Narayan et al. to evaluate Swivel's ability to protect WebAssembly programs against side-channel attacks [36]. The cache timing side-channel attacks are presented in detail in subsection 4.1. The specific binaries and their corresponding attacks are listed in Table 1. We evaluate to what extent Wasm-Mutate can prevent such attacks. In the following, we describe the attacks we replicate and evaluate in order to answer RQ3.
Narayan and colleagues successfully bypass control flow integrity safeguards using speculative code execution, as detailed in [28]. Thus, we use the same three Spectre attacks from Swivel: 1) The Spectre Branch Target Buffer (btb) attack exploits the branch target buffer by predicting the target of an indirect jump, thereby rerouting speculative control flow to an arbitrary target. 2) The Spectre Pattern History Table (pht) attack takes advantage of the pattern history table to anticipate the direction of a conditional branch during the ongoing evaluation of a condition. 3) The Spectre Return Stack Buffer (ret2spec) attack exploits the return stack buffer, which stores the locations of recently executed call instructions, to predict the target of ret instructions. Each attack methodology relies on the extraction of memory bytes from another hosted WebAssembly binary that executes in parallel.
For each of the four WebAssembly binaries introduced in subsection 4.1, we generated a maximum of 1000 random stacked transformations utilizing 100 distinct seeds. This resulted in a total of 100,000 variants for each original WebAssembly binary. We then assess the success rate of attacks across these variants by measuring the bandwidth of the exfiltrated data, that is, the rate of correctly leaked bytes per unit of time. We count the correctly exfiltrated bytes and divide them by the variant program's execution time.
Notice that the bandwidth metric captures not only whether the attacks are successful, but also the degree to which the data exfiltration is hindered. For instance, a variant that continues to exfiltrate secret data but does so over an impractical duration would be deemed hardened. We state the bandwidth metric in the following definition:
**Metric 4**: _Attack bandwidth: Given data \(D=\{b_{0},b_{1},\ldots,b_{C}\}\) being exfiltrated in time \(T\) and \(K=\{k_{1},k_{2},\ldots,k_{N}\}\) the collection of correct data bytes, the bandwidth metric is defined as:_
\[\frac{|\{b_{i}\in D\text{ such that }b_{i}\in K\}|}{T}\]
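As a worked illustration, the Rust sketch below computes Metric 4 under one plausible reading in which the exfiltrated bytes are matched positionally against the ground-truth secret; the function and parameter names are illustrative.

```
// Metric 4 sketch: correctly leaked bytes per unit of time. `leaked` is the
// exfiltrated data D, `secret` holds the correct bytes K (compared
// positionally), and `seconds` is the exfiltration time T.
fn attack_bandwidth(leaked: &[u8], secret: &[u8], seconds: f64) -> f64 {
    let correct = leaked.iter().zip(secret).filter(|(b, k)| b == k).count();
    correct as f64 / seconds
}

fn main() {
    // 3 of 4 bytes leaked correctly in 2 seconds -> 1.5 bytes per second.
    assert_eq!(attack_bandwidth(b"sXcr", b"secr", 2.0), 1.5);
}
```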
## 5 Experimental Results
### To what extent are the program variants generated by Wasm-Mutate statically different from the original programs?
To address RQ1, we utilize Wasm-Mutate to process the original 303 programs from [7]. Wasm-Mutate is set
to generate variants with a timeout of one hour for each individual program. Following this, we assess the sizes of their variant populations as well as their corresponding preservation ratio (Refer to Metric 1 and Metric 2 for more details).
In Figure 5, we show the distribution of the population sizes generated by Wasm-Mutate. Wasm-Mutate successfully diversifies all 303 original programs, yielding a diversification rate of 100%. Within an hour, Wasm-Mutate produces a median of 9500 unique variants per original program. The largest population size observed is 53816, while the smallest is 5716. Several factors contribute to large population sizes.
Wasm-Mutate can diversify functions within WASI libc. Despite the relatively low function count in the original source code, Wasm-Mutate creates thousands of distinct variants in the functions of the incorporated libraries. This improves over methods that can only diversify the original source code processed through the LLVM compilation pipeline [7].
We have observed a significant variation in the population size produced by Wasm-Mutate between different programs, ranging by several thousand variants (from a maximum of 53816 variants to a minimum of 5716 variants). This disparity is attributed to: 1) the non-deterministic nature of Wasm-Mutate and 2) the characteristics of the program. Wasm-Mutate mutates a randomly selected portion of a program. If the selected instruction is determined to be non-deterministic, despite the transformation being semantically equivalent, Wasm-Mutate discards the variant and moves on to another random transformation. For instance, if the instruction targeted for mutation is a function call, Wasm-Mutate proceeds to the next one. This process, in conjunction with the unique characteristics of each program, results in varying population sizes. For example, an input binary with a high number of function calls leads to a greater number of trials and errors, slowing down the generation of variants and thereby resulting in a smaller overall population size for 1 hour of Wasm-Mutate execution.
As stated in subsection 4.2, we also assess static diversification with Metric 2 by calculating the preservation ratio of variant populations. Figure 6 presents the distribution of preservation ratios for the cranelift compiler of wasmtime. We observe a median preservation ratio of 62%. On the one hand, we observe no correlation between population size and preservation ratio; in other words, a larger population size does not necessarily lead to a higher preservation ratio. On the other hand, the phenomenon of non-preserved variants can be explained as follows. Factors such as custom sections are often disregarded by compilers. Similarly, bloated code plays a role in this context: for instance, Wasm-Mutate generates certain variants with unused types or functions, which are then detected and eliminated by cranelift. Yet, note that even for the smallest population size and the lowest preservation percentage, the number of unique machine codes can still encompass thousands of variants.
**Answer to RQ1: Wasm-Mutate generates WebAssembly variants for all the 303 input programs. Within a one-hour diversification budget, Wasm-Mutate synthesizes more than 9000 unique variants per program on average. 62% of the variants remain different after machine-code compilation. Wasm-Mutate is good at producing a large number of WebAssembly program variants.**
### How fast can Wasm-Mutate generate program variants that exhibit different execution traces?
To answer RQ2, we measure how long it takes to generate one variant that exhibits execution traces that are different from the original. In Figure 7, we display a cumulative distribution plot showing the time required for Wasm-Mutate to generate variants with different traces, in blue for machine code instructions and green for memory traces. The X-axis marks time in minutes, and the Y-axis shows the ratio of programs out of 303 for which Wasm-Mutate created a variant within that time. For all original programs, Wasm-Mutate succeeds in generating a variant with traces that differ from the original program, either in machine code instructions or memory accesses, i.e., both cumulative distributions reach 100%. The shortest time to generate a variant with different machine code instruction traces is 0.12 seconds, and for different memory traces, it is 0.06 seconds. In the slowest scenarios, Wasm-Mutate
Figure 5: RQ1: Number of unique WebAssembly programs generated by Wasm-Mutate in 1 hour for each program of the corpus.
Figure 6: RQ1: Distribution of the ratio of wasmtime preserved variants.
takes under 1 minute for different machine code instruction traces and less than 3 minutes for different memory traces. Overall, Wasm-Mutate takes a median of 5.4 seconds and 12.6 seconds to generate variants with different machine code instructions and different memory traces, respectively.
The use of e-graph random traversal is the key factor in such a fast generation process. Once Wasm-Mutate locates a modifiable instruction within the binary and constructs its corresponding e-graph, traversal is virtually instantaneous. However, the time efficiency of variant generation is not consistent across all programs, as illustrated in Figure 7. This variation primarily stems from the varying complexities of the programs under analysis, as previously mentioned in subsection 5.1. Interestingly, Wasm-Mutate may attempt to build e-graphs from instructions that, while not inherently leading to undefined behavior, are part of a data flow graph that could. For example, the data flow graph might depend on a function call. Although transforming instructions with undefined behavior is deactivated by default in Wasm-Mutate to maintain functional equivalence with the original code, the process of attempting to construct such e-graphs can extend the duration of the diversification pass. As a result, Wasm-Mutate may require multiple attempts to successfully create and traverse an e-graph, impacting the rate at which it generates behaviorally distinct variants. This phenomenon is particularly noticeable in original programs that have a high frequency of function calls.
On average, Wasm-Mutate takes three times longer to synthesize unique memory traces than it does to generate different instruction traces (as can be observed in how the green plot of the figure is skewed to the right). The main reason for this difference is the limited set of rewriting rules that specifically focus on memory operations. Wasm-Mutate includes more rules for manipulating code, which increases the odds of generating a variant with diverse machine code instructions. Additionally, the variant creation process halts and restarts with alternative rewriting rules if Wasm-Mutate detects that the selected code for transformation could result in unpredictable behavior.
We have identified four primary factors explaining why execution traces differ overall. First, alterations to the binary layout inherently impact both machine code instruction traces and memory accesses within the program's stack. In particular, Wasm-Mutate creates variants that change the return addresses of functions, leading to divergent execution traces, including those related to memory access. Second, our rewriting rules incorporate artificial global values into WebAssembly binaries. Since these global variables are inherently manipulated via the stack, their accesses inevitably generate divergent memory traces. Third, Wasm-Mutate injects 'phantom' instructions which do not aim to modify the outcome of a transformed function during execution. These intermediate calculations trigger the spill/reload component of the runtime, varying spill and reload operations. In the context of limited physical resources, these operations temporarily store values in memory for later retrieval and use, thus creating unique memory traces. Finally, certain rewriting rules implemented by Wasm-Mutate replicate fragments of code, e.g., performing commutative operations. These code segments may contain memory accesses, and while neither the memory addresses nor their values change, the frequency of these operations does. Overall, these factors drive the diversity of execution traces among the generated variants.
**Answer to RQ2: Wasm-Mutate generates variants with distinct machine code instructions and memory traces for all tested programs. The quickest time for generating a variant with a unique machine code trace is 0.12 seconds, and for divergent memory traces, the fastest generation only lasts 0.06 seconds. On average, the median time required to produce a variant with distinct traces stands at 5.4 seconds for different machine code traces and 16.2 seconds for different memory traces. These metrics indicate that Wasm-Mutate is suitable for fast-moving target defense strategies, capable of generating a new variant in well under a minute [6]. To the best of our knowledge, Wasm-Mutate is the fastest diversification engine for WebAssembly.**
### To what extent does Wasm-Mutate prevent side-channel attacks on WebAssembly programs?
To answer RQ3, we execute Wasm-Mutate on four distinct WebAssembly binaries susceptible to Spectre-related attacks. Each of the four programs is transformed with 100 different seeds and up to 1000 stacked transformations. We assess the resulting impact of the attacks as outlined in subsection 4.4. The analysis encompasses a total of 4x100x1000 binaries, which also includes the original four.
Figure 8 offers a graphical representation of Wasm-Mutate's influence on the Swivel original programs and their attacks. Each plot corresponds to one original WebAssembly binary and the attack it undergoes: btb_breakout,
Figure 7: RQ2: Cumulative distribution for time until different trace. In blue for different machine code instructions, in green for different memory traces. The X-axis marks time in minutes, and the Y-axis shows the ratio of programs from 303 for which Wasm-Mutate created a variant within that time.
btb_leakage, ret2spec, and pht. The Y-axis represents the exfiltration bandwidth (see Metric 4). The bandwidth of the original binary under attack is marked as a blue dashed horizontal line. In each plot, the variants are grouped in clusters of 100 stacked transformations. These are indicated by green dots and lines. The dot signifies the median bandwidth for the cluster, while the line represents the interquartile range of the group's bandwidth.
For btb_breakout and btb_leakage, Wasm-Mutate demonstrates effectiveness, generating variants that leak less information than the original in 78% and 70% of the cases, respectively. For these particular binaries, a significant reduction in exfiltration bandwidth to zero is noted after 200 stacked transformations. This means that with a minimum of 200 stacked transformations, Wasm-Mutate can create variants that are completely resistant to the original attack. For the ret2spec and pht scenarios, the produced variants consistently exhibit lower bandwidth than the original in 76% and 71% of instances, respectively. As depicted in the plots, the exfiltration bandwidth diminishes following the application of at least 100 stacked transformations.
This success is explained by the fact that Wasm-Mutate synthesizes variants that effectively alter memory access patterns. Specifically, it does so by amplifying spill/reload operations, injecting artificial global variables, and changing the frequency of pre-existing memory accesses. These transformations influence the WebAssembly program's memory, causing disruption to cache predictors. As a result, these alterations contribute to a reduction in exfiltration bandwidth.
Furthermore, many attacks rely on a timer component to measure cache access time for memory, and disrupting this component effectively impairs the attack's effectiveness. This strategy of dynamic alteration has also been employed in other scenarios. For instance, to counter potential timing attacks, Firefox randomizes its built-in JavaScript timer [42]. Wasm-Mutate applies the same strategy by interspersing instructions within the timing steps of WebAssembly variants. In Listing 7 and Listing 8, we demonstrate Wasm-Mutate's impact on time measurements. The former illustrates the original time measurement, while the latter presents a variant with Wasm-Mutate-inserted operations amid the timing.
```
;; Code from original btb_breakout ...
(call $readTimer)
(set_local $end_time)
;; ... access to mem
(i64.sub (get_local $end_time) (get_local $start_time))
(set_local $duration)
...
```
Listing 7: Wasm timer used in btb_breakout program.
Wasm-Mutate proves effective against cache access timers because the time measurement of a single instruction, or of a few instructions, is inherently variable. By introducing more instructions, this randomness is amplified, thereby reducing the timer's accuracy.
Furthermore, CPUs have a maximum capacity for the number of instructions they can cache. Wasm-Mutate injects instructions in such a way that the vulnerable instruction may exceed this cacheable instruction limit, meaning that caching becomes disabled. This kind of transformation can be viewed as padding [15]. In Listing 9 and Listing 10, we illustrate the effect of Wasm-Mutate on padding instructions. Listing 9 presents the original code used for training the branch predictor, along with the expected speculated code.
Figure 8: Visual representation of Wasm-Mutate’s impact on Swivel’s original programs. The Y-axis denotes exfiltration bandwidth, with the original binary’s bandwidth under attack highlighted by a blue marker and dashed line. Variants are clustered in groups of 100 stacked transformations, denoted by green dots (median bandwidth) and lines (interquartile bandwidth range). Overall, for all 100000 variants generated out of each original program, 70% have less data leakage bandwidth.
The padding alters the arrangement of the binary code in memory, effectively impeding the attacker's capacity to initiate speculative execution. Even when an attack is launched and the vulnerable code is "speculated", the memory access is not impacted as planned.
In every program, we note that the exfiltration bandwidth tends to be greater than the original when the variants include a small number of transformations. This indicates that, although the transformations generally contribute to the reduction of data leakage, the initial few might not consistently contribute positively towards this objective. We have identified several fundamental reasons, which we discuss below.
Firstly, as emphasized in prior applications of Wasm-Mutate [8], uncontrolled diversification can be counterproductive if a specific objective, such as a cost function, is not established at the beginning of the diversification process. Secondly, while some transformations yield distinct WebAssembly binaries, their compilation produces identical machine code. Transformations that are not preserved undermine the effectiveness of diversification. For example, incorporating random nop operations directly into WebAssembly does not modify the final machine code, as the nop operations are often removed by the compiler. The same phenomenon is observed with transformations to custom sections of WebAssembly binaries. Additionally, it is important to note that transformed code does not always execute, i.e., Wasm-Mutate may generate dead code.
Finally, for ret2spec and pht, both programs are hardened with a reduction of the attack bandwidth, but this does not materialize in a short-term timeframe (low count of stacked transformations). Furthermore, the exfiltration bandwidth is more dispersed for these two programs. Our analysis indicates a correlation between bandwidth reduction and the complexity of the binary subject to diversification. Ret2spec and pht are considerably larger than btb_breakout and btb_leakage: the former two comprise more than 300k instructions, while the latter two include fewer than 800 instructions. Given that Wasm-Mutate applies precise, fine-grained transformations one at a time, the likelihood of impacting critical attack components, such as timing memory accesses, diminishes for larger binaries, particularly when limited to 1,000 transformations. Based on these observations, we believe that a greater number of stacked transformations would further contribute to eventually eliminating the attacks associated with ret2spec and pht.
**Answer to RQ3**: Software diversification is effective at synthesizing WebAssembly binaries that mitigate Spectre-like attacks. Wasm-Mutate generates variants of btb_breakout and btb_leakage that are totally protected against the considered attack. For ret2spec and pht, it generates hardened variants that are more resilient to the attack than the original program: 70% of the diversified variants exhibit a reduced attack effectiveness (reduced data leakage bandwidth) compared to the original program.
## 6 Discussion
**Fuzzing WebAssembly compilers with Wasm-Mutate** In fuzzing campaigns, generating well-formed inputs is a significant challenge [46]. This is particularly true for fuzzing compilers, where the inputs should be executable, yet intricate enough to probe various compiler components. Wasm-Mutate addresses this challenge by generating semantically equivalent variants from an original WebAssembly binary, enhancing the scope and efficiency of the fuzzing process. A practical example occurred in 2021, when this approach led to the discovery of a wasmtime security CVE [18]. Through the creation of semantically equivalent variants, the spill/reload component of cranelift was stressed, resulting in the discovery of the aforementioned CVE.
**Mitigating Port Contention with Wasm-Mutate** Rokicki et al. [39] showed the practicality of a covert side-channel attack using port contention within WebAssembly code in the browser. This attack fundamentally relies on the precise prediction of Wasm instructions that trigger port contention. To combat this security concern, Wasm-Mutate could be conveniently implemented as a browser plugin. Wasm-Mutate has the ability to replace the WebAssembly instructions used as port contention predictor with other instructions. This would inevitably remove the port contention in the specific port used to conduct the attack, hardening browsers against such malicious maneuvers.
## 7 Related Work
Static software diversification refers to the process of synthesizing and distributing unique but functionally equivalent programs to end users. This process can take place at any stage of software development and deployment, from the inception of source code, through the compilation phase, to the execution of the final binary [24, 34]. Wasm-Mutate, a static diversifier, operates at the final stage, keeping in mind that the code will subsequently undergo final compilation by JIT compilers. The concept of software diversification owes much to the pioneering work of Cohen [12], whose suite of code transformations aimed to increase complexity and thereby the difficulty of executing a successful attack against a broad user base. Wasm-Mutate's rewriting rules draw significantly from the seminal contributions of Cohen and Forrest [12, 19].
Jackson and colleagues [24] proposed that the compiler can play a pivotal role in promoting static software diversification. In the context of WebAssembly, CROW leverages compiler technology for diversification. It is a superdiversifier [25] for WebAssembly, built into the LLVM compilation toolchain. However, integrating the diversifier directly into the LLVM compiler restricts the tool's applicability to WebAssembly binaries generated through LLVM. This implies that any WebAssembly source code that lacks an LLVM frontend implementation cannot take advantage of CROW's capabilities. In contrast, Wasm-Mutate provides a more versatile and faster WebAssembly-to-WebAssembly diversification solution, maintaining compatibility with any compiler. Secondly, unlike CROW, Wasm-Mutate does not rely on an SMT solver to validate the generated variants. Instead, it guarantees semantic equivalence by design, resulting in greater efficiency in generating WebAssembly variants, as discussed in subsection 5.1. As a WebAssembly-to-WebAssembly diversification tool, Wasm-Mutate augments the range of tools capable of generating WebAssembly programs, a topic explored comprehensively throughout this work.
The process of diversifying a WebAssembly program can be conceptualized as a three-stage procedure: parsing the program, transforming it, and finally re-encoding it back into WebAssembly. Our review of the literature has revealed several studies that have employed parsing and encoding components for WebAssembly binaries across various domains. This indicates that these works accept a WebAssembly binary as an input and output a unique WebAssembly binary. These domains span optimization [47], control flow [2], and dynamic analysis [31, 43, 2, 3]. When the transformation stage introduces randomized mutations to the original program, the aforementioned tools could potentially be construed as diversifiers. Wasm-Mutate is related to these previous works, as it can serve as an optimizer or a test case reducer due to the incorporation of an e-graph at the heart of its diversification process [44]. To the best of our knowledge, the introduction of an e-graph into Wasm-Mutate marks the first endeavor to integrate an e-graph into a WebAssembly to WebAssembly analysis tool.
BREWasm [10] offers a comprehensive static binary rewriting framework for WebAssembly and can be considered the most similar work to Wasm-Mutate. For instance, it can be used to model a diversification engine. It parses a Wasm binary into objects, rewrites them using fine-grained APIs, integrates these APIs to provide high-level ones, and re-encodes the updated objects back into a valid Wasm binary. The effectiveness and efficiency of BREWasm have been demonstrated through various Wasm applications and case studies on code obfuscation, software testing, program repair, and software optimization. The implementation of BREWasm follows a completely different technical approach. In comparison with our work, the authors pointed out that our tool employs lazy parsing of Wasm. Although they perceived this as a limitation, lazy parsing is a deliberate choice that accelerates the generation of WebAssembly binaries. Additionally, our tool leverages the parser and encoder of wasmtime, a standalone compiler and interpreter for Wasm, thereby boosting its reliability and reducing its error-proneness.
Another work similar to Wasm-Mutate is WASMixer [11]. WASMixer focuses on three code obfuscation methods for WebAssembly binaries: memory access encryption, control flow flattening, and the insertion of opaque predicates. Their strategy is specifically designed for obfuscating Wasm binaries. In contrast, while Wasm-Mutate does not employ memory access encryption or control flow flattening, it can still function effectively as an obfuscator: previous evaluations confirm that Wasm-Mutate has been successful in evading malware detection [8]. On the same topic, Madvex [32] also aims to modify Wasm binaries to achieve malware evasion, but their approach is principally driven by a generic reward function and is largely confined to altering only the code section of a Wasm binary. Wasm-Mutate, however, adopts a more flexible strategy by applying a broader array of transformations, which are not limited to the code section. Consequently, Wasm-Mutate is capable of generating malware variants without negatively affecting either their code or performance.
## 8 Conclusion
Wasm-Mutate is a fast and effective diversification tool for WebAssembly, with a 100% diversification rate across the 303 programs of the considered benchmark. With respect to speed, it creates over 9000 unique variants per hour. The Wasm-Mutate workflow ensures that all final variants offer different and unique execution traces. We have proven that Wasm-Mutate is able to mitigate Spectre attacks in WebAssembly, producing fully protected variants of two versions of the btb attack, and variants of ret2spec and pht that leak less data than the original ones.
In future work, we aim to fine-tune the diversification process, balancing broad diversification with the needs of specific scenarios. In addition, the creation of rewriting rules for Wasm-Mutate is currently a manual task, yet we have identified potential for automation. For instance, Wasm-Mutate could be enhanced through data-driven methods such as rule mining. Furthermore, we have observed that the impact of Wasm-Mutate on ret2spec and pht attacks is considerably smaller than on btb attacks. These attacks exploit the return addresses of executed functions on the program stack. One mitigation would be a multi-variant execution strategy implemented on top of Wasm-Mutate. By offering different execution paths, the return addresses on the stack at each function execution would vary, thereby improving the hardening of binaries against ret2spec attacks.
|
2309.09615 | Bright blazar flares with CTA | The TeV extragalactic sky is dominated by blazars, radio-loud active galactic
nuclei with a relativistic jet pointing towards the Earth. Blazars show
variability that can be quite exceptional both in terms of flux (orders of
magnitude of brightening) and time (down to the minute timescale). This bright
flaring activity contains key information on the physics of particle
acceleration and photon production in the emitting region, as well as the
structure and physical properties of the jet itself. The TeV band is accessed
from the ground by Cherenkov telescopes that image the pair cascade triggered
by the interaction of the gamma ray with the Earth's atmosphere. The Cherenkov
Telescope Array (CTA) represents the upcoming generation of imaging atmospheric
Cherenkov telescopes, with a significantly higher sensitivity and larger energy
coverage with respect to current instruments. It will thus provide us with
unprecedented statistics on blazar light-curves and spectra. In this
contribution we present the results from realistic simulations of CTA
observations of bright blazar flares, taking as input state-of-the-art
numerical simulations of blazar emission models and including all relevant
observational constraints. | M. Cerruti, J. Finke, G. Grolleron, J. P. Lenain, T. Hovatta, M. Joshi, E. Lindfors, P. Morris, M. Petropoulou, P. Romano, S. Vercellone, M. Zacharias | 2023-09-18T09:38:56Z | http://arxiv.org/abs/2309.09615v1 | # Bright blazar flares with CTA
###### Abstract:
The TeV extragalactic sky is dominated by blazars, radio-loud active galactic nuclei with a relativistic jet pointing towards the Earth. Blazars show variability that can be quite exceptional both in terms of flux (orders of magnitude of brightening) and time (down to the minute timescale). This bright flaring activity contains key information on the physics of particle acceleration and photon production in the emitting region, as well as the structure and physical properties of the jet itself. The TeV band is accessed from the ground by Cherenkov telescopes that image the pair cascade triggered by the interaction of the gamma ray with the Earth's atmosphere. The Cherenkov Telescope Array (CTA) represents the upcoming generation of imaging atmospheric Cherenkov telescopes, with a significantly higher sensitivity and larger energy coverage with respect to current instruments. It will thus provide us with unprecedented statistics on blazar light-curves and spectra. In this contribution we present the results from realistic simulations of CTA observations of bright blazar flares, taking as input state-of-the-art numerical simulations of blazar emission models and including all relevant observational constraints.
## 1 Introduction
The current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs), composed of the three arrays MAGIC, H.E.S.S., and VERITAS, has greatly increased our knowledge of the very-high-energy \(\gamma\)-ray (VHE, energies greater than 100 GeV) sky, bringing the number of known VHE sources from a dozen to about 250, in a bit less than twenty years of data taking [10]. The extra-galactic component of the VHE sky is dominated by active galactic nuclei (AGN), i.e. accreting super-massive black holes, of the blazar type. Within the unified AGN model, a blazar is a radio-loud AGN whose relativistic jet points in the direction of the observer. The relativistic boosting of the emission is what makes blazars particularly bright within the AGN population. They are characterized by non-thermal emission over a broad range of wavelengths, from radio up to VHE, a high degree of polarization in radio, optical, and X-rays, and they exhibit remarkable variability both in brightness (with increases spanning orders of magnitude) and in time-scale (down to minute-scale variability). The rapid variability is of particular interest, because time changes in the emission encode important information about the physical properties of the emitting region, the emission processes at work in it, as well as the acceleration processes that are energizing the particles in the jet (leptons or hadrons) [2, 4].
The next generation IACT, the Cherenkov Telescope Array, CTA [5], is currently under construction. It will consist of two arrays, one in the Northern Hemisphere, on the Canary island of La Palma, close to the running MAGIC telescopes, and one in the Southern Hemisphere, at the Paranal Observatory in Chile. In order to maximize the scientific return of the instrument, the CTA Consortium is currently working on simulations of the expected outcomes of the observations. The work presented in this contribution is part of the preparation for the CTA AGN Key Science Project [5]. What is discussed here represents a part of this larger effort, and focuses on the simulation of future CTA observations of blazar flares, concentrating on the study of rapid variability with a particular emphasis on the capability to reconstruct spectral variability. A complementary study (shown in these proceedings by Grolleron et al. [9]) focuses on the long-term variability. The preliminary results of this work have been presented in Cangemi et al. [3].
## 2 Simulations
The first step of the simulation is to input theoretical models that have been developed to describe data from current observatories. In order to be as general as possible, we do not fit existing data, but rather produce theoretical models that can approximately reproduce (in terms of flux and time variability) observed flares. In this contribution, we limit ourselves to two different models that approximately describe the variability observed in the well known VHE blazar Mrk 421. Input models are provided in the form of time-dependent spectral energy distributions, produced over a broad spectral range, from radio to VHE. The next step is then to simulate CTA observations: this is done using the CTAAGNVAR pipeline, which is built upon the official CTA high-level analysis tool, Gammapy [7]. CTAAGNVAR reads the theoretical AGN spectrum as input, and produces a
simulated CTA observation including realistic observational constraints as output. As the zenith angle of the source will vary during an observing period, the software implements source tracking and selects the appropriate instrumental response functions (IRF). Once the CTA simulated spectra are produced, they are then fitted using phenomenological spectral functions in order of increasing complexity (i.e. a simple power-law, a log-parabola, a power-law with exponential cut-off; the more complex model is considered only if it improves the fit), as done by observers on real data. Absorption on the extragalactic background light is included when performing the fit. The best-fit model parameters can then be studied, in order to investigate the capability of CTA to reconstruct the input models and ultimately discriminate among them. In this contribution we focus on specific observational properties: the capability of CTA to reconstruct spectral variability and hysteresis whenever present in the input model. This is a very important feature, already detected in the X-ray band in blazars, predicted in the VHE band by some of the models, but as yet undetected in the VHE band [1]. In the following, we show the results from two different theoretical inputs: a single-zone leptonic model in which the acceleration mechanism is not explicit, and electrons are assumed to be injected with a power-law shape and then cool down as they radiate (in the following, model A) [8]; and a flaring activity triggered by magnetic reconnection (in the following, model B) [6]. The input models are shown in Figure 1.
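The model-selection step just described (fit a power law first, adopt a log-parabola only if it significantly improves the fit) can be sketched as follows. This is an illustrative Python/scipy stand-in, not the CTAAGNVAR/Gammapy implementation; the synthetic spectral points, arbitrary flux units, and the \(\Delta\chi^{2}>9\) (roughly 3\(\sigma\)) acceptance rule are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(E, N0, gamma):
    # dN/dE = N0 * (E / E0)^-gamma, reference energy E0 = 1 TeV
    return N0 * E ** (-gamma)

def log_parabola(E, N0, alpha, beta):
    # dN/dE = N0 * (E / E0)^-(alpha + beta * log(E / E0))
    return N0 * E ** (-(alpha + beta * np.log(E)))

def chi2(model, popt, E, flux, err):
    return float(np.sum(((flux - model(E, *popt)) / err) ** 2))

# Toy spectral points in arbitrary flux units with 10% uncertainties.
rng = np.random.default_rng(0)
E = np.logspace(-1, 1, 15)                    # 0.1-10 TeV
truth = E ** (-(2.2 + 0.15 * np.log(E)))      # intrinsically curved spectrum
err = 0.1 * truth
flux = truth + rng.normal(0.0, err)

p_pl, _ = curve_fit(power_law, E, flux, p0=[1.0, 2.0], sigma=err)
p_lp, _ = curve_fit(log_parabola, E, flux, p0=[1.0, 2.0, 0.1], sigma=err)

chi2_pl = chi2(power_law, p_pl, E, flux, err)
chi2_lp = chi2(log_parabola, p_lp, E, flux, err)
# One extra free parameter: keep the curvature only above ~3 sigma improvement.
best = "log-parabola" if (chi2_pl - chi2_lp) > 9.0 else "power-law"
print(best, round(chi2_pl, 1), round(chi2_lp, 1))
```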
## 3 Results
The results of the simulations are shown in Figures 2 to 4. In Figure 2 we show simulated CTA light-curves (using the CTA North IRFs) for both models: model A represents a fast flare happening during a single observing night, while model B covers a larger data set of approximately two weeks, even though during the brightest nights fast intra-night variability can also be observed.
Figure 1: Theoretical SEDs provided as input for the CTA simulations. Left: model A; Right: model B (see text for details). The color code, from violet to red, represents the elapsed time.
In Figure 3 we show the results of a power-law fit to CTA data, plotted as amplitude vs photon index: these simulations indicate that model A has intrinsic spectral variability that can be detected by CTA; on the other hand, model B shows weaker spectral variability in the CTA data. As an alternative to this visualization, we also produce two hardness-ratio plots, a common display tool in X-ray astronomy: in Figure 4 we show the evolution of the integral flux as a function of the hardness ratio between a high and a low CTA energy band. Here as well we clearly observe the hysteresis cycle in the CTA data for model A.
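For concreteness, a minimal sketch of the hardness-ratio diagnostic follows: per-time-bin integral fluxes in a low and a high band, and the signed area of the flux-vs-HR track as a crude indicator of a hysteresis loop. The toy light curves and the spectral lag below are illustrative inputs, not the simulated CTA data.

```python
import numpy as np

def hardness_track(flux_low, flux_high):
    """Return (total flux, hardness ratio) per time bin."""
    hr = flux_high / flux_low
    return flux_low + flux_high, hr

# toy light curves in the two bands (arbitrary units per time bin)
t = np.linspace(0.0, 1.0, 50)
flux_low = 1.0 + 0.8 * np.sin(2 * np.pi * t)
flux_high = 0.5 + 0.5 * np.sin(2 * np.pi * t - 0.6)   # spectral lag -> loop
total, hr = hardness_track(flux_low, flux_high)

# Shoelace area of the (hr, total) track: a nonzero area signals hysteresis.
area = 0.5 * np.sum(hr * np.roll(total, -1) - np.roll(hr, -1) * total)
print(f"loop area = {area:.3f}")
```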
Figure 3: Differential flux versus best-fit power-law index. Left: model A; Right: model B (see text for details).
Figure 2: Simulated CTA light-curves, expressed as differential flux. Left: model A; Right: model B (see text for details).
## 4 Conclusions
CTA will provide unprecedented sensitivity in the VHE band, giving us access to a much larger statistical sample of blazar flares than current IACTs. In this contribution we have shown two simulated CTA light-curves of bright blazar flares, taking as input two different state-of-the-art numerical models. The preliminary results indicate that CTA might be able to detect, for the first time, hysteresis cycles in the VHE band, if they are indeed produced by the acceleration and radiative processes at work in the jet. This will give us a new observable to further constrain theoretical models. The results presented here are a small subset of the simulations that we are currently performing.
## Acknowledgments
Please see the full CTA acknowledgments at [https://www.cta-observatory.org/consortium_acknowledgments/](https://www.cta-observatory.org/consortium_acknowledgments/)
|
2309.16053 | Diagnosis of Helicobacter pylori using AutoEncoders for the Detection of
Anomalous Staining Patterns in Immunohistochemistry Images | This work addresses the detection of Helicobacter pylori a bacterium
classified since 1994 as class 1 carcinogen to humans. By its highest
specificity and sensitivity, the preferred diagnosis technique is the analysis
of histological images with immunohistochemical staining, a process in which
certain stained antibodies bind to antigens of the biological element of
interest. This analysis is a time demanding task, which is currently done by an
expert pathologist that visually inspects the digitized samples.
We propose to use autoencoders to learn latent patterns of healthy tissue and
detect H. pylori as an anomaly in image staining. Unlike existing
classification approaches, an autoencoder is able to learn patterns in an
unsupervised manner (without the need of image annotations) with high
performance. In particular, our model has an overall 91% accuracy with 86%
sensitivity, 96% specificity and 0.97 AUC in the detection of H. pylori. | Pau Cano, Álvaro Caravaca, Debora Gil, Eva Musulen | 2023-09-27T22:19:15Z | http://arxiv.org/abs/2309.16053v1 | Diagnosis of Helicobacter pylori using AutoEncoders for the Detection of Anomalous Staining Patterns in Immunohistochemistry Images
###### Abstract
This work addresses the detection of Helicobacter pylori, a bacterium classified since 1994 as a class 1 carcinogen to humans. Owing to its superior specificity and sensitivity, the preferred diagnosis technique is the analysis of histological images with immunohistochemical staining, a process in which certain stained antibodies bind to antigens of the biological element of interest. This analysis is a time-demanding task, which is currently done by an expert pathologist who visually inspects the digitized samples.
We propose to use autoencoders to learn latent patterns of healthy tissue and detect _H. pylori_ as an anomaly in image staining. Unlike existing classification approaches, an autoencoder is able to learn patterns in an unsupervised manner (without the need for image annotations) with high performance. In particular, our model achieves an overall 91% accuracy with 86% sensitivity, 96% specificity and 0.97 AUC in the detection of _H. pylori_.
Keywords: digital pathology, helicobacter pylori, anomaly detection, autoencoders.
## 1 Introduction
The bacterium _Helicobacter pylori_ (H. pylori) is the main cause of gastritis, an inflammation of the gastric mucosa that can lead to other serious diseases, such as gastric ulcer and even cancer. Early detection of this bacterium is essential for the effective diagnosis and treatment of these pathologies. In addition, studies show that more than 50% of the world's population has been infected by the bacterium, with a prevalence that exceeds 80% in adults over fifty [5].
The diagnosis of _H. pylori_ is usually made by conventional histology on gastric biopsies using different techniques for staining tissue samples. Usual stainings
include the generic hematoxylin and eosin (H&E) and more specific stains such as Giemsa, Warthin-Starry silver (W-S), Genta or immunohistochemical staining. Among them, the most specific one is immunohistochemical staining [1]. This technique allows the visualization of the bacterium through the staining of specific proteins present in its membrane. In this manner, _H. pylori_ stains with a color different from that of other tissue, which avoids false detection of _H. pylori_ due to other gram-negative bacteria present in the sample. Immunohistochemical staining gives the specific protein of _H. pylori_ a reddish hue, while other tissue remains in a blue hue. Although this facilitates the visual identification of _H. pylori_, a pathologist must carefully inspect the whole immunohistochemistry image in order to identify areas with _H. pylori_. Since the bacteria are only located at the borders of the tissue samples, the pathologist must carefully inspect a zoomed-in area around every point belonging to the border. Given the huge size of the images (120000x16000 pixels) and the fact that several tissue samples can be in the same image, this manual inspection is a highly time-consuming task that becomes harder the lower the concentration of _H. pylori_ is.
Figure 1 shows an immunohistochemical image with presence of _H. pylori_ in a sample and three close-ups of tissue border regions with different densities (negative, low and high) of the bacteria in the window images shown on the right side of the figure. While the window with a high presence of _H. pylori_ is easily identified, the window with low density needs a more careful inspection in order to detect the reddish spots of _H. pylori_ and avoid confusion with other artifacts that can be present in the sample.
Figure 1: Left: Histology sample with immunohistochemical staining. Right: 3 windows of the same histological sample showing different levels of _Helicobacter pylori_ density
Due to the recent digitalization of histopathological images, there is a lack of artificial intelligence methods for their analysis. In this work, we propose a method to automatically analyze an image of a histological sample of gastric tissue that has been immunohistochemically stained for the detection of _H. pylori_.
### State-of-the-Art
Although Deep Learning (DL) and other Artificial Intelligence models have demonstrated good performance on several histopathologic tasks [4], there are not many works addressing the detection of _H. pylori_. Existing works [3, 2, 6] are DL methods based on convolutional neural networks for the classification of cropped images extracted from tissue samples into _H. pylori_ positive and negative samples.
In [2] the authors trained a compact VGG-style architecture on both Giemsa and H&E slides. The trained network was used to highlight regions of _H. pylori_ presence and tested as a decision support system for pathologists. The network was able to classify Giemsa-stained samples with a sensitivity of 1 but a low specificity of 0.66. In [3] the authors also use a model similar to [2], but trained on silver-staining samples. The performance was also tested on cropped patches, achieving a sensitivity and specificity of, respectively, 0.89 and 0.87, at the cost of a significant number of false positives, with only 77% precision in the detection of patches with _H. pylori_.
In [6], the authors proposed an ensemble model of the output probabilities of 3 ResNet-18 and 3 DenseNet-21 models trained on patches cropped from H&E-stained Whole-Slide Images (WSI). Patch-level probabilities were aggregated into WSI-level probabilities by averaging the top 10 patch-level probabilities from each section. The ensemble achieved a sensitivity of 0.87, a specificity of 0.92 and an F1-score of 0.89 for the diagnosis of WSI. The model was also tested as a DL support system for pathologists, improving their accuracy and performance when diagnosing _H. pylori_ positive samples, but resulting in higher uncertainty when diagnosing _H. pylori_ negative samples.
As far as we know, there are no works addressing the diagnosis of immunohistochemically stained WSI. One of the main challenges for the use of classification approaches for the identification of _H. pylori_ in histological images is the collection of enough annotated data, since this implies a time-consuming visual inspection and identification of patches containing the bacteria.
In this work we pose the detection of _H. pylori_ as the detection of anomalies in the staining of tissue by means of an autoencoder able to learn patterns of non-infected tissue without the need for annotated data. An autoencoder is a type of neural network with an encoder-decoder architecture that learns a latent representation space of the input data. The encoder transforms the input data into a lower-dimensional representation called the latent code, while the decoder reconstructs the original data from this latent space. The latent space is learned by minimizing the reconstruction mean square error between the original input image and the one reconstructed by the decoder and, thus, it can be trained in an unsupervised fashion.
By training the autoencoder with patches (windows) extracted from patients without _H. pylori_, the latent space becomes a representation of non-infected tissue and, thus, windows with the presence of _H. pylori_ are poorly reconstructed. A function of this reconstruction error in HSV color space allows the detection of windows with _H. pylori_, and a final diagnosis of the WSI is obtained by aggregating the per-window decisions along the tissue borders.
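As a concrete preview of the architecture detailed in Sec. 2 (three convolutional blocks with kernel size 3, channels [32, 64, 64], strides [1, 2, 2], batch normalization and LeakyReLU), a minimal PyTorch sketch is given below. The decoder is not specified in the text, so a mirrored transposed-convolution decoder is assumed; input windows of 28x28 RGB pixels follow the resizing reported in Sec. 3.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        def block(cin, cout, stride):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, stride=stride, padding=1),
                nn.BatchNorm2d(cout),
                nn.LeakyReLU(),
            )
        self.encoder = nn.Sequential(block(3, 32, 1),   # 28x28
                                     block(32, 64, 2),  # 14x14
                                     block(64, 64, 2))  # 7x7 latent code
        self.decoder = nn.Sequential(                   # assumed mirror
            nn.ConvTranspose2d(64, 64, 3, stride=2, padding=1, output_padding=1),
            nn.LeakyReLU(),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.LeakyReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
            nn.Sigmoid(),                               # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Unsupervised training on healthy windows only: minimize reconstruction MSE.
model, loss_fn = ConvAE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.rand(8, 3, 28, 28)     # stand-in for a batch of healthy windows
opt.zero_grad()
loss = loss_fn(model(batch), batch)
loss.backward()
opt.step()
```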
## 2 Detection of _H. pylori_ using Autoencoders
Our method has the following steps (sketched in Figure 2): detection of areas of interest in the image, detection of anomalously stained elements in each region of interest, and aggregation over the regions of interest in the image for the diagnosis of the sample. Since _H. pylori_ is located along the border, first a series of contour detections around an automatically detected mask is used to detect the borders of the tissue sample. Patches are defined by sliding windows of size 256x256 pixels cropped along pixels belonging to such borders. This set of windows is the input to the autoencoder for their classification into positive (there is _H. pylori_ presence in the window) or negative (there is no _H. pylori_ in the window) cases, using a metric based on the loss of red-like pixels in reconstructions. Finally, the percentage of positive windows defines a probability for the final classification of the sample.
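A possible implementation of the window-extraction step is sketched below, assuming an Otsu-thresholded tissue mask and a fixed sampling stride along the detected contours; both are illustrative choices rather than the exact procedure used here.

```python
import cv2
import numpy as np

def border_windows(rgb, win=256, stride=128):
    """Crop win x win patches centered on subsampled tissue-border pixels."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    # tissue is darker than the bright background of the scanned slide
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    windows = []
    for cnt in contours:
        pts = cnt[::stride, 0, :]           # subsample border pixels (x, y)
        for (x, y) in pts:
            x0, y0 = x - win // 2, y - win // 2
            if (0 <= x0 and 0 <= y0 and
                    x0 + win <= rgb.shape[1] and y0 + win <= rgb.shape[0]):
                windows.append(rgb[y0:y0 + win, x0:x0 + win])
    return windows
```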
The autoencoder is trained with windows extracted from patients without _H. pylori_ in order to learn a representation space of normality (non-infected tissue). The proposed autoencoder has 3 convolutional blocks, each consisting of one convolutional layer, batch normalization and LeakyReLU activation. The size of the convolutional kernel is 3, and the numbers of neurons and strides of the layers are, respectively, [32, 64, 64] and [1, 2, 2]. Figure 3 shows the difference in the reconstructions of a non-infected window, fig.3(a), and a window with _H. pylori_, fig.3(b). The reconstruction of the healthy window looks like the original input image, while the autoencoder has modified the coloration of the tissue in the reconstruction of the
Figure 2: Schema of the main steps in the detection of H. pylori
window with _H. pylori_. In particular, the reconstruction shifts the tissue color toward blue hues and loses the red-like areas associated with the presence of _H. pylori_. We use this difference in reconstructions to detect the presence of _H. pylori_ as follows.
The presence of _H. pylori_ in a window is computed using the fraction of red-like pixels, labelled \(F_{red}\), lost between the original and reconstructed images. If \(F_{red}>1\), it indicates a loss of red-like pixels, and the window is labelled as having _H. pylori_. The red-like pixels are computed by applying a filter in HSV color space. In this color space, pixels with presence of _H. pylori_ have a hue in the range \([-20,20]\) degrees and, thus, the area of red-like pixels is given by the number of pixels with hue in \([-20,20]\).
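One plausible implementation of the \(F_{red}\) score follows: count the hue values within \([-20,20]\) degrees in the original and reconstructed windows and take their ratio (OpenCV stores hue as degrees/2, so the interval maps to \(H\leq 10\) or \(H\geq 170\)). The small epsilon guarding the denominator is our addition.

```python
import cv2
import numpy as np

def red_area(rgb):
    """Number of red-like pixels: hue within [-20, 20] degrees."""
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    h = hsv[..., 0].astype(int)          # OpenCV hue in [0, 180)
    return int(np.count_nonzero((h <= 10) | (h >= 170)))

def f_red(original, reconstruction, eps=1e-6):
    return red_area(original) / (red_area(reconstruction) + eps)

# A window is flagged as H. pylori-positive when the reconstruction loses red:
# is_positive = f_red(window, model_output) > 1.0
```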
The percentage of patches in a histological image with \(F_{red}>1\) defines the probability of presence of _H. pylori_ in the sample. The optimal threshold on this probability is obtained from the ROC curve as the probability of the point closest to \((0,1)\). Samples with a percentage of positive patches above this threshold are diagnosed as _H. pylori_ positive.
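The slide-level decision can then be sketched with scikit-learn, selecting the ROC point closest to \((0,1)\) on each training fold; the score and label arrays below are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(y_true, scores):
    fpr, tpr, thr = roc_curve(y_true, scores)
    i = np.argmin(fpr ** 2 + (1.0 - tpr) ** 2)   # closest point to (0, 1)
    return thr[i]

# scores[s] = fraction of positive windows in slide s; y_true = pathologist label
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 100)
scores = np.clip(0.3 * y_true + rng.normal(0.05, 0.05, 100), 0.0, 1.0)
tau = optimal_threshold(y_true, scores)
diagnosis = scores > tau                          # slide-level H. pylori call
```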
## 3 Experiments
Our method was tested on our own database from the Department of Pathology of the Hospital Universitari General de Catalunya-Grupo Quironsalud. The database consisted of 245 gastric biopsies scored by a pathologist according to _H. pylori_ density as NEGATIVE (Healthy), LOW DENSITY and HIGH DENSITY. Of the 245 patients included in the study, 117 (47.8% of the total) are classified as NEGATIVE, while 128 are classified as POSITIVE (LOW and HIGH DENSITY) with presence of _H. pylori_.
Biopsies from the Department of Pathology of the Hospital Universitari General de Catalunya-Grupo Quironsalud of antral or body gastric mucosa were used. Formalin-fixed, paraffin-embedded tissue sections were analyzed using standard IHC techniques: immunostaining was performed automatically using a Ventana BenchMark ULTRA machine (_Roche, Basel, Switzerland_) using the monoclonal primary antibody anti-Hp (_clone SP48, Ventana Medical Systems,
Figure 3: Reconstructions of a healthy, (a), and an infected, (b), window. For each subfigure, the left image is the original input and the right image is the reconstruction.
Inc., 1910 E. Innovation Park Drive, Tucson, Arizona 85755 USA_). An external positive control was included on each slide. All stained slides were scanned with an Ultra-Fast 180 slide scanner provided by Philips (_Philips IntelliSite Pathology Solution_) to obtain WSI.
Each image has 3 WSI containing several tissue samples each, of which two are used for the pathological diagnosis, and the third one is a quality control slide. We have used the first diagnostic slide of the healthy cases to train the autoencoder and the second one of all patients to test the performance of the system in the diagnosis of _H. pylori_. For each healthy patient, 50 windows were randomly cropped from tissue borders of the first sample slide, which gives a total of 5850 windows for training the models. For the sake of higher computational speed, windows were resized from \(224\times 224\) pixels to \(28\times 28\).
The performance metrics we have considered are the precision, recall and F1-score for each diagnostic class (positive _H. pylori_ or negative _H. pylori_). In order to allow for statistical assessment of the performance, the test set was split into 10 folds stratified by patient. For each fold, the optimal cutting point of the ROC curve was calculated from the training fold and tested on the independent set of patients.
Table 1 reports statistical summaries (average \(\pm\) standard deviation) of the quality metrics. The proposed system has an optimal average specificity of 0.96 with a good average sensitivity of 0.86, which yields an F1-score of 0.91 and an accuracy of 0.91 for the detection of _H. pylori_. Compared to existing methods using other stainings, we achieve a higher specificity with similar sensitivity. Table 2 shows the confusion matrix of the samples' diagnosis. Of 245 patients, only 23 have been incorrectly classified.
| | negative _H. pylori_ | positive _H. pylori_ | Average |
| --- | --- | --- | --- |
| Precision | 0.86 \(\pm\) 0.1 | 0.96 \(\pm\) 0.07 | 0.91 |
| Recall | 0.96 \(\pm\) 0.09 | 0.86 \(\pm\) 0.13 | 0.91 |
| F1-score | 0.91 \(\pm\) 0.06 | 0.90 \(\pm\) 0.07 | 0.91 |

Table 1: Statistical summary of the 10-fold validation
| Predicted \ Ground truth | _H. pylori_ | No _H. pylori_ |
| --- | --- | --- |
| _H. pylori_ | 110 (TP) | 18 (FP) |
| No _H. pylori_ | 5 (FN) | 112 (TN) |

Table 2: Confusion matrix of the samples' ground-truth diagnosis and the diagnosis predicted by the autoencoder
Figure 4 shows the ROC curve averaged over the 10 folds, with the point defining the optimal threshold highlighted in red. The stability of this cutting point across folds is noticeable, with a variability of only 0.10% around the threshold value (6.18% \(\pm\) 0.10%) of the probability of _H. pylori_. Additionally, the ROC curves have an average AUC of 0.961, which is superior to the values achieved by the other systems mentioned in Section 1.1.
Finally, Figure 5 shows boxplots of the percentage of positive windows detected by the autoencoder for POSITIVE and NEGATIVE diagnoses. There is a substantial difference between the two distributions. In particular, for NEGATIVE cases, the percentage of windows detected as positive is in most cases under 5% with only some outliers, which explains the high specificity of our approach.
## 4 Conclusions
We have presented a first DL system for the diagnosis of _H. pylori_ on immunohistochemically stained samples based on autoencoders trained to obtain a normality pattern from non-infected samples. Autoencoders are able to detect _H.
Figure 4: ROC curve averaged for the 10 folds
_pylori_ as an anomaly in staining in a self-learning approach that does not require annotation of image patches. This is a main advantage over existing classification approaches working with other kinds of staining and yields higher specificity (0.96 vs 0.92) with similar sensitivity, which is a clinical requirement to avoid unnecessary treatments.
Additionally, slightly modifying the threshold that separates _H. pylori_ positive from _H. pylori_ negative cases, based on the percentage of windows in which the bacterium is detected, would allow for increased precision without greatly affecting the recall of the system, or vice versa.
|
2302.14536 | On the Road to 6G: Visions, Requirements, Key Technologies and Testbeds | Fifth generation (5G) mobile communication systems have entered the stage of
commercial development, providing users with new services and improved user
experiences as well as offering a host of novel opportunities to various
industries. However, 5G still faces many challenges. To address these
challenges, international industrial, academic, and standards organizations
have commenced research on sixth generation (6G) wireless communication
systems. A series of white papers and survey papers have been published, which
aim to define 6G in terms of requirements, application scenarios, key
technologies, etc. Although ITU-R has been working on the 6G vision and it is
expected to reach a consensus on what 6G will be by mid-2023, the related
global discussions are still wide open and the existing literature has
identified numerous open issues. This paper first provides a comprehensive
portrayal of the 6G vision, technical requirements, and application scenarios,
covering the current common understanding of 6G. Then, a critical appraisal of
the 6G network architecture and key technologies is presented. Furthermore,
existing testbeds and advanced 6G verification platforms are detailed for the
first time. In addition, future research directions and open challenges are
identified for stimulating the on-going global debate. Finally, lessons learned
to date concerning 6G networks are discussed. | Cheng-Xiang Wang, Xiaohu You, Xiqi Gao, Xiuming Zhu, Zixin Li, Chuan Zhang, Haiming Wang, Yongming Huang, Yunfei Chen, Harald Haas, John S. Thompson, Erik G. Larsson, Marco Di Renzo, Wen Tong, Peiying Zhu, Xuemin, Shen, H. Vincent Poor, Lajos Hanzo | 2023-02-28T12:47:29Z | http://arxiv.org/abs/2302.14536v1 | # On the Road to 6G:
###### Abstract
Fifth generation (5G) mobile communication systems have entered the stage of commercial development, providing users with new services and improved user experiences as well as offering a host of novel opportunities to various industries. However, 5G still faces many challenges. To address these challenges, international industrial, academic, and standards organizations have commenced research on sixth generation (6G) wireless communication systems. A series of white papers and survey papers have been published, which aim to define 6G in terms of requirements, application scenarios, key technologies, etc. Although ITU-R has been working on the 6G vision and it is expected to reach a consensus on what 6G will be by mid-2023, the related global discussions are still wide open and the existing literature has identified numerous open issues. This paper first provides a comprehensive portrayal of the 6G vision, technical requirements, and application scenarios, covering the current common understanding of 6G. Then, a critical appraisal of the 6G network architecture and key technologies is presented. Furthermore, existing testbeds and advanced 6G verification platforms are detailed for the first time. In addition, future research directions and open challenges are identified for stimulating the on-going global debate. Finally, lessons learned to date concerning 6G networks are discussed.

This work was supported by the National Key R&D Program of China under Grant 2018YFB1801101, the National Natural Science Foundation of China (NSFC) under grants 619692000606 and 621202020, the Key Technologies R&D Program of Jiangsu (Prospective and Key Technologies for Industry) under Grants BE202067, BE2022067-1, and BE2022067-5, the EU H2020 RISE TESTBED project under Grant 872172, the EU H2020 ARIADNE project under Grant 871464, the EU H2020 RISE-6G project under Grant 101017011, the US National Science Foundation under Grants CCF-1908308 and CNS-2128448, the Engineering and Physical Sciences Research Council project under Grants EP/W016605/1 and EP/C0122827/1, and the European Research Council's Advanced Fellow Grant Quantum Computer Grant 789028. Thanks are also extended to Xichen Mao, Yinglan Bu, Wenke Ji, Zhao Zhou, Yue Yang, Lijian Xin, Hengeta Chang, and Duxotian Huang, who have provided valuable assistance and advice during this work.
C.-X. Wang (corresponding author), X. H. You (corresponding author), X. Q. Gao, X. M. Zhu, Z. X. Li, C. Zhang, and Y. M. Huang are with the National Mobile Communications Research Laboratory, School of Information Science and Engineering, Southeast University, Nanjing 2110096, China, and also with the Purple Mountain Laboratories, Nanjing 211111, China (email: {chwang, xhya, xqgao, xm_zhu, lixixin, chzhang, huangang}@seu.edu.cn).
H. M. Wang is with the School of Information Science and Engineering and the State Key Laboratory of Millimeter Waves, Southeast University, Nanjing 210096, China, and also with the Pervasive Communication Research Center, Purple Mountain Laboratories, Nanjing 211111, China (email: [email protected]). Y. F. Chen is with the School of Engineering, the University of Warwick, Coventry CV4 7AL, U.K. (e-mail: [email protected]).
H. Haas is with the LiFi Research and Development Center, Department of Electronic and Electrical Engineering, University of Strathclyde, Glasgow G1 1XQ, U.K. (e-mail: [email protected]).
J. S. Thompson is with the Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh EH9 3JL, U.K. (e-mail: [email protected]).
E. G. Larsson is with with the Department of Electrical Engineering (ISY), Linkoping University, 581 83 Linkoping, Sweden (e-mail: [email protected]).
M. Di Renzo is with Universite Paris-Saclay, CNRS, CentraleSupelec, Laboratoire des Signaux et Systemes, 3 Rue Joliot-Curie, 91192 Gif-sur-Yvette, France. ([email protected])
W. Tong is with the Wireless Advanced System and Competency Centre, HUAWEI Technologies Co., Ltd., Ottawa, ON K2K 3J1, Canada (e-mail: [email protected]).
P. Y. Zhu is with HUAWEI Technologies Canada Co. Ltd., Ottawa, ON K2K 3J1, Canada (e-mail: [email protected]).
X. Shen is with the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada (e-mail: [email protected]).
H. V. Poor is with the Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544, USA (e-mail: [email protected]).
L. Hanzo is with the School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, U.K. (e-mail: [email protected])
## I Introduction
With the rapid development of communication applications, communication technologies are undergoing revolutionary changes generation after generation. Up till now, the development of cellular mobile communication systems has undergone five generations. From the first generation (1G) analog communication systems to fifth generation (5G) digital communication systems, each generation incorporates higher frequencies, larger bandwidths, and higher data rates. Starting from 2019, 5G has been officially commercialized, employing sub-6 GHz and millimeter wave (mmWave) bands, with a peak rate of 20 Gbps. From the architecture's perspective, mobile communication systems have been evolving towards more antennas, more advanced multiple access technologies, and richer services, as shown in Fig. 1. The 5G base stations exploit massive multiple-input multiple-output (MIMO) [1], mmWave, and ultra-dense networking (UDN) technologies [2], supporting up to 64 transceiver chains with more antenna elements. Currently, commercial 5G base station products |
2309.09173 | First-order Quantum Phase Transitions and Localization in the 2D Haldane
Model with Non-Hermitian Quasicrystal Boundaries | The non-Hermitian extensions of quasicrystals (QC) are highly tunable systems
for exploring novel material phases. While extended-localized phase transitions
have been observed in one dimension, quantum phase transitions in higher
dimensions and various system sizes remain unexplored. Here, we show the
discovery of a new critical phase and imaginary zeros induced first-order
quantum phase transition within the two-dimensional (2D) Haldane model with a
quasicrystal potential on the upper boundary. Initially, we illustrate a phase
diagram that evolves with the amplitude and phase of the quasiperiodic
potential, which is divided into three distinct phases by two critical
boundaries: phase (I) with extended wave functions, a critical phase (II) with
multifractal wave functions, and a PT-restored phase (III) with localized wave
functions. To describe the wavefunctions in these distinct phases, we introduce
a low-energy approximation theory and an effective two-chain model.
Additionally, we uncover a first-order structural phase transition (FOSPT)
induced by imaginary zeros. As we increase the size of the potential boundary,
we observe the critical phase splitting into regions in proportion to the
growing number of potential zeros. Importantly, these observations are
consistent with groundstate fidelity and energy gap calculations. Our research
enhances the comprehension of phase diagrams associated with high-dimensional
quasicrystal potentials, offering valuable contributions to the exploration of
unique phases and quantum phase transition. | Xianqi Tong, Su-Peng Kou | 2023-09-17T06:02:28Z | http://arxiv.org/abs/2309.09173v1 | First-order Quantum Phase Transitions and Localization in the 2D Haldane Model with Non-Hermitian Quasicrystal Boundaries
###### Abstract
The non-Hermitian extensions of quasicrystals (QC) are highly tunable systems for exploring novel material phases. While extended-localized phase transitions have been observed in one dimension, quantum phase transitions in higher dimensions and for various system sizes remain unexplored. Here, we report the discovery of a new critical phase and an imaginary-zeros-induced first-order quantum phase transition within the two-dimensional (2D) Haldane model with a quasicrystal potential on the upper boundary. First, we illustrate a phase diagram that evolves with the amplitude and phase of the quasiperiodic potential and is divided into three distinct phases by two critical boundaries: phase (I) with extended wave functions, a critical phase (II) with multifractal wave functions, and a PT-restored phase (III) with localized wave functions. To describe the wave functions in these distinct phases, we introduce a low-energy approximation theory and an effective two-chain model. Additionally, we uncover a first-order structural phase transition (FOSPT) induced by imaginary zeros. As we increase the size of the potential boundary, we observe the critical phase splitting into regions in proportion to the growing number of potential zeros. Importantly, these observations are consistent with ground-state fidelity and energy gap calculations. Our research enhances the comprehension of phase diagrams associated with high-dimensional quasicrystal potentials, offering valuable contributions to the exploration of unique phases and quantum phase transitions.
## I Introduction
The exploration of open systems, characterized by non-Hermitian quantum systems, has unraveled intriguing phenomena absent in their Hermitian counterparts [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. Notable examples include PT symmetry and exceptional points [4; 5; 6; 7; 8; 9; 10; 11], non-Bloch bulk-boundary correspondence [12; 11], and non-Hermitian skin effects [12; 14; 15; 16; 17; 18; 19; 20; 21; 22]. Many of these phenomena are related to parity-time (PT) symmetric Hamiltonians. These Hamiltonians typically exhibit two phases as parameters vary: the PT-symmetric phase with real eigenvalues and the PT-breaking phase with complex eigenvalues [2; 3; 4; 5]. These phenomena have been experimentally observed in open systems [23; 24; 25; 26; 27; 28; 29; 30], with promising applications in precision measurements, nonreciprocal quantum devices, and topological transport. The higher-order non-trivial interplay between the non-Hermitian skin effect and the topological effect has led to the concept of a hybrid skin-topological effect [31; 32; 33; 34; 35; 36].
Quasicrystals (QC) in closed quantum systems exhibit a plethora of fascinating properties [37; 38; 39; 40; 41; 42; 43; 44; 45]. For instance, in the one-dimensional (1D) Aubry-Andre-Harper (AAH) model, the introduction of finite quasiperiodic strength leads to a transition from a metallic (extended) state to an Anderson insulator (localized) [46; 47; 48; 49]. Critical phases are vital for understanding the transitions from localized to extended states, showing a range of intriguing phenomena including dynamical evolutions [50; 51; 52], critical spectral behavior [53; 54; 55; 56], and the multifractal nature of wave functions [57; 58; 59; 60]. Expanding upon the AAH model, variations incorporating different forms of quasiperiodic disorder and interactions give rise to exotic phases, including critically localized states [61; 62; 63] and many-body localization [64; 65; 66]. Recent research on non-Hermitian extensions of the one-dimensional AAH model has uncovered such a multicritical point marking the transition from localized to extended states, accompanied by PT symmetry breaking and topological phase transitions [67; 68; 69]. However, the interplay between two-dimensional (2D) chiral topological edge modes and a non-Hermitian quasicrystal dissipative edge has not been previously explored.
In this context, our study not only reveals a complex phase diagram, but also establishes a profound relationship between the size of the non-Hermitian quasicrystal, the presence of imaginary zeros, and the increasing occurrence of first-order structural phase transitions. In the phase diagram, there are three distinct phases separated by two phase boundaries: the extended phase (I), the critical phase (II), and the localized phase (III), as illustrated in Fig. 1. We have also discovered an increasing number of phase transitions (NPT) with the enlargement of the non-Hermitian quasicrystal size. We explain these first-order structural phase transitions (FOSPTs) within the picture of phase splitting driven by imaginary zeros. Firstly, the quasi-periodically modulated potential contains certain points where the potential becomes zero. As the system parameters increase, a FOSPT occurs [70; 71]. Secondly, as the size of the system grows, the number of points with zero quasiperiodic imaginary potential increases. These zero points divide the imaginary potential into distinct domains, each having different positions for undergoing phase transitions as the parameters vary. Consequently, the NPT increases with the size of the system. We have found that the NPT is equal to the number of zero points, which also matches the count of non-Hermitian domains. This phenomenon is unique to non-Hermitian systems and is absent in their Hermitian counterparts.
In Sec. II, we establish the phase diagram by the inverse
participation ratio and provide an interpretation in terms of an effective low-energy non-Hermitian model. In Sec. III, we explore the relationship between the first-order structural phase transition and the dimensions of the system. Section IV is devoted to our conclusion.
## II Model and phase diagram
The foundation of our study lies in the Hamiltonian, which exhibits different forms under varying conditions:
\[H=\begin{cases}H_{\text{AAH}},&L_{y}=1,\\ \text{two-chain model},&L_{y}=2,\\ H_{\text{Haldane}}+H_{\text{edge}}^{\text{AAH}},&L_{y}\to\infty,\end{cases} \tag{1}\]
with the two special longitudinal dimensions \(L_{y}=1,2\) treated as separate cases. When \(L_{y}=1\), the system reduces to the 1D non-Hermitian AAH model, which plays a key role in this paper,
\[H_{\text{edge}}^{\text{AAH}}=\sum_{n}V\cos(2\pi\alpha n+ih)c_{n}^{\dagger}c_{n}, \tag{2}\]
where \(c_{n}^{\dagger}\) and \(c_{n}\) are the creation and annihilation operators for a particle at the \(n\)-th site. \(V\) and \(h\) are the amplitude and imaginary phase of the potential, and \(\alpha\) is an irrational number for a QC. Throughout this paper, we set \(V=1\) as the energy unit.
Since \(\alpha\) is an irrational number, it can be approximated by a sequence of rational numbers \(p_{n}/q_{n}\), where \(p_{n}\) and \(q_{n}\) are coprime integers and \(p_{n}\), \(q_{n}\to\infty\) as \(n\to\infty\). In numerical simulations, it is common practice to consider a finite (yet arbitrarily large) number of sites \(L=q_{n}\) on a ring with periodic boundary conditions, where the occupation amplitudes satisfy \(\psi_{n+L}=\psi_{n}\).
When \(L_{y}=2\), the system is still quasi-one-dimensional: a two-chain system with a 1D AAH chain coupled to a hopping-only chain [72, 56]. Here we focus on the non-Hermitian Haldane model with the quasiperiodic complex potential \(H_{\text{edge}}^{\text{AAH}}\) on the upper boundary [Fig. 1(a)], whose height and circumference are \(L_{y}\) and \(L_{x}\). In the limit \(L_{y}\to\infty\), the Hamiltonian is \(H=H_{\text{Haldane}}+H_{\text{edge}}^{\text{AAH}}\), where the Haldane model is an important model for describing topological insulators [73, 74],
\[H_{\text{Haldane}}=t_{1}\sum_{\langle nm\rangle}c_{n}^{\dagger}c_{m}+t_{2}\sum _{\langle\langle nm\rangle\rangle}e^{i\phi_{nm}}c_{n}^{\dagger}c_{m}, \tag{3}\]
where the nearest-neighbor (NN) couplings are denoted by \(t_{1}=1\), and the next-nearest-neighbor (NNN) coupling coefficients are \(t_{2}e^{i\phi_{nm}}\), with amplitude \(t_{2}\) and phase \(\phi_{nm}\). The symbols \(\langle n,m\rangle\) and \(\langle\langle n,m\rangle\rangle\) denote the NN and NNN hopping, shown in Fig. 1(a) as black solid and black dotted lines, respectively. The complex phase \(e^{i\phi_{nm}}\) accounts for the NNN hopping, and we set the positive phase direction to be clockwise (\(\phi_{nm}=\frac{\pi}{2}\)). Below we impose a periodic boundary condition (PBC) in the x direction and an open boundary condition (OBC) in the y direction, i.e., a cylindrical geometry.
Our focus is on revealing the critical phase emerging at the non-Hermitian quasiperiodic boundary, which offers valuable insights into boundary effects in dissipative systems. To determine the phase diagram of the Hamiltonian (1) in the \(L_{y}\to\infty\) limit, we compute the inverse participation ratio (IPR)
\[\text{IPR}_{n}=\frac{\sum_{m}\left|\psi_{n,m}^{R}\right|^{4}}{\left(\sum_{m}\left|\psi_{n,m}^{R}\right|^{2}\right)^{2}}, \tag{4}\]
as a function of \(V\) and \(h\), shown in Fig. 1(b). Here, \(\psi_{n,m}^{R}\) is the amplitude on site \(m\) of the right eigenstate of \(H\) corresponding to the energy eigenvalue \(E_{n}\), and \(m=1,\dots,2L_{x}\). Specifically, when \(n\) corresponds to the ground state (denoted as \(g\)), \(|\psi_{g}\rangle\) represents the ground state, and \(\text{IPR}_{g}\) quantifies the localization of the ground state. Phase (I) with delocalized states has \(\text{IPR}_{g}\simeq 1/L\simeq 0\); the PT-restored phase (III) with fully localized states, on the other hand, has \(\text{IPR}_{g}\simeq 1\); and the critical phase (II), whose fractal states fall in between, has \(\text{IPR}_{g}\) values ranging from 0 to 1, indicating an intermediate level of localization, as shown in Fig. 1(b).
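As a numerical illustration of the \(L_{y}=1\) limit, the sketch below builds the 1D non-Hermitian AAH chain of Eq. (2) with an assumed nearest-neighbour hopping \(t=1\) under PBC, approximates \(\alpha\) by a ratio of Fibonacci numbers, and evaluates the IPR of Eq. (4) on the right eigenvectors; the chain length and the three values of \(h\) mirror the cuts of Figs. 1(c)-(e).

```python
import numpy as np

F = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
L, alpha = F[-1], F[-2] / F[-1]            # L = 144, alpha ~ inverse golden ratio

def aah_ipr(V, h, t=1.0):
    """IPR of every right eigenstate of the non-Hermitian AAH chain."""
    n = np.arange(L)
    H = np.diag(V * np.cos(2 * np.pi * alpha * n + 1j * h))
    H = H + np.diag(t * np.ones(L - 1), 1) + np.diag(t * np.ones(L - 1), -1)
    H[0, -1] = H[-1, 0] = t                # periodic boundary condition
    _, psi = np.linalg.eig(H)              # columns are right eigenvectors
    w = np.abs(psi) ** 2
    return np.sum(w ** 2, axis=0) / np.sum(w, axis=0) ** 2

for h in (0.3, 1.3, 2.3):                  # extended / critical / localized cuts
    print(h, aah_ipr(V=1.0, h=h).mean())
```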
In contrast to the one-dimensional case, where only a transition from the extended phase to the localized phase is observed [67, 68, 69], and distinct from the mobility edge resulting from the coupling of two chains [60], our scenario gives rise to a unique critical phase. Within this context, FOSPT occurs in the ground state [see Appendix A]. In the following
Figure 1: (a)-(e) xPBC/yOBC. (a) Schematic of the 2D Haldane model with quasicrystal imaginary potential at the upper boundary. Quasiperiodic and zero potentials are denoted by orange/yellow spheres against blue/green backgrounds, respectively. Black solid lines represent NN hopping, black dotted lines represent NNN hopping, and black dashed lines indicate intermediate hidden layers (\(L_{y}-1\)). (b) The inverse participation ratio as a function of \(V\) and \(h\), revealing three phases separated by two critical lines: phase (I) with extended wave functions, PT-restored phase (III) featuring spatially localized wave functions, and critical phase (II) with fractal wave functions. Parameters: \(L_{x}=20\), \(L_{y}=20\). (c)-(e) The density \(|\psi|^{2}\) as a function of \(x\) at three distinct points, \(h=0.3,1.3,2.3\), with \(V=1\), following Eq. (7). (c) IPR=0.11, (d) IPR=0.25, (e) IPR=0.75.
Appendix B, we will present a more comprehensive analysis of the complete phase diagram and offer analytically derived phase boundaries using an effective model.
For a 2D chiral topological insulator, chiral modes can only exist on the boundary of the material. In the continuum limit, the low-energy effective Hamiltonian is \(H_{chiral}=v_{f}k\), where \(v_{f}=\left.\partial E/\partial k\right|_{k=k_{f}}\) is the Fermi velocity and \(k\) is the momentum of the chiral modes. In the long-wavelength and low-frequency regimes, excitations are restricted to unidirectional propagation and are protected by the non-trivial bulk topology.
Then, we consider the effects of dissipation, for which the boundary potential has a non-zero imaginary part; the effective low-energy Hamiltonian [34; 35; 36] reads
\[H_{chiral}=v_{f}k+iV \tag{5}\]
where the imaginary part of the eigenvalues depends on the on-site dissipation potential \(V\). The Schrodinger equation of the dissipative chiral modes is
\[\left[-iv_{f}\frac{d}{dx}+iV(x)\right]\psi(x)=(\epsilon_{r}+i\epsilon_{i})\psi (x), \tag{6}\]
where \(\epsilon_{i}\) is the imaginary part of the eigenenergy.
Then, we get the solution of Eq. (6)
\[\psi(x)=\frac{1}{\sqrt{C}}\exp\left(i\frac{\epsilon_{r}}{v_{f}}x\right)\exp\left(\int_{0}^{x}dx^{\prime}\,\frac{V\left(x^{\prime}\right)-\epsilon_{i}}{v_{f}}\right), \tag{7}\]
where \(1/\sqrt{C}\) is the normalization factor and the integration path lies on the dissipative boundary, with \(x\in(0,L_{x})\).
Since the 2D Haldane model is periodic in x, the periodic boundary condition gives \(\psi(L_{x})=\psi(0)\). Then, we have
\[i\frac{\epsilon_{r}}{v_{f}}L_{x}-\frac{\epsilon_{i}}{v_{f}}L_{x}+\frac{1}{v_{ f}}\int_{0}^{L_{x}}dx^{\prime}V\left(x^{\prime}\right)=2i\pi n, \tag{8}\]
where \(n\in\mathbb{Z}\). Then, Eq. (8) reduces to
\[\epsilon_{r} =\frac{1}{v_{f}}\frac{2\pi n}{L_{x}}, \tag{9}\] \[\epsilon_{i} =\frac{1}{L_{x}}\int_{0}^{L_{x}}dx^{\prime}V\left(x^{\prime} \right)=\widetilde{V}, \tag{10}\]
where the imaginary part of the eigenenergy, \(\epsilon_{i}\), is the average value \(\widetilde{V}\) of the imaginary potential.
The first two factors in Eq. (7), similar to plane waves \(\exp(ikx)\), are uniformly distributed throughout the entire space. However, when the sign of \(v_{f}\) is fixed, the third factor in Eq. (7) introduces exponential growth or decay, depending on the sign \(\mathrm{sgn}(V\left(x\right)-\epsilon_{i})=\pm 1\). For example, suppose that at position \(x_{c}\) the sign of the imaginary potential changes like a step function, specifically \(V(x<x_{c})<0\) and \(V(x>x_{c})>0\). If \(v_{f}<0\), the edge state will exhibit a peak at \(x_{c}\). Conversely, if \(V(x<x_{c})>0\), \(V(x>x_{c})<0\), and \(v_{f}>0\), the edge wave function will also display a peak at \(x_{c}\). These are exactly the black circles in Figs. 1(c)-(e), which mark the positive imaginary potentials.
In the specific case of our study, the dissipative potential takes the quasi-periodic form of Eq. (2). Substituting Eq. (2) into Eq. (10), we obtain
\[\epsilon_{i}=-\frac{V\sin^{2}(\pi\alpha L_{x})}{\pi\alpha L_{x}}\sinh(h). \tag{11}\]
Consequently, according to the above discussion, the density will exhibit approximately one peak per period, due to the quasi-periodic nature of the imaginary potential, as depicted in Figs. 1(c)-(e). To better visualize the phase diagram [Fig. 1(b)] in its different phases, we selected three positions along the \(V=1\) line in Fig. 1(b): \(h=0.3,1.3,2.3\). Among these positions, two critical points are evident: the extended-critical transition point at \(h_{1}=0.97\) and the critical-localized transition point at \(h_{2}=1.41\).
The black and red circles represent positive and negative on-site potentials, respectively, with a system size of \(L_{x}=10\). As shown in Fig. 1(c), when the imaginary phase is \(h=0.3\), the overall density of the wave function exhibits minor fluctuations along the x-direction, corresponding to IPR \(=0.11\approx 1/L_{x}\). At this point, the quasi-periodic potential is weak enough to be treated as a perturbation, and \(V\) can be seen as part of the elliptical complex energy spectrum, as shown in Appendix C. As the imaginary potential gradually increases \((h=1.3)\), certain randomly distributed positions experience higher density, while others exhibit decreased density. This leads to an intermediate value IPR \(=0.25\), indicating a wave function reminiscent of a fractal-like state, as shown in Fig. 1(d). Finally, for a larger imaginary potential, \(h=2.3\), the wave function localizes randomly at any position, with the maximum occupation \(|\psi|_{max}^{2}\approx 0.9\) observed at \(x=1\) (IPR=0.75), and the sharp peaks occur precisely at the transitions from black (positive) to red (negative), as shown in Fig. 1(e).
## III First-order structural phase transition
Here, we explore how the extended-critical and critical-localized phase transition points \(h_{1},h_{2}\) are affected by the dimensions \(L_{x},L_{y}\). Fig. 2(a) shows \(\log(L_{x})\) as a function of \(h_{1}\) and \(h_{2}\), where we can see that \(h_{1}\approx 0.97\) is almost constant, while \(\log(L_{x})=kh_{2}+c\) is a linear function, with \(k\) and \(c\) constants. As the size of the system increases, the alternation of positive and negative imaginary potentials becomes more pronounced. It is due to the gain effect of the positive imaginary potential that the wave function (Eq. (7)) tends to spread more toward locations with positive imaginary potential as the system dimension expands. This inherent tendency leads to an increase in the localization transition point of the wave function as the transverse dimension \(L_{x}\) increases. In turn, Fig. 2(b) shows \(h_{1}\) and \(h_{2}\) as \(L_{y}\) varies; both are essentially constant.
We also find another interesting phenomenon: the critical phase fragments into multiple parts, which cannot happen in a Hermitian system. We have plotted the ground-state fidelity \(F_{g}\) as a function of \(h\) for different sizes \(L_{x}=21,34,55,89,144,233\) in Fig. 2(c). It can be seen that the NPT increases as \(L_{x}\) increases. To determine the relationship between the NPT and the system size, we plot the NPT as a function of \(L_{x}/L_{y}\), as shown in the inset of Fig. 2(c). It can be seen that when \(L_{y}\) increases, the NPT remains constant because the size of \(V(x)\) does not increase, as shown by the green line. The red line, however, represents the NPT versus \(L_{x}\). The linear fit yields \(\text{NPT}=0.05L_{x}+0.98\): the slope 0.05 means that there is one additional zero of the imaginary potential for every 20 lattice sites, and the intercept 0.98 means that there is one phase transition from the delocalized to the localized phase from the outset. For \(V(x)\) as in Eq. (2), there are sites where \(\text{Im}(V(x))\approx 0\), which is where the FOSPT occurs as the parameters change. As \(L_{x}\) increases, the number of zero points of the imaginary potential increases. These zeros partition the potential into distinct domains, each hosting different phase transition points as the parameters vary. This is the reason why the NPT increases as \(\text{length}(V(x))\) increases.
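The zero-counting argument can be checked directly: since \(\mathrm{Im}[V\cos(2\pi\alpha n+ih)]=-V\sin(2\pi\alpha n)\sinh(h)\), the near-zeros are the sites where \(|\sin(2\pi\alpha n)|\) falls below a small threshold. The threshold value below is an illustrative choice tuned to reproduce a density of roughly one zero per 20 sites, consistent with the slope 0.05 of the linear fit.

```python
import numpy as np

alpha = (np.sqrt(5) - 1) / 2               # inverse golden ratio

def n_zeros(Lx, thresh=0.08):
    """Count sites where the imaginary part of V(n) is nearly zero."""
    n = np.arange(1, Lx + 1)
    return int(np.count_nonzero(np.abs(np.sin(2 * np.pi * alpha * n)) < thresh))

for Lx in (21, 34, 55, 89, 144, 233):      # the sizes used in Fig. 2(c)
    print(Lx, n_zeros(Lx))                 # grows roughly linearly with Lx
```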
In the following, we aim to establish the connection between the FOSPT and the imaginary potential. Fig. 3 illustrates the complex energy spectrum in two scenarios: one with nearest-neighbor imaginary potentials [Fig. 3(a)], defined as
\[H_{NN}=H_{Hermitian}+iV_{1}+iV_{2}/2, \tag{12}\]
and another with next-nearest-neighbor imaginary potential [Fig. 3(b)] given by
\[H_{NNN}=H_{Hermitian}+iV_{1}+iV_{3}/2, \tag{13}\]
where \(H_{Hermitian}\) is an arbitrary Hermitian matrix; here we take the 2D Haldane model, \(H_{Hermitian}=H_{Haldane}\) (the simplest PT-symmetric matrix is presented in Appendix D). In \(V_{i}\), the index \(i=1,2,3\) labels on-site potentials at arbitrary positions on the boundary, ordered by increasing strength. The color of the points represents the edge density \(|\psi_{edge}(h)|^{2}\).
In the first plot, with \(V=1.600\), points close to the x-axis depict topological boundary states influenced by the imaginary potentials. Their energy spectra form a semicircle with the \(x\)-axis, exhibiting a skin effect, as shown in Fig. 3(a). The two points with the largest imaginary part correspond to two non-topological boundary states that are progressively confined towards the boundary due to the impact of the imaginary potential. The colors in the figure represent the density at the boundary, revealing that the topological boundary states primarily occupy the boundary, while states outside the energy gap are bulk states, except for the non-topological boundary states. At \(V=V_{c}=3.356\), a critical phase transition occurs: the two non-topological boundary states become degenerate. If the imaginary potential is slightly larger, i.e., \(V=3.500\), the real parts of these two states remain the same, but the imaginary parts differ, signaling a PT phase transition. With further increases in \(V\), the energy of the non-topological boundary states increases in tandem with the imaginary potential, and the imaginary parts of the topological boundary states become nearly zero, restoring PT symmetry.
We also consider the case of next-nearest-neighbor imaginary potentials for \(V=1.600,2.000,3.400,6.000\). The energy spectra are similar at the beginning, and the energies of the two non-topological boundary states become degenerate at \(V=V_{c1}=2.000\), which is the first critical point. However, when \(V=V_{c2}=3.400\), two additional non-topological boundary states emerge near the topological boundary states and undergo a second PT phase transition. As shown in Fig. 3(b), after the two PT phase transitions, only non-topological boundary states remain localized at the two imaginary-potential sites. This observation from the simplest case of two imaginary potentials can be extended to scenarios involving multiple imaginary potentials, where PT phase transitions occur whenever non-adjacent imaginary potentials are present, see Fig. 2(c).
If these zeros are replaced by finite imaginary potentials, we find that the NPT returns to the situation in Fig. 1(b) and does not vary with \(L_{x}\), as shown by the green line. Moreover, when the length of the imaginary potential is held constant, i.e., \(\text{length}(V(x))=20\), the NPT as a function of \(L_{y}\) again follows the green line. This means the phase transitions depend only on changes in the imaginary potential.
## IV Conclusion
We uncover the rich phase diagram of the two-dimensional Haldane model with edge quasi-periodic dissipation. Overall, the fractional dimension, the largest imaginary part of the eigenvalues, the scaling exponent, and the ground state fidelity provide valuable insights into the localization properties and
Figure 2: (a) \(\log(L_{x})\) as a function of the transition points \(h_{1}\) and \(h_{2}\). (b) The transition points \(h_{1}\) and \(h_{2}\) as the longitudinal dimension \(L_{y}\) varies. (c) The ground-state fidelity \(F_{g}\) plotted against \(h\) for \(L_{x}=21,34,55,89,144,233\). Inset: the NPT as a function of \(L_{x}/L_{y}\). The red line corresponds to \(L_{x}\), and the fit yields \(\text{NPT}=0.05L_{x}+0.98\); the green line corresponds to \(L_{y}\).
phase transitions in the system. The system exhibits extended, critical, and localized phases, with criticality appearing between the extended-critical and critical-localized transitions. In the low-energy approximation, we characterized the phase transitions of the wave function in the different phases. We then projected the original Hamiltonian onto the boundary subspace and obtained an effective two-chain model, which yields a phase diagram similar to that of the original Hamiltonian and effectively captures its phase-transition properties.
We also analyzed the effect of the transverse and longitudinal dimensions on the phase transitions. The results show that the longitudinal dimension does not affect the phase transitions, whereas the critical point from the critical phase to the localized phase grows with increasing transverse dimension; this is due to the gain effect of the positive imaginary potential, which weakens the localization of the wave function. We also find the phenomenon of critical-phase tearing, unique to non-Hermitian systems, caused by the first-order structural phase transition induced by the zeros of the quasi-periodic imaginary potential.
###### Acknowledgements.
We are grateful to Gao Xianlong, Yiling Zhang, Xin-Ran Ma, Qian Du, Yufei Zhu for valuable suggestions on the manuscript. This work is supported by NSFC Grants No. 11974053 and No. 12174030.
## Appendix A First-order phase transition
The quantum phase transition in a Hermitian system refers to the nonanalyticity associated with an avoided or actual level crossing [75]. However, the spectra of non-Hermitian systems are generally complex. Line gaps and point gaps have been found in non-Hermitian AA systems [6]. As the parameters are varied, the line gap may undergo several closing and reopening processes if there is more than one zero point. Traditionally, a quantum phase transition is characterized by singularities of the ground-state energy and of the energy gap between the ground state and the first excited state. A first-order quantum phase transition is identified by abrupt changes in the first derivative of the energy.
In Fig. 4(a), we plot the logarithm of the energy gap \(\log(\Delta_{g})\) as a function of \(h\), where
\[\Delta_{g}=E_{f}-E_{g}. \tag{1}\]
\(E_{f}\) is the energy of the first excited state and \(E_{g}\) is the ground-state energy. There are two discontinuous points, \(h_{1}\) and \(h_{2}\), at which the gap closes; they correspond to the phase transitions in Fig. 1(b).
To determine the phase transition type, we calculated the
Figure 4: (a) The logarithm of the energy gap \(\log(\Delta_{g})\) as a function of \(h\). (b) The first-order derivative of the ground-state energy \(dE_{g}/dh\) as a function of \(h\). Both \(\log(\Delta_{g})\) and \(dE_{g}/dh\) exhibit discontinuities at \(h_{1}\) and \(h_{2}\). The parameters are chosen as \(L_{x}=20\), \(L_{y}=20\).
Figure 3: Energy spectrum of the Haldane model with different imaginary potential impurities. (a) The energy spectra with two nearest-neighbor imaginary impurities for \(\gamma=1.600\), 3.356, 3.500, 6.000 from left to right. (b) The energy spectra with two next nearest-neighbor imaginary impurities for \(\gamma=1.600\), 2.000, 3.400, 6.000 from left to right. The parameters are chosen as \(L_{x}=20\), \(L_{y}=20\).
ground-state energy \(E_{g}\) and its first derivative \(dE_{g}/dh\) as functions of \(h\). In Fig. 4(b), the black dots represent the ground-state energy, while the red dots represent its first derivative. Although \(E_{g}\) is continuous, the discontinuities of \(dE_{g}/dh\) at \(h_{1}\) and \(h_{2}\) show the first-order nature of the quantum phase transition, consistent with Fig. 4(a).
## Appendix B Fractal dimension, PT-symmetry breaking, and fidelity
The localization behavior of the system's wave function is a crucial observable that requires precise measurements. Wave functions are commonly characterized by their fractal dimension, quantified by the IPR, which follows a scaling relation of
\[\text{IPR}\sim(L_{x})^{-\tau}, \tag{2}\]
where \(\tau\) represents the fractal dimension (FD). The FD provides a valuable perspective on how states expand and fluctuate as the system size increases. When \(\lim_{L_{x}\rightarrow\infty}\tau=1\), the wave function is extended. Conversely, if the wave function is localized, with peaks at only a few lattice points and negligible amplitude elsewhere, then \(\lim_{L_{x}\rightarrow\infty}\tau=0\). Fractal wave functions exhibit FD values in the range \(0<\lim_{L_{x}\rightarrow\infty}\tau<1\).
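For concreteness, the following minimal sketch shows how \(\tau\) can be estimated from a single normalized eigenvector via \(\tau=-\log(\text{IPR})/\log(L_{x})\); the two test states are our own toy examples, not eigenstates of the model.

```python
import numpy as np

# Minimal sketch: estimate the fractal dimension tau from IPR ~ Lx^(-tau).
def fractal_dimension(psi, Lx):
    p = np.abs(psi) ** 2
    p /= p.sum()                      # enforce normalization
    ipr = np.sum(p ** 2)              # inverse participation ratio
    return -np.log(ipr) / np.log(Lx)

Lx = 233
uniform = np.ones(Lx) / np.sqrt(Lx)   # extended state -> tau ~ 1
delta = np.zeros(Lx); delta[0] = 1.0  # localized state -> tau ~ 0
print(fractal_dimension(uniform, Lx), fractal_dimension(delta, Lx))
```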
In Figs. 5(a-c), we compare the fractal dimensions of the extended, critical, and localized phases. Specifically, we examine three points along the \(V=1\) line in Fig. 1(b): \(h=0.2,1.4\), and \(2.6\). The two black dashed lines in all plots correspond to the \(Re(E)=\pm 1\) positions, between which the states are topological boundary states when \(h\) is small. In Fig. 5(a), the majority of states are concentrated at the top, indicating extended states with \(\tau=1\). Fig. 5(b) shows a decreasing trend in both topological and non-topological boundary states, with \(\tau\) values ranging from \(0.1\) to \(0.6\). These non-topological edge states are those influenced by the presence of the imaginary potential, which tend to localize at the boundary. Furthermore, it is worth noting that at \(\tau=0.6\) we observe the separation between extended states and fractal states localized at the dissipative boundary.
Under a strong imaginary potential at \(h=2.6\) in Fig. 5(c), we discover that all non-topological boundary states move to the bottom with \(\tau=0\), and their count precisely matches the number of topological boundary states, which is \(L_{x}\). However, the topological boundary states return to the top of the plot, indicating their return to extended states unaffected by the imaginary potential.
In fact, the impact of the imaginary potential on the system's topological states is observed in the critical phase, as depicted in Fig. 5(b). A captivating question arises: how does the variation of the parameter influence the imaginary part of the topological boundary states? Fig. 5(d) illustrates the behavior of the largest imaginary part \(|Im(E)|\) versus \(h\). Remarkably, a sudden surge from zero occurs at \(h_{1}\approx 0.97\), followed by an abrupt decline to zero at \(h_{2}\approx 1.41\). When \(h<0.97\) (PT-symmetric) and \(h>1.41\) (PT-restored), \(max(Im(E_{e}))\) remains close to zero, in agreement with Figs. 5(a) and (c). It is in the intermediate region that we observe the most pronounced impact of the imaginary potential, resulting in non-zero values of \(Im(E_{e})\). This observation verifies our earlier conjecture and highlights the intricate interplay between the parameter variation and the imaginary part of the topological boundary states, see Eq. (6).
One calculation similar to the fractal dimension is the scaling exponent, which can be obtained from the on-site probabilities of any wave function \(\psi_{n}\). The on-site probability is restricted to the boundary. According to the fractal theorem, the scaling of the maximum on-site probability is expressed as
\[max(p_{n,\text{edge}})\sim(2L_{x})^{-\beta_{n}^{\text{edge}}}, \tag{3}\]
where \(p_{n,\text{edge}}=|\psi_{n,\text{edge}}|^{2}\) and the edge index runs over \(1,\dots,2L_{x}\). To distinguish the extended, critical, and localized wave functions, we only need to investigate the minimum value of the exponent, \(\beta_{min}^{\text{edge}}\).
Referring to Fig. 5(e), we focus on the boundary scaling exponent of the ground state. Strikingly, we observe a precipitous decline in \(\beta_{min}^{edge}\) precisely at the critical points \(h_{1}\) and \(h_{2}\). When \(h<h_{1}\), the system is in the extended phase, so \(\beta_{min}^{edge}\approx 1\). Conversely, for \(h>h_{2}\), the system enters phase (III), where the boundary wave functions localize, yielding \(\beta_{min}^{edge}\approx 0\). Within the critical phase, \(\beta_{min}^{edge}\) spans the interval \((0.1,0.25)\), evidencing the fractal nature of the ground state.
We have carried out various calculations, some of which rely on prior knowledge of the physical properties of the system. In Fig. 5(f), by contrast, we investigate the behavior of the fidelity \(F_{g}\) versus \(h\). Fidelity has the distinct advantage that it does not require prior familiarity with the order parameters or symmetries of the system. Typically, as the ground-state structure undergoes sharp changes, the fidelity abruptly decreases near the critical points of the system. We focus on the boundary subspace and explore the boundary fidelity, quantified as:
\[F_{g}(h,\delta h)=\left|\left\langle\psi_{g,\text{edge}}(h)|\psi_{g,\text{edge }}(h+\delta h)\right\rangle\right|, \tag{4}\]
where \(\delta h\) is a small quantity, \(|\psi_{g,\text{edge}}(h)\rangle=\sum_{edge}|\psi_{edge}\rangle\left\langle \psi_{edge}|\psi_{g}(h)\right\rangle\) and \(|\psi_{g}(h)\rangle\) satisfies the eigenvalue equation \(H(h)\left|\psi_{g}(h)\right\rangle=E_{g}\left|\psi_{g}(h)\right\rangle\). At the critical points near \(h_{1}\) and \(h_{2}\), the overlap of the ground states \(F_{g}\) undergoes a dramatic decrease, decreasing from \(1\) to \(0.76\) and from \(1\) to \(0\), respectively.
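As a minimal numerical sketch of Eq. (4), the snippet below computes the boundary fidelity for a generic non-Hermitian Hamiltonian. The builder `build_H(h)` and the boundary-index array `edge` are hypothetical placeholders, and the ground state is taken as the eigenstate with the smallest real part of the energy, following Appendix A.

```python
import numpy as np

# Minimal sketch of the boundary fidelity, Eq. (4).  `build_H` and `edge` are
# hypothetical placeholders; the projection onto the edge subspace follows the
# definition literally (no renormalization of the projected states).
def ground_state(H):
    w, v = np.linalg.eig(H)                  # non-Hermitian: complex spectrum
    psi = v[:, np.argmin(w.real)]            # "ground state": smallest Re(E)
    return psi / np.linalg.norm(psi)

def boundary_fidelity(build_H, edge, h, dh=1e-3):
    p1 = ground_state(build_H(h))[edge]      # |psi_{g,edge}(h)>
    p2 = ground_state(build_H(h + dh))[edge] # |psi_{g,edge}(h+dh)>
    return np.abs(np.vdot(p1, p2))           # vdot conjugates the first factor
```

Scanning `h` and locating sharp drops of the returned value reproduces, in spirit, the critical points of Fig. 5(f).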
Based on the above description, the physical properties of the original Haldane model can essentially be captured by the characteristics of its boundaries. Therefore, we introduce the boundary effective Hamiltonian \(H_{\text{eff}}\) to further understand the extended-critical and critical-localized phase transitions depicted in Fig. 1(b). We directly project \(H\) onto the boundary subspace using the boundary projection operator \(P_{\text{edge}}\). The effective edge Hamiltonian is then given by:
\[H_{\text{eff}} =P_{edge}\ H\ P_{edge} \tag{10}\] \[=H_{\text{AA}}+H_{\text{free}}+H_{c},\]
where \(H_{\text{AA}}=\sum_{m}\left(a_{m}^{\dagger}a_{m+1}+\text{h.c.}\right)+V\cos(2\pi \alpha m+ih)a_{m}^{\dagger}a_{m}\) represents the non-Hermitian Aubry-Andre-Harper model, \(H_{\text{free}}=\sum_{m}b_{m+1}^{\dagger}b_{m}+\text{h.c.}\) is the free chain with only the nearest-neighbor hopping term, and \(H_{c}=a_{m}^{\dagger}b_{m}+\text{h.c.}\) represents their coupling, as seen in Eq. (1) with \(L_{y}=2\). The \(H_{\text{eff}}\) is reduced to a two-chain model
\[H_{\text{eff}}= \sum_{j=1}^{2}\sum_{m}\left[V_{j,m}c_{j,m}^{\dagger}c_{j,m}+t \left(c_{j,m}^{\dagger}c_{j,m+1}+\text{ H.c. }\right)\right] \tag{11}\] \[+\lambda\sum_{m=odd}\left(c_{1,m}^{\dagger}c_{2,m}+\text{ H.c. }\right),\]
where \(V_{1,m}=V\cos(2\pi\alpha m+ih)\) for \(j=1\) and \(V_{2,m}=0\) for \(j=2\).
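A minimal construction of this two-chain Hamiltonian is sketched below. The hopping \(t\), coupling \(\lambda\), and wave number \(\alpha\) values are illustrative assumptions, and the zero-based "odd" site convention is ours.

```python
import numpy as np

# Minimal sketch of the effective two-chain model, Eq. (11): chain 1 carries the
# complexified AA potential V*cos(2*pi*alpha*m + i*h), chain 2 is free, and the
# chains are coupled with strength lam on odd sites (zero-based convention).
def H_eff(Lx, V, h, t=1.0, lam=1.0, alpha=(np.sqrt(5) - 1) / 2):
    H = np.zeros((2 * Lx, 2 * Lx), dtype=complex)
    m = np.arange(Lx)
    H[m, m] = V * np.cos(2 * np.pi * alpha * m + 1j * h)  # chain-1 potential
    for j0 in (0, Lx):                                    # both chains: hopping
        for i in range(Lx - 1):
            H[j0 + i, j0 + i + 1] = H[j0 + i + 1, j0 + i] = t
    for i in range(1, Lx, 2):                             # inter-chain coupling
        H[i, Lx + i] = H[Lx + i, i] = lam
    return H
```

Feeding `H_eff` into the fidelity sketch above and scanning \((V,h)\) gives, qualitatively, the inset phase diagram of Fig. 5(f).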
The resulting phase diagram is almost unchanged, as shown in the inset of Fig. 5(f), where two critical lines divide the whole diagram into three regions: extended, localized, and critical. This phase diagram is almost the same as Fig. 1(c), except that the critical line from the extended phase to the critical phase is less pronounced and the extended phase shrinks. This is due to finite-size effects; in the limit \(L_{y}\rightarrow\infty\), the inset of Fig. 5(f) reverts to Fig. 1(c).
## Appendix C Dissipation in Weak Dissipation Case
Energy bands that encircle a point gap with a nonzero winding number under PBC exhibit the non-Hermitian skin effect under OBC, with all eigenmodes localized at the boundary. In the cases above, however, periodic boundary conditions ensure unidirectional chiral currents. We therefore introduce OBC at the gain-loss boundary, creating a dissipation domain wall. We define the Hamiltonian for the domain as
\[H_{domain}=H+\text{domain}, \tag{12}\]
where \(H\) is the Hamiltonian in Eq. (1) with \(L_{y}\rightarrow\infty\), and \(\text{domain}=H_{G}(x)+H_{L}(x)\), with subscripts \(G\) and \(L\) representing the gain and loss domains, respectively. In each domain, gain (\(x<0\)) or loss (\(x>0\)) generates a constant on-site imaginary potential \(\pm i\gamma\). This results in a purely imaginary shift of the spectrum of \(H\).
Fig. 6(a) shows the complex energy spectrum of this Hamiltonian \(H_{domain}\) in the \(|Re(E)|<1\) regime under
Figure 5: (a)-(c) The fractal dimension \(\tau\) as a function of the real part of the eigenenergies \(Re(E)\) for three different values \(h=0.2,1.4,2.6\). (d) The largest value of \(|Im(E)|\) as a function of \(h\). (e) The minimal scaling exponent as a function of \(h\). (f) The fidelity of the ground states \(F_{g}\) versus \(h\). Abrupt changes at \(h_{1}\) and \(h_{2}\) are present in (d)-(f). Inset: phase diagram of the effective two-chain model \(H_{eff}\). The parameters are chosen the same as in Fig. 3.
Figure 6: Phenomena in the regime of small imaginary potentials under the gain/loss domain-wall condition. (a) A comparison between numerical results (yellow dots) and theoretical predictions (blue line) for the complex energy spectrum of the topological states. Inset: energy spectrum in the whole complex energy plane. (b) The topologically protected edge state at the gain/loss boundaries (red line) and the wave function from the low-energy approximation. The parameters are chosen as \(L_{x}=20\), \(L_{y}=20\), \(h=0.2\) and \(\gamma=0.2\).
weakly dissipative conditions, where the vertical axis represents \(\left|Im(E)\right|\). It corresponds to the region marked by the red dashed lines in the full complex energy spectrum of the Hamiltonian, shown in the inset of Fig. 6(a), while the colors indicate the probability density of the eigenstates at the boundaries, \(density_{edge}=\left|\psi_{edge}\right|^{2}\). The complex energies satisfy the half-ellipse equation \(\frac{\mathrm{Re}(E)^{2}}{a^{2}}+\frac{\left|\mathrm{Im}(E)\right|^{2}}{b^{2}}=1\).
In the limit of \(h\to 0\), the quasiperiodic dissipation potential in Eq. (2) undergoes a Taylor expansion
\[\lim_{h\to 0}V_{n} = \lim_{h\to 0}\cos(2\pi\alpha n+ih) \tag{22}\] \[\approx \cos(2\pi\alpha n)+i\sin(2\pi\alpha n)h+O(h^{2}).\]
The imaginary potential perturbs the edge modes, leading to a correction in their eigenenergies, given by
\[E^{1} = \left\langle\psi_{e}^{0}|H_{edge}|\psi_{e}^{0}\right\rangle \tag{23}\] \[= \left\langle\psi_{e}^{0}|\cos(2\pi\alpha n)+i\sin(2\pi\alpha n)h |\psi_{e}^{0}\right\rangle\] \[= \sum_{n=0}^{L_{x}}\left|\psi_{n}^{0}\right|^{2}\cos(2\pi\alpha n )+ih\sum_{n=0}^{L_{x}}\left|\psi_{n}^{0}\right|^{2}\sin(2\pi\alpha n)\] \[= \mathrm{Re}(E^{1})+i\,\mathrm{Im}(E^{1}),\]
where \(\left|\psi_{e}^{0}\right\rangle\) is an eigenfunction of \(H\) satisfying \(H\left|\psi_{e}^{0}\right\rangle=E^{0}\left|\psi_{e}^{0}\right\rangle\), \(\mathrm{Re}(E^{1})=\sum_{n=0}^{L_{x}}\left|\psi_{n}^{0}\right|^{2}\cos(2\pi \alpha n)\) represents the real part of the correction to the eigenvalues, while \(\mathrm{Im}(E^{1})=h\sum_{n=0}^{L_{x}}\left|\psi_{n}^{0}\right|^{2}\sin(2\pi \alpha n)\) denotes the imaginary part.
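To make the structure of Eq. (23) concrete, the sketch below evaluates the corrections for a hypothetical, exponentially localized edge profile; it only illustrates that \(\mathrm{Re}(E^{1})\) and \(\mathrm{Im}(E^{1})\) are weighted averages of \(\cos\) and \(\sin\) over \(|\psi_{n}^{0}|^{2}\), and is not the actual edge mode of the model.

```python
import numpy as np

# Minimal sketch of Eq. (23) for an assumed edge profile |psi0_n| ~ exp(-n/10).
alpha = (np.sqrt(5) - 1) / 2          # assumed quasi-periodic wave number
Lx, h = 200, 0.2
n = np.arange(Lx + 1)
p = np.exp(-2 * n / 10.0)             # |psi0_n|^2 for the hypothetical profile
p /= p.sum()                          # normalize the on-site probabilities
ReE1 = np.sum(p * np.cos(2 * np.pi * alpha * n))
ImE1 = h * np.sum(p * np.sin(2 * np.pi * alpha * n))
print(ReE1, ImE1)
```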
Notably, this correction also depends on the \(\left|\psi_{n}^{0}\right|^{2}\) distribution, as seen in Eq. (23). Additionally, the distribution of topological edge states at the boundary is energy-dependent. As we shift from the energy gap to higher or lower energy bands, the distribution of topological edge states inside the material becomes increasingly significant. Consequently, the energy spectrum follows an elliptic function dependence that is governed by these edge states, expressed as:
\[\frac{\left[\mathrm{Re}(E^{0})+\mathrm{Re}(E^{1})\right]^{2}}{a^{2}}+\frac{ \left[\mathrm{Im}(E^{1})\right]^{2}}{b^{2}}=1. \tag{24}\]
The discrepancy between the theory and the numerical results is attributed to higher-order perturbations, as shown in Fig. 6(a). As the imaginary potential becomes smaller, the perturbation theory aligns more closely with the numerical outcomes.
Due to the inclusion of gain/loss domain walls at the upper boundary, the effective chiral Hamiltonian in Eq. (5) is replaced by \(H_{chiral}=v_{f}k+iV+\)domain. The Schrödinger equation with OBC can then be rewritten as
\[\left[-iv_{f}\frac{d}{dx}+\text{domain}+iV(x)\right]\psi(x)=( \epsilon_{r}+i\epsilon_{i})\psi(x). \tag{25}\]
The solution of the wave function is given by
\[\psi(x)=\frac{1}{\sqrt{C}}\exp\left(i\frac{\epsilon_{r}}{v_{f}}x\right)\exp \left(\int_{0}^{L_{x}}dx^{\prime}\frac{\text{domain}+V\left(x^{\prime} \right)-\epsilon_{i}}{v_{f}}\right). \tag{26}\]
Here \(\epsilon_{i}\) is still the average imaginary potential, see Eq. (10). Since the domain walls have opposite signs, \(\int_{0}^{L_{x}}\text{domain}=0\).
To validate our prediction, we performed numerical calculations (blue line), which closely match the analytical results (red star-line), as shown in Fig. 6(b). Consequently, \(H_{G}\) (\(H_{L}\)) still exhibits localized topological edge states, owing to the inherent topological properties of the photonic topological insulator, even in the presence of non-Hermitian effects. Additionally, the inverse localization length of the wave function in Fig. 6(b) satisfies \(\frac{1}{\xi}\propto\text{domain}+V\left(x^{\prime}\right)-\epsilon_{i}\).
## Appendix D Phase transition in imaginary potential
We consider a four-site model [Eq. (13)] in which two non-adjacent sites are dissipative and the other two are not, with the Hamiltonian \(H_{NNN}=H_{a}\),
\[H_{a}=\left(\begin{array}{cccc}i\gamma&t&0&0\\ t&0&t&0\\ 0&t&i\gamma&t\\ 0&0&t&0\end{array}\right), \tag{27}\]
where \(t\) is the nearest-neighbor hopping amplitude and \(\gamma\) is the on-site imaginary potential on the non-adjacent sites. \(H_{a}\) is PT-symmetric, i.e., \(PTH_{a}(PT)^{-1}=H_{a}\). The exceptional points are
Figure 7: The energy spectrum of the simplest four lattice model. (a) The real and imaginary parts of the energy spectrum of \(H_{a}\) as a function of \(\gamma\). (b) The real and imaginary parts of the energy spectrum of \(H_{b}\) as a function of \(\gamma\).
\((\sqrt{5}\pm 1)t\), and the eigenvalues are
\[\lambda_{1} =\frac{1}{2}\left(-\sqrt{-\gamma^{2}-2\left(\sqrt{5}-3\right)t^{2}} +i\gamma\right),\] \[\lambda_{2} =\frac{1}{2}\left(\sqrt{-\gamma^{2}-2\left(\sqrt{5}-3\right)t^{2} }+i\gamma\right),\] \[\lambda_{3} =\frac{1}{2}\left(-\sqrt{2\left(\sqrt{5}+3\right)t^{2}-\gamma^{2 }}+i\gamma\right),\] \[\lambda_{4} =\frac{1}{2}\left(\sqrt{2\left(\sqrt{5}+3\right)t^{2}-\gamma^{2 }}+i\gamma\right). \tag{10}\]
Since the spectrum is symmetric, we only need to consider the \(\lambda_{i}\) (\(i=1,2,3,4\)) with positive real part, see Fig. 7(a). The system holds \(PT\) symmetry when \(\gamma<(\sqrt{5}-1)t\), where the \(Im(\lambda_{i})\) are all equal to each other. When \((\sqrt{5}-1)t<\gamma<(\sqrt{5}+1)t\), the imaginary parts \(Im(\lambda_{1,2})\) differ, so \(\gamma=(\sqrt{5}-1)t\) is the first critical point. Moreover, the \(PT\) symmetry is broken again when \(\gamma>(\sqrt{5}+1)t\), where the real parts of the four modes vanish.
Conversely, the Hamiltonian with two adjacent imaginary potentials is
\[H_{b}=\left(\begin{array}{cccc}i\gamma&t&0&0\\ t&i\gamma&t&0\\ 0&t&0&t\\ 0&0&t&0\\ \end{array}\right), \tag{11}\]
where \(H_{b}\) is also PT-symmetric. The eigenvalues of \(H_{b}\) are
\[\lambda_{1} =\frac{1}{2}\left(-\sqrt{-\gamma^{2}-2t\sqrt{5t^{2}-4\gamma^{2}} +6t^{2}}+i\gamma\right),\] \[\lambda_{2} =\frac{1}{2}\left(\sqrt{-\gamma^{2}-2t\sqrt{5t^{2}-4\gamma^{2}} +6t^{2}}+i\gamma\right),\] \[\lambda_{3} =\frac{1}{2}\left(-\sqrt{-\gamma^{2}+2t\sqrt{5t^{2}-4\gamma^{2}} +6t^{2}}+i\gamma\right),\] \[\lambda_{4} =\frac{1}{2}\left(\sqrt{-\gamma^{2}+2t\sqrt{5t^{2}-4\gamma^{2}} +6t^{2}}+i\gamma\right). \tag{12}\]
However, there is only one exceptional point, beyond which \(Im(\lambda_{1})=Im(\lambda_{2})\) and \(Im(\lambda_{3})=Im(\lambda_{4})\), as shown in Fig. 7(b).
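Both statements are easy to verify numerically. A minimal sketch is given below, with \(t=1\) as an assumed unit; the imaginary parts of the four eigenvalues split at the two exceptional points quoted above for \(H_{a}\), and only once for \(H_{b}\).

```python
import numpy as np

# Minimal sketch: four-site models with imaginary potentials on non-adjacent
# sites (H_a) or adjacent sites (H_b); t = 1 is an illustrative unit.
t = 1.0

def spectrum(adjacent, gamma):
    H = (np.diag([t, t, t], 1) + np.diag([t, t, t], -1)).astype(complex)
    for s in ((0, 1) if adjacent else (0, 2)):    # zero-based site indices
        H[s, s] = 1j * gamma
    return np.linalg.eigvals(H)

for g in (1.0, np.sqrt(5) - 1 + 0.05, np.sqrt(5) + 1 + 0.05):
    print(f"gamma={g:.3f}  Im(H_a):", np.sort(spectrum(False, g).imag).round(3),
          " Im(H_b):", np.sort(spectrum(True, g).imag).round(3))
```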
|
2309.05991 | Gravitational Radiation from Eccentric Binary Black Hole System in
Dynamical Chern-Simons Gravity | Dynamical Chern-Simons (DCS) gravity, a typical parity-violating
gravitational theory, modifies both the generation and propagation of
gravitational waves from general relativity (GR). In this work, we derive the
gravitational waveform radiated from a binary black hole system with eccentric
orbits under the spin-aligned assumption in the DCS theory. Compared with GR,
DCS modification enters the second-order post-Newtonian (2PN) approximation,
affecting the spin-spin coupling and monopole-quadrupole coupling of binary
motion. This modification produces an extra precession rate of periastron. This
effect modulates the scalar and gravitational waveform through a quite low
frequency. Additionally, the dissipation of conserved quantities results in the
secular evolution of the semimajor axis and the eccentricity of binary orbits.
Finally, the frequency-domain waveform is given in the post-circular scheme,
requiring the initial eccentricity to be $\lesssim0.3$. This ready-to-use
template will benefit the signal searches and improve the future constraint on
DCS theory. | Zhao Li, Jin Qiao, Tan Liu, Rui Niu, Shaoqi Hou, Tao Zhu, Wen Zhao | 2023-09-12T06:40:42Z | http://arxiv.org/abs/2309.05991v2 | # Gravitational Radiation from Eccentric Binary Black Hole System
###### Abstract
Dynamical Chern-Simons (DCS) gravity, a typical parity-violating gravitational theory, modifies both the generation and propagation of gravitational waves from general relativity (GR). In this work, we derive the gravitational waveform radiated from a binary black hole system with eccentric orbits under the spin-aligned assumption in the DCS theory. Compared with GR, DCS modification enters the second-order post-Newtonian (2PN) approximation, affecting the spin-spin coupling and monopole-quadrupole coupling of binary motion. This modification produces an extra precession rate of periastron. This effect modulates the scalar and gravitational waveform through a quite low frequency. Additionally, the dissipation of conserved quantities results in the secular evolution of the semimajor axis and the eccentricity of binary orbits. Finally, the frequency-domain waveform is given in the post-circular scheme, requiring the initial eccentricity to be \(\lesssim 0.3\). This ready-to-use template will benefit the signal searches and improve the future constraint on DCS theory.
## I Introduction
General relativity (GR) is always considered the most successful theory of gravity [1]. However, various difficulties of this theory are also well known. On the theoretical side, GR has singularity and quantization problems [2; 3; 4]. On the experimental side, all the observations on the cosmological scale indicate the existence of so-called dark matter and dark energy [5; 6; 7; 8; 9], which might mean that GR is invalid at this scale. For these reasons, GR must be tested experimentally in a variety of different spacetime environments and on different astrophysical scales. To date, gravitational tests on the submillimeter scale [10; 11], in the solar system [12; 13; 14; 15; 16; 17; 18], in binary-pulsar systems [19; 20; 21; 22; 23], and on astrophysical and cosmological scales [24; 25; 26; 27] have been found to agree remarkably well with Einstein's theory.
The direct observation of gravitational waves (GWs) has provided a new probe to test gravity in extreme-gravity environments. As predicted by GR, the currently observable GWs can only be generated in strong gravitational fields and hardly interact with matter, carrying information about the nature of gravity in the strong-field regime. In recent years, the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo collaborations have detected 90 GW signals radiated from compact binary coalescence events [28; 29; 30; 31], for example, the well-known GW150914 [32] and GW190521 [33] from binary black hole (BBH) systems, GW170817 [34] from a binary neutron star system, and GW200105 and GW200115 from neutron star-black hole systems [35]. More information about these identified signals can be found in Refs. [28; 29; 30; 31]. Numerous works have used GW data to perform gravitational tests [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]. Currently, the third-generation GW observatories, the Einstein Telescope (ET) [48] and Cosmic Explorer (CE) [49], as well as the space-based detectors, e.g., LISA [50], TianQin [51], and Taiji [52; 53], are under construction. These detectors are expected to observe several different types of astrophysical GW events, among which compact binary inspirals remain one of the most promising sources [54; 55; 56].
This work focuses on the BBH systems and the radiated GWs during the process of inspiraling to merging [57; 13; 58]. Because of the extremely strong gravitational field near the binary system in the pre-merger phase, the radiated GW
may encode the distinction between alternative gravitational theories and GR. This is why the detection of GWs generated by BBHs is an essential aspect of gravitational experiments [59; 60; 61]. However, for the detection of GW signals and the subsequent extraction of physical parameters from them, a set of precise theoretical templates is required [62]. GW events will be confirmed only when a sufficiently high signal-to-noise ratio (SNR) of the signal against the template is achieved. Thus, the templates have to be highly accurate, because a systematic inaccuracy can underestimate the SNR and lead to a missed detection. However, templates for highly nonlinear and strongly relativistic processes, e.g., the evolution of BBH systems, can be constructed only by numerical-relativity simulations [63; 64], which are computationally expensive, especially for modeling theories beyond GR [65; 66; 67]. The post-Newtonian (PN) approximation [68; 69] is an alternative method to model the gravitational waveform from BBHs in the pre-merger phase, where the separation between the two bodies is much larger than their gravitational radii and, equivalently, the relative velocity is much smaller than the speed of light in vacuum, i.e., \(v^{2}\sim m/r\ll 1\). In this framework, the bodies can be regarded effectively as point particles [68; 69]. The analytical PN expansion is known for non-spinning systems up to 3.5PN order, i.e., up to the \(v^{7}\) correction beyond the leading-order quadrupole formula [69; 70; 71; 58; 72], and Refs. [73; 74; 75; 76; 77] have pushed the accuracy to the next, 4PN, level. Adopting such an approach, Refs. [13; 58] gave the templates at Newtonian order for non-spinning BBH systems with circular orbits, or, more accurately, quasi-circular orbits due to the radiation reaction. Higher-order corrections can be found in Refs. [78; 79; 80; 81].
Quasi-circular orbits are a reasonable choice because GW dissipation circularizes the orbits of isolated BBHs with initial eccentricity by the time their orbital frequency reaches the sensitive bands of ground-based GW observatories [82]. However, recent analyses [83; 84; 85] found evidence of eccentricity in GW190521 and several other events, indicating that these binary systems formed dynamically in densely populated environments. Ignoring eccentricity will result in an illusory deviation from GR [86]. Templates for eccentric binaries are thus an essential topic in both numerical simulations [87] and analytical calculations [82; 88]. When regarding the two bodies as point masses in the PN framework, the bound orbits are exactly elliptic at the Newtonian-order approximation, parameterized by the semimajor axis and eccentricity. In this simplest case, Refs. [82; 89] gave the energy dissipation, and Yunes et al. [88] obtained the analytic expression for the frequency-domain template in the small-eccentricity limit. However, the higher-order effects bring some difficulties in obtaining the templates. On the one hand, starting from the 1PN approximation, such corrections induce the well-known periastron-advance effect [13], so the binary orbit is no longer closed. To solve this problem, the quasi-Keplerian (QK) parameterization is usually adopted to integrate the equations of motion (EOM). The results for non-spinning BBH systems were derived by Refs. [13; 90] at 1PN order, by Refs. [91; 92] at 2PN order, and by Ref. [93] at 3PN order. Based on these precise descriptions of the binary motion, the gravitational waveforms follow [94; 95; 96; 97; 98; 99], with the periastron advance modulating the waveforms at a much lower frequency. On the other hand, starting from the 1.5PN approximation, the non-aligned spins of the objects influence the motion via spin-orbit (SO), spin-spin (SS), and monopole-quadrupole (MQ) couplings [100; 101; 102; 103; 104; 105]. In particular, the spin components perpendicular to the orbit produce orbital precession [45] and prevent orbital circularization during energy dissipation; these effects disappear in the spin-aligned case [106; 107]. Work involving the spin effects can be found in Refs. [106; 108; 109; 110; 111], and the corresponding frequency-domain waveforms in Refs. [107; 112; 113; 114; 115; 116; 117; 118].
The GW-based gravitational test entails comparing the signals predicted by GR with those predicted by alternative theories and constraining their differences by observations [119; 120; 121; 122; 123; 45; 124; 125; 126; 127; 128; 129; 130; 46; 131; 47; 132; 48; 133], for which modified templates via the PN approximation are required. Modified gravitational waveforms have been derived in various extended gravitational theories for quasi-circular [134; 135; 136; 137; 138; 139; 140; 141] and quasi-elliptic [142; 143] binary systems at the leading Newtonian order, that is, keeping the leading-order beyond-GR modification and neglecting the sub-leading effects. Unlike the above theories, as a parity-violating theory [144; 130; 145; 146], dynamical Chern-Simons (DCS) gravity [147; 148] modifies the waveform from BBHs at the 2PN approximation, leaving the lower-order waveform completely consistent with GR [149; 150; 151; 152]. The template under the quasi-circular and spin-aligned assumptions has been reported in our previous works [151; 152]. The quasi-Keplerian case is considered in this paper, including the periastron advance but still assuming the spin vectors to be aligned with the orbital angular momentum (OAM). Spin-precession effects [153] are therefore beyond the scope of this article.
This work aims to derive the quasi-Keplerian motion, the time-domain gravitational waveform, the orbital evolution due to the radiation reaction, and the frequency-domain waveform for non-precessing BBHs in DCS gravity. Because of the spin-aligned assumption, the bodies always remain in the orbital plane, so the quasi-Keplerian parameterization [13; 89; 90; 92; 93] can be applied to the DCS extension, involving two elements, the "radial" semimajor axis \(a_{r}\) (equivalently the orbital frequency \(F\) or its dimensionless version \(x\)) and the "radial" eccentricity \(e_{r}\). The final result presents a doubly periodic structure and predicts the precession rate of the periastron. Precession means that the azimuth angle is no longer a suitable periodic variable; two alternative angular variables, the true anomaly and the mean anomaly, are used to express the waveform. Both versions show the low-frequency modulation from the precession effect. Based on the motion and the time-domain waveform, the dissipation rates of energy and orbital angular momentum are obtained, yielding the secular evolution of the orbital elements. The orbital circularization is described by the equation for \(dx/de_{r}\), with analytical solution \(x=x(e_{r})\). Although the extra scalar field slightly affects the rate of circularization, the eccentric orbits are still eventually reduced to quasi-circular ones through the radiation reaction. To finally calculate the waveform in Fourier space, the eccentricity is written as a function of frequency, \(e_{r}(F)\), up to \(\mathcal{O}(e_{r}^{4})\) order, valid for \(e_{0}\lesssim 0.3\). The stationary phase approximation (SPA) [107; 112] then gives the Fourier transform of the time-domain waveform and the ready-to-use template. These results will benefit signal searches and improve future constraints on the DCS theory.
This paper is organized as follows. In Section II, we briefly review the DCS theory. In Section III, we give the conserved quantities, the EOM, and its QK parameterized solution up to the leading-order DCS modification. Section IV calculates the scalar and tensor gravitational waveforms and their polarizations. Furthermore, the energy and angular-momentum fluxes carried by the radiation, and the secular evolutions of the QK orbital elements, are presented in Section V. The post-circular frequency-domain waveform is shown in Section VI. Finally, we make a summary and discussion in Section VII. Some complicated but unimportant modified coefficients are listed in Appendices A, B, C, and D. Throughout the paper, we work in geometric units in which \(c=G=1\), where \(c\) is the speed of light in vacuum and \(G\) is the gravitational constant.
## II DCS gravity
In this section, we outline the basic knowledge of DCS theory [147; 148]. The full action of the DCS theory is
\[S=\int\mathrm{d}^{4}x\sqrt{-g}\left[\frac{1}{16\pi}R+\frac{\alpha}{4}\vartheta R \hat{R}-\frac{\beta_{0}}{2}(\nabla_{\mu}\vartheta)(\nabla^{\mu}\vartheta)+ \mathcal{L}_{m}\right], \tag{1}\]
where the gravity is described by a pseudoscalar field \(\vartheta\) and the metric \(g_{\mu\nu}\). The scalar-Pontryagin density coupling term, \(R\hat{R}\), in the Einstein-Hilbert action is a kind of parity-violating modification, causing the non-conservation of the DCS topological current. As in Ref. [150], we do not consider the potential of the scalar field. In Eq. (1), \(g\) is the determinant of the metric \(g_{\mu\nu}\) and \(R\) is the Ricci scalar. \(\mathcal{L}_{m}\) is the Lagrangian density of the matter field. \(\alpha\) and \(\beta_{0}\) are the coupling parameters. \(R\hat{R}\equiv(1/2)\varepsilon^{\rho\sigma\alpha\beta}R_{\nu\mu\rho\sigma}R^{ \mu\nu}_{\ \ \ \alpha\beta}\) is the Pontryagin density, with \(R_{\nu\mu\rho\sigma}\) being the Riemann tensor and \(\varepsilon^{\rho\sigma\alpha\beta}\) being the Levi-Civita tensor defined in terms of the antisymmetric symbol \(\epsilon^{\rho\sigma\alpha\beta}\) as \(\varepsilon^{\rho\sigma\alpha\beta}=(1/\sqrt{-g})\epsilon^{\rho\sigma\alpha\beta}\), where \(\epsilon^{0123}=1\).
The variation of the full action (1) with respect to the metric \(g^{\mu\nu}\) yields the modified field equation [147; 148],
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R+16\pi\alpha C_{\mu\nu}=8\pi\left[T^{(m)}_ {\mu\nu}+T^{(\vartheta)}_{\mu\nu}\right], \tag{2}\]
where \(R_{\mu\nu}\) is Ricci tensor and \(C_{\mu\nu}\) is Cotton tensor defined as
\[C^{\mu\nu}=-\varepsilon^{\rho(\mu|\alpha\beta|}\left[\nabla_{\alpha}R^{\nu}_{ \ \beta}\right](\nabla_{\rho}\vartheta)-\hat{R}^{\kappa(\mu|\rho|\nu)}(\nabla_{ \kappa}\nabla_{\rho}\vartheta). \tag{3}\]
Note that the Cotton tensor \(C_{\mu\nu}\) is traceless, \(g^{\mu\nu}C_{\mu\nu}=0\), and satisfies the Bianchi identity, \(\nabla^{\mu}C_{\mu\nu}=0\). \(T^{(m)}_{\mu\nu}\) and \(T^{(\vartheta)}_{\mu\nu}\) denote the energy-momentum tensors of the matter field and the DCS scalar field,
\[T^{(\vartheta)}_{\mu\nu}=\beta_{0}\left[(\nabla_{\mu}\vartheta)(\nabla_{\nu} \vartheta)-\frac{1}{2}g_{\mu\nu}(\nabla_{\alpha}\vartheta)(\nabla^{\alpha} \vartheta)\right]. \tag{4}\]
The equation of the scalar field can also be derived by variation of the action (1) to the scalar field \(\vartheta\), which is
\[\beta_{0}\Box_{g}^{2}\vartheta=-\frac{\alpha}{4}R\hat{R}. \tag{5}\]
We would like to mention here that when the coupling \(\beta_{0}\) is \(0\), the full action (1) reduces to that of the non-dynamical Chern-Simons gravity. In this case, the scalar field equation (5) becomes an additional differential constraint, i.e., the _Pontryagin constraint_ on the space of the allowed solutions, \(R\hat{R}=0\). This work will not consider this case but only focus on the DCS gravity, in which the parameter \(\beta_{0}\neq 0\).
This modification to GR leads to a series of parity-violating effects. One of the most important predictions is that the amplitude of the left-handed circular polarization mode of GWs increases (or decreases) during propagation while the amplitude of the right-handed mode decreases (or increases). This phenomenon is often called the amplitude birefringence of GWs [148; 152; 130]. Similar phenomena (as well as velocity birefringence) have been investigated in other parity-violating theories, such as ghost-free parity-violating theory [154; 155], Nieh-Yan gravity [156; 157; 154], Horava-Lifshitz gravity [158; 159; 160], parity-violating symmetric teleparallel gravity [161; 162; 146], and spatially covariant gravity [163; 164; 133]; see Refs. [165; 166; 130] for reviews. This effect greatly promotes the testing of parity symmetry in the gravitational sector by GW observations [167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180; 181; 182; 183; 184; 185; 186; 187; 188; 189].
Due to its parity-violating property, the Pontryagin density generally vanishes in spherically symmetric spacetimes. For this reason, the Schwarzschild black hole is still an exact solution of the DCS theory [170]. The GWs radiated from binary Schwarzschild BHs are the same as those in GR unless tidal deformation is considered [171; 172]. In Refs. [150; 151; 152] and this article, spinning BBHs are investigated. The DCS theory modifies the SS and MQ couplings [149] between the bodies and affects the gravitational waveforms. As explained in Ref. [152], since the time scale of the binary merger is much smaller than that of the cosmological expansion, the parity-violating effect of the DCS theory does not appear in the process of GW generation. And because the scalar field couples to the second-order metric perturbation and the Cotton tensor encoding the parity violation is traceless, no extra modes appear in this theory up to the leading order [173; 152], unlike in many other modified gravity theories. However, the scalar radiation still carries energy and angular momentum, accelerating the orbital decay and the binary coalescence.
## III Eccentric motion
### Conserved Energy and Orbital Angular Momentum
This work aims to extend the previous calculations [150; 151; 152] for the quasi-circular case to the eccentric case. To begin, we solve the EOM of the spin-aligned BBH system in DCS theory. For simplicity, throughout this article only the Newtonian-order and DCS-modified terms are retained, and the PN corrections of GR are dropped, as these can be found in previous publications, e.g., [69] and references therein.
We start from the conserved quantities of a BBH system consisting of two spinning, well-separated bodies, whose masses and spin angular momentum vectors are denoted by \(m_{A}\) and \(\mathbf{S}_{A}\), respectively. The DCS-modified binding energy of such a system in the center-of-mass (COM) frame is given by Refs. [151; 152] as
\[\varepsilon=\varepsilon_{\rm N}+\delta\varepsilon. \tag{6}\]
The Newtonian energy is
\[\varepsilon_{\rm N}=\frac{1}{2}v^{2}-\frac{m}{r} \tag{7}\]
and its DCS modification is determined by so-called "guess-work" [72], which gives
\[\delta\varepsilon=-\frac{1}{3}\delta\varpi\left(\frac{m}{r}\right)^{3}, \tag{8}\]
where \(\mathbf{v}\) is the relative velocity of the BBH system, \(m\equiv m_{1}+m_{2}\) is the total mass, and \(r\) is the distance between two bodies. Following Ref. [152], the correction coefficient is defined as
\[\delta\varpi\equiv\zeta\left\{\frac{75}{256}\frac{1}{\nu}\left(\frac{\mathbf{S}_{ 1}}{m_{1}^{2}}\cdot\frac{\mathbf{S}_{2}}{m_{2}^{2}}\right)-\frac{603}{3584}\left[ \frac{m^{2}}{m_{1}^{2}}\left(\frac{\mathbf{S}_{1}}{m_{1}^{2}}\right)^{2}+\frac{m^ {2}}{m_{2}^{2}}\left(\frac{\mathbf{S}_{2}}{m_{2}^{2}}\right)^{2}\right]\right\}. \tag{9}\]
This coefficient shows two interactions between black holes induced by DCS modification, SS and MQ couplings. The symmetric mass ratio \(\nu\) is defined by \(\nu\equiv m_{1}m_{2}/m^{2}\) and the dimensionless coupling \(\zeta\) is
\[\zeta\equiv 16\pi\frac{\alpha^{2}}{\beta_{0}m^{4}}. \tag{10}\]
\(\alpha\) and \(\beta_{0}\) are the coupling parameters introduced in the DCS action (1). As mentioned, the DCS theory modifies the motion of the BBH system only through quadratic-spin couplings due to parity violation, leaving the non-spinning effects and the SO coupling fully consistent with GR. Note that the terms with \(\hat{\mathbf{n}}\cdot\mathbf{S}_{A}\) have been removed from Eq. (8) because of the spin-aligned assumption.
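For later order-of-magnitude estimates, it is convenient to evaluate \(\delta\varpi\) numerically. The sketch below implements Eq. (9) for aligned dimensionless spins \(\chi_{A}\equiv S_{A}/m_{A}^{2}\); all input values are purely illustrative.

```python
# Minimal sketch of Eq. (9) for aligned spins; inputs are illustrative only.
def delta_varpi(zeta, m1, m2, chi1, chi2):
    m = m1 + m2
    nu = m1 * m2 / m**2                      # symmetric mass ratio
    ss = (75.0 / 256.0) / nu * chi1 * chi2   # spin-spin coupling term
    mq = (603.0 / 3584.0) * ((m / m1)**2 * chi1**2
                             + (m / m2)**2 * chi2**2)  # monopole-quadrupole term
    return zeta * (ss - mq)

print(delta_varpi(zeta=0.1, m1=30.0, m2=20.0, chi1=0.3, chi2=-0.2))
```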
Another important conserved quantity, the OAM, in the COM frame is
\[\mathbf{h}=r(\hat{\mathbf{n}}\times\mathbf{v}). \tag{11}\]
Unlike the conserved energy, however, the quadratic-spin effect does not modify the OAM up to the 2PN approximation [13]. One can find the lowest-order, 3PN, correction in Refs. [174; 102]. The conservation of the OAM indicates that the motions of the bodies are constrained to the orbital plane.
It is worth noting that these modifications to the conserved quantities are valid under three approximations: small coupling, \(\mathcal{O}(\zeta)\); slow rotation, \(\mathcal{O}(S^{2})\); and 2PN, \(\mathcal{O}(v^{4})\). Assuming the parameter \(\alpha\), which represents the strength of the interaction between the scalar and tensor fields, to be weak and the bodies' spins to be sufficiently small admits an analytic black-hole spacetime solution [175; 176]. The PN approximation allows an expansion in terms of the typical velocities of the bodies [69]. The quadratic-spin and 2PN approximations are equivalent because the quadratic-spin effects first enter at the 2PN correction [102; 174].
### Equation of Motion
The motions of the non-precessing BBH systems are constrained to the orbital plane. It is convenient to describe the motion using polar coordinates, the radial coordinate \(r\) and the azimuth coordinate \(\phi\). One can thus define the relative direction vector, \(\hat{\mathbf{n}}=(\cos\phi,\sin\phi,0)\), pointing from the first body to the second, and \(\hat{\mathbf{\lambda}}=(-\sin\phi,\cos\phi,0)\) as another orthogonal direction in the orbital plane. The relative velocity \(\mathbf{v}\) can then be expanded as \(\mathbf{v}=\dot{r}\hat{\mathbf{n}}+r\dot{\phi}\hat{\mathbf{\lambda}}\), where the "dot" denotes the derivative with respect to the time \(t\). Combining the above definitions and the conserved quantities (6, 11), one can re-derive the EOM of the BBH system. The radial and azimuth equations are
\[\dot{r}^{2}=2\varepsilon+2\gamma-j^{2}\gamma^{2}+\frac{2}{3}\delta\varpi \gamma^{3},\quad\text{and}\quad m\dot{\phi}=j\gamma^{2}, \tag{12}\]
respectively, where \(\gamma\equiv m/r\) and \(mj\equiv h=|\mathbf{h}|\). The radial equation can be solved through the following integration,
\[t-t_{0}=\pm\frac{m}{j}\int\frac{(1+\frac{2}{3}\frac{\delta\varpi}{j^{4}})+ \frac{1}{3}\frac{\delta\varpi}{j^{2}}\gamma}{\gamma^{2}\sqrt{(\gamma_{+}- \gamma)(\gamma-\gamma_{-})}}\mathrm{d}\gamma=\pm\int T(\gamma)\mathrm{d}\gamma. \tag{13}\]
In Eq. (13), \(\gamma_{\pm}\) are the values of \(\gamma\) at the periastron and apastron of the binary system, given perturbatively by
\[\gamma_{\pm}=\frac{1\pm\sqrt{1+2j^{2}\varepsilon}}{j^{2}}\left[1\pm\frac{1}{ 3}\frac{\delta\varpi}{j^{4}}\frac{(1\pm\sqrt{1+2j^{2}\varepsilon})^{2}}{\sqrt {1+2j^{2}\varepsilon}}\right]. \tag{14}\]
\(t_{0}\) is an integration constant, representing the time when the bodies first pass through the periastron.
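As a quick consistency check, one can verify numerically that the expressions (14) annihilate the right-hand side of the radial equation (12) to linear order in \(\delta\varpi\); the parameter values below are illustrative (with \(\varepsilon<0\) for a bound orbit).

```python
import numpy as np

# Minimal sketch: check that gamma_± of Eq. (14) are roots of
# rdot^2 = 2*eps + 2*g - j^2*g^2 + (2/3)*dw*g^3 up to O(dw^2).
eps, j, dw = -0.01, 6.0, 1e-4          # illustrative values, eps < 0
s = np.sqrt(1 + 2 * j**2 * eps)
for sgn in (+1, -1):
    g0 = (1 + sgn * s) / j**2           # Newtonian turning point
    g = g0 * (1 + sgn * dw / (3 * j**4) * (1 + sgn * s)**2 / s)
    rdot2 = 2 * eps + 2 * g - j**2 * g**2 + (2.0 / 3.0) * dw * g**3
    print(rdot2)   # residual ~ O(dw^2), far below the O(dw) pieces individually
```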
Similarly, the solution to the azimuth equation has the following form,
\[\phi-\phi_{0}=\pm\int\frac{(1+\frac{2}{3}\frac{\delta\varpi}{j^{4}})+\frac{1} {3}\frac{\delta\varpi}{j^{2}}\gamma}{\sqrt{(\gamma_{+}-\gamma)(\gamma-\gamma_ {-})}}\mathrm{d}\gamma=\pm\int\Phi(\gamma)\mathrm{d}\gamma, \tag{15}\]
where \(\phi_{0}\) is another integration constant, representing the initial azimuth coordinate of the periastron.
Finally, the extra signs in the integrations (13, 15) are determined by whether the bodies move from the periastron to the apastron or vice versa. When the black hole moves from the periastron to the apastron, these integrations are evaluated as
\[t-t_{0}=\int_{\gamma}^{\gamma_{+}}T(\gamma)\mathrm{d}\gamma,\quad\text{and} \quad\phi-\phi_{0}=\int_{\gamma}^{\gamma_{+}}\Phi(\gamma)\mathrm{d}\gamma, \tag{16}\]
and, inversely, they are evaluated as
\[t-t_{0}=\int_{\gamma_{-}}^{\gamma}T(\gamma)\mathrm{d}\gamma+\int_{\gamma_{-}} ^{\gamma_{+}}T(\gamma)\mathrm{d}\gamma,\quad\text{and}\quad\phi-\phi_{0}=\int _{\gamma_{-}}^{\gamma}\Phi(\gamma)\mathrm{d}\gamma+\int_{\gamma_{-}}^{\gamma_ {+}}\Phi(\gamma)\mathrm{d}\gamma. \tag{17}\]
We note that, when the orbits are no longer closed, the concept of the period becomes ambiguous. In general, the time period is defined as the time interval between two consecutive passages of a body through the periastron, and the azimuth interval swept during this period is the azimuth period. In the Newtonian limit, these two periods are related by Kepler's third law. After including the PN corrections, however, the situation is different. It is convenient to introduce an alternative angular variable, the true anomaly [177], denoted by \(V\) and defined as the azimuth measured from the current periastron. The true anomaly passes through \(2\pi\) during a period, while the azimuth advances further. This means that the eccentric motion presents a _doubly periodic structure_ in the PN framework [69].
### Quasi-Keplerian Parameterization
The above integrations (13, 15) in the Newtonian limit (\(\delta\varpi=0\)) give an elliptic orbit described by two parameters, the eccentricity \(e\) and the semimajor axis \(a\). The trajectory, \(r=r(\phi)\), of the relative motion is \(r=a(1-e^{2})/[1+e\cos(\phi-\phi_{0})]\). However, when the PN corrections are considered, the gravitational interaction violates the inverse-square law, and the eccentricity and semimajor axis of an unclosed orbit cannot be well defined. Hence, the corresponding QK parameterization introduces two new elements, the "radial" eccentricity \(e_{r}\) and the "radial" semimajor axis \(a_{r}\) (eccentricity and semimajor axis in short), defined through the periastron and apastron by
\[a_{r}\equiv 2m\frac{\gamma_{+}\gamma_{-}}{\gamma_{+}+\gamma_{-}},\quad\text{ and}\quad e_{r}\equiv\frac{\gamma_{+}-\gamma_{-}}{\gamma_{+}+\gamma_{-}}, \tag{18}\]
respectively, to simplify integrations (13, 15). The modified trajectory is parameterized through
\[\gamma=\frac{\xi}{1-e_{r}\cos u},\quad\text{and}\quad\xi=\frac{m}{a_{r}}. \tag{19}\]
\(u\) is the eccentric anomaly related to the true anomaly by some geometric relationships,
\[\cos V=\frac{\cos u-e_{r}}{1-e_{r}\cos u},\quad\text{and}\quad\sin V=\frac{ \sqrt{1-e_{r}^{2}}\sin u}{1-e_{r}\cos u}, \tag{20}\]
or equivalently, the direct definition is
\[V\equiv 2\arctan\left[\sqrt{\frac{1+e_{r}}{1-e_{r}}}\tan\left(\frac{u}{2} \right)\right]. \tag{21}\]
When the eccentricity \(e_{r}\) is set to zero, the relative distance \(r\) reduces to a constant, such that the quasi-elliptic orbits tend to quasi-circular ones.
Using Eq. (14), one can re-express the elements (18) in terms of the energy and OAM,
\[\xi=-2\varepsilon\left(1-\frac{2}{3}\delta\varpi\frac{\varepsilon}{j^{2}} \right),\quad\text{and}\quad e_{r}=\sqrt{1+2j^{2}\varepsilon}\left(1-\frac{4} {3}\delta\varpi\frac{\varepsilon}{j^{2}}\frac{1+j^{2}\varepsilon}{1+2j^{2} \varepsilon}\right). \tag{22}\]
We note that the above elements are not geometric quantities like those in the Newtonian case, but just two parameters related to the conserved quantities, \(\varepsilon\) and \(j\). Inversely, the conserved energy and OAM can also be represented by these elements; they are
\[\varepsilon=-\frac{\xi}{2}\left(1-\frac{1}{3}\delta\varpi\frac{\xi^{2}}{1-e_{ r}^{2}}\right),\quad\text{and}\quad j=\frac{\sqrt{1-e_{r}^{2}}}{\sqrt{\xi}} \left[1+\frac{1}{6}\delta\varpi\left(\frac{\xi}{1-e_{r}^{2}}\right)^{2}(3+e_{r }^{2})\right], \tag{23}\]
respectively. The parameterized time integration (13) and azimuth integration (15) are given by
\[\begin{split} t&=\frac{m}{j}\frac{\sqrt{1-e_{r}^{2} }}{\xi^{2}}\int_{0}^{u}\left[\left(1+\frac{2}{3}\frac{\delta\varpi}{j^{4}} \right)(1-e_{r}\cos u)+\frac{1}{3}\frac{\delta\varpi}{j^{2}}\xi\right]\mathrm{ d}u,\\ \text{and}\quad\phi&=\sqrt{1-e_{r}^{2}}\int_{0}^{u} \frac{1}{(1-e_{r}\cos u)^{2}}\left[\left(1+\frac{2}{3}\frac{\delta\varpi}{j^ {4}}\right)(1-e_{r}\cos u)+\frac{1}{3}\frac{\delta\varpi}{j^{2}}\xi\right] \mathrm{d}u.\end{split} \tag{24}\]
Without loss of generality, the integration constants \(t_{0}\) and \(\phi_{0}\) have been set to zero.
### Solution and Time, Azimuth Period
After parameterization, the integrations (24) can be evaluated directly through integral formulas
\[\int\frac{\mathrm{d}u}{(1-e_{r}\cos u)^{2}}=\frac{1}{(1-e_{r}^{2})^{3/2}}(e_{r }\sin V+V),\quad\text{and}\quad\int\frac{\cos u\mathrm{d}u}{(1-e_{r}\cos u)^{2 }}=\frac{1}{(1-e_{r}^{2})^{3/2}}(\sin V+e_{r}V). \tag{25}\]
Starting from the Eqs. (24) and (25), we get
\[\begin{split} t(u)&=\frac{m}{j}\frac{\sqrt{1-e_{r}^{2} }}{\xi^{2}}\left\{\left[\left(1+\frac{2}{3}\frac{\delta\varpi}{j^{4}}\right)+ \frac{1}{3}\frac{\delta\varpi}{j^{2}}\xi\right]u-\left(1+\frac{2}{3}\frac{ \delta\varpi}{j^{4}}\right)e_{r}\sin u\right\},\\ \text{and}\quad\phi(u)&=\left[\left(1+\frac{2}{3} \frac{\delta\varpi}{j^{4}}\right)+\frac{1}{3}\frac{\delta\varpi}{j^{2}}\left( \frac{\xi}{1-e_{r}^{2}}\right)\right]V+e_{r}\left[\frac{1}{3}\frac{\delta \varpi}{j^{2}}\left(\frac{\xi}{1-e_{r}^{2}}\right)\right]\sin V.\end{split} \tag{26}\]
From the integration results (26), we can extract two important parameters, the time period and the azimuth period introduced in Sec. III.2; they are given by
\[T\equiv t(u=2\pi)=2\pi m(-2\varepsilon)^{-3/2},\quad\text{and}\quad K\equiv \phi(u=2\pi)=2\pi\left(1+\frac{\delta\varpi}{j^{4}}\right), \tag{27}\]
respectively. At Newtonian order, the orbits of the BBH system are standard ellipses, so the BHs return exactly to their periastron, completing an azimuth period, within one time period. At higher PN orders, however, the azimuth of the BBH advances by more than \(2\pi\) within one time period. The residual azimuth motion is the well-known periastron-advance effect [90; 91; 92; 13].
### Final Parameterization
In this subsection, we summarize the final results of the quasi-Keplerian motion. The time and azimuth of the bodies are expressed as functions of the eccentric or true anomaly, following (26); they read
\[\frac{2\pi}{T}t(u)=u-e_{t}\sin u, \tag{28}\]
and
\[\frac{2\pi}{K}\phi(u)=2\arctan\left[\sqrt{\frac{1+e_{\phi}}{1-e_{\phi}}}\tan \left(\frac{u}{2}\right)\right]\equiv v, \tag{29}\]
respectively. Eq. (28) is usually called the _modified Keplerian equation_. Together with \(r=a_{r}(1-e_{r}\cos u)\), this completes all the steps of the parameterization. In the final results (28, 29), the "time" and "azimuth" eccentricities, \(e_{t}\) and \(e_{\phi}\), are introduced to simplify the otherwise complicated expressions. They are not independent elements but depend on the conserved quantities through
\[e_{t}=e_{r}\left(1+\frac{2}{3}\delta\varpi\cdot\frac{\varepsilon}{j^{2}} \right),\quad\text{and}\quad e_{\phi}=e_{r}\left(1-\frac{2}{3}\delta\varpi \cdot\frac{\varepsilon}{j^{2}}\right). \tag{30}\]
At the same time, we can also write them in terms of "radial" elements by
\[e_{t}=e_{r}\left(1-\frac{1}{3}\delta\varpi\cdot\frac{\xi^{2}}{1-e_{r}^{2}} \right),\quad\text{and}\quad e_{\phi}=e_{r}\left(1+\frac{1}{3}\delta\varpi \cdot\frac{\xi^{2}}{1-e_{r}^{2}}\right). \tag{31}\]
Additionally, the time and azimuth periods are also expressed by elements \(a_{r}\) and \(e_{r}\) as follows,
\[T=2\pi\frac{m}{\xi^{3/2}}\left(1+\frac{1}{2}\delta\varpi\cdot\frac{\xi^{2}}{1 -e_{r}^{2}}\right),\quad\text{and}\quad K=2\pi\left[1+\delta\varpi\left(\frac {\xi}{1-e_{r}^{2}}\right)^{2}\right]. \tag{32}\]
\(\xi\) is one of the tracers of the PN order: an \(n\)PN term generally contains a factor \(\xi^{n}\). We see again that the DCS theory modifies the BBH motion at the 2PN approximation.
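For concreteness, the parameterization can be used numerically by inverting the modified Keplerian equation (28) for \(u\) at a given mean anomaly \(\ell=2\pi t/T\), e.g., by Newton iteration. In the sketch below all element values are illustrative, with \(e_{t}\), \(e_{\phi}\) close to \(e_{r}\) and \(K\) close to \(2\pi\), as Eqs. (31, 32) imply for small \(\delta\varpi\).

```python
import numpy as np

# Minimal sketch: solve l = u - e_t*sin(u) by Newton iteration, then build
# r(u) from r = a_r*(1 - e_r*cos u) and phi(u) from Eq. (29).
def eccentric_anomaly(l, e_t, tol=1e-12):
    u = l.copy()                                   # l is a good initial guess
    for _ in range(50):
        du = (u - e_t * np.sin(u) - l) / (1.0 - e_t * np.cos(u))
        u -= du
        if np.max(np.abs(du)) < tol:
            break
    return u

a_r, e_r, e_t, e_phi = 1.0, 0.30, 0.2999, 0.3001   # illustrative elements
K = 2 * np.pi * 1.0001                             # slight periastron advance
l = np.linspace(0.0, 4 * np.pi, 1000)              # two radial periods
u = eccentric_anomaly(l, e_t)
r = a_r * (1 - e_r * np.cos(u))
v = 2 * np.arctan2(np.sqrt(1 + e_phi) * np.sin(u / 2),
                   np.sqrt(1 - e_phi) * np.cos(u / 2))
phi = (K / (2 * np.pi)) * np.unwrap(v)             # Eq. (29), made continuous
```

Over each radial period, \(\phi\) then gains an extra \(2\pi\beta\) beyond \(2\pi\), which is precisely the periastron advance quantified next.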
The portion of the azimuth period \(K\) that exceeds \(2\pi\) represents the periastron-advance effect, and the precession rate is defined as
\[\beta\equiv\frac{K}{2\pi}-1=\delta\varpi\cdot\frac{\xi^{2}}{(1-e_{r}^{2})^{2 }}. \tag{33}\]
After one radial period, the periastron has advanced by an extra angle \(2\pi\beta\) relative to the previous period. The well-known 1PN analogue is \(6\pi\xi/(1-e_{r}^{2})\), which successfully explains the perihelion advance of Mercury [13]. One can find that the DCS modification enters at the 2PN correction, as shown in our previous work [152]. Another important property of the periastron precession rate is that it does not vanish when the eccentricity is set to zero; in other words, the precession effect formally persists even for circular orbits, which seems paradoxical. This anomaly originates from the ill-defined periastron of a circular orbit. It can be eliminated in the subsequent derivation by defining the gauge-invariant parameter, the orbital frequency
\[\Omega\equiv\frac{K}{T}=\frac{\xi^{3/2}}{m}\left[1+\frac{\delta\varpi}{2}\frac{ \xi^{2}}{(1-e_{r}^{2})^{2}}(1+e_{r}^{2})\right] \tag{34}\]
or the dimensionless frequency
\[x\equiv(m\Omega)^{2/3} \tag{35}\]
which is another tracer of the PN order. \(n\)-PN terms generally present a factor \(x^{n}\). Additionally, Kepler's third law with higher-order modification, relating the elements and the frequency, is
\[\xi=x\left[1-\frac{\delta\varpi}{3}\frac{(1+e_{r}^{2})}{(1-e_{r}^{2})^{2}}x^{2 }\right]. \tag{36}\]
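As a simple consistency check of Eqs. (34)-(36), inserting \(\xi(x)\) from the modified Kepler law into \(m\Omega\) must return \(x\) up to \(\mathcal{O}(\delta\varpi^{2})\); the numerical values below are illustrative.

```python
# Minimal sketch: consistency of Eqs. (34) and (36); values are illustrative.
dw, x, e_r = 0.05, 0.02, 0.3
fac = (1 + e_r**2) / (1 - e_r**2)**2
xi = x * (1 - dw / 3 * fac * x**2)               # Eq. (36)
m_Omega = xi**1.5 * (1 + dw / 2 * fac * xi**2)   # Eq. (34), in units of 1/m
print(m_Omega**(2.0 / 3.0), x)                   # agree to O(dw^2)
```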
For the calculation in the next section, we also present the parameterization of the time derivatives of radial and azimuth coordinates, \(\dot{r}\) and \(\dot{\phi}\),
\[\begin{split}\dot{r}&=\sqrt{\xi}\frac{e_{r}\sin u}{ 1-e_{r}\cos u}\left[1-\frac{1}{6}\delta\varpi\cdot\frac{\xi^{2}(3-e_{r}\cos u )}{(1-e_{r}^{2})(1-e_{r}\cos u)}\right],\\ \text{and}&\dot{\phi}&=\frac{\xi^{3/2} }{m}\frac{\sqrt{1-e_{r}^{2}}}{(1-e_{r}\cos u)^{2}}\left[1+\frac{1}{6}\delta \varpi\cdot\frac{\xi^{2}(3+e_{r}^{2})}{(1-e_{r}^{2})^{2}}\right].\end{split} \tag{37}\]
## IV Gravitational radiation
At present, we have provided a complete solution to the BBH motion in DCS gravity, Eqs. (28) and (29), supplemented by the parameterization \(r=a_{r}(1-e_{r}\cos u)\). We now turn to the radiation field observed in the far zone, with inclination angle \(\iota\) and azimuth angle \(\omega\). The line-of-sight vector is \(\hat{\mathbf{N}}=(\sin\iota\sin\omega,\sin\iota\cos\omega,\cos\iota)\) and the distance is \(R\). As mentioned above, due to the periastron advance, the azimuth coordinate \(\phi\) is not a suitable periodic angular variable. Instead, the true anomaly \(V\) and the mean anomaly \(\ell\) are usually used to describe the waveform. In this section, we focus on the true-anomaly representation. From Eq. (29), the relationship between \(\phi\) and \(V\) is given by
\[\phi=\frac{K}{\pi}\arctan\left[\sqrt{\frac{1+e_{\phi}}{1-e_{\phi}}}\tan\left(\frac{u}{2}\right)\right]\simeq V(1+\beta)+\frac{1}{3}\delta\varpi\cdot\frac{e_{r}\xi^{2}}{(1-e_{r}^{2})^{2}}\sin V, \tag{38}\]
up to linear order in the coupling \(\zeta\). This relation gives the periodic functions
\[\begin{split}\sin\phi&=\sin[(1+\beta)V]+\frac{1}{3} \delta\varpi\cdot\frac{e_{r}\xi^{2}}{(1-e_{r}^{2})^{2}}\cos[(1+\beta)V]\sin V,\\ \text{and}&\cos\phi&=\cos[(1+\beta)V]- \frac{1}{3}\delta\varpi\cdot\frac{e_{r}\xi^{2}}{(1-e_{r}^{2})^{2}}\sin[(1+ \beta)V]\sin V.\end{split} \tag{39}\]
Although the parameter \(\beta\) is always a perturbative quantity of order \(\mathcal{O}(\zeta\xi^{2})\), the product \(\beta V\) cannot be regarded as small, since \(V\) grows without bound. By contrast, the bounded functions \(\sin V\) and \(\cos V\) always allow a Taylor expansion in the small quantities \(\delta\varpi\sin V\) and \(\delta\varpi\cos V\).
Eq. (39) reveals a doubly-periodic structure in the BBH motion. The first period is set by \(\sin[(1+\beta)V]\) and \(\cos[(1+\beta)V]\): within it, the azimuth of the bodies sweeps through an angle of \(2\pi\). The second is set by \(\sin V\) and \(\cos V\): within it, the bodies return to the periastron. This structure will enter the gravitational waveform displayed below.
### Scalar Radiation
The full scalar radiation at infinity was derived in our previous work [152],
\[\vartheta=\frac{2m\nu}{R}\cdot\frac{5}{16}\gamma^{2}\frac{\alpha}{\beta_{0}m^{2}} \frac{1}{\nu}[(\hat{\mathbf{n}}\cdot\tilde{\mathbf{\Delta}})+(\hat{\mathbf{N}}\cdot \tilde{\mathbf{\Delta}})(\hat{\mathbf{N}}\cdot\hat{\mathbf{n}})]. \tag{40}\]
The difference between the bodies' spins is
\[\tilde{\mathbf{\Delta}}\equiv\frac{m_{2}}{m}\frac{\mathbf{S}_{1}}{m_{1}^{2}}-\frac{m_{ 1}}{m}\frac{\mathbf{S}_{2}}{m_{2}^{2}}, \tag{41}\]
which acquires a minus sign when the labels \(1\leftrightarrow 2\) are exchanged, and vanishes when the masses and spins of the two black holes are exactly equal, \(m_{1}=m_{2}\) and \(\mathbf{S}_{1}=\mathbf{S}_{2}\). This shows again that the scalar field is a pseudoscalar and that the scalar radiation is itself a parity-violating effect. For non-precessing binaries, in which \((\hat{\mathbf{n}}\cdot\tilde{\mathbf{\Delta}})=0\), the radiation field reduces to
\[\vartheta=\frac{2m\nu}{R}\cdot\frac{5}{16}\frac{\alpha}{\beta_{0}m^{2}}\frac{ \gamma^{2}}{\nu}(\hat{\mathbf{N}}\cdot\tilde{\mathbf{\Delta}})(\hat{\mathbf{N}} \cdot\hat{\mathbf{n}}). \tag{42}\]
Substituting Eq. (39) into (42), we get
\[\vartheta=-\frac{2m\nu}{R}\frac{5}{16}\frac{\alpha}{\beta_{0}m^{2}}\frac{ \xi^{2}}{\nu}\frac{(1+e_{r}\cos V)^{2}}{(1-e_{r}^{2})^{2}}\tilde{\Delta}\sin \iota\cos\iota\sin(V+\omega), \tag{43}\]
with \(\tilde{\Delta}\equiv|\tilde{\mathbf{\Delta}}|\). When the eccentricity and the observation azimuth are set to zero, this expression reduces to the quasi-circular result [150; 152].
### Tensor Radiation
In the PN framework, the gravitational waveform contains an "instantaneous" term, depending only on the state of the binary at the retarded time, and a "tail" term, which is sensitive to the wave field at all previous times [78]. The quadratic-spin correction does not change the tail terms in a non-precessing system [102; 152], so we focus on the "instantaneous" term only. In the transverse-traceless (TT) gauge, the metric tensor of the GW is
\[(h_{ij}^{\rm TT})_{\rm inst}=\frac{2\nu m}{R}\xi_{ij}^{\rm TT}=\frac{2\nu m}{ R}\hat{\Lambda}_{ij,kl}\xi_{kl}, \tag{44}\]
where \(\hat{\Lambda}_{ij,kl}\) is the TT-projection operator, defined as \(\hat{\Lambda}_{ij,kl}(\hat{\mathbf{N}})\equiv\Pi_{ik}\Pi_{jl}-(1/2)\Pi_{ij}\Pi _{kl}\), with \(\Pi_{ij}\equiv\delta_{ij}-\hat{N}_{i}\hat{N}_{j}\). The reduced metric tensor is decomposed as a Newtonian term and the DCS modification,
\[\xi_{ij}=\xi_{ij}^{(0)}+\delta\xi_{ij}. \tag{45}\]
The Newtonian term is
\[\xi_{ij}^{(0)}=2(v^{i}v^{j}-\gamma\hat{n}^{i}\hat{n}^{j}), \tag{46}\]
and its components are
\[\xi_{11}^{(0)} =2\left[(\dot{r}\cos\phi-r\dot{\phi}\sin\phi)^{2}-\frac{m}{r}\cos ^{2}\phi\right],\] \[\xi_{12}^{(0)} =2\left[(\dot{r}\sin\phi+r\dot{\phi}\cos\phi)(\dot{r}\cos\phi-r \dot{\phi}\sin\phi)-\frac{m}{r}\sin\phi\cos\phi\right], \tag{47}\] \[\xi_{22}^{(0)} =2\left[(\dot{r}\sin\phi+r\dot{\phi}\cos\phi)^{2}-\frac{m}{r}\sin ^{2}\phi\right].\]
The DCS modification is [152]
\[\delta\xi_{ij}=-2\cdot\delta\varpi\cdot\gamma^{3}\hat{n}_{i}\hat{n}_{j}=- \delta\varpi\left(\frac{m}{r}\right)^{3}\left(\begin{array}{cc}2\cos^{2} \phi&2\sin\phi\cos\phi\\ 2\sin\phi\cos\phi&2\sin^{2}\phi\end{array}\right). \tag{48}\]
Using the rotation matrix,
\[\mathbf{\mathcal{R}}\equiv\mathbf{\mathcal{R}}_{z}(\omega)\mathbf{\mathcal{R}}_{x}(\iota)= \left(\begin{array}{ccc}\cos\omega&-\sin\omega&0\\ \sin\omega&\cos\omega&0\\ 0&0&1\end{array}\right)\left(\begin{array}{ccc}1&0&0\\ 0&\cos\iota&-\sin\iota\\ 0&\sin\iota&\cos\iota\end{array}\right), \tag{49}\]
we transform the metric tensor from the binary frame to the propagation frame along the observational direction, \(\xi_{ij}(\tilde{\mathbf{N}})=\mathcal{R}_{ik}\mathcal{R}_{jl}\cdot\xi_{kl}\). The TT projection \(\hat{\Lambda}_{ij,kl}\xi_{kl}\) then gives the waveforms \(\xi_{ij}^{\rm TT}\) in the TT gauge, whose plus and cross modes are
\[\xi_{+}\equiv\xi_{11}^{\rm TT}=\xi_{+}^{(0)}+\delta\xi_{+}\quad\text{and} \quad\xi_{\times}\equiv\xi_{12}^{\rm TT}=\xi_{\times}^{(0)}+\delta\xi_{\times}, \tag{50}\]
where
\[\begin{split}\xi_{+}^{(0)}&=\left[\left(\dot{r}^{2}-r^{2} \dot{\phi}^{2}-\gamma\right)\cos(2\omega)-2r\dot{r}\dot{\phi}\sin(2\omega) \right]\frac{1+\cos^{2}\iota}{2}\cos(2\phi)\\ &+\left[-\left(\dot{r}^{2}-r^{2}\dot{\phi}^{2}-\gamma\right)\sin(2 \omega)-2r\dot{r}\dot{\phi}\cos(2\omega)\right]\frac{1+\cos^{2}\iota}{2}\sin( 2\phi)+\frac{1}{2}\sin^{2}\iota\left[(\dot{r}^{2}+r^{2}\dot{\phi}^{2})-\gamma \right],\\ \xi_{\times}^{(0)}&=\left[\sin(2\omega)\left(\dot{r}^{2}-r^{2} \dot{\phi}^{2}-\gamma\right)+2r\dot{r}\dot{\phi}\cos(2\omega)\right]\cos\iota \cos(2\phi)\\ &+\left[\cos(2\omega)\left(\dot{r}^{2}-r^{2}\dot{\phi}^{2}-\gamma \right)-2r\dot{r}\dot{\phi}\sin(2\omega)\right]\cos\iota\sin(2\phi),\end{split} \tag{51}\]
and
\[\begin{split}\delta\xi_{+}&=-\delta\varpi\cdot \gamma^{3}\left[\frac{1+\cos^{2}\iota}{2}+\frac{\sin^{2}\iota}{2}\cos(2\phi+2 \omega)\right],\\ \delta\xi_{\times}&=-\delta\varpi\cdot\gamma^{3} \cos\iota\sin(2\phi+2\omega).\end{split} \tag{52}\]
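As a numerical sanity check of this projection chain, the following Python sketch implements the TT projector \(\hat{\Lambda}_{ij,kl}\) of Eq. (44) and verifies transversality and tracelessness for an arbitrary symmetric tensor; the observer angles and tensor entries are placeholder values.

```python
import numpy as np

def tt_project(xi, N):
    # TT projector of Eq. (44): Lambda_{ij,kl} = Pi_ik Pi_jl - Pi_ij Pi_kl / 2,
    # with Pi_ij = delta_ij - N_i N_j
    Pi = np.eye(3) - np.outer(N, N)
    return np.einsum('ik,jl,kl->ij', Pi, Pi, xi) - 0.5*Pi*np.einsum('kl,kl->', Pi, xi)

iota, omega = 0.4, 1.1  # arbitrary observer angles
N = np.array([np.sin(iota)*np.sin(omega), np.sin(iota)*np.cos(omega), np.cos(iota)])

xi = np.array([[1.0, 0.2, 0.3],
               [0.2, 2.0, 0.1],
               [0.3, 0.1, 3.0]])         # arbitrary symmetric source tensor
xi_tt = tt_project(xi, N)

print(np.allclose(xi_tt @ N, 0.0))       # transverse: xi^TT . N = 0
print(np.isclose(np.trace(xi_tt), 0.0))  # traceless
```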
As we have commented in Sec. II, there are no extra modes in the gravitational radiation.
### Waveforms in Terms of True Anomaly
In this subsection, we present the final results for the GW polarizations in terms of the true anomaly. Substituting \(r=a_{r}(1-e_{r}\cos u)\) and Eqs. (39), (37), and (20) into Eqs. (51) and (52), the final waveforms are
\[\begin{split}\xi_{+}^{(0)}&=\frac{\xi}{1-e_{r}^{2}} \left\{-\frac{1}{2}e_{r}^{2}\Big{[}(1+\cos^{2}\iota)\cos(2\beta V+2\omega)- \sin^{2}\iota\Big{]}\right.\\ &\qquad\left.-\frac{5}{4}e_{r}(1+\cos^{2}\iota)\cos[(1+2\beta)V +2\omega]\cos V+\frac{1}{2}e_{r}\sin^{2}\iota\cos V\right.\\ &\qquad\left.-(1+\cos^{2}\iota)\cos[(2+2\beta)V+2\omega]-\frac{1} {4}e_{r}(1+\cos^{2}\iota)\cos[(3+2\beta)V+2\omega]\right\},\\ \xi_{\times}^{(0)}&=\frac{\xi}{1-e_{r}^{2}}\left\{-e _{r}^{2}\sin(2\beta V+2\omega)-\frac{5}{2}e_{r}\sin[(1+2\beta)V+2\omega]\right. \\ &\qquad\left.-2\sin[(2+2\beta)V+2\omega]-\frac{1}{2}e_{r}\sin[(3+2 \beta)V+2\omega]\right\}\cos\iota,\end{split} \tag{53}\]
and
\[\begin{split}\delta\xi_{+}&=\frac{\xi^{3}}{(1-e_{r}^{2} )^{3}}\delta\varpi\left\{-\frac{1}{12}e_{r}(11+6e_{r}^{2})(1+\cos^{2}\iota)\cos [(1+2\beta)V+2\omega]\right.\\ &-\frac{1}{8}e_{r}(4+e_{r}^{2})\sin^{2}\iota\cos V+\frac{3}{8}e_ {r}^{3}(1+\cos^{2}\iota)\sin(2\beta V+2\omega)\sin V\\ &-\frac{1}{4}e_{r}^{2}\sin^{2}\iota\cos 2V-\frac{1}{4}(4+7e_{r}^{2} )(1+\cos^{2}\iota)\cos[(2+2\beta)V+2\omega]\\ &-\frac{1}{24}e_{r}^{3}\sin^{2}\iota\cos 3V-\frac{1}{48}e_{r}(76+1 3e_{r}^{2})(1+\cos^{2}\iota)\cos[(3+2\beta)V+2\omega]\\ &\left.-\frac{13}{24}e_{r}^{2}(1+\cos^{2}\iota)\cos[(4+2\beta)V +2\omega]-\frac{1}{16}e_{r}^{3}(1+\cos^{2}\iota)\cos[(5+2\beta)V+2\omega] \right\},\end{split} \tag{54}\]
\[\begin{split}\delta\xi_{\times}&=\frac{\xi^{3}}{(1-e _{r}^{2})^{3}}\delta\varpi\left\{-\frac{1}{6}e_{r}(11+6e_{r}^{2})\cos\iota\sin [(1+2\beta)V+2\omega]-\frac{3}{4}e_{r}^{3}\cos\iota\cos(2\beta V+2\omega)\sin V \right.\\ &-\frac{1}{2}(4+7e_{r}^{2})\cos\iota\sin[(2+2\beta)V+2\omega]- \frac{1}{24}e_{r}(76+13e_{r}^{2})\cos\iota\sin[(3+2\beta)V+2\omega]\\ &\left.-\frac{13}{12}e_{r}^{2}\cos\iota\sin[(4+2\beta)V+2\omega]- \frac{1}{8}e_{r}^{3}\cos\iota\sin[(5+2\beta)V+2\omega]\right\}.\end{split} \tag{55}\]
The waveforms for the quasi-circular case, with \(e_{r}=0\) and \(\omega=0\), are
\[\begin{split}\xi_{+}^{(0)}&=-\xi(1+\cos^{2}\iota) \cos(2+2\beta)V=-\xi(1+\cos^{2}\iota)\cos 2\phi,\\ \xi_{\times}^{(0)}&=-2\xi\cos\iota\sin(2+2\beta)V=- 2\xi\cos\iota\sin 2\phi,\end{split} \tag{56}\]
and
\[\begin{split}\delta\xi_{+}&=-\delta\varpi\cdot\xi^ {3}(1+\cos^{2}\iota)\cos(2+2\beta)V=-\delta\varpi\cdot\xi^{3}(1+\cos^{2} \iota)\cos 2\phi\\ \delta\xi_{\times}&=-2\cdot\delta\varpi\cdot\xi^{3} \cos\iota\sin(2+2\beta)V=-2\cdot\delta\varpi\cdot\xi^{3}\cos\iota\sin 2\phi, \end{split} \tag{57}\]
recovering the prediction of our previous work [152].
We briefly summarize the features of these waveforms. First, and most importantly, there are no extra polarization modes, in agreement with Refs. [173; 152]: the scalar field produces neither the breathing mode of massless Brans-Dicke gravity [134] nor the longitudinal mode of massive Brans-Dicke gravity [135]. Second, there are no parity-violating effects, because cosmic expansion is neglected. Third, the frequency spectrum is richer than in the quasi-circular case: the GWs are emitted at a set of discrete phases \(\{V,2V,3V\}\) at Newtonian order and \(\{V,2V,3V,4V,5V\}\) at DCS order, rather than at the single phase \(2V\) of circular motion. Note that these extra frequency modes disappear in the circular limit. Finally, the waveforms are modulated by the periastron-advance effect discussed in the treatment of the BBH motion. This modulation does not vanish for \(e_{r}=0\), because the periastron is ill-defined in the circular limit; therefore, we must take the replacement \((1+\beta)V\rightarrow\phi\) to complete the calculation in Eqs. (56) and (57).
## V Radiation Reaction
### Energy Flux
Although the scalar radiation does not influence the GW polarizations, it still carries energy and angular momentum, changing the orbital elements of the BBH orbit. The total radiated energy carried by the scalar radiation is defined as
\[\mathcal{F}_{S}=\beta_{0}R^{2}\oint_{\partial\Omega}\langle\dot{\vartheta}^{2} \rangle d\Omega, \tag{58}\]
in which the orbital average is defined as
\[\langle\dot{\vartheta}^{2}\rangle=\frac{1}{T}\int_{0}^{T}\left(\frac{\partial\vartheta}{\partial t}\right)^{2}dt=\frac{1}{T}\int_{0}^{2\pi}\left(\frac{\partial\vartheta}{\partial V}\right)^{2}\frac{dV}{du}\frac{du}{dt}dV.\]
Using the definition of true anomaly (20) and the time integration (13), we obtain
\[\frac{dV}{du}=\frac{\sqrt{1-e_{r}^{2}}}{1-e_{r}\cos u}=\frac{1+e_{r}\cos V}{\sqrt{ 1-e_{r}^{2}}} \tag{59}\]
and
\[\frac{dt}{du}\approx\frac{m}{\xi^{3/2}}\frac{1-e_{r}^{2}}{1+e_{r}\cos V}\left[ 1+\frac{\delta\varpi}{6}\frac{\xi^{2}}{(1-e_{r}^{2})^{2}}(3+2e_{r}\cos V-e_{r} ^{2})\right] \tag{60}\]
up to linear order in the coupling. After a lengthy calculation, the orbital average appearing in Eq. (58) evaluates to
\[\beta_{0}\langle\dot{\vartheta}^{2}\rangle=\frac{1}{8\pi}\frac{25}{256}\frac{ \nu^{2}\xi^{7}}{(1-e_{r}^{2})^{11/2}}\Delta^{2}\sin^{2}\iota\cos^{2}\iota\left[ 1+\frac{19}{2}e_{r}^{2}+\frac{69}{8}e_{r}^{4}+\frac{9}{16}e_{r}^{6}-\frac{1} {4}e_{r}^{2}\left(1+5e_{r}^{2}+\frac{9}{16}e_{r}^{4}\right)\cos 2\omega \right], \tag{61}\]
where
\[\Delta^{2}\equiv(\zeta/\nu^{2})\tilde{\Delta}^{2}=\zeta\left\{-\frac{2}{\nu} \left(\frac{\mathbf{S}_{1}}{m_{1}^{2}}\cdot\frac{\mathbf{S}_{2}}{m_{2}^{2}}\right)+ \left[\frac{m^{2}}{m_{1}^{2}}\left(\frac{\mathbf{S}_{1}}{m_{1}^{2}}\right)^{2}+ \frac{m^{2}}{m_{2}^{2}}\left(\frac{\mathbf{S}_{2}}{m_{2}^{2}}\right)^{2}\right] \right\}. \tag{62}\]
This new symbol has the same structure as \(\delta\varpi\) in Eq. (9) and likewise encodes the DCS-induced modification to the SS and MQ couplings. The solid-angle integration in Eq. (58) is
\[R^{2}\oint_{\partial\Omega}d\Omega=R^{2}\int_{0}^{2\pi}d\omega\int_{0}^{\pi} \sin\iota d\iota. \tag{63}\]
The above calculation finally yields the energy flux carried by the scalar radiation,
\[\mathcal{F}_{S}=\frac{32}{5}\frac{\nu^{2}x^{5}}{(1-e_{r}^{2})^{7/2}}\cdot \frac{25}{24576}\Delta^{2}\frac{x^{2}}{(1-e_{r}^{2})^{2}}\left[1+\frac{19}{2} e_{r}^{2}+\frac{69}{8}e_{r}^{4}+\frac{9}{16}e_{r}^{6}\right]. \tag{64}\]
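The orbital averages above rest on the Jacobians (59)-(60); as a minimal numerical check, the following sketch verifies the two forms of \(dV/du\) in Eq. (59), assuming the standard true-anomaly relation \(V=2\arctan[\sqrt{(1+e_{r})/(1-e_{r})}\tan(u/2)]\) of Eq. (20). The sample values of \(e_{r}\) and \(u\) are arbitrary.

```python
import numpy as np

e_r, u = 0.3, 0.9   # arbitrary sample values

# Standard true-anomaly relation (cf. Eq. (20))
V = 2*np.arctan(np.sqrt((1 + e_r)/(1 - e_r))*np.tan(u/2))

lhs = np.sqrt(1 - e_r**2)/(1 - e_r*np.cos(u))   # first form of Eq. (59)
rhs = (1 + e_r*np.cos(V))/np.sqrt(1 - e_r**2)   # second form of Eq. (59)
print(lhs, rhs, np.isclose(lhs, rhs))
```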
Now we turn to the flux of tensor radiation, which is defined as
\[\mathcal{F}_{T}=\frac{1}{32\pi}R^{2}\oint_{\partial\Omega}\langle\dot{h}_{jk} ^{\rm TT}\dot{h}_{jk}^{\rm TT}\rangle d\Omega=\frac{1}{16\pi}R^{2}\oint_{ \partial\Omega}\langle\dot{h}_{+}^{2}+\dot{h}_{\times}^{2}\rangle d\Omega= \frac{1}{4\pi}(\nu m)^{2}\oint_{\partial\Omega}\langle\dot{\xi}_{+}^{2}+\dot{ \xi}_{\times}^{2}\rangle d\Omega. \tag{65}\]
Following similar steps, we obtain
\[\mathcal{F}_{T}=\frac{32}{5}\frac{\nu^{2}x^{5}}{(1-e_{r}^{2})^{7/2}}\left[ \left(1+\frac{73}{24}e_{r}^{2}+\frac{37}{96}e_{r}^{4}\right)+\delta\varpi \cdot\frac{x^{2}}{(1-e_{r}^{2})^{2}}\left(\frac{4}{3}+\frac{449}{36}e_{r}^{2}+ \frac{1195}{144}e_{r}^{4}+\frac{11}{48}e_{r}^{6}\right)\right]. \tag{66}\]
Finally, the total dissipated energy is the sum of the scalar flux (64) and the tensor flux (66),
\[\mathcal{F}\equiv\mathcal{F}_{S}+\mathcal{F}_{T}=\frac{32}{5} \frac{\nu^{2}x^{5}}{(1-e_{r}^{2})^{7/2}}\Bigg{\{}\left(1+\frac{73}{24}e_{r}^{2 }+\frac{37}{96}e_{r}^{4}\right) \tag{67}\] \[\qquad\qquad+\frac{x^{2}}{(1-e_{r}^{2})^{2}}\Bigg{[}\left(\frac{2 5}{24576}\Delta^{2}+\frac{4}{3}\delta\varpi\right)+\left(\frac{475}{49152} \Delta^{2}+\frac{449}{36}\delta\varpi\right)e_{r}^{2}\] \[\qquad\qquad\qquad+\left(\frac{575}{65536}\Delta^{2}+\frac{1195} {144}\delta\varpi\right)e_{r}^{4}+\left(\frac{75}{131072}\Delta^{2}+\frac{1 1}{48}\delta\varpi\right)e_{r}^{6}\Bigg{]}\Bigg{\}}.\]
Setting the coupling to zero recovers the flux in GR, which can be found in many classic references and textbooks, e.g., [13; 57; 58; 69; 82]. Relative to the leading order, the DCS modification appears at 2PN order. Setting the eccentricity \(e_{r}\) to zero, this result is consistent with that obtained in [152]. (We note some typos in our previous work [152]: the coefficient before the coupling \(\delta\varpi\) in its Eq. (135) should be 4/3 rather than 8/3, which affects the related coefficients in its Eqs. (140, 148, 157).)
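For orientation, a minimal sketch evaluating the total flux (67) at sample parameter values; the inputs (\(x\), \(e\), \(\nu\), \(\Delta^{2}\), \(\delta\varpi\)) are purely illustrative placeholders.

```python
import numpy as np

def total_flux(x, e, nu, Delta2, dvarpi):
    # Total energy flux of Eq. (67): GR enhancement plus 2PN DCS correction
    gr = 1 + (73/24)*e**2 + (37/96)*e**4
    dcs = (x**2/(1 - e**2)**2)*((25/24576)*Delta2 + (4/3)*dvarpi
        + ((475/49152)*Delta2 + (449/36)*dvarpi)*e**2
        + ((575/65536)*Delta2 + (1195/144)*dvarpi)*e**4
        + ((75/131072)*Delta2 + (11/48)*dvarpi)*e**6)
    return (32/5)*nu**2*x**5/(1 - e**2)**3.5*(gr + dcs)

# Hypothetical values, purely for illustration
print(total_flux(x=0.05, e=0.2, nu=0.25, Delta2=1e-2, dvarpi=1e-3))
```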
### Angular-Momentum Flux
Because eccentric orbits are described by two independent parameters (the semimajor axis and the eccentricity), the secular evolution is governed not only by the energy balance but also by that of the OAM. The angular-momentum flux is likewise carried by both the scalar and tensor radiation. The scalar sector is defined as
\[\mathcal{L}_{S}^{k}=-\beta_{0}R^{2}\int\epsilon_{ijk}\langle\dot{\vartheta}x_{i} \partial_{j}\vartheta\rangle d\Omega. \tag{68}\]
The vector \(\mathbf{x}\) is just the direction of the observer, \(\mathbf{x}\equiv\hat{\mathbf{N}}\), and the gradient operator is \(\partial_{j}\equiv\partial/\partial x_{j}\). For simplicity, we define the flux-density vector as \(\tau_{S}^{k}=-\epsilon_{ijk}\dot{\vartheta}x_{i}\partial_{j}\vartheta\), with components
\[\tau_{S}^{x} =-\cot\iota\sin\omega(\partial_{\omega}\vartheta)\dot{\vartheta} +\cos\omega(\partial_{i}\vartheta)\dot{\vartheta},\] \[\tau_{S}^{y} =-\sin\omega(\partial_{\iota}\vartheta)\dot{\vartheta}-\cot\iota \cos\omega(\partial_{\omega}\vartheta)\dot{\vartheta}, \tag{69}\] \[\text{and}\quad\tau_{S}^{z} =(\partial_{\omega}\vartheta)\dot{\vartheta},\]
respectively. Putting the scalar radiation (43) into this definition and averaging it, we get
\[\beta_{0}\langle\tau_{S}^{x}\rangle =-\frac{25}{65536}\frac{(2\nu m)^{2}}{\pi}\frac{\xi^{11/2}}{m} \Delta^{2}\cdot\frac{8+24e_{r}^{2}+3e_{r}^{4}}{(1-e_{r}^{2})^{4}}\cdot\sin \iota\cos^{3}\iota\sin\omega,\] \[\beta_{0}\langle\tau_{S}^{y}\rangle =-\frac{25}{65536}\frac{(2\nu m)^{2}}{\pi}\frac{\xi^{11/2}}{m} \Delta^{2}\cdot\frac{8+24e_{r}^{2}+3e_{r}^{4}}{(1-e_{r}^{2})^{4}}\cdot\sin \iota\cos^{3}\iota\cos\omega, \tag{70}\] \[\beta_{0}\langle\tau_{S}^{z}\rangle =\frac{25}{65536}\frac{(2\nu m)^{2}}{\pi}\frac{\xi^{11/2}}{m} \Delta^{2}\cdot\frac{8+24e_{r}^{2}+3e_{r}^{4}}{(1-e_{r}^{2})^{4}}\cdot\sin^{2 }\iota\cos^{2}\iota.\]
The \(x\)- and \(y\)-components are proportional to \(\sin\omega\) and \(\cos\omega\), respectively, so integrating them over the full solid angle gives zero. The only non-zero component is \(\tau_{S}^{z}\); this is a consequence of the non-precessing assumption. The full solid-angle integral gives the \(z\)-component of the angular-momentum flux carried by the scalar radiation,
\[\mathcal{L}_{S}^{z}=\frac{32}{5}\frac{m\nu^{2}x^{7/2}}{(1-e_{r}^{2})^{2}}\cdot \frac{25}{24576}\Delta^{2}\frac{x^{2}}{(1-e_{r}^{2})^{2}}\left(1+3e_{r}^{2}+ \frac{3}{8}e_{r}^{4}\right). \tag{71}\]
Now we turn to the angular-momentum flux carried by the tensor radiation. The flux is defined as
\[\mathcal{L}_{T}^{k} =\frac{1}{32\pi}R^{2}\int\epsilon^{ijk}\langle 2h_{il}^{\rm TT} \dot{h}_{jl}^{\rm TT}-\dot{h}_{lm}^{\rm TT}x_{i}\partial_{j}h_{lm}^{\rm TT}\rangle d\Omega \tag{72}\] \[=\frac{1}{8\pi}(\nu m)^{2}\int\epsilon^{ijk}\langle 2\xi_{il}^{ \rm TT}\dot{\xi}_{jl}^{\rm TT}-\dot{\xi}_{lm}^{\rm TT}x_{i}\partial_{j}\xi_{lm }^{\rm TT}\rangle d\Omega,\]
and the corresponding flux density is
\[\tau_{T}^{k}=\epsilon^{ijk}(2\nu m)^{2}(2\xi_{il}^{\rm TT}\dot{\xi}_{jl}^{\rm TT }-\dot{\xi}_{lm}^{\rm TT}x_{i}\partial_{j}\xi_{lm}^{\rm TT}), \tag{73}\]
with components
\[\tau_{T}^{x} =(2\nu m)^{2}\left\{2\cos\omega\Big{[}(\partial_{\iota}\xi_{+}) \dot{\xi}_{+}+(\partial_{\iota}\xi_{\times})\dot{\xi}_{\times}\Big{]}-2\cot \iota\sin\omega\Big{[}(\partial_{\omega}\xi_{+})\dot{\xi}_{+}+(\partial_{\omega }\xi_{\times})\dot{\xi}_{\times}\Big{]}\right\}, \tag{74}\] \[\tau_{T}^{y} =(2\nu m)^{2}\left\{-2\sin\omega\Big{[}(\partial_{\iota}\xi_{+}) \dot{\xi}_{+}+(\partial_{\iota}\xi_{\times})\dot{\xi}_{\times}\Big{]}-2\cot \iota\cos\omega\Big{[}(\partial_{\omega}\xi_{+})\dot{\xi}_{+}+(\partial_{\omega }\xi_{\times})\dot{\xi}_{\times}\Big{]}\right\},\] \[\tau_{T}^{z} =(2\nu m)^{2}\left\{12\Big{[}\dot{\xi}_{\times}\xi_{+}-\dot{\xi}_{ +}\xi_{\times}\Big{]}+2\Big{[}(\partial_{\omega}\xi_{\times})\dot{\xi}_{\times }+(\partial_{\omega}\xi_{+})\dot{\xi}_{+}\Big{]}\right\}.\]
We find again that the \(x\)- and \(y\)-components contain factors \(\cos\omega\) and \(\sin\omega\), which vanish upon full solid-angle integration. The remaining component is
\[\mathcal{L}_{T}^{z}=\frac{32}{5}\frac{m\nu^{2}x^{7/2}}{(1-e_{r}^{2})^{2}}\cdot \left[\left(1+\frac{7}{8}e_{r}^{2}\right)+\delta\varpi\cdot\frac{x^{2}}{(1-e_{ r}^{2})^{2}}\left(\frac{4}{3}+\frac{16}{3}e_{r}^{2}+\frac{35}{48}e_{r}^{4} \right)\right]. \tag{75}\]
Combining Eqs. (71) and (75), we obtain the total OAM flux,
\[\begin{split}\mathcal{L}\equiv\mathcal{L}_{S}^{z}+\mathcal{L}_{T}^{ z}&=\frac{32}{5}\frac{m\nu^{2}x^{7/2}}{(1-e_{r}^{2})^{2}}\cdot\Bigg{\{} \left(1+\frac{7}{8}e_{r}^{2}\right)\\ &+\frac{x^{2}}{(1-e_{r}^{2})^{2}}\left[\left(\frac{25}{24576} \Delta^{2}+\frac{4}{3}\delta\varpi\right)+\left(\frac{25}{8192}\Delta^{2}+ \frac{16}{3}\delta\varpi\right)e_{r}^{2}+\left(\frac{25}{65536}\Delta^{2}+ \frac{35}{48}\delta\varpi\right)e_{r}^{4}\right]\Bigg{\}}.\end{split} \tag{76}\]
The first term is just the leading-order GR result [58; 13]. Setting \(e_{r}\) to zero, the OAM flux is related to the energy flux by \(\mathcal{F}=\Omega\cdot\mathcal{L}\) [58].
### Orbital Evolution
Without the radiation loss of energy and OAM, the orbital elements, \(a_{r}\) and \(e_{r}\), are two constants related to conserved quantities. However, the energy and OAM dissipation leads to the secular evolution of these elements. This evolution is determined by the balance equations,
\[\frac{d(\mu\varepsilon)}{dt}=-\mathcal{F},\quad\text{and}\quad\frac{d(\mu h)}{dt}=-\mathcal{L}, \tag{77}\]
where \(\mu\equiv m\nu\) is the reduced mass of the BBH system. The left-hand sides of Eq. (77) are
\[\frac{d(\mu\varepsilon)}{dt}=\frac{4}{3}\delta\varpi\frac{e_{r}x^{3}}{(1-e_{r} ^{2})^{3}}e_{r}^{\prime}(t)-\frac{1}{2}\left[1-2\cdot\delta\varpi\cdot\frac{x^ {2}}{(1-e_{r}^{2})^{2}}\right]x^{\prime}(t), \tag{78}\]
and
\[\frac{dh}{dt}=-\frac{m}{\sqrt{x}}\frac{e_{r}}{\sqrt{1-e_{r}^{2}}}\left[1- \frac{\delta\varpi}{3}\frac{x^{2}}{(1-e_{r}^{2})^{2}}(8+e_{r}^{2})\right]e_{r }^{\prime}(t)-\frac{m}{2}\frac{\sqrt{1-e_{r}^{2}}}{x^{3/2}}\left[1-\frac{ \delta\varpi}{2}\frac{x^{2}}{(1-e_{r}^{2})^{2}}(2+e_{r}^{2})\right]x^{\prime} (t). \tag{79}\]
The balance equations (77) give independent evolution equations for the element \(e_{r}\) and the gauge-invariant quantity \(x\) (an equivalent substitute for the semimajor axis \(a_{r}\)),
\[\begin{split} m\frac{dx}{dt}&=\frac{64}{5}\frac{ \nu x^{5}}{(1-e_{r}^{2})^{7/2}}\Bigg{\{}\left(1+\frac{73}{24}e_{r}^{2}+\frac{ 37}{96}e_{r}^{4}\right)\\ &\quad+\frac{x^{2}}{(1-e_{r}^{2})^{2}}\Bigg{[}\left(\frac{25}{245 76}\Delta^{2}+\frac{10}{3}\delta\varpi\right)+\left(\frac{475}{49152}\Delta^{ 2}+\frac{43}{3}\delta\varpi\right)e_{r}^{2}\\ &\quad+\left(\frac{575}{65536}\Delta^{2}+\frac{133}{18}\delta \varpi\right)e_{r}^{4}+\left(\frac{75}{131072}\Delta^{2}+\frac{11}{48}\delta \varpi\right)e_{r}^{6}\Bigg{]}\Bigg{\}},\end{split} \tag{80}\]
and
\[\begin{split} m\frac{de_{r}}{dt}&=-\frac{304}{15} \frac{\nu x^{4}}{(1-e_{r}^{2})^{5/2}}\cdot e_{r}\Bigg{\{}\left(1+\frac{121}{30 4}e_{r}^{2}\right)\\ &\quad+\frac{x^{2}}{(1-e_{r}^{2})^{2}}\Bigg{[}\left(\frac{375}{1 55648}\Delta^{2}+\frac{421}{114}\delta\varpi\right)+\left(\frac{1125}{311296} \Delta^{2}+\frac{907}{228}\delta\varpi\right)e_{r}^{2}+\left(\frac{375}{1245 184}\Delta^{2}+\frac{143}{456}\delta\varpi\right)e_{r}^{4}\Bigg{]}\Bigg{\}}. \end{split} \tag{81}\]
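These coupled equations are readily integrated numerically; the following minimal Python/SciPy sketch keeps only the GR leading-order terms of Eqs. (80)-(81) for brevity (the 2PN DCS corrections enter analogously), with placeholder initial data. The output illustrates the orbital circularization discussed below.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical, purely illustrative parameters (geometric units, m = 1)
m, nu = 1.0, 0.25

def rhs(t, y):
    # GR leading-order terms of Eqs. (80)-(81); the DCS corrections
    # proportional to Delta^2 and delta_varpi enter analogously.
    x, e = y
    dx = (64/5) * nu * x**5 / (1 - e**2)**3.5 \
        * (1 + (73/24)*e**2 + (37/96)*e**4) / m
    de = -(304/15) * nu * x**4 / (1 - e**2)**2.5 \
        * e * (1 + (121/304)*e**2) / m
    return [dx, de]

sol = solve_ivp(rhs, [0.0, 2.0e6], [0.01, 0.30], rtol=1e-10, atol=1e-12)
print(f"x: {sol.y[0][0]:.5f} -> {sol.y[0][-1]:.5f}")
print(f"e: {sol.y[1][0]:.3f} -> {sol.y[1][-1]:.3f}  (orbital circularization)")
```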
It is difficult to write down an analytic solution to Eqs. (80) and (81) directly. An alternative is to construct the evolution of the eccentricity with the frequency. Dividing Eq. (80) by (81) gives
\[\frac{dx}{de_{r}}=-\frac{12}{19}\frac{x}{e_{r}}\frac{1+\frac{73}{24}e_{r}^{2}+\frac{37}{96}e_{r}^{4}}{(1-e_{r}^{2})\left(1+\frac{121}{304}e_{r}^{2}\right)}\left\{1-\frac{x^{2}}{1-e_{r}^{2}}\frac{\mathcal{W}_{0}+\mathcal{W}_{2}e_{r}^{2}+\mathcal{W}_{4}e_{r}^{4}+\mathcal{W}_{6}e_{r}^{6}}{\left(1+\frac{121}{304}e_{r}^{2}\right)\left(1+\frac{73}{24}e_{r}^{2}+\frac{37}{96}e_{r}^{4}\right)}\right\}, \tag{82}\]
with the DCS coefficients listed as follows:
\[\begin{split}\mathcal{W}_{0}&=\frac{325}{233472}\Delta^ {2}+\frac{41}{114}\delta\varpi,\qquad\mathcal{W}_{2}=\frac{16925}{7471104} \Delta^{2}-\frac{245}{2736}\delta\varpi,\\ \mathcal{W}_{4}&=\frac{2325}{1245184}\Delta^{2}+ \frac{7151}{10944}\delta\varpi,\qquad\mathcal{W}_{6}=\frac{2225}{19922944} \Delta^{2}-\frac{649}{21888}\delta\varpi.\end{split} \tag{83}\]
The zero-order term is fully consistent with that shown in Ref. [58]. The overall minus sign means that the eccentricity decreases from its initial value under radiation reaction, while the orbital frequency increases; this effect is generally called orbital circularization [58]. Although the sign of the DCS modification cannot be determined without knowing the bodies' masses and spins, the coefficients \(\mathcal{W}_{n}\) are always small quantities and only weakly change the decay rate. The eccentricity therefore tends to zero as the frequency increases, and DCS theory does not modify this conclusion.
Equation (82) can be solved perturbatively. Writing the solution as the sum of the GR part and the DCS modification,
\[x(e_{r})=x_{0}(e_{r})+\zeta\cdot x_{1}(e_{r}), \tag{84}\]
and inserting it into Eq. (82), the zero-order solution is
\[x_{0}(e_{r})=c_{0}(1-e_{r}^{2})e_{r}^{-12/19}\left[1+\frac{121}{304}e_{r}^{2} \right]^{-870/2299}, \tag{85}\]
while the first-order one is
\[\begin{split} x_{1}(e_{r})&=\frac{c_{1}(1-e_{r}^{2 })}{e_{r}^{12/19}(304+121e_{r}^{2})^{870/2299}}+2^{-63/2299}19^{-1740/2299} \frac{c_{0}^{3}(1-e_{r}^{2})e_{r}^{-36/19}}{(121e_{r}^{2}+304)^{870/2299}}\\ &\times\Bigg{[}\left(-\frac{325}{3735552}\Delta^{2}-\frac{41}{182 4}\delta\varpi\right)\text{HyperGeometricF}\left(-\frac{12}{19},\frac{6338}{22 99};\frac{7}{19};-\frac{121}{304}e_{r}^{2}\right)\\ &+\left(\frac{16925}{69730304}\Delta^{2}-\frac{35}{3648}\delta \varpi\right)e_{r}^{2}\cdot\text{HyperGeometricF}\left(\frac{7}{19},\frac{6338 }{2299};\frac{26}{19};-\frac{121}{304}e_{r}^{2}\right)\\ &+\left(\frac{6975}{129499136}\Delta^{2}+\frac{7151}{379392} \delta\varpi\right)e_{r}^{4}\cdot\text{HyperGeometricF}\left(\frac{26}{19}, \frac{6338}{2299};\frac{45}{19},-\frac{121}{304}e_{r}^{2}\right)\\ &+\left(\frac{445}{239075328}\Delta^{2}-\frac{649}{1313280}\delta \varpi\right)e_{r}^{6}\cdot\text{HyperGeometricF}\left(\frac{45}{19},\frac{6338 }{2299};\frac{64}{19};-\frac{121}{304}e_{r}^{2}\right)\Bigg{]}.\end{split} \tag{86}\]
Here, "HyperGeometricF" is the Hypergeometric function of the first kind [178], and \(c_{0}\), \(c_{1}\) in Eqs. (85, 86) are two integration constants, determined by a specific initial condition \(x(e_{r}=e_{0})=x_{(0)}\), i.e., \(x_{0}(e_{0})=x_{(0)}\) and \(x_{1}(e_{0})=0\).
## VI Post-circular frequency-domain waveforms
This section focuses on the frequency-domain waveforms, which requires expressing \(e_{r}\) as a function of \(x\) and writing the time-domain waveforms in terms of another angular variable, the mean anomaly \(\ell\). However, obtaining the inverse function from Eq. (86) is practically impossible, so another approximation, the small-eccentricity approximation, is adopted in this procedure. We expand all involved functions in \(e_{r}\) and \(e_{0}\) up to fourth order, \(\sim\mathcal{O}(e_{0}^{4})\). This expansion is valid for initial eccentricities below \(0.3\), i.e., \(e_{0}\lesssim 0.3\) [88]. This scheme permits an analytic calculation of the Fourier waveforms.
### Frequency-Domain Evolution of Eccentricity
Firstly, the solution (85, 86) can be expanded as
\[x_{0}(e_{r})\simeq x_{(0)}\left(\frac{e_{0}}{e_{r}}\right)^{12/19}\left[1+\frac {3323}{2888}(e_{0}^{2}-e_{r}^{2})+\frac{37765681}{33362176}e_{0}^{4}-\frac{1104 2329}{8340544}e_{0}^{2}e_{r}^{2}+\frac{6403635}{33362176}e_{r}^{4}\right], \tag{87}\]
and
\[x_{1}(e_{r}) \simeq x_{(0)}^{3}\left(\frac{e_{0}}{e_{r}}\right)^{12/19}\Bigg{\{} \bigg{[}\tilde{\mathcal{P}}_{-24/19}^{(0)}\left(\frac{e_{r}}{e_{0}}\right)^{-24/ 19}+\tilde{\mathcal{P}}_{0}^{(0)}\bigg{]} \tag{88}\] \[+e_{r}^{2}\Bigg{[}\tilde{\mathcal{P}}_{-24/19}^{(2)}\left(\frac{e _{r}}{e_{0}}\right)^{-24/19}+\tilde{\mathcal{P}}_{0}^{(2)}+\tilde{\mathcal{P}} _{14/19}^{(2)}\left(\frac{e_{r}}{e_{0}}\right)^{14/19}+\tilde{\mathcal{P}}_{2 }^{(2)}\left(\frac{e_{r}}{e_{0}}\right)^{2}\Bigg{]}\] \[+e_{r}^{4}\Bigg{[}\tilde{\mathcal{P}}_{-24/19}^{(4)}\left(\frac{e _{r}}{e_{0}}\right)^{-24/19}+\tilde{\mathcal{P}}_{0}^{(4)}+\tilde{\mathcal{P} }_{14/19}^{(4)}\left(\frac{e_{r}}{e_{0}}\right)^{14/19}+\tilde{\mathcal{P}}_{2 }^{(4)}\left(\frac{e_{r}}{e_{0}}\right)^{2}+\tilde{\mathcal{P}}_{52/19}^{(4)} \left(\frac{e_{r}}{e_{0}}\right)^{52/19}+\tilde{\mathcal{P}}_{4}^{(4)}\left( \frac{e_{r}}{e_{0}}\right)^{4}\Bigg{]}\Bigg{\}}.\]
It is easy to check that \(x=x_{(0)}\) when \(e_{r}\) is set to \(e_{0}\). For the subsequent derivation, we define the frequency normalized by its initial value,
\[\chi\equiv\frac{\Omega}{\Omega_{0}}=\left[\frac{x}{x_{(0)}}\right]^{3/2}\equiv \chi(e_{r}). \tag{89}\]
This quantity can also be regarded as \(F/F_{0}\), with \(F\equiv\Omega/2\pi\) and \(F_{0}\) its initial value, as used in Ref. [88].
The zero-order expression in terms of \(\chi\) from Eq. (87) is
\[\chi_{0}\simeq\left(\frac{e_{0}}{e_{r}}\right)^{18/19}\left[1+\frac{9969}{577 6}(e_{0}^{2}-e_{r}^{2})+\frac{73212015}{33362176}e_{0}^{4}-\frac{99380961}{333 62176}e_{0}^{2}e_{r}^{2}+\frac{13084473}{16681088}e_{r}^{4}\right], \tag{90}\]
and the first-order expression from Eq. (88) is
\[\chi_{1} \simeq x_{(0)}^{2}\left(\frac{e_{0}}{e_{r}}\right)^{18/19}\Bigg{\{} \bigg{[}\mathcal{P}_{0}^{(0)}+\mathcal{P}_{-24/19}^{(0)}\left(\frac{e_{r}}{e_ {0}}\right)^{-24/19}\bigg{]} \tag{91}\] \[+e_{r}^{2}\Bigg{[}\mathcal{P}_{0}^{(2)}+\mathcal{P}_{-24/19}^{(2) }\left(\frac{e_{r}}{e_{0}}\right)^{-24/19}+\mathcal{P}_{-2}^{(2)}\left(\frac{e _{r}}{e_{0}}\right)^{-2}+\mathcal{P}_{-62/19}^{(2)}\left(\frac{e_{r}}{e_{0}} \right)^{-62/19}\Bigg{]}\] \[+e_{r}^{4}\Bigg{[}\mathcal{P}_{0}^{(4)}+\mathcal{P}_{-24/19}^{(4) }\left(\frac{e_{r}}{e_{0}}\right)^{-24/19}+\mathcal{P}_{-2}^{(4)}\left(\frac{e _{r}}{e_{0}}\right)^{-2}+\mathcal{P}_{-62/19}^{(4)}\left(\frac{e_{r}}{e_{0}} \right)^{-62/19}\] \[+\mathcal{P}_{-4}^{(4)}\left(\frac{e_{r}}{e_{0}}\right)^{-4}+ \mathcal{P}_{-100/19}^{(4)}\left(\frac{e_{r}}{e_{0}}\right)^{-100/19}\Bigg{]} \Bigg{\}}.\]
The coefficients involved in Eq. (91) are listed in Appendix A. One can invert Eqs. (90, 91) to express the eccentricity as a function of the normalized frequency,
\[e_{r}=e_{r}(\chi) =e_{0}\chi^{-19/18}\Bigg{[}1+\frac{3323}{1824}e_{0}^{2}\left(1- \chi^{-19/9}\right) \tag{92}\] \[\qquad+\frac{15994231}{6653952}e_{0}^{4}\left(1-\frac{66253974}{ 15994231}\chi^{-19/9}+\frac{50259743}{15994231}\chi^{-38/9}\right)+\zeta\left( \delta\mathcal{E}_{0}+\delta\mathcal{E}_{2}e_{0}^{2}+\delta\mathcal{E}_{4}e_{0 }^{4}\right)\Bigg{]}.\]
The leading-order terms are shown explicitly in Eq. (92), and the DCS modifications are
\[\delta\mathcal{E}_{0} =\chi^{-19/18}\left[\mathcal{S}_{0}^{(0)}+\mathcal{S}_{4/3}^{(0)} \chi^{4/3}\right], \tag{93}\] \[\delta\mathcal{E}_{2} =\chi^{-19/18}\left[\mathcal{S}_{-19/9}^{(2)}\chi^{-19/9}+ \mathcal{S}_{-7/9}^{(2)}\chi^{-7/9}+\mathcal{S}_{0}^{(2)}+\mathcal{S}_{4/3}^{(2) }\chi^{4/3}\right],\] \[\delta\mathcal{E}_{4} =\chi^{-19/18}\left[\mathcal{S}_{-38/9}^{(4)}\chi^{-38/9}+ \mathcal{S}_{-26/9}^{(4)}\chi^{-26/9}+\mathcal{S}_{-19/9}^{(4)}\chi^{-19/9}+ \mathcal{S}_{-7/9}^{(4)}\chi^{-7/9}+\mathcal{S}_{0}^{(4)}+\mathcal{S}_{4/3}^{(4) }\chi^{4/3}\right].\]
The coefficients involved in Eq. (93) can be found in Appendix B. The GR sector of Eq. (92) is consistent with the results in Ref. [88], while the DCS sector is obtained here for the first time. In this way, we express the evolution directly in the frequency domain instead of the time domain, which plays an important role in the following calculation of the waveforms in frequency space.
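The series inversion can be checked numerically: the following sketch inverts the GR-order relation (90) by root finding and compares with the GR part of Eq. (92); the initial eccentricity \(e_{0}\) and target \(\chi\) are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

e0 = 0.2   # illustrative initial eccentricity

def chi0(e):
    # GR-order normalized frequency, Eq. (90)
    return (e0/e)**(18/19)*(1 + (9969/5776)*(e0**2 - e**2)
        + (73212015/33362176)*e0**4 - (99380961/33362176)*e0**2*e**2
        + (13084473/16681088)*e**4)

def e_of_chi(chi):
    # GR part of the inversion, Eq. (92)
    c = chi**(-19/9)
    return e0*chi**(-19/18)*(1 + (3323/1824)*e0**2*(1 - c)
        + (15994231/6653952)*e0**4*(1 - (66253974/15994231)*c
                                      + (50259743/15994231)*c**2))

chi = 2.0
e_num = brentq(lambda e: chi0(e) - chi, 1e-4, e0)   # invert Eq. (90)
print(e_num, e_of_chi(chi))   # agree up to the neglected O(e0^6) terms
```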
### Waveform in Terms of Mean Anomaly
The so-called mean anomaly is defined through the modified Keplerian equation (29) as \(\ell\equiv u-e_{t}\sin u\); it grows linearly in time. Therefore, in the absence of radiation reaction, the mean anomaly can be expressed through the orbital frequency \(F\) as \(\ell=2\pi Ft\), which brings great convenience in calculating the Fourier waveforms. To complete this calculation, we first re-express Eqs. (51, 52) in an equivalent form in terms of the azimuth and the eccentric anomaly. The Newtonian approximation is
\[\begin{split}\xi^{(0)}_{+}&=\frac{\xi}{2}\frac{e_{r }\cos u}{1-e_{r}\cos u}\sin^{2}\iota-\frac{\xi}{2}\frac{1}{(1-e_{r}\cos u)^{2} }(1+\cos^{2}\iota)\\ &\qquad\times\Big{\{}[2(1-e_{r}^{2})-e_{r}\cos u+e_{r}^{2}\cos^{ 2}u]\cos(2\phi+2\omega)+2e_{r}\sqrt{1-e_{r}^{2}}\sin u\sin(2\phi+2\omega)\Big{\}},\\ \xi^{(0)}_{\times}&=\frac{\xi\cos\iota}{(1-e_{r} \cos u)^{2}}\Big{\{}2e_{r}\sqrt{1-e_{r}^{2}}\sin u\cos(2\phi+2\omega)-[2(1-e_ {r}^{2})-e_{r}\cos u+e_{r}^{2}\cos^{2}u]\sin(2\phi+2\omega)\Big{\}}.\end{split} \tag{94}\]
The DCS modification is
\[\begin{split}\delta\xi_{+}&=\frac{\delta\varpi}{6} \frac{\xi^{3}}{(1-e_{r}^{2})^{3/2}}\frac{1}{(1-e_{r}\cos u)^{3}}\sin^{2}\iota \Big{[}e_{r}^{2}-3(e_{r}\cos u)+3(e_{r}\cos u)^{2}-(e_{r}\cos u)^{3}\Big{]}\\ &\qquad+\frac{\delta\varpi}{6}\frac{\xi^{3}}{(1-e_{r}^{2})^{3/2} }\frac{1}{(1-e_{r}\cos u)^{3}}(1+\cos^{2}\iota)\Big{\{}2\Big{[}(1+e_{r}^{2}) (e_{r}\cos u)-2e_{r}^{2}\Big{]}(e_{r}\sin u)\sin(2\phi+2\omega)\\ &\qquad-\sqrt{1-e_{r}^{2}}\Big{[}6+e_{r}^{2}-(3+2e_{r}^{2})(e_{r }\cos u)-3(e_{r}\cos u)^{2}+(e_{r}\cos u)^{3}\Big{]}\cos(2\phi+2\omega)\Big{\}},\\ \delta\xi_{\times}&=\frac{\delta\varpi}{3}\frac{ \xi^{3}}{(1-e_{r}^{2})^{3/2}}\frac{1}{(1-e_{r}\cos u)^{3}}\cos\iota\Big{\{}2 \Big{[}2e_{r}^{2}-(1+e_{r}^{2})(e_{r}\cos u)\Big{]}(e_{r}\sin u)\cos(2\phi+2 \omega)\\ &\qquad-\sqrt{1-e_{r}^{2}}\Big{[}6+e_{r}^{2}-(3+2e_{r}^{2})(e_{r }\cos u)-3(e_{r}\cos u)^{2}+(e_{r}\cos u)^{3}\Big{]}\sin(2\phi+2\omega)\Big{\}}.\end{split} \tag{95}\]
The transformation from the eccentric anomaly to the mean anomaly is given by the classical solution of the modified Keplerian equation (29) as an infinite Bessel expansion [177; 179],
\[\begin{split} u-\ell&=\sum_{s=1}^{\infty}\frac{2}{s}J_{s}(se_{t})\sin(s\ell)\\ &\simeq\left[1+\frac{1}{8}e_{r}^{2}+\delta\varpi\cdot\xi^{2}\left(-\frac{1}{3}-\frac{11}{24}e_{r}^{2}\right)\right]e_{r}\sin(\ell)\\ &\qquad+\frac{1}{2}\left[1+\frac{1}{3}e_{r}^{2}+\delta\varpi\cdot\xi^{2}\left(-\frac{2}{3}-\frac{10}{9}e_{r}^{2}\right)\right]e_{r}^{2}\sin(2\ell)\\ &\qquad+\frac{3}{8}\left(1-\delta\varpi\cdot\xi^{2}\right)e_{r}^{3}\sin(3\ell)+\frac{1}{3}\left(1-\frac{4}{3}\delta\varpi\cdot\xi^{2}\right)e_{r}^{4}\sin(4\ell).\end{split} \tag{96}\]
\(J_{s}(z)\) is the \(s\)-th order Bessel function of the first kind. Since the asymptotic behavior of \(J_{s}(se_{t})\) for fixed \(s\) and small \(e_{t}\) is \(\propto e_{t}^{s}\), the higher-order Bessel terms are dropped in our small-eccentricity scheme. The other angular variable appearing in Eqs. (94, 95), the azimuth, is likewise written as \(\phi=(K/2\pi)v=(1+\beta)v\), where \(v\) is defined in Eq. (29). Up to orders \(\sim\mathcal{O}(\zeta)\) and \(\sim\mathcal{O}(e_{r}^{4})\), using the above definition we have
\[\begin{split} v=u&+\left[1+\frac{1}{4}e_{r}^{2}+ \delta\varpi\cdot\xi^{2}\left(\frac{1}{3}+\frac{7}{12}e_{r}^{2}\right)\right]e _{r}\sin(u)+\frac{1}{4}\left[1+\frac{1}{2}e_{r}^{2}+\delta\varpi\cdot\xi^{2} \left(\frac{2}{3}+\frac{4}{3}e_{r}^{2}\right)\right]e_{r}^{2}\sin(2u)\\ &\qquad+\frac{1}{12}\left(1+\delta\varpi\cdot\xi^{2}\right)e_{r}^ {3}\sin(3u)+\frac{1}{32}\left(1+\frac{4}{3}\delta\varpi\cdot\xi^{2}\right)e_{r }^{4}\sin(4u),\end{split} \tag{97}\]
where the "angular" eccentricity are written as Eq. (31). Substituting Eq. (96) into (97), we get
\[\begin{split} v=\ell&+\left[1-\frac{1}{8}e_{r}^{2}+ \frac{1}{6}\delta\varpi\cdot\xi^{2}e_{r}^{2}\right]e_{r}\sin(\ell)+\frac{5}{4} \left[1+\frac{11}{30}e_{r}^{2}+\delta\varpi\cdot\xi^{2}\left(-\frac{2}{15}+ \frac{4}{15}e_{r}^{2}\right)\right]e_{r}^{2}\sin(2\ell)\\ &\qquad+\frac{13}{12}\left[1-\frac{4}{13}\delta\varpi\cdot\xi^{2} \right]e_{r}^{3}\sin(3\ell)+\frac{103}{96}\left[1-\frac{52}{103}\delta\varpi \cdot\xi^{2}\right]e_{r}^{4}\sin(4\ell),\end{split} \tag{98}\]
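As a quick numerical check of the truncated Bessel inversion (96) at GR order (\(\delta\varpi=0\)), one may compare it with a direct root-finding solution of the Keplerian equation; the sample values of \(e_{t}\) and \(\ell\) are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

e_t, ell = 0.2, 1.3   # arbitrary sample values

# "Exact" eccentric anomaly from the Keplerian equation ell = u - e_t sin u
u_exact = brentq(lambda u: u - e_t*np.sin(u) - ell, 0.0, 2*np.pi)

# Bessel series of Eq. (96) at GR order, truncated at s = 4
u_series = ell + sum((2/s)*jv(s, s*e_t)*np.sin(s*ell) for s in range(1, 5))

print(u_exact, u_series)   # agree to ~O(e_t^5)
```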
Finally, combining Eq. (96) with (98), the waveforms in Eqs. (94, 95) are rewritten as
\[\begin{split}\xi_{+}^{(0)}&=x\left\{\frac{1}{48}e_{r} \Big{[}3(8-e_{r}^{2})\sin^{2}\iota+4(1+\cos^{2}\iota)(9-4e_{r}^{2})\cos(2\beta \ell+2\omega)\Big{]}\cos(\ell)\right.\\ &+\left.\frac{1}{24}\Big{[}e_{r}^{2}(12-4e_{r}^{2})\sin^{2}\iota -3(1+\cos^{2}\iota)(8-20e_{r}^{2}+11e_{r}^{4})\cos(2\beta\ell+2\omega)\Big{]} \cos(2\ell)\right.\\ &+\left.\frac{9}{32}e_{r}\Big{[}2e_{r}^{2}\sin^{2}\iota-(1+\cos^ {2}\iota)(8-19e_{r}^{2})\cos(2\beta\ell+2\omega)\Big{]}\cos(3\ell)\right.\\ &+\left.\frac{2}{3}e_{r}^{2}\Big{[}e_{r}^{2}\sin^{2}\iota-3(1+ \cos^{2}\iota)(2-5e_{r}^{2})\cos(2\beta\ell+2\omega)\Big{]}\cos(4\ell)\right. \\ &-\left.\frac{625}{96}(1+\cos^{2}\iota)e_{r}^{3}\cos(2\beta\ell+2 \omega)\cos(5\ell)-\frac{81}{8}(1+\cos^{2}\iota)e_{r}^{4}\cos(2\beta\ell+2 \omega)\cos(6\ell)\right.\\ &-\left.\frac{1}{48}(1+\cos^{2}\iota)e_{r}(36-23e_{r}^{2})\sin(2 \beta\ell+2\omega)\sin(\ell)+\frac{1}{2}(1+\cos^{2}\iota)(2-5e_{r}^{2}+3e_{r} ^{4})\sin(2\beta\ell+2\omega)\sin(2\ell)\right.\\ &+\left.\frac{9}{32}(1+\cos^{2}\iota)e_{r}(8-19e_{r}^{2})\sin(2 \beta\ell+2\omega)\sin(3\ell)+2(1+\cos^{2}\iota)e_{r}^{2}(2-5e_{r}^{2})\sin(2 \beta\ell+2\omega)\sin(4\ell)\right.\\ &\left.\left.+\frac{625}{96}(1+\cos^{2}\iota)e_{r}^{3}\sin(2 \beta\ell+2\omega)\sin(5\ell)+\frac{81}{8}(1+\cos^{2}\iota)e_{r}^{4}\sin(2 \beta\ell+2\omega)\sin(6\ell)\right\},\end{split} \tag{99}\]
\[\begin{split}\xi_{\times}^{(0)}&=x\left\{\frac{1}{6 }e_{r}(9-4e_{r}^{2})\cos\iota\sin(2\beta\ell+2\omega)\cos(\ell)-\frac{1}{4}(8- 20e_{r}^{2}+11e_{r}^{4})\cos\iota\sin(2\beta\ell+2\omega)\cos(2\ell)\right.\\ &-\left.\frac{9}{16}e_{r}(8-19e_{r}^{2})\cos\iota\sin(2\beta\ell+ 2\omega)\cos(3\ell)-4e_{r}^{2}(2-5e_{r}^{2})\cos\iota\sin(2\beta\ell+2\omega )\cos(4\ell)\right.\\ &-\left.\frac{625}{48}e_{r}^{3}\cos\iota\sin(2\beta\ell+2\omega) \cos(5\ell)-\frac{81}{4}e_{r}^{4}\cos\iota\sin(2\beta\ell+2\omega)\cos(6\ell) \right.\\ &+\left.\frac{1}{24}e_{r}(36-23e_{r}^{2})\cos\iota\cos(2\beta \ell+2\omega)\sin(\ell)-(2-5e_{r}^{2}+3e_{r}^{4})\cos\iota\cos(2\beta\ell+2 \omega)\sin(2\ell)\right.\\ &-\left.\frac{9}{16}e_{r}(8-19e_{r}^{2})\cos\iota\cos(2\beta\ell+ 2\omega)\sin(3\ell)-4e_{r}^{2}(2-5e_{r}^{2})\cos\iota\cos(2\beta\ell+2\omega) \sin(4\ell)\right.\\ &\left.-\frac{625}{48}e_{r}^{3}\cos\iota\cos(2\beta\ell+2\omega) \sin(5\ell)-\frac{81}{4}e_{r}^{4}\cos\iota\cos(2\beta\ell+2\omega)\sin(6\ell) \right\},\end{split} \tag{100}\]
\[\begin{split}\delta\xi_{+}&=\delta\varpi\cdot x^{3} \left\{\frac{1}{18}e_{r}\Big{[}-(12+15e_{r}^{2})\sin^{2}\iota+(1+\cos^{2} \iota)(45+41e_{r}^{2})\cos(2\beta\ell+2\omega)\Big{]}\cos(\ell)\right.\\ &-\left[e_{r}^{2}\left(1+\frac{224}{288}e_{r}^{2}\right)\sin^{2} \iota+\frac{1}{96}(1+\cos^{2}\iota)(64-768e_{r}^{2}-575e_{r}^{4})\cos(2\beta \ell+2\omega)\right]\cos(2\ell)\\ &-\left.\frac{3}{2}e_{r}\Big{[}e_{r}^{2}\sin^{2}\iota+(1+\cos^{2} \iota)(3-10e_{r}^{2})\cos(2\beta\ell+2\omega)\Big{]}\cos(3\ell)\right.\\ &-\left.\frac{1}{9}e_{r}^{2}\Big{[}20e_{r}\sin^{2}\iota+3(1+\cos^{2 }\iota)(35-83e_{r}^{2})\cos(2\beta\ell+2\omega)\Big{]}\cos(4\ell)\right.\\ &-\left.\frac{425}{18}(1+\cos^{2}\iota)e_{r}^{3}\cos(2\beta\ell+2 \omega)\cos(5\ell)-\frac{1365}{32}(1+\cos^{2}\iota)e_{r}^{4}\cos(2\beta\ell+2 \omega)\cos(6\ell)\right.\\ &-\left.\frac{1}{18}(1+\cos^{2}\iota)e_{r}(45+31e_{r}^{2})\sin(2 \beta\ell+2\omega)\sin(\ell)+\frac{1}{96}(1+\cos^{2}\iota)(64-768e_{r}^{2}-529 e_{r}^{4})\sin(2\beta\ell+2\omega)\sin(2\ell)\right.\\ &+\left.\frac{3}{2}(1+\cos^{2}\iota)(3-10e_{r}^{2})e_{r}\sin(2\beta \ell+2\omega)\sin(3\ell)+\frac{1}{3}(1+\cos^{2}\iota)e_{r}^{2}(35-83e_{r}^{2}) \sin(2\beta\ell+2\omega)\sin(4\ell)\right.\\ &\left.+\frac{425}{18}(1+\cos^{2}\iota)e_{r}^{3}\sin(2\beta\ell+2 \omega)\sin(5\ell)+\frac{1365}{32}(1+\cos^{2}\iota)e_{r}^{4}\sin(2\beta\ell+2 \omega)\sin(6\ell)\right\},\end{split} \tag{101}\]
and
\[\begin{split}\delta\xi_{\times}&=\delta\varpi\cdot x^{3} \left\{\frac{1}{9}e_{r}(45+41e_{r}^{2})\cos\iota\sin(2\beta\ell+2\omega)\cos( \ell)-\frac{1}{48}(64-768e_{r}^{2}-575e_{r}^{4})\cos\iota\sin(2\beta\ell+2 \omega)\cos(2\ell)\right.\\ &-3e_{r}(3-10e_{r}^{2})\cos\iota\sin(2\beta\ell+2\omega)\cos(3 \ell)-\frac{2}{3}e_{r}^{2}(35-83e_{r}^{2})\cos\iota\sin(2\beta\ell+2\omega) \cos(4\ell)\\ &-\frac{425}{9}e_{r}^{3}\cos\iota\sin(2\beta\ell+2\omega)\cos(5 \ell)-\frac{1365}{16}e_{r}^{4}\cos\iota\sin(2\beta\ell+2\omega)\cos(6\ell)\\ &+\frac{1}{9}e_{r}(45+31e_{r}^{2})\cos\iota\cos(2\beta\ell+2 \omega)\sin(\ell)-\frac{1}{48}\cos\iota(64-768e_{r}^{2}-529e_{r}^{4})\cos(2 \beta\ell+2\omega)\sin(2\ell)\\ &-3e_{r}\cos\iota(3-10e_{r}^{2})\cos(2\beta\ell+2\omega)\sin(3 \ell)-\frac{2}{3}e_{r}^{2}(35-83e_{r}^{2})\cos\iota\cos(2\beta\ell+2\omega) \sin(4\ell)\\ &\left.-\frac{425}{9}e_{r}^{3}\cos\iota\cos(2\beta\ell+2\omega) \sin(5\ell)-\frac{1365}{16}e_{r}^{4}\cos\iota\cos(2\beta\ell+2\omega)\sin(6 \ell)\right\}.\end{split} \tag{102}\]
So far, we have provided the expressions for the GW polarizations using the mean anomaly.
### Frequency-Domain Waveform: Overview
The detected signal is the linear combination of two different polarization modes,
\[\begin{split} h(t)=h_{+}F_{+}+h_{\times}F_{\times}=\frac{2\nu m} {R}\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\Bigg{\{}&\left[A_{SS}^{ (n,k)}(\xi,e_{r})\sin(n\ell)+A_{CS}^{(n,k)}(\xi,e_{r})\cos n\ell\right]\sin(k \beta\ell)\\ &+\left[A_{SC}^{(n,k)}(\xi,e_{r})\sin(n\ell)+A_{CC}^{(n,k)}(\xi,e _{r})\cos n\ell\right]\cos(k\beta\ell)\Bigg{\}},\end{split} \tag{103}\]
where \(h_{+,\times}=(2\nu m/R)\xi_{+,\times}\) and \(F_{+,\times}\) are the pattern functions of the GW detectors. \(A_{SS}^{(n,k)},A_{CS}^{(n,k)},A_{SC}^{(n,k)},A_{CC}^{(n,k)}\) are coefficients depending on \(\xi\) and the eccentricity \(e_{r}\); \(n\) is a non-negative integer, and \(k\) labels the waveform modulation from the precession rate. The frequency and eccentricity are both functions of time, i.e., \(F=F(t)\) and \(e_{r}=e_{r}(t)\). For convenience, we transform Eq. (103) into the form
\[h(t)=\frac{2\nu m}{R}\sum_{nk}A_{nk}(\xi,e_{r})e^{-i(n+k\beta)\ell},\quad \text{with}\quad\sum_{nk}\equiv\sum_{n=-\infty}^{\infty}\sum_{k=-\infty}^{ \infty}, \tag{104}\]
and the Fourier transformation of Eq. (104) is given by
\[\tilde{h}(f)=\int_{-\infty}^{\infty}h(t)e^{i2\pi ft}dt=\frac{2\nu m}{R}\sum_{ nk}\int_{-\infty}^{\infty}A_{nk}(F)e^{-i(n+k\beta)\ell}e^{i2\pi ft}dt, \tag{105}\]
The above integral can be evaluated by the stationary phase approximation (SPA) [107, 111, 134, 58]. The final result is dominated by the terms containing stationary points,
\[\begin{split}\tilde{h}(f)&\simeq\frac{2\nu m}{R}\tilde{\sum_{nk}}\int_{-\infty}^{\infty}A_{nk}(F)e^{i[2\pi ft-(n+k\beta)\ell]}dt\\ &=\frac{2\nu m}{R}\tilde{\sum_{nk}}A_{nk}(F_{nk})\sqrt{\frac{2\pi}{\ddot{\psi}_{nk}}}\exp\left\{i\left[2\pi ft_{nk}-(n+k\beta_{nk})\ell_{nk}-\frac{\pi}{4}\right]\right\},\end{split} \tag{106}\]
where
\[\ddot{\psi}_{nk}=(n+k\beta_{nk})\ddot{\ell}_{nk}+2k\dot{\beta}_{nk}\dot{\ell}_{nk}+k\ddot{\beta}_{nk}\ell_{nk}. \tag{107}\]
The operator \(\tilde{\sum_{nk}}\) denotes a sum over all terms possessing stationary points, which are determined by
\[f=(n+k\beta_{nk})\cdot F_{nk}. \tag{108}\]
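Before specializing to the binary problem, the SPA logic of Eq. (106) can be illustrated on a toy linear chirp, comparing the SPA amplitude against a direct FFT; all chirp parameters below are arbitrary and unrelated to the waveform of this section.

```python
import numpy as np

# Toy chirp h(t) = cos(phi(t)), phi = 2*pi*(f0*t + k*t^2/2); all values arbitrary
f0, k, T, fs = 20.0, 0.5, 40.0, 512.0
t = np.arange(0.0, T, 1.0/fs)
h = np.cos(2*np.pi*(f0*t + 0.5*k*t**2))

# Discrete estimate of the continuous Fourier transform at f = 25 Hz
f = np.fft.rfftfreq(t.size, 1.0/fs)
fft_amp = np.abs(np.fft.rfft(h)/fs)[np.argmin(np.abs(f - 25.0))]

# SPA prediction: |h~(f)| = (1/2)*sqrt(2*pi/phi_ddot) with phi_ddot = 2*pi*k
spa_amp = 0.5*np.sqrt(2*np.pi/(2*np.pi*k))
print(fft_amp, spa_amp)    # agree to a few percent
```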
The periastron advance complicates the calculation of the stationary points, because \(\beta\) is itself a function of the eccentricity and semimajor axis, which in turn are functions of the frequency under radiation reaction. One can solve Eq. (108) inversely and perturbatively,
\[F_{nk}=F_{nk}(f)\approx\frac{f}{n}\Big{[}1+\delta F_{nk}(f)\Big{]}. \tag{109}\]
The stationary frequencies in the Newtonian case are \(f/n\), and \(\delta F_{nk}\) is the DCS modification. Under radiation reaction, the mean anomaly \(\ell=2\pi Ft(u)\) must be revised as
\[\ell(F)=2\pi\int_{0}^{t}Fdt=2\pi\int_{F_{0}}^{F}(F/\dot{F})dF=\ell_{c}+2\pi \int(F/\dot{F})dF, \tag{110}\]
involving the radiation reaction. The first- and second-order derivatives of the mean anomaly are then given by
\[\dot{\ell}=\dot{F}\frac{d\ell}{dF}=\dot{\ell}(F),\quad\text{and}\quad\ddot{ \ell}=\ddot{F}\frac{d\ell}{dF}+\dot{F}^{2}\frac{d^{2}\ell}{dF^{2}}, \tag{111}\]
respectively, where
\[\dot{F}=\frac{dF}{de_{r}}\frac{de_{r}}{dt},\quad\text{and}\quad\ddot{F}=\dot{ F}\frac{d\dot{F}}{dF}. \tag{112}\]
In contrast to the quasi-circular case, the expressions for \(\dot{\ell}\) and \(\ddot{\ell}\) are here written as functions of the orbital frequency rather than of time. At the stationary points we have \(F_{nk}=F_{nk}(f)\), which then gives \(\ell_{nk}=\ell(F_{nk})\), \(\dot{\ell}_{nk}=\dot{\ell}(F_{nk})\), and \(\ddot{\ell}_{nk}=\ddot{\ell}(F_{nk})\). The derivatives of the precession rate are obtained similarly,
\[\dot{\beta}=\dot{F}\frac{d\beta}{dF}=\dot{\beta}(F),\quad\text{and}\quad\ddot{ \beta}=\ddot{F}\frac{d\beta}{dF}+\dot{F}^{2}\frac{d^{2}\beta}{dF^{2}}. \tag{113}\]
Thus, at the stationary points, the precession rate and its derivatives are \(\beta_{nk}=\beta(F_{nk})\), \(\dot{\beta}_{nk}=\dot{\beta}(F_{nk})\), and \(\ddot{\beta}_{nk}=\ddot{\beta}(F_{nk})\). Finally, the time is obtained from
\[t=\int_{F_{0}}^{F}\dot{F}^{-1}dF=t_{c}+\int\dot{F}^{-1}dF, \tag{114}\]
and is given by \(t_{nk}=t(F_{nk})\) at the stationary points.
Combining the results \(t_{nk}\), \(\beta_{nk}\), \(\ell_{nk}\), and \(\ddot{\psi}_{nk}\) (107), the frequency-domain waveform takes the form
\[\tilde{h}(f)=\tilde{\sum_{nk}}\mathcal{A}_{nk}(f)e^{i\Psi_{nk}(f)}. \tag{115}\]
The modified amplitude in Eq. (115) is
\[\mathcal{A}_{nk}=\frac{2\nu m}{R}\cdot\sqrt{2\pi}\cdot A_{nk}(F_{nk})\cdot \ddot{\psi}_{nk}^{-1/2}, \tag{116}\]
and the modified phase is
\[\Psi_{nk}=2\pi ft_{nk}-(n+k\beta_{nk})\ell_{nk}-\frac{\pi}{4}. \tag{117}\]
This completes the overview of the calculation of the Fourier waveforms; the detailed results are reported in the following subsection.
### Frequency-Domain Waveform: Detailed Calculation
Now we present the detailed results in the post-circular approximation, i.e., the small-eccentricity limit. The waveform considered up to orders \(\sim\mathcal{O}(\zeta)\) and \(\sim\mathcal{O}(e_{r}^{4})\) is
\[h(t)=\frac{2\nu m}{R}\ \sum_{n^{\prime}k^{\prime}}A_{nk}(\xi,e_{r})e^{-i(n+k \beta)\ell}, \tag{118}\]
The redefined summation operator represents the sum of 26 terms, with \(n\in n^{\prime}\equiv\{-6,-5,\cdots,6\}\) and \(k\in k^{\prime}\equiv\{-2,2\}\). The amplitudes separate into a GR part and a DCS modification, i.e., \(A_{nk}=\tilde{A}_{nk}+\delta A_{nk}\); all these terms are listed in Appendix C. The Fourier transformation and the SPA give
\[\tilde{h}(f)\simeq\frac{2\nu m}{R}\tilde{\sum_{n^{\prime}k^{\prime}}}A_{nk}(F_{ nk})\sqrt{\frac{2\pi}{\bar{\psi}_{nk}}}\exp\left\{i\left[2\pi ft_{nk}-(n+k \beta_{nk})\ell_{nk}-\frac{\pi}{4}\right]\right\}, \tag{119}\]
which sums the 12 terms possessing stationary points with \(n>0\). These stationary points are determined by \(f=(n+k\beta_{nk})F_{nk}\), where the precession rate is
\[\beta(F)=\delta\varpi\cdot\frac{\tilde{u}^{4}}{\nu^{4/5}}\left\{1+2e_{0}^{2} \chi^{-19/9}+e_{0}^{4}\left[\frac{3323}{456}\chi^{-19/9}-\frac{1955}{456}\chi ^{-38/9}\right]\right\}. \tag{120}\]
Here we define a new dimensionless frequency
\[\tilde{u}\equiv(2\pi\mathcal{M}F)^{1/3}, \tag{121}\]
and the chirp mass
\[\mathcal{M}\equiv m\nu^{3/5}. \tag{122}\]
Therefore, the solution of stationary frequency, corresponding to Eq. (108), is
\[F_{nk}=F_{nk}(f)\approx\frac{f}{n}\left\{1+\frac{k}{n}\delta\varpi\frac{ \tilde{u}_{f}^{4}}{\nu^{4/5}}\left[1-2e_{0}^{2}\chi_{f}^{-19/9}+e_{0}^{4} \left(\frac{1955}{456}\chi_{f}^{-38/9}-\frac{3323}{456}\chi_{f}^{-19/9}\right) \right]\right\}, \tag{123}\]
where
\[\tilde{u}_{f}\equiv(2\pi\mathcal{M}f/n)^{1/3},\quad\chi_{f}\equiv(1/n)(f/F_{0}). \tag{124}\]
Using formula (112), the first- and second-order derivatives of the orbital frequency \(F\) are given by
\[\begin{split}\frac{5}{48}\pi\mathcal{M}^{2}\dot{F}& =\tilde{u}^{11}\left\{1+\frac{157}{24}e_{0}^{2}\chi^{-19/9}+e_{0}^ {4}\chi^{-19/9}\left[-\frac{107891}{21888}+\frac{521711}{21888}\chi^{-19/9} \right]\right\}\\ &+\frac{\tilde{u}^{15}}{\nu^{4/5}}\Bigg{\{}\left[\frac{25}{24576 }\Delta^{2}+\frac{10}{3}\delta\varpi\right]\\ &+e_{0}^{2}\chi^{-19/9}\left[\left(\frac{2975}{3538944}+\frac{51 025}{3538944}\chi^{-4/3}\right)\Delta^{2}+\left(\frac{50011}{1728}+\frac{6437 }{1728}\chi^{-4/3}\right)\delta\varpi\right]\\ &+e_{0}^{4}\chi^{-19/9}\Bigg{[}\left(\frac{9885925}{3227516928}+ \frac{1640489075}{22592618496}\chi^{-4/3}+\frac{147769375}{5648154624}\chi^{-19 /9}-\frac{35064575}{1613758464}\chi^{-31/9}\right)\Delta^{2}\\ &\qquad\qquad+\left(\frac{166186553}{1575936}+\frac{339138997}{11 031552}\chi^{-4/3}+\frac{83973067}{5515776}\chi^{-19/9}-\frac{4423531}{787968} \chi^{-31/9}\right)\delta\varpi\Bigg{]}\Bigg{\}},\end{split} \tag{125}\]
and
\[\begin{split}\frac{25}{16896}\pi\mathcal{M}^{3}\ddot{F}& =\tilde{u}^{11}\left\{1+\frac{7379}{792}e_{0}^{2}\chi^{-19/9}+e_{ 0}^{4}\chi^{-19/9}\left[\frac{24520417}{722304}+\frac{315385}{22572}\chi^{-19/ 9}\right]\right\}\\ &+\frac{\tilde{u}^{15}}{\nu^{4/5}}\Bigg{\{}\left[\frac{325}{13516 8}\Delta^{2}+\frac{260}{33}\delta\varpi\right]\\ &+e_{0}^{2}\chi^{-19/9}\left[\left(\frac{1564975}{116785152}+ \frac{2398175}{116785152}\chi^{-4/3}\right)\Delta^{2}+\left(\frac{5173769}{570 24}+\frac{302539}{57024}\chi^{-4/3}\right)\delta\varpi\right]\\ &+e_{0}^{4}\chi^{-19/9}\Bigg{[}\left(\frac{5200411925}{1065080586 24}+\frac{77102986525}{745556410368}\chi^{-4/3}+\frac{752542375}{23298637824} \chi^{-19/9}+\frac{102500125}{1664188416}\chi^{-31/9}\right)\Delta^{2}\\ &\qquad\qquad+\left(\frac{17192434387}{52005888}+\frac{15939532859 }{364041216}\chi^{-4/3}+\frac{2594060795}{11376288}\chi^{-19/9}+\frac{12930785} {812592}\chi^{-31/9}\right)\delta\varpi\Bigg{]}\Bigg{\}},\end{split} \tag{126}\]
respectively. Thus, using Eq. (110), the mean anomaly is integrated as
\[\ell(F) =\ell_{c}-\frac{1}{32}\tilde{u}^{-5}\left\{1-\frac{785}{272}e_{0}^{ 2}\chi^{-19/9}-e_{0}^{4}\chi^{-19/9}\left[\frac{2608555}{248064}-\frac{5222765}{ 386688}\chi^{-19/9}\right]\right\} \tag{127}\] \[-\frac{1}{32}\frac{\tilde{u}^{-1}}{\nu^{4/5}}\Bigg{\{}\left[- \frac{125}{24576}\Delta^{2}-\frac{50}{3}\delta\varpi\right]\] \[+e_{0}^{2}\chi^{-19/9}\left[\left(\frac{220625}{25952256}-\frac{2 55125}{40108032}\chi^{-4/3}\right)\Delta^{2}+\left(\frac{126745}{12672}-\frac{ 32185}{19584}\chi^{-4/3}\right)\delta\varpi\right]\] \[+e_{0}^{4}\chi^{-19/9}\Bigg{[}\left(\frac{733136875}{23668457472} -\frac{8202445375}{256049676288}\chi^{-4/3}-\frac{1099890625}{19297861632}\chi^ {-19/9}+\frac{1697398625}{28509732864}\chi^{-31/9}\right)\Delta^{2}\] \[\qquad\qquad+\left(\frac{421173635}{11556864}-\frac{1695694985}{1 25024256}\chi^{-4/3}-\frac{2685294025}{75382272}\chi^{-19/9}+\frac{214133365}{ 13920768}\chi^{-31/9}\right)\delta\varpi\Bigg{]}\Bigg{\}}.\]
The first- and second-order derivatives follow from Eq. (111):
\[\dot{\ell}(F)=2\pi F, \tag{128}\]
and
\[\ddot{\ell}(F) =\frac{96}{5}\frac{\tilde{u}^{11}}{\mathcal{M}^{2}}\left\{1+ \frac{157}{24}e_{0}^{2}\chi^{-19/9}+e_{0}^{4}\left[-\frac{107891}{21888}\chi^ {-38/9}+\frac{521711}{21888}\chi^{-19/9}\right]\right\} \tag{129}\] \[+\frac{96}{5}\frac{1}{\mathcal{M}^{2}}\frac{\tilde{u}^{15}}{ \nu^{4/5}}\Bigg{\{}\left[\frac{25}{24576}\Delta^{2}+\frac{10}{3}\delta\varpi\right]\] \[+e_{0}^{2}\chi^{-19/9}\Bigg{[}\left(\frac{2975}{3538944}+\frac{5 1025}{3538944}\chi^{-4/3}\right)\Delta^{2}+\left(\frac{50011}{1728}+\frac{643 7}{1728}\chi^{-4/3}\right)\delta\varpi\Bigg{]}\] \[+e_{0}^{4}\chi^{-19/9}\Bigg{[}\left(\frac{9885925}{3227516928}+ \frac{1640489075}{22592618496}\chi^{-4/3}+\frac{147769375}{5648154624}\chi^{-19 /9}-\frac{35064575}{1613758464}\chi^{-31/9}\right)\Delta^{2}\] \[\qquad\qquad+\left(\frac{166186553}{1575936}+\frac{339138997}{1103 1552}\chi^{-4/3}+\frac{83973067}{5515776}\chi^{-19/9}-\frac{4423531}{787968} \chi^{-31/9}\right)\delta\varpi\Bigg{]}\Bigg{\}}.\]
Substituting Eq. (123) into Eqs. (127, 128, 129), we obtain the values of the mean anomaly and its derivatives at the stationary points:
\[\ell_{nk} =\ell_{c}-\frac{1}{32}\tilde{u}_{f}^{-5}\left\{1-\frac{785}{272}e_ {0}^{2}\chi_{f}^{-19/9}+e_{0}^{4}\left[\frac{5222765}{386688}\chi_{f}^{-38/9}- \frac{2608555}{248064}\chi_{f}^{-19/9}\right]\right\} \tag{130}\] \[-\frac{1}{32}\frac{\tilde{u}_{f}^{-1}}{\nu^{4/5}}\Bigg{\{}\left[- \frac{125}{24576}\Delta^{2}+\left(\frac{5}{3}\frac{k}{n}-\frac{50}{3}\right) \delta\varpi\right]\] \[+e_{0}^{2}\chi_{f}^{-19/9}\left[\left(\frac{220625}{25952256}- \frac{255125}{40108032}\chi_{f}^{-4/3}\right)\Delta^{2}+\left(-\frac{545}{72} \frac{k}{n}+\frac{126745}{12672}-\frac{32185}{19584}\chi_{f}^{-4/3}\right) \delta\varpi\right]\] \[+e_{0}^{4}\chi_{f}^{-19/9}\Bigg{[}\left(\frac{733136875}{23668457472 }-\frac{8202445375}{256049676288}\chi_{f}^{-4/3}-\frac{1099890625}{19297861632} \chi_{f}^{-19/9}+\frac{1697398625}{28509732864}\chi_{f}^{-31/9}\right)\Delta^{2}\] \[\qquad+\left(\left(\frac{421173635}{11556864}-\frac{1811035}{65664} \frac{k}{n}\right)-\frac{1695694985}{125024256}\chi_{f}^{-4/3}\right.\] \[\qquad\qquad\left.-\left(\frac{2685294025}{75382272}-\frac{3321725} {65664}\frac{k}{n}\right)\chi_{f}^{-19/9}+\frac{214133365}{13920768}\chi_{f}^{ -31/9}\right)\delta\varpi\Bigg{]}\Bigg{\}},\]
\[\dot{\ell}_{nk}=\dot{\ell}(F_{nk})=\frac{2\pi f}{n}\left\{1+ \frac{k}{n}\delta\varpi\frac{\tilde{u}_{f}^{4}}{\nu^{4/5}}\left[1+2e_{0}^{2} \chi_{f}^{-19/9}+e_{0}^{4}\chi_{f}^{-19/9}\left(\frac{3323}{456}-\frac{1955}{45 6}\chi_{f}^{-19/9}\right)\right]\right\}, \tag{131}\]
and
\[\begin{split}\ddot{\ell}_{nk}&=\frac{96}{5}\frac{\tilde{u} _{f}^{11}}{\mathcal{M}^{2}}\left\{1+\frac{157}{24}e_{0}^{2}\chi_{f}^{-19/9}+e_ {0}^{4}\chi_{f}^{-19/9}\left[\frac{521711}{21888}-\frac{107891}{21888}\chi_{f}^ {-19/9}\right]\right\}\\ &+\frac{96}{5}\frac{1}{\mathcal{M}^{2}}\frac{\tilde{u}_{f}^{15}}{ \nu^{4/5}}\Bigg{\{}\left[\frac{25}{24576}\Delta^{2}+\left(\frac{10}{3}-\frac{1 1}{3}\frac{k}{n}\right)\delta\varpi\right]\\ &+e_{0}^{2}\cdot\chi_{f}^{-19/9}\Bigg{[}\left(\frac{2975}{3538944} +\frac{51025}{3538944}\chi_{f}^{-4/3}\right)\Delta^{2}+\left(\frac{50011}{1728 }-\frac{1891}{108}\frac{k}{n}+\frac{6437}{1728}\chi_{f}^{-4/3}\right)\delta \varpi\Bigg{]}\\ &+e_{0}^{4}\chi_{f}^{-19/9}\Bigg{[}\left(\frac{9885925}{3227516928 }+\frac{1640489075}{22592618496}\chi_{f}^{-4/3}+\frac{147769375}{5648154624} \chi_{f}^{-19/9}-\frac{35064575}{1613758464}\chi_{f}^{-31/9}\right)\Delta^{2} \\ &\qquad+\Bigg{(}\left(\frac{166186553}{1575936}-\frac{6283793}{98 496}\frac{k}{n}\right)+\frac{339138997}{11031552}\chi_{f}^{-4/3}\\ &\qquad+\left(\frac{83973067}{5515776}-\frac{1451887}{196992} \frac{k}{n}\right)\chi_{f}^{-19/9}-\frac{4423531}{787968}\chi_{f}^{-31/9} \Bigg{)}\delta\varpi\Bigg{]}\Bigg{\}}.\end{split} \tag{132}\]
Similarly, the precession rate \(\beta\) has been written as a function of the orbital frequency, \(\beta=\beta(F)\), in Eq. (120). Its derivatives follow from Eq. (113),
\[\dot{\beta}(F)=\delta\varpi\cdot\frac{128}{5}\frac{1}{\mathcal{M}}\frac{\tilde {u}^{12}}{\nu^{4/5}}\left[1+\frac{43}{8}e_{0}^{2}\chi^{-19/9}+e_{0}^{4}\chi^{ -19/9}\left(\frac{142889}{7296}-\frac{23873}{7296}\chi^{-19/9}\right)\right], \tag{133}\]
and
\[\ddot{\beta}(F)=\delta\varpi\cdot\frac{49152}{25}\frac{1}{\mathcal{M}^{2}} \frac{\tilde{u}^{20}}{\nu^{4/5}}\left[1+\frac{2615}{288}e_{0}^{2}\chi^{-19/9} +e_{0}^{4}\chi^{-19/9}\left(\frac{8689645}{262656}+\frac{389275}{32832}\chi^{ -19/9}\right)\right]. \tag{134}\]
The corresponding values at the stationary points are
\[\beta_{nk}=\delta\varpi\cdot\frac{\tilde{u}_{f}^{4}}{\nu^{4/5}}\left\{1+2e_{ 0}^{2}\chi_{f}^{-19/9}+e_{0}^{4}\chi_{f}^{-19/9}\left[\frac{3323}{456}-\frac{ 1955}{456}\chi_{f}^{-19/9}\right]\right\}, \tag{135}\]
\[\dot{\beta}_{nk}=\delta\varpi\cdot\frac{128}{5}\frac{1}{\mathcal{M}}\frac{ \tilde{u}_{f}^{12}}{\nu^{4/5}}\left\{1+\frac{43}{8}e_{0}^{2}\chi_{f}^{-19/9}+ e_{0}^{4}\chi_{f}^{-19/9}\left[\frac{142889}{7296}-\frac{23873}{7296}\chi_{f}^{-19/9} \right]\right\}, \tag{136}\]
and
\[\ddot{\beta}_{nk}=\delta\varpi\cdot\frac{49152}{25}\frac{1}{\mathcal{M}^{2}} \frac{\tilde{u}_{f}^{20}}{\nu^{4/5}}\left[1+\frac{2615}{288}e_{0}^{2}\chi_{f}^ {-19/9}+e_{0}^{4}\chi_{f}^{-19/9}\left(\frac{8689645}{262656}+\frac{389275}{328 32}\chi_{f}^{-19/9}\right)\right], \tag{137}\]
respectively. Finally, from Eq. (114), the time function is
\[\begin{split} t(F)&=t_{c}-\frac{5}{256}\mathcal{M} \tilde{u}^{-8}\left\{1-\frac{157}{43}e_{0}^{2}\chi^{-19/9}+e_{0}^{4}\left[ \frac{1044553}{56544}\chi^{-38/9}-\frac{521711}{39216}\chi^{-19/9}\right]\right\} \\ &-\frac{5}{256}\mathcal{M}\frac{\tilde{u}^{-4}}{\nu^{4/5}}\Bigg{\{} \left[-\frac{25}{12288}\Delta^{2}-\frac{20}{3}\delta\varpi\right]\\ &+e_{0}^{2}\chi^{-19/9}\left[\left(\frac{44125}{4571136}-\frac{51 025}{6340608}\chi^{-4/3}\right)\Delta^{2}+\left(\frac{25349}{2232}-\frac{643 7}{3096}\chi^{-4/3}\right)\delta\varpi\right]\\ &+e_{0}^{4}\chi^{-19/9}\Bigg{[}\left(\frac{146627375}{4168876032}- \frac{1640489075}{40478441472}\chi^{-4/3}-\frac{8799125}{117669888}\chi^{-19/9}+ \frac{339479725}{4168876032}\chi^{-31/9}\right)\Delta^{2}\\ &\qquad\qquad+\left(\frac{84234727}{2035584}-\frac{339138997}{19764864 }\chi^{-4/3}-\frac{107411761}{2298240}\chi^{-19/9}+\frac{42826673}{2035584} \chi^{-31/9}\right)\delta\varpi\Bigg{]}\Bigg{\}},\end{split} \tag{138}\]
and the stationary-point value is thus
\[t_{nk} =t_{c}-\frac{5}{256}\mathcal{M}\tilde{u}_{f}^{-8}\left\{1-\frac{157} {43}e_{0}^{2}\chi_{f}^{-19/9}+e_{0}^{4}\left[\frac{1044553}{56544}\chi_{f}^{-38/9 }-\frac{521711}{39216}\chi_{f}^{-19/9}\right]\right\} \tag{139}\] \[-\frac{5}{256}\mathcal{M}\frac{\tilde{u}_{f}^{-4}}{\nu^{4/5}} \Bigg{\{}\left[-\frac{25}{12288}\Delta^{2}+\left(\frac{8}{3}\frac{k}{n}-\frac{ 20}{3}\right)\delta\varpi\right]\] \[+e_{0}^{2}\chi_{f}^{-19/9}\Bigg{[}\left(\frac{44125}{4571136}- \frac{51025}{6340608}\chi_{f}^{-4/3}\right)\Delta^{2}+\left(\frac{25349}{2232}- \frac{109}{9}\frac{k}{n}-\frac{6437}{3096}\chi_{f}^{-4/3}\right)\delta\varpi \Bigg{]}\] \[+e_{0}^{4}\chi_{f}^{-19/9}\Bigg{[}\Bigg{(}\frac{146627375}{416887 6032}-\frac{1640489075}{40478441472}\chi_{f}^{-4/3}-\frac{8799125}{117669888} \chi_{f}^{-19/9}+\frac{339479725}{4168876032}\chi_{f}^{-31/9}\Bigg{)}\Delta^{2}\] \[\qquad\qquad+\Bigg{(}\frac{84234727}{2035584}-\frac{362207}{8208} \frac{k}{n}\Bigg{)}-\frac{339138997}{19764864}\chi_{f}^{-4/3}\] \[\qquad\qquad-\left(\frac{107411761}{2298240}-\frac{664345}{8208} \frac{k}{n}\right)\chi_{f}^{-19/9}+\frac{42826673}{2035584}\chi_{f}^{-31/9} \Bigg{)}\delta\varpi\Bigg{]}\Bigg{\}}.\]
Using Eqs. (139, 135, 130), the final modified phase defined in Eqs. (115) or (117) is
\[\Psi_{nk} =-n\ell_{c}+2\pi ft_{c}-\frac{\pi}{4}+\frac{3}{256}n\tilde{u}_{f} ^{-5}\left\{1-\frac{2355}{1462}e_{0}^{2}\chi_{f}^{-19/9}+e_{0}^{4}\left(\frac {5222765}{998944}\chi_{f}^{-38/9}-\frac{2608555}{44448}\chi_{f}^{-19/9}\right)\right\} \tag{140}\] \[+\frac{3}{256}n\tilde{u}_{f}^{-5}\cdot\frac{\tilde{u}_{f}^{4}}{ \nu^{4/5}}\Bigg{\{}\left[-\frac{125}{12288}+e_{0}^{2}\left(\frac{220625}{33521 664}\chi_{f}^{-19/9}-\frac{255125}{71860224}\chi_{f}^{-31/9}\right)\right.\] \[+e_{0}^{4}\left(\frac{733136875}{30571757568}\chi_{f}^{-19/9}- \frac{8202445375}{458755670016}\chi_{f}^{-31/9}-\frac{43995625}{1608155136} \chi_{f}^{-38/9}+\frac{1697398625}{73650143232}\chi_{f}^{-50/9}\right)\Bigg{]} \Delta^{2}\] \[+\left[\frac{8}{3}\frac{k}{n}-\frac{100}{3}-\frac{256}{3}\frac{k }{n}\chi_{f}^{5/3}(2\mathcal{M}F_{0})^{5/3}\ell_{c}\right.\] \[+e_{0}^{2}\left(-\frac{241}{102}\frac{k}{n}\chi_{f}^{-19/9}+\frac {126745}{16368}\chi_{f}^{-19/9}-\frac{32185}{35088}\chi_{f}^{-31/9}-\frac{512} {3}\frac{k}{n}\chi_{f}^{-4/9}(2\mathcal{M}F_{0})^{5/3}\ell_{c}\right)\] \[+e_{0}^{4}\Bigg{(}\left(\frac{421173635}{14927616}-\frac{800843}{9 3024}\frac{k}{n}\right)\chi_{f}^{-19/9}+\left(\frac{22659965}{2465136}\frac{k}{ n}-\frac{107411761}{6281856}\right)\chi_{f}^{-38/9}-\frac{1695694985}{224001792 }\chi_{f}^{-31/9}\] \[\qquad\qquad+\left.\frac{214133365}{35961984}\chi_{f}^{-50/9}+\left( \frac{62560}{171}\chi_{f}^{-23/9}-\frac{106336}{171}\chi_{f}^{-4/9}\right) \frac{k}{n}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right)\Bigg{]}\delta\varpi \Bigg{\}}.\]
From Eqs. (135, 136, 137, 130, 131, 137), the modified amplitudes in Eq. (116) are obtained; they are listed in Appendix D. The first line of Eq. (140) was obtained by [88], but our result differs from theirs by a minus sign because of the different definitions: our \(\Psi_{nk}\) corresponds to \(-i(\pi/4+\Psi_{n})\) in Eq. (4.29) of [88]. Recall that DCS theory modifies the gravitational radiation and radiation reaction at the 2PN order. Because of the periastron advance, the second derivative of the phase function in the SPA formula must be replaced by Eq. (107), in which \(\ell\) is involved; as a result, some terms contain the mean anomaly at the coalescence moment, \(\ell_{c}\).
Eq. (140) and Appendix D provide the explicit expression of the modified waveform (115). The waveform depends on the following parameters: the total mass \(m\), the mass ratio \(\nu\), the spins \(\mathbf{S}_{A}\), the merging time \(t_{c}\), the azimuth angle \(\omega\), the inclination angle \(\iota\) of the observer, the merging phase \(\ell_{c}\), the initial frequency \(F_{0}\), the initial eccentricity \(e_{0}\) (\(\lesssim 0.3\)), and the coupling parameter \(\zeta\), which is encoded in \(\delta\varpi\) and \(\Delta^{2}\), given by Eqs. (9) and (62), respectively. The frequency-dependent combinations \(\chi_{f}\) and \(\tilde{u}_{f}\) are given by Eq. (124). The modified phase and amplitudes are two sets of functions of the frequency \(f\) carrying two indices \(n\) and \(k\). The summation in the waveform (115) runs over the ranges \(n^{\prime}\equiv\{-6,-5,\cdots,6\}\) and \(k^{\prime}\equiv\{-2,2\}\).
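The mode sum itself is straightforward to assemble once the coefficients are coded. The following sketch is purely structural and not from any referenced code: `amp` and `phase` are placeholders for the closed forms of Eq. (140) and Appendix D, and the \(e^{+i\Psi}\) convention is an assumption chosen only for illustration.

```python
import numpy as np

def htilde(f, amp, phase):
    """Structural sketch of the frequency-domain mode sum: the strain is a
    sum over n in {-6, ..., 6} and k in {-2, 2}, each term built from a
    modified amplitude and a modified phase carrying the two indices."""
    h = np.zeros_like(f, dtype=complex)
    for n in range(-6, 7):
        for k in (-2, 2):
            h += amp(n, k, f) * np.exp(1j * phase(n, k, f))
    return h

def F_d(F_plus, F_cross, iota, sign=+1):
    """Pattern-function combination used by the amplitudes:
    F_d^(+/-) = (1 + cos^2 iota) F_+  +/-  2i cos(iota) F_x."""
    return (1 + np.cos(iota)**2) * F_plus + sign * 2j * np.cos(iota) * F_cross
```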
In summary, the final ready-to-use waveform is valid under four approximations. The first three (small coupling, slow rotation, and the PN expansion) have been discussed in Section III.1. The last one is a small initial eccentricity, which allows an analytic calculation of the frequency-domain evolution of the eccentricity. We expand the expression up to order \(\mathcal{O}(e_{0}^{4})\), which is valid for initial eccentricity \(e_{0}\lesssim 0.3\).
## VII Conclusion and discussion
DCS theory [147; 148] is a parity-violating gravitational theory that has recently attracted increasing attention. Compared with other scalar-tensor gravities, this theory modifies only non-spherically-symmetric spacetimes [170; 175; 176; 180], so that only the gravitational radiation from rotating binary black holes encodes the distinctions between DCS and GR. In the PN framework, the DCS modifications to BBH motion, radiation, orbital secular evolution, and Fourier waveforms always enter at the 2PN order [150; 151; 152]. This is the main conclusion of our previous work [152], in which quasi-circular orbits and waveforms were fully investigated.
This article focuses on non-precessing BBH systems with quasi-elliptic orbits. The motion is constrained to the orbital plane, so the quasi-Keplerian parameterization first introduced for non-spinning binaries [89] can be successfully extended to the DCS modification, which also induces the periastron-advance effect in Eq. (32); the BBH orbits are therefore no longer closed. The BBH motion thus presents a doubly periodic structure: the azimuth angle of the BBH passes through \((1+\beta)v\) while the true (or eccentric and mean) anomaly passes through \(2\pi\). Two formal orbital elements, the "radial" semimajor axis \(a_{r}\) and the "radial" eccentricity \(e_{r}\), are introduced to describe the BBH motion [see Eq. (26)]. These two elements are not true geometric quantities but two independent parameters related to the conserved energy and OAM, as shown in Eq. (22).
Based on the PN description of the BBH motion, the scalar and gravitational waveforms are obtained through the quadrupole formula [152]; they are expressed in terms of the true anomaly rather than the azimuth angle. The waveform is written as a linear combination of \(\sin(nV)\) and \(\cos(nV)\), with \(n=1,2,3\) in the Newtonian limit and \(n=1,2,3,4,5\) for the DCS modification [see Eqs. (43; 53; 54)]. Again, due to the periastron advance, the periodic behavior of the waveform is modulated at a much lower frequency. From the waveforms, the energy flux and OAM flux carried by the scalar and tensor radiation are calculated as Eqs. (67, 76). Combining these with the balance equation (77), one obtains the secular evolution of the elements [see Eqs. (80; 81)], under which the orbit gradually becomes quasi-circular. The DCS modification changes the rate of circularization but does not alter the fact that the orbit circularizes. Although closed-form time-domain solutions for the orbital frequency \(x\) and eccentricity \(e_{r}\) are unattainable, we can solve analytically for \(x\) in terms of \(e_{r}\); the result shows that the eccentricity decreases as the frequency increases, as given in Eq. (83) and its solutions (85; 86).
Due to the complicated form of the secular evolution, the Fourier waveform cannot be obtained in full for arbitrary eccentricity; the small-eccentricity limit, or post-circular approximation, is generally adopted. Up to fourth order in the initial eccentricity \(e_{0}\) and linear order in the DCS coupling \(\zeta\), the frequency-domain waveform is reported in Section VI. In particular, the modified phase is given in Eq. (140) and the amplitudes in Appendix D, which complete the template construction and will benefit future signal searches and improve the theoretical constraints on DCS theory.
###### Acknowledgements.
We would like to thank Aoxiang Jiang and Wei Liu for their helpful discussions and comments. This work is supported by the National Key R&D Program of China Grant No. 2022YFC2200100 and 2021YFC2203102, NSFC No. 12273035 and 12325301, the Fundamental Research Funds for the Central Universities under Grant No. WK3440000004, and the science research grants from the China Manned Space Project with No. CMS-CSST-2021-B01. T. Z. is supported in part by the National Key Research and Development Program of China under Grant No. 2020YFC2201503, the National Natural Science Foundation of China under Grant No. 12275238 and No. 11675143, the Zhejiang Provincial Natural Science Foundation of China under Grant No. LR21A050001 and LY20A050002, and the Fundamental Research Funds for the Provincial Universities of Zhejiang in China under Grant No. RF-A2019015. T. L. is supported by NSFC No. 12003008.
## Appendix A Coefficients Involved in Eq. (91)
In Section VI.1, we derived the evolution of the orbital frequency with the changing eccentricity in the post-circular limit; the coefficients involved in Eq. (91) are listed as follows:
\[\mathcal{P}_{0}^{(0)}=-\mathcal{P}_{-24/19}^{(0)}=\frac{325}{311296}\Delta^{2} +\frac{41}{152}\delta\varpi, \tag{101}\]
\[\begin{split}\mathcal{P}_{0}^{(2)}&=-\frac{3239925}{17980 45696}\Delta^{2}-\frac{408729}{877952}\delta\varpi,\qquad\mathcal{P}_{-62/19}^{ (2)}=\frac{34620825}{12586319872}\Delta^{2}-\frac{964753}{6145664}\delta\varpi,\\ \mathcal{P}_{-2}^{(2)}&=\frac{40977425}{12586319872} \Delta^{2}+\frac{10501763}{6145664}\delta\varpi,\qquad\mathcal{P}_{-24/19}^{ (2)}=-\frac{7559825}{1798045696}\Delta^{2}-\frac{953701}{877952}\delta\varpi, \end{split} \tag{100}\]
\[\begin{split}\mathcal{P}_{0}^{(4)}&=\frac{4252453725}{ 5192755970048}\Delta^{2}+\frac{536463393}{2535525376}\delta\varpi,\qquad \mathcal{P}_{-24/19}^{(4)}=-\frac{537569626725}{236270396637184}\Delta^{2}+ \frac{141346710801}{115366404608}\delta\varpi,\\ \mathcal{P}_{-2}^{(4)}&=-\frac{408503949825}{7269858 3580672}\Delta^{2}-\frac{104692075347}{35497355264}\delta\varpi,\qquad\mathcal{ P}_{-62/19}^{(4)}=\frac{115045001475}{10385511940096}\Delta^{2}-\frac{3205874219}{507 1050752}\delta\varpi,\\ \mathcal{P}_{-4}^{(4)}&=\frac{834586228575}{135011652 21248}\Delta^{2}+\frac{314836109183}{65923659776}\delta\varpi,\qquad\mathcal{ P}_{-100/19}^{(4)}=-\frac{105761708325}{10385511940096}\Delta^{2}-\frac{13342246281}{507 1050752}\delta\varpi.\end{split} \tag{101}\]
## Appendix B Coefficients Involved in Eq. (93)
After obtaining the frequency in terms of the evolving eccentricity, we invert this relationship to get the frequency-domain evolution of the eccentricity; some coefficients in Eq. (93) are listed below:
\[\mathcal{S}_{0}^{(0)}=-\mathcal{S}_{4/3}^{(0)}=\frac{325}{294912}\Delta^{2}+ \frac{41}{144}\delta\varpi \tag{102}\]
\[\begin{split}\mathcal{S}_{-19/9}^{(2)}&=-\frac{10799 75}{179306496}\Delta^{2}-\frac{136243}{87552}\delta\varpi,\qquad\mathcal{S}_{ -7/9}^{(2)}=\frac{5633725}{1255145472}\Delta^{2}+\frac{49807}{204288}\delta \varpi,\\ \mathcal{S}_{0}^{(2)}&=\frac{13338125}{3765436416} \Delta^{2}+\frac{3366541}{1838592}\delta\varpi,\qquad\mathcal{S}_{4/3}^{(2)}= -\frac{1079975}{537919488}\Delta^{2}-\frac{136243}{262656}\delta\varpi,\end{split} \tag{103}\]
\[\begin{split}\mathcal{S}_{-38/9}^{(4)}&=\frac{8167208 2375}{1962330292224}\Delta^{2}+\frac{10303247315}{958169088}\delta\varpi,\qquad \mathcal{S}_{-26/9}^{(4)}=-\frac{5196368177125}{178572056592384}\Delta^{2}- \frac{82850130257}{87193387008}\delta\varpi,\\ \mathcal{S}_{-19/9}^{(4)}&=-\frac{10507242925}{254376 148992}\Delta^{2}-\frac{5841770863}{372621312}\delta\varpi,\qquad\mathcal{S}_{ -7/9}^{(4)}=\frac{18720868175}{763128446976}\Delta^{2}+\frac{165508661}{1242071 04}\delta\varpi,\\ \mathcal{S}_{0}^{(4)}&=\frac{1232639443225}{17857205659 2384}\Delta^{2}+\frac{455716402373}{87193387008}\delta\varpi,\qquad\mathcal{ S}_{4/3}^{(4)}=-\frac{5198125075}{1962330292224}\Delta^{2}-\frac{655763471}{958169088} \delta\varpi.\end{split} \tag{104}\]
## Appendix C Coefficients Involved in Eq. (118)
Up to linear order in the DCS coupling and fourth order in the eccentricity, i.e., \(\mathcal{O}(\zeta)\) and \(\mathcal{O}(e_{r}^{4})\), the time-domain waveform includes 26 modes denoted by different \(n\) and \(k\). The non-vanishing modes included in Eq. (118)
with \(k=2\) are
\[\begin{split} A_{-1,2}&=\frac{1}{576}xe_{r}^{3}\mathcal{F}_{d}^{(-)}e^{2i\omega}(21+80\cdot\delta\varpi\cdot x^{2}),\\ A_{-2,2}&=\frac{1}{192}xe_{r}^{4}\mathcal{F}_{d}^{(-)}e^{2i\omega}(6+23\cdot\delta\varpi\cdot x^{2}),\\ A_{1,2}&=\frac{1}{64}xe_{r}\mathcal{F}_{d}^{(-)}e^{2i\omega}[(24-13e_{r}^{2})+(80+64e_{r}^{2})\delta\varpi\cdot x^{2}],\\ A_{2,2}&=-\frac{1}{96}x\mathcal{F}_{d}^{(-)}e^{2i\omega}[(48-120e_{r}^{2}+69e_{r}^{4})+(32-376e_{r}^{2}-384e_{r}^{4})\delta\varpi\cdot x^{2}],\\ A_{3,2}&=\frac{3}{64}xe_{r}\mathcal{F}_{d}^{(-)}e^{2i\omega}[-24+57e_{r}^{2}-(48-160e_{r}^{2})\delta\varpi\cdot x^{2}],\\ A_{4,2}&=-\frac{1}{6}xe_{r}^{2}\mathcal{F}_{d}^{(-)}e^{2i\omega}[(12-30e_{r}^{2})+(35-83e_{r}^{2})\delta\varpi\cdot x^{2}],\\ A_{5,2}&=-\frac{25}{576}xe_{r}^{3}\mathcal{F}_{d}^{(-)}e^{2i\omega}(75+272\cdot\delta\varpi\cdot x^{2}),\\ A_{6,2}&=-\frac{3}{64}xe_{r}^{4}\mathcal{F}_{d}^{(-)}e^{2i\omega}(108+455\cdot\delta\varpi\cdot x^{2}),\end{split} \tag{100}\]
The other modes with \(k=-2\) are
\[\begin{split} A_{1,-2}&=\frac{1}{576}xe_{r}^{3} \mathcal{F}_{d}^{(+)}e^{-2i\omega}(21+80\cdot\delta\varpi\cdot x^{2}),\\ A_{2,-2}&=\frac{1}{192}xe_{r}^{4}\mathcal{F}_{d}^{(+)} e^{-2i\omega}(6+23\cdot\delta\varpi\cdot x^{2}),\\ A_{-1,-2}&=\frac{1}{64}xe_{r}\mathcal{F}_{d}^{(+)} e^{-2i\omega}[24-13e_{r}^{2}+(80+64e_{r}^{2})\delta\varpi\cdot x^{2}],\\ A_{-2,-2}&=-\frac{1}{96}x\mathcal{F}_{d}^{(+)}e^{-2 i\omega}[(48-120e_{r}^{2}+69e_{r}^{4})+(32-384e_{r}^{2}-276e_{r}^{4})\delta \varpi\cdot x^{2}],\\ A_{-3,-2}&=-\frac{3}{64}xe_{r}\mathcal{F}_{d}^{(+)} e^{-2i\omega}[(24-57e_{r}^{2})+(48-160e_{r}^{2})\delta\varpi\cdot x^{2}],\\ A_{-4,-2}&=-\frac{1}{64}xe_{r}^{2}\mathcal{F}_{d}^{(+) }e^{-2i\omega}[(12-30e_{r}^{2})+(35-83e_{r}^{2})\delta\varpi\cdot x^{2}],\\ A_{-5,-2}&=-\frac{25}{576}xe_{r}^{3}\mathcal{F}_{d}^{(+ )}e^{-2i\omega}(75+272\cdot\delta\varpi\cdot x^{2}),\\ A_{-6,-2}&=-\frac{3}{64}xe_{r}^{4}\mathcal{F}_{d}^{(+ )}e^{-2i\omega}(108+455\cdot\delta\varpi\cdot x^{2}).\end{split} \tag{101}\]
The others vanish in the considered order, i.e.,
\[A_{0,2}=A_{-3,2}=A_{-4,2}=A_{-5,2}=A_{-6,2}=A_{0,-2}=A_{3,-2}=A_{4,-2}=A_{5,-2 }=A_{6,-2}=0. \tag{102}\]
We recall that \(x\) is the dimensionless orbital frequency, defined by \(x\equiv(m\Omega)^{2/3}\), and \(\omega\) is the azimuth coordinate of the observer. Additionally, we define a new symbol \(\mathcal{F}_{d}^{(\pm)}\), consisting of the pattern functions \(F_{+,\times}\) and the inclination of the observer, as
\[\mathcal{F}_{d}^{(\pm)}=(1+\cos^{2}\iota)F_{+}\pm 2i\cos\iota F_{\times}. \tag{103}\]
## Appendix D The Explicit Results Of Modified Amplitudes
At the end of the main text, we presented the amplitude and phase of the Fourier waveforms; the modified amplitudes are listed in this appendix. We first define an overall coefficient as
\[\mathcal{A}_{n}^{(0)}\equiv-\sqrt{\frac{5}{96}}\pi^{-2/3}\frac{\mathcal{M}^{5/ 6}}{R}f^{-7/6}\left(\frac{n}{2}\right)^{2/3}. \tag{104}\]
The non-vanishing modes in GR are
\[\begin{split}\bar{\mathcal{A}}_{1,2}&=-\frac{7}{96} \mathcal{A}_{1}^{(0)}\mathcal{F}_{d}^{(-)}e^{2i\omega}e_{0}^{3}\chi_{f}^{-19/6},\qquad\bar{\mathcal{A}}_{2,2}=-\frac{1}{16}\mathcal{A}_{2}^{(0)}\mathcal{F}_ {d}^{(-)}e^{2i\omega}e_{0}^{4}\chi_{f}^{-38/9},\\ \bar{\mathcal{A}}_{1,-2}&=\mathcal{A}_{1}^{(0)} \mathcal{F}_{d}^{(+)}e^{-2i\omega}\left\{-\frac{3}{4}e_{0}\chi_{f}^{-19/18}+e_ {0}^{3}\left[\frac{10277}{2432}\chi_{f}^{-19/6}-\frac{3323}{2432}\chi_{f}^{-19/ 18}\right]\right\},\\ \bar{\mathcal{A}}_{2,-2}&=\mathcal{A}_{2}^{(0)} \mathcal{F}_{d}^{(+)}e^{-2i\omega}\left\{1-\frac{277}{48}e_{0}^{2}\chi_{f}^{- 19/9}+e_{0}^{4}\left[\frac{3260071}{87552}\chi_{f}^{-38/9}-\frac{920471}{43776} \chi_{f}^{-19/9}\right]\right\},\\ \bar{\mathcal{A}}_{3,-2}&=\mathcal{A}_{3}^{(0)} \mathcal{F}_{d}^{(+)}e^{-2i\omega}\left\{\frac{9}{4}e_{0}\chi_{f}^{-19/18}+e_ {0}^{3}\left[\frac{9969}{2432}\chi_{f}^{-19/18}-\frac{40863}{2432}\chi_{f}^{-19 /6}\right]\right\},\\ \bar{\mathcal{A}}_{4,-2}&=\mathcal{A}_{4}^{(0)} \mathcal{F}_{d}^{(+)}e^{-2i\omega}\left\{4e_{0}^{2}\chi_{f}^{-19/9}+e_{0}^{4} \left[\frac{3323}{228}\chi_{f}^{-19/9}-\frac{1431}{38}\chi_{f}^{-38/9}\right] \right\},\\ \bar{\mathcal{A}}_{5,-2}&=\frac{625}{96}\mathcal{A}_ {5}^{(0)}\mathcal{F}_{d}^{(+)}e^{-2i\omega}e_{0}^{3}\chi_{f}^{-19/6},\qquad \bar{\mathcal{A}}_{6,-2}=\frac{81}{8}\mathcal{A}_{6}^{(0)}\mathcal{F}_{d}^{(+ )}e^{-2i\omega}e_{0}^{4}\chi_{f}^{-38/9}.\end{split} \tag{45}\]
The non-vanishing modes in DCS modification are
\[\begin{split}\delta\mathcal{A}_{1,2}&=\mathcal{A}_ {1}^{(0)}\mathcal{F}_{d}^{(-)}e^{2i\omega}e_{0}^{3}\chi_{f}^{-19/6}\frac{ \tilde{u}_{f}^{4}}{\nu^{4/5}}\Bigg{\{}\left(\frac{875}{3145728}-\frac{2275}{94 37184}\chi_{f}^{-4/3}\right)\Delta^{2}\\ &\qquad\qquad-\left(\frac{287}{4608}+\frac{15941}{23040}\chi_{f}^ {-4/3}+\frac{112}{15}\chi_{f}^{3/2}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right) \delta\varpi\Bigg{\}},\\ \delta\mathcal{A}_{2,2}&=\mathcal{A}_{2}^{(0)} \mathcal{F}_{d}^{(-)}e^{2i\omega}e_{0}^{4}\chi_{f}^{-38/9}\frac{\tilde{u}_{f}^ {4}}{\nu^{4/5}}\Bigg{\{}\left(\frac{725}{2359296}-\frac{325}{1179648}\chi_{f} ^{-4/3}\right)\Delta^{2}\\ &\qquad\qquad+\left(-\frac{371}{960}-\frac{41}{576}\chi_{f}^{-4 /3}+\frac{16}{5}\chi_{f}^{-23/9}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right) \delta\varpi\Bigg{\}},\end{split} \tag{46}\]
\[\begin{split}\delta\mathcal{A}_{1,-2}&=\mathcal{A}_ {1}^{(0)}\mathcal{F}_{d}^{(+)}e^{-2i\omega}\chi_{f}^{-19/18}\frac{\tilde{u}_{f}^ {4}}{\nu^{4/5}}\Bigg{\{}\Bigg{[}e_{0}\left(\frac{475}{393216}-\frac{325}{393216 }\chi_{f}^{-4/3}\right) \tag{47}\] \[&+e_{0}^{3}\left(\frac{83075}{37748736}-\frac{13338125}{5020581 888}\chi_{f}^{-4/3}-\frac{1034275}{88080384}\chi_{f}^{-19/9}+\frac{3340025}{ 239075328}\chi_{f}^{-31/9}\right)\Bigg{]}\Delta^{2}\\ &+\Bigg{[}e_{0}\left(\frac{623}{320}-\frac{41}{192}\chi_{f}^{-4 /3}-\frac{384}{5}\chi_{f}^{11/18}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right) \] \[&+e_{0}^{3}\Bigg{(}\frac{2070229}{583680}-\frac{3366541}{24514 56}\chi_{f}^{-4/3}-\frac{1587076063}{69457920}\chi_{f}^{-19/9}+\frac{421357}{1 16736}\chi_{f}^{-31/9}\] \[&+\left(\frac{67768}{285}\chi_{f}^{-3/2}-\frac{13292}{95}\chi _{f}^{11/18}\right)(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\Bigg{)}\Bigg{]} \delta\varpi\Bigg{\}},\end{split} \tag{48}\]
\[\delta\mathcal{A}_{2,-2} =\mathcal{A}_{2}^{(0)}\mathcal{F}_{d}^{(+)}e^{-2i\omega}\frac{\tilde{ u}_{f}^{4}}{\nu^{4/5}}\Bigg{\{}\Bigg{[}-\frac{25}{49152}+e_{0}^{2}\left(\frac{40175}{3538944 }\chi_{f}^{-19/9}-\frac{90025}{7077888}\chi_{f}^{-31/9}\right) \tag{105}\] \[+e_{0}^{4}\left(\frac{133501525}{3227516928}\chi_{f}^{-19/9}- \frac{2894366075}{45185236992}\chi_{f}^{-31/9}-\frac{11491459625}{90370473984} \chi_{f}^{-38/9}+\frac{1059523075}{6455033856}\chi_{f}^{-50/9}\right)\Bigg{]} \Delta^{2}\] \[+\Bigg{[}-\frac{29}{15}+\frac{256}{5}\chi_{f}^{5/3}(2\pi\mathcal{ M}F_{0})^{5/3}\ell_{c}+e_{0}^{2}\left(\frac{3680737}{293760}\chi_{f}^{-19/9}- \frac{11357}{3456}\chi_{f}^{-31/9}-\frac{7448}{45}\chi_{f}^{-4/9}(2\pi\mathcal{ M}F_{0})^{5/3}\ell_{c}\right)\] \[+e_{0}^{4}\Bigg{(}\frac{643741529}{14100480}\chi_{f}^{-19/9}- \frac{598353517}{22063104}\chi_{f}^{-31/9}-\frac{686542948523}{5231278080}\chi _{f}^{-38/9}+\frac{133662911}{3151872}\chi_{f}^{-50/9}\] \[+\left(\frac{157387}{135}\chi_{f}^{-23/9}-\frac{162827}{270}\chi _{f}^{-4/9}\right)(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\Bigg{)}\Bigg{]}\delta \varpi\Bigg{\}},\]
\[\delta\mathcal{A}_{3,-2} =\mathcal{A}_{3}^{(0)}\mathcal{F}_{d}^{(+)}e^{-2i\omega}\chi_{f}^ {-19/18}\frac{\tilde{u}_{f}^{4}}{\nu^{4/5}}\Bigg{\{}\Bigg{[}e_{0}\left(-\frac {475}{131072}+\frac{325}{131072}\chi_{f}^{-4/3}\right) \tag{106}\] \[+e_{0}^{3}\left(-\frac{2759}{960}+\frac{41}{64}\chi_{f}^{-4/3}+ \frac{1496275}{29360128}\chi_{f}^{-19/9}-\frac{4426825}{79691776}\chi_{f}^{-31 /9}\right)\Bigg{]}\Delta^{2}\] \[+\Bigg{[}e_{0}\left(-\frac{9168157}{1751040}-\frac{41}{192}\chi_ {f}^{-4/3}+\frac{384}{5}\chi_{f}^{11/18}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right)\] \[+e_{0}^{3}\Bigg{(}\frac{2070229}{583680}+\frac{3366541}{817152} \chi_{f}^{-4/3}+\frac{1981498061}{69457920}\chi_{f}^{-19/9}-\frac{558461}{3891 2}\chi_{f}^{-31/9}\] \[+\left(\frac{13292}{95}\chi_{f}^{11/18}-\frac{107896}{285}\chi_{ f}^{-3/2}\right)(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\Bigg{)}\Bigg{]}\delta \varpi\Bigg{\}},\]
\[\delta\mathcal{A}_{4,-2} =\mathcal{A}_{4}^{(0)}\mathcal{F}_{d}^{(+)}e^{-2i\omega}\chi_{f}^ {-19/9}\frac{\tilde{u}_{f}^{4}}{\nu^{4/5}}\Bigg{\{}\Bigg{[}e_{0}^{2}\left(- \frac{25}{2304}+\frac{325}{36864}\chi_{f}^{-4/3}\right) \tag{107}\] \[+e_{0}^{4}\left(-\frac{83075}{2101248}+\frac{10448975}{2353939776} \chi_{f}^{-4/3}+\frac{678425}{4358144}\chi_{f}^{-19/9}-\frac{51675}{311296} \chi_{f}^{-31/9}\right)\Bigg{]}\Delta^{2}\] \[+\Bigg{[}e_{0}^{2}\left(-\frac{101}{30}+\frac{41}{18}\chi_{f}^{-4 /3}+\frac{512}{5}\chi_{f}^{-4/9}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right) \] \[+e_{0}^{4}\Bigg{(}-\frac{335623}{27360}+\frac{2160121}{114912} \chi_{f}^{-4/3}+\frac{39932437}{813960}\chi_{f}^{-19/9}-\frac{6519}{152}\chi _{f}^{-31/9}\] \[+\left(\frac{106336}{285}\chi_{f}^{-4/9}-\frac{602032}{855}\chi_{ f}^{-23/9}\right)(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\Bigg{)}\Bigg{]}\delta \varpi\Bigg{\}},\]
\[\delta\mathcal{A}_{5,-2} =\mathcal{A}_{5}^{(0)}\mathcal{F}_{d}^{(+)}e^{-2i\omega}e_{0}^{3} \chi_{f}^{-19/6}\frac{\tilde{u}_{f}^{4}}{\nu^{4/5}}\Bigg{\{}\left(-\frac{78125}{3 145728}+\frac{203125}{9437184}\chi_{f}^{-4/3}\right)\Delta^{2} \tag{108}\] \[+\left(-\frac{16025}{4608}+\frac{25625}{4608}\chi_{f}^{-4/3}+ \frac{400}{3}\chi_{f}^{-3/2}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right)\delta \varpi\Bigg{\}},\] \[\delta\mathcal{A}_{6,-2} =\mathcal{A}_{6}^{(0)}\mathcal{F}_{d}^{(+)}e^{-2i\omega}e_{0}^{4} \chi_{f}^{-38/9}\frac{\tilde{u}_{f}^{4}}{\nu^{4/5}}\Bigg{\{}\left(-\frac{6525}{ 131072}+\frac{2925}{65536}\chi_{f}^{-4/3}\right)\Delta^{2}\] \[+\left(-\frac{63}{20}+\frac{369}{32}\chi_{f}^{-4/3}+\frac{864}{5} \chi_{f}^{-23/9}(2\pi\mathcal{M}F_{0})^{5/3}\ell_{c}\right)\delta\varpi\Bigg{\}}.\] |
2301.00013 | Don't throw that video away! Reference Frames can fix Video Analysis with a Moving Camera | One common source of error in video analysis is camera movement. The paper describes a simple frame of reference correction that students can employ to salvage otherwise corrupted video analysis data. Two examples are provided. | Nathan T. Moore | 2022-12-31T00:06:09Z | http://arxiv.org/abs/2301.00013v1 | # Don't throw that video away!
###### Abstract
One common source of error in video analysis is camera movement. The paper describes a simple frame of reference correction that students can employ to salvage otherwise corrupted video analysis data. Two examples are provided.
## I Introduction
Video analysis is a convenient and fun way to collect kinematic position-time information for a motion that may not be otherwise accessible in the introductory lab. The general procedure is to track the successive motion of an object via its x and y pixel position over multiple frames of a video.
One curriculum that uses video analysis as a learning support to construct and test ideas is Eugenia Etkina's ISLE approach to learning physics [1]. For example, in the 2-D Projectile motion unit, one video [2] shows Dr. Etkina tossing a ball vertically while rolling across the room on rollerblades. If you are reading the paper on a computer, you might watch the video now: [http://islevides.net/experiment.php?topicid=2&exptid=95](http://islevides.net/experiment.php?topicid=2&exptid=95).
Another set of teaching videos is Peter Bohacek's Direct Measurement Physics Videos, [4]. These videos usually show physics in a "real" setting (outside the classroom) and are again powerful and engaging tools.
In both of these examples, students are presented with videos that can be analyzed frame by frame at constant time intervals, typically \(60\)\(frames/second\). Horizontal and vertical positions are available via either marks on a chalkboard (Etkina) or a computer-drawn overlay (Bohacek).
There are also a number of video analysis software programs that allow students to analyze any old video they find or record with their cellphones. "Tracker", [5], is a free tool that works on a variety of platforms. Vernier's Logger Pro, [6], often used for data acquisition, also contains video analysis software.
In both of these packages, students repeatedly click on an object of interest and the software then collects time and 2D position data in spreadsheet form. However, the data collected is in units of an x,y pixel pair. Conversion from pixel to physical dimension is a student task, and the quality of student results can depend on the "calibration stick"[7] they employ.
Figure 2: A screenshot from Peter Bohacek’s Direct Measurement Videos, [3]. In this case, position information is given by a drawn overlay added to the video via post-processing.
Figure 1: A screenshot from the ISLE-based video website (developed by E. Etkina and D.T.Brookes). The video has been imported for analysis in Vernier’s LoggerPro software. Horizontal scale is given via the set of vertical lines drawn on the chalkboard with assumed spacing of 10cm. The (green) calibration stick is assumed to be 1.4m in length via lines drawn on the chalkboard. The video can be downloaded from [2].
## II Fixing moving camera motion
Figures 1 and 3 show the analysis process for the video in which Dr. Etkina throws a ball vertically while rollerblading across the classroom. The video of this event was taken by a camera that followed Etkina, so direct tracking of the ball loses the horizontal component of the motion.
However, if a student also tracks the motion of a seemingly immobile object (for example, the center seam of the chalkboard, the center of the clock, or a corner of the calibration sheets), the student can recover the horizontal motion of the ball via vector subtraction. Specifically, if you write the ball's position relative to the classroom wall as \(X_{ball\ wrt\ wall}\), you can then express this position as a difference.
\[X_{ball\ wrt\ wall}=X_{ball\ wrt\ camera}-X_{clock\ wrt\ camera}\] \[Y_{ball\ wrt\ wall}=Y_{ball\ wrt\ camera}-Y_{clock\ wrt\ camera}\]
This subtraction can be accomplished in LoggerPro as a "calculated column," or the data can be exported to a spreadsheet and the operation performed there; a short script, sketched below, works equally well.
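The sketch below is a minimal code version of the correction; the file name and column order are hypothetical stand-ins for whatever the tracking software exports.

```python
import numpy as np

# Hypothetical export from Tracker/LoggerPro with one header row and columns:
# time, ball x, ball y, reference x, reference y.  Ball and reference must
# share the same calibration (pixels or meters, either works).
data = np.loadtxt("toss.csv", delimiter=",", skiprows=1)
t, x_ball, y_ball, x_ref, y_ref = data.T

# Frame-of-reference correction: subtracting the track of a "fixed" object
# (clock, chalkboard seam) cancels the camera's motion.
x_room = x_ball - x_ref
y_room = y_ball - y_ref
```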
Results from this change of reference frame are shown in figures 4 and 5. In the introductory curriculum, frames of reference sometimes seem dry or contrived, but in video analysis projects with a moving camera, thinking about the movement of a seemingly immobile reference frame can be a magic bullet.
## III Example: A Bear Falls Out of a Tree
To further illustrate how useful this approach can be, consider this 2003 video of a bear being removed from a tree in Missoula, Montana via a tranquilizer gun and a trampoline: [https://www.youtube.com/watch?v=9KiJnTGoPPII](https://www.youtube.com/watch?v=9KiJnTGoPPII) [8]. There are quite a few different application topics available in the video, and some of my students are outraged that the bear was made fun of by the Missoula fire, wildlife, and police departments.
So, an ethical question: "Assuming the bear had to be removed from the tree, was bouncing it off a trampoline a humane thing to do?" With encouragement, the students can develop this question into a quantifiable measure, e.g., "What would the bear's speed be if there were no trampoline in place?" There are many graphs online that show the fatality risk for pedestrians who are struck by cars at different speeds, for example [9], and it seems realistic to extrapolate this to a decrease in harm to the bear if its final velocity is reduced.
The bear video was shot by a professional videographer, Mark Hoyoak, and the focus of the camera follows the bear. Ignoring the early parts of the bear's fall when the zoom level changes, the video provides a falling body, tracked by a moving camera. If you consider one of the house's transom windows to be a static reference, camera movement can be removed.
This analysis is shown in figures 6, 7, and 8. The analysis assumes a trampoline leg height of 1 meter, which, based on the extracted gravitational acceleration values, is probably inaccurate.
Straight-line fits to the bear's vertical position give speeds of \(12.3\ m/s\approx 27\ mph\approx 44\ km/h\) as the bear hits the trampoline, and \(5.9\ m/s\approx 13\ mph\approx 21\ km/h\) at about the same altitude after bouncing off the trampoline. Based on the data in Figure 1b of [9], this corresponds to a reduction of pedestrian fatality risk from 5% to less than 1%. Extrapolating from vehicle fatality data, one could argue that in addition to being good comedy, using a trampoline in this case is humane wildlife management.
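The fit itself is a one-liner in most tools; here is a minimal sketch with made-up numbers (not the tracked data from the video), restricted to a short window just before impact where the speed is nearly constant.

```python
import numpy as np

# Illustrative room-frame vertical positions over a short pre-impact window.
t_fall = np.array([0.00, 0.05, 0.10, 0.15])   # s
y_fall = np.array([3.00, 2.39, 1.76, 1.14])   # m

# The slope of the straight-line fit is the (signed) vertical velocity.
slope, intercept = np.polyfit(t_fall, y_fall, 1)
print(f"impact speed = {abs(slope):.1f} m/s")
```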
## IV Conclusion
Cellphone video is everywhere in a way that was unimaginable 20 years ago. Most of the videos that students might take for a kinematics assignment won't be shot from a tripod with a constant depth of field. Accordingly, talking about how to use topics from Physics to correct "errors" in video data collection can be empowering for students. If there's a fixed reference, a student no longer needs to be told, "That video is bad, go take it again...".
###### Acknowledgements.
The work would not have been possible without Mark Hoyoak's excellent videography. Thanks to Eugenia Etkina for introducing me to video analysis many years ago, and to Peter Bohacek for his inspiring talks and amazing Direct Measurement Video examples.
Figure 6: A screenshot from LoggerPro, showing the position-time track for both the bear and the lower left corner of the house’s left transom window. The calibration used is the trampoline’s leg height of 1m. The calibration stick has drifted off to the left because of the movement of the camera. The calibration length is only an estimate and should probably be back fit to the appropriate gravitational acceleration value for the bear’s free-fall.
Figure 7: A plot showing the vertical position-time graph for the bear, the lower left corner of the house’s left transom window, and the bear’s (calculated) motion relative to the window. |
2309.04457 | Ground state solutions for quasilinear Schrodinger type equation involving anisotropic p-laplacian | This paper is concerned with the existence of a nonnegative ground state solution of the following quasilinear Schr\"{o}dinger equation \begin{equation*} -\Delta_{H,p}u+V(x)|u|^{p-2}u-\Delta_{H,p}(|u|^{2\alpha})|u|^{2\alpha-2}u=\lambda |u|^{q-1}u \text{ in } \mathbb{R}^{N};\quad u\in W^{1,p}(\mathbb{R}^{N})\cap L^\infty(\mathbb{R}^{N}) \end{equation*} where $N\geq2$; $(\alpha,p)\in D_N=\{(x,y)\in\mathbb{R}^2 : 2xy\geq y+1,\ y\geq 2x,\ y<N\}$ and $\lambda>0$ is a parameter. The operator $\Delta_{H,p}$ is the reversible Finsler p-Laplacian operator with the function $H$ being the Minkowski norm on $\mathbb{R}^{N}$. Under certain conditions on $V$, we establish the existence of a non-trivial non-negative bounded ground state solution of the above equation. | Kaushik Bal, Sanjit Biswas | 2023-09-08T17:32:31Z | http://arxiv.org/abs/2309.04457v2 | # Ground state solutions for quasilinear Schrodinger type equation involving anisotropic p-laplacian
###### Abstract
This paper is concerned with the existence of a nonnegative ground state solution of the following quasilinear Schrödinger equation

\[\begin{cases}-\Delta_{H,p}u+V(x)|u|^{p-2}u-\Delta_{H,p}(|u|^{2\alpha})|u|^{2\alpha-2}u=\lambda|u|^{q-1}u\text{ in }\mathbb{R}^{N}\\ u\in W^{1,p}(\mathbb{R}^{N})\cap L^{\infty}(\mathbb{R}^{N})\end{cases}\]
where \(N\geq 2\); \((\alpha,p)\in D_{N}=\{(x,y)\in\mathbb{R}^{2}:2xy\geq y+1,\ y\geq 2x,\ y<N\}\) and \(\lambda>0\) is a parameter. The operator \(\Delta_{H,p}\) is the reversible Finsler p-Laplacian operator with the function \(H\) being the Minkowski norm on \(\mathbb{R}^{N}\). Under certain conditions on \(V\), we establish the existence of a non-trivial non-negative bounded ground state solution of the above equation.
## Introduction and main results
In this paper, we are concerned with the following problem
\[\begin{cases}-\Delta_{H,p}u+V(x)|u|^{p-2}u-\Delta_{H,p}(|u|^{2\alpha})|u|^{2\alpha-2}u=\lambda|u|^{q-1}u\text{ in }\mathbb{R}^{N}\\ u\in W^{1,p}(\mathbb{R}^{N})\cap L^{\infty}(\mathbb{R}^{N})\end{cases} \tag{1}\]

where \(N\geq 2\), \((\alpha,p)\in D_{N}=\{(x,y)\in\mathbb{R}^{2}:2xy\geq y+1,\ y\geq 2x,\ y<N\}\) and \(\lambda>0\) is a parameter. The operator \(\Delta_{H,p}\) is defined as
\[\Delta_{H,p}u:=\text{div}(H(Du)^{p-1}\nabla_{\eta}H(Du))\]
known as anisotropic p-Laplacian, where \(\nabla_{\eta}\) denotes the gradient operator with respect to \(\eta\) variable. The function \(H:\mathbb{R}^{N}\rightarrow[0,\infty)\) is a Minkowski norm satisfying the following properties:
* \(H\) is a norm on \(\mathbb{R}^{N}\);
* \(H\in C^{4}(\mathbb{R}^{N}\setminus\{0\})\);
* The Hessian matrix \(\nabla_{\eta}^{2}(\frac{H^{2}}{2})\) is positive definite in \(\mathbb{R}^{N}\setminus\{0\}\);
* \(H\) is uniformly elliptic, that means the set \[\mathcal{Q}_{1}^{H}:=\{\xi\in\mathbb{R}^{N}:H(\xi)<1\}\] is uniformly convex, i.e. there exists \(\Lambda>0\) such that \[\langle D^{2}H(\xi)\eta,\eta\rangle\geq\Lambda|\eta|^{2},\ \forall\xi\in\partial\mathcal{Q}_{1}^{H},\forall\eta\in\nabla H(\xi)^{\perp}.\]
* There exists a positive constant \(M=M(N,H)\) such that for \(1\leq i,j\leq N\), \[HH_{x_{i}x_{j}}+H_{x_{i}}H_{x_{j}}\leq M\]
For more details about \(H\), one may consult [4, 12, 32] and the references therein. A few examples of \(H\) are as follows:
**Example 0.1**.: _For \(k=2\) or \(k\geq 4\), we define \(H_{k}:\mathbb{R}^{N}\rightarrow\mathbb{R}\) as_
\[H_{k}(x_{1},x_{2},...x_{N}):=(\sum_{i=1}^{N}|x_{i}|^{k})^{\frac{1}{k}}. \tag{2}\]
**Example 0.2**.: _(Mezei-Vas [24, Remark 2.2]) For \(\rho,\mu>0\), we define \(H_{\rho,\mu}:\mathbb{R}^{N}\rightarrow\mathbb{R}\) as_
\[H_{\rho,\mu}(x_{1},x_{2},...,x_{N}):=\sqrt{\rho\sqrt{\sum_{i=1}^{N}x_{i}^{4}}+ \mu\sum_{i=1}^{N}x_{i}^{2}} \tag{3}\]
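Although the analysis in this paper is entirely analytic, these norms are easy to probe numerically. The following sketch is purely illustrative (ours, not from the references): it evaluates \(H_{k}\) from Example 0.1 and its gradient, and checks the Euler identity \(\langle\nabla H(\xi),\xi\rangle=H(\xi)\) for 1-homogeneous \(H\), which is used later in the proof of Lemma 1.12 (see (33)).

```python
import numpy as np

def H_k(x, k=4):
    # Example 0.1: the l^k norm; smooth away from the coordinate axes
    # for k = 2 or k >= 4.
    return np.sum(np.abs(x)**k)**(1.0 / k)

def grad_H_k(x, k=4):
    # dH/dx_i = sign(x_i)|x_i|^{k-1} / H(x)^{k-1}, valid for x != 0.
    return np.sign(x) * np.abs(x)**(k - 1) / H_k(x, k)**(k - 1)

# Euler's identity for a 1-homogeneous function: <grad H(x), x> = H(x).
x = np.array([1.0, -2.0, 0.5])
assert np.isclose(grad_H_k(x) @ x, H_k(x))
```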
**Remark 0.3**.: _There exist two constants \(A,B>0\) such that_

\[A|\xi|\leq H(\xi)\leq B|\xi|,\ \text{ for all }\xi\in\mathbb{R}^{N}, \tag{4}\]

_since all norms on \(\mathbb{R}^{N}\) are equivalent._

Equation (1) is related to the standing-wave solutions of the following general quasilinear Schrödinger equation

\[i\partial_{t}\Phi=-\Delta\Phi+V(x)\Phi-\rho(|\Phi|^{2})\Phi-k\Delta h(|\Phi|^{2})h^{\prime}(|\Phi|^{2})\Phi\ \text{ in }\mathbb{R}\times\mathbb{R}^{N} \tag{5}\]
where \(V:\mathbb{R}^{N}\to\mathbb{R}\) is a given potential, \(\Phi:\mathbb{R}\times\mathbb{R}^{N}\to\mathbb{C}\) and \(h,\rho:\mathbb{R}^{+}\to\mathbb{R}\) are given functions, and \(k\) is a real constant. It is worth mentioning that the semilinear case corresponding to \(k=0\) has been extensively studied by many authors (see [5, 13, 16] and the references therein).
The general equation (5) with various forms of \(h\) has been derived as models of several physical phenomena.
* The superfluid film equation in plasma physics has the structure (5) for \(h(s)=s\), see [17].
* For \(h(s)=(1+s)^{\frac{1}{2}}\), equation (5) models the self-channeling of a high-power ultrashort laser in matter (see [6, 28]).
In recent years, extensive studies have focused on the existence of solutions for quasilinear Schrodinger equation of the form
\[-\Delta u+V(x)u-k\Delta(u^{2})u=g(u)\text{ in }\mathbb{R}^{N} \tag{6}\]
where \(k>0\) is a constant. The existence of a nonnegative solution for (6) was proved for \(N=1\) and \(g(u)=|u|^{p-1}u\) by Poppenberg et al. [26] and for \(N\geq 2\) by Wang et al. [21]. In [19], Wang and Liu have proved that the equation (6) for \(k=\frac{1}{2}\) and \(g(u)=\lambda|u|^{p-1}u\) has a positive ground state solution when \(3\leq p<2.2^{*}-1\) and the potential \(V\in C(\mathbb{R}^{N},\mathbb{R})\) satisfies one of the following conditions:
* \(\lim_{|x|\to\infty}V(x)=\infty\).
* \(V(x)=V(|x|)\) and \(N\geq 2\).
* \(V\) is periodic in each variable.
* \(V_{\infty}:=\lim_{|x|\to\infty}V(x)=\|V\|_{L^{\infty}(\mathbb{R}^{N})}<\infty\).
They also proved in [20] the existence of both one-sign and nodal ground state soliton-type solutions for (6) when \(3\leq p<2\cdot 2^{*}-1\) and the potential \(V\in C(\mathbb{R}^{N},\mathbb{R})\) satisfies
* There are positive constants \(M\), \(A\) and \(m\) such that for \(|x|\geq M\), \(V(x)\leq V_{\infty}-\frac{A}{1+|x|^{m}}\).
Similar work with critical growth has been done in [22]. Ruiz and Siciliano proved the existence of a ground-state solution for (6) with \(g(u)=|u|^{p-1}u\), \(N\geq 3\), \(3\leq p<2\cdot 2^{*}-1\), under the following assumptions:
* \(0<V_{0}\leq V(x)\leq V(\infty):=\lim_{|x|\to\infty}V(x)<\infty\) and \(\langle\nabla V(x),x\rangle\in L^{\infty}(\mathbb{R}^{N})\).
* For every \(x\in\mathbb{R}^{N}\), the following map is concave \[s\to s^{\frac{N+2}{N+p+1}}V(s^{\frac{1}{N+p+1}}x)\]
In [33], Chen and Xu proved that equation (6) for \(g(u)=\lambda|u|^{p-1}u\) has a positive ground state solution for large \(\lambda>0\) under the conditions \(N\geq 3\), \(3\leq p<2\cdot 2^{*}-1\), and the following assumptions on \(V\in C(\mathbb{R}^{N},\mathbb{R})\):

* \(0\leq V(x)\leq V(\infty):=\liminf_{|x|\to\infty}V(x)<\infty\) and \(V\) is not identically equal to \(V(\infty)\).
* \(\langle\nabla V(x),x\rangle\in L^{\infty}(\mathbb{R}^{N})\cup L^{\frac{N}{2}}( \mathbb{R}^{N})\) and \(NV(x)+\langle\nabla V(x),x\rangle\geq 0\).
We end the literature review by mentioning Chen and Zhang [8], who proved the existence of a positive ground-state solution of
\[-\Delta u+V(x)u-k\Delta(u^{2})u=A(x)|u|^{p-1}u+\lambda B(x)|u|^{2^{2}-1} \tag{7}\]
when \(N\geq 3\), \(2\leq p<2\cdot 2^{*}-1\), and under the following assumptions:
* \(V\in C^{1}(\mathbb{R}^{N},\mathbb{R}^{+}),0<V_{0}:=\inf_{x\in\mathbb{R}^{N}}V( x)\leq V(x)\leq V_{\infty}:=\lim_{|x|\to\infty}<\infty\) and \(V(x)\not\equiv V_{\infty}\);
* \(\langle\nabla V(x),x\rangle\in L^{\infty}(\mathbb{R}^{N})\), \(\langle\nabla V(x),x\rangle\leq 0\);
* \(A\in C^{1}(\mathbb{R}^{N},\mathbb{R}^{+})\), \(\lim_{|x|\to\infty}A(x)=A_{\infty}\in(0,\infty),A(x)\geq A_{\infty},0\leq \langle\nabla A(x),\ x\rangle\in L^{\infty}(\mathbb{R}^{N})\)
* \(B\in C^{1}(\mathbb{R}^{N},\mathbb{R}^{+})\), \(\lim_{|x|\to\infty}B(x)=B_{\infty}\in(0,\infty),B(x)\geq B_{\infty},0\leq \langle\nabla B(x),\ x\rangle\in L^{\infty}(\mathbb{R}^{N})\)
Before stating our main results we define two sets
\[\begin{split}\Pi=(p-1,2\alpha p^{*}-2\alpha q+p-1)& \cup(2\alpha p-2\alpha,2\alpha p^{*}-2\alpha)\cup(p+2\alpha-2,2 \alpha p^{*}-2\alpha q+p+2\alpha-2)\\ &\cup(\frac{p-1}{2\alpha}+2\alpha-1,p^{*}-p+\frac{p-1}{2\alpha}+2 \alpha-1)\cup(2\alpha p-1,2\alpha p^{*}-1).\end{split} \tag{8}\]
and,
\[D^{N}:=\{(x,y)\in\mathbb{R}^{2}:2xy\geq y+1,y\geq 2x+1,y<N\}.\]
Our main results are as follows:
**Theorem 0.5**.: _Let \(V\) be a constant potential, \(N\geq 2\), \((\alpha,p)\in D_{N}\) and \(q\in\Pi\cap(p-1,2\alpha p^{*}-1)\). Then for large \(\lambda>0\), equation (1) admits a non-trivial non-negative bounded ground state solution \(u\in C^{1}(\mathbb{R}^{N})\)._
**Theorem 0.6**.: _Suppose that the potential \(V\) satisfies \((v_{1})-(v_{2})\) and \(N\geq 3\). We also assume \(p,\alpha\) satisfy one of the following_
* \(p=2\)_,_ \((\alpha,p)\in D_{N}\)_;_
* \((\alpha,p)\in D^{N}\)__
_and \(2\alpha p-1\leq q<2\alpha p^{*}-1\) (one has to take strict inequality if \(2\alpha p-1\in\Pi\)). Then for large \(\lambda>0\), equation (1) admits a non-trivial non-negative bounded ground state solution \(u\in C^{1}(\mathbb{R}^{N})\)._
**Remark 0.7**.: _If \(H=H_{2}\), \(p=2\), and \(\alpha=1\) then by using the strong maximum principle for the Laplacian in Theorem 0.5 and Theorem 0.6, we obtain a positive ground state solution of (1), which generalizes the main results of Chen et al. [33]._
The paper is organized as follows. In section 1, we reformulate this problem in an appropriate Orlicz space and discuss a few useful lemmas. In section 2, we prove some auxiliary results. Section 3 is devoted to the proof of Theorem 0.5. Finally, in section 4, we will give the proof of Theorem 0.6.
**Notation:** In this work we will use the following notations:
* \(C\) represents a positive constant whose value may change from line to line.
* \(W^{1,p}(\mathbb{R}^{N}):=\{u\in L^{p}(\mathbb{R}^{N}):\nabla u\in L^{p}( \mathbb{R}^{N})\}\) with the usual norm \[\left\|u\right\|_{1,p,\mathbb{R}^{N}}^{p}=\int_{\mathbb{R}^{N}}[|u|^{p}+| \nabla u|^{p}]dx\]
* \(D^{1,p}(\mathbb{R}^{N}):=\{u\in L^{p^{*}}(\mathbb{R}^{N}):\nabla u\in L^{p}( \mathbb{R}^{N})\}\) with the norm \[\|u\|^{p}:=\int_{\mathbb{R}^{N}}|\nabla u|^{p}dx,\] where \(p^{*}=\frac{Np}{N-p}\).
* For a function \(h\in L^{1}_{loc}(\mathbb{R}^{N})\), we denote \[\int h(x)dx:=\int_{\mathbb{R}^{N}}h(x)dx.\]
* \(C^{\infty}_{c}(\mathbb{R}^{N}):=\{u\in C^{\infty}(\mathbb{R}^{N})|\;u\text{ has compact support.}\}\)
* \(X^{\prime}\) denotes the dual of \(X\) and \(\langle\cdot,\cdot\rangle\) denotes the duality relation.
* For \(\Omega\subset\mathbb{R}^{N}\), \(|\Omega|\) denotes the Lebesgue measure of \(\Omega\).
* o(1) represents a quantity which tends to \(0\) as \(n\to\infty\).
* The symbols \(\rightharpoonup\) and \(\to\) denote weak convergence and strong convergence respectively.
## 1 Variational Framework and Preliminaries
The variational form of the equation (1) is
\[I(u)=\frac{1}{p}\int\left\{[1+(2\alpha)^{p-1}|u|^{(2\alpha-1)p}]H(Du)^{p}+V(x)|u|^{p}\right\}dx-\frac{\lambda}{q+1}\int|u|^{q+1}dx\]
which is not well-defined on \(W^{1,p}(\mathbb{R}^{N})\). Inspired by [19, 21, 33], we choose a transformation \(u=f(v)\) where \(f\) is defined as follows:
\[\begin{cases}f^{\prime}(t)=[1+(2\alpha)^{p-1}|f(t)|^{(2\alpha-1)p}]^{-\frac{1 }{p}},\;t>0\\ f(0)=0\text{ and }f(-t)=-f(t)\text{ for all }t\in\mathbb{R}\end{cases} \tag{9}\]
Under the transformation \(u=f(v)\), the above functional becomes
\[I(f(v))=\frac{1}{p}\int[H(Dv)^{p}+V(x)|f(v)|^{p}]dx-\frac{\lambda}{q+1}\int|f( v)|^{q+1}dx. \tag{10}\]
We give some important properties of \(f\), which will be useful for establishing our main results.
**Lemma 1.1**.: _The function \(f\) enjoys the following properties:_
* \(f\) _is uniquely defined, a_ \(C^{2}\)_-function, and invertible._
* \(|f(t)|\leq|t|\) _for all_ \(t\in\mathbb{R}\)_._
* \(\frac{f(t)}{t}\to 1\) _as_ \(t\to 0\)_._
* _There exists_ \(a>0\) _such that_ \(f(t)t^{-\frac{1}{2\alpha}}\to a\) _as_ \(t\to\infty\)_._
* \(|f(t)|\leq(2\alpha)^{\frac{1}{2\alpha p}}|t|^{\frac{1}{2\alpha}}\) _for all_ \(t\in\mathbb{R}\)
_._
* \(f(t)\leq 2\alpha tf^{\prime}(t)\leq 2\alpha f(t)\) _for all_ \(t\geq 0\)_._
* _There exists_ \(C>0\) _such that_ \[f(t)\geq\begin{cases}C|t|,\text{ if }\ |t|\leq 1\\ C|t|^{\frac{1}{2\alpha}},\text{ if }\ |t|\geq 1\end{cases}\]
* \(|f|^{p}\) _is convex if and only if_ \(p\geq 2\alpha\)_._
* \(|f^{2\alpha-1}(t)f^{\prime}(t)|\leq(\frac{1}{2\alpha})^{\frac{p-1}{p}}\)_, for all_ \(t\in\mathbb{R}\)_._
* _There exist two positive constants_ \(M_{1}\) _and_ \(M_{2}\) _such that_ \[|t|\leq M_{1}|f(t)|+M_{2}|f(t)|^{2\alpha},\text{ for all }t\in\mathbb{R}.\]
Proof.: The proof of the first seven properties can be found in [29]. To prove property \((viii)\) we define \(\phi:\mathbb{R}\to\mathbb{R}\) by \(\phi(t)=|f(t)|^{p}\). It is easy to check that,
* \(\phi^{\prime}(t)=p|f(t)|^{p-2}f(t)f^{\prime}(t)\)
* \(\phi^{\prime\prime}=p|f|^{p-2}[ff^{\prime\prime}+(p-1)f^{\prime 2}]\).
Now, \(\phi\) is convex if and only if
\[ff^{\prime\prime}+(p-1)f^{\prime 2}=\frac{(p-1)+(p-2\alpha)(2\alpha)^{p-1}|f| ^{p(2\alpha-1)}}{[1+(2\alpha)^{p-1}|f|^{p(2\alpha-1)}]^{1+\frac{2}{p}}}\geq 0. \tag{11}\]
Moreover, \(ff^{\prime\prime}+(p-1)f^{\prime 2}\geq 0\) if and only if \((p-1)+(p-2\alpha)(2\alpha)^{p-1}|f|^{p(2\alpha-1)}\geq 0\). Hence, we can conclude that if \(p\geq 2\alpha\) then \(\phi\) is convex. Conversely, if \(\phi\) is convex then
\[(p-1)+(p-2\alpha)(2\alpha)^{p-1}|f(t)|^{p(2\alpha-1)}\geq 0,\text{ for all }t>0.\]
Therefore,
\[\frac{(p-1)+(p-2\alpha)(2\alpha)^{p-1}|f|^{p(2\alpha-1)}}{t^{\frac{p(2\alpha- 1)}{2\alpha}}}\geq 0.\]
Taking \(t\to\infty\), we have
\[\lim_{t\to\infty}[(p-1)t^{-\frac{p(2\alpha-1)}{2\alpha}}+(p-2\alpha)(2\alpha) ^{p-1}(|f(t)|t^{-\frac{1}{2\alpha}})^{p(2\alpha-1)}]\geq 0\]
Using the fact \(2\alpha\geq\frac{p+1}{p}\) and property (iv), we have \((p-2\alpha)(2\alpha)^{p-1}a^{p(2\alpha-1)}\geq 0\), that is, \(p\geq 2\alpha\).

Property (ix) follows directly from the definition of \(f^{\prime}\), and property (x) is an immediate consequence of properties (iv) and (v).
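Although \(f\) has no elementary closed form in general, the asymptotics (iii) and (iv) are easy to check numerically. The sketch below is only illustrative: it integrates (9) for the classical choice \((\alpha,p)=(1,2)\), for which \(f^{\prime}=(1+2f^{2})^{-1/2}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

p, alpha = 2.0, 1.0  # a pair with (alpha, p) in D_N whenever N >= 3

def rhs(t, f):
    # Equation (9): f'(t) = [1 + (2a)^{p-1} |f|^{(2a-1)p}]^{-1/p}, t > 0;
    # f extends to t < 0 as an odd function.
    return [(1.0 + (2 * alpha)**(p - 1) * abs(f[0])**((2 * alpha - 1) * p))**(-1.0 / p)]

sol = solve_ivp(rhs, (0.0, 1e4), [0.0], dense_output=True, rtol=1e-10, atol=1e-12)

# Property (iii): f(t)/t -> 1 as t -> 0+.
# Property (iv): f(t) * t^{-1/(2a)} tends to a constant as t -> infinity.
for t in (1e-3, 1.0, 1e4):
    f_t = sol.sol(t)[0]
    print(f"t={t:g}  f(t)/t={f_t / t:.4f}  f(t)*t^(-1/(2a))={f_t * t**(-0.5 / alpha):.4f}")
```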
Now, we are going to define a suitable space so that RHS of (10) makes sense. We define a normed space \(X:=\{v\in W^{1,p}(\mathbb{R}^{N}):\int V(x)|f(v)|^{p}\ dx<\infty\}\) equipped with the norm
\[\|v\|=\|H(Dv)\|_{L^{p}(\mathbb{R}^{N})}+\inf_{\eta>0}\frac{1}{\eta}\Big{\{}1+\int V(x)|f(\eta v)|^{p}\ dx\Big{\}}. \tag{12}\]
The space \(X_{r}=\{v\in X:\ \text{v is radial }\}\) is a subspace of \(X\).
**Remark 1.2**.: _The following inequality holds true_
\[\|v\|\leq 1+\|H(Dv)\|_{L^{p}(\mathbb{R}^{N})}+\int V(x)|f(v)|^{p}\;dx \tag{13}\]
**Lemma 1.3**.: _The following holds:_
* _There exists a positive constant_ \(C\) _such that for all_ \(v\in X\)_,_ \[\frac{\int V(x)|f(v)|^{p}dx}{1+\left[\int V(x)|f(v)|^{p}dx\right]^{\frac{p-1}{p}}}\leq C[\|\nabla v\|_{L^{p}(\mathbb{R}^{N})}^{p^{*}}+\inf_{\xi>0}\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^{p}dx)].\]
* _If_ \(v_{n}\to v\) _in_ \(X\) _then_ \[\int V(x)|f(v_{n})-f(v)|^{p}dx\to 0\] _and_ \[\int V(x)\big{|}|f(v_{n})|^{p}-|f(v)|^{p}\big{|}dx\to 0.\]
* _If_ \(\int V(x)|f(v_{n}-v)|^{p}dx\to 0\) _then_ \[\inf_{\xi>0}\frac{1}{\xi}[1+\int V(x)|f(\xi(v_{n}-v))|^{p}dx]\to 0.\]
Proof.: For \(\xi>0\) and \(v\in X\), we define
\[A_{\xi}=\{x\in\mathbb{R}^{N}:\xi|v(x)|\leq 1\}.\]
Now, by using (ii) we can write
\[\int V(x)|f(v)|^{p}dx=\int_{A_{\xi}}V(x)|f(v)|^{p}dx+\int_{A_{\xi}^{c}}V(x)|f(v)|^{p}dx\leq\int_{A_{\xi}}V(x)|f(v)|^{p-1}|v(x)|dx+\int_{A_{\xi}^{c}}V(x)|f(v)|^{p}dx\]
Using the Hölder inequality, (vii) in Lemma 1.1, and \(s^{\frac{1}{p}}\leq 1+s\) for all \(s\geq 0\), we have

\[\int_{A_{\xi}}V(x)|f(v)|^{p-1}|v(x)|dx \leq(\int_{A_{\xi}}V(x)|v|^{p}dx)^{\frac{1}{p}}(\int_{A_{\xi}}V(x)|f(v)|^{p}dx)^{\frac{p-1}{p}}=(\frac{1}{\xi^{p}}\int_{A_{\xi}}V(x)|\xi v|^{p}dx)^{\frac{1}{p}}(\int V(x)|f(v)|^{p}dx)^{\frac{p-1}{p}}\leq C(\frac{1}{\xi^{p}}\int_{A_{\xi}}V(x)|f(\xi v)|^{p}dx)^{\frac{1}{p}}(\int V(x)|f(v)|^{p}dx)^{\frac{p-1}{p}}\leq C[\|\nabla v\|_{L^{p}(\mathbb{R}^{N})}^{p^{*}}+\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^{p}dx)](\int V(x)|f(v)|^{p}dx)^{\frac{p-1}{p}} \tag{14}\]
If \(\xi\geq 1\), then using (v) and (vii) in Lemma 1.1 together with \(p\geq 2\alpha\), we get

\[\int_{A^{c}_{\xi}}V(x)|f(v)|^{p}dx\leq C\int_{A^{c}_{\xi}}V(x)|v|^{\frac{p}{2\alpha}}dx=C\frac{1}{\xi^{\frac{p}{2\alpha}}}\int_{A^{c}_{\xi}}V(x)|\xi v|^{\frac{p}{2\alpha}}dx\leq C\frac{1}{\xi}\int_{A^{c}_{\xi}}V(x)|f(\xi v)|^{p}dx\leq C[\|\nabla v\|_{L^{p}(\mathbb{R}^{N})}^{p^{*}}+\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^{p}dx)] \tag{15}\]
If \(0<\xi<1\), then using (ii), the Sobolev inequality and the Chebyshev inequality, we deduce
\[\int_{A^{c}_{\xi}}V(x)|f(v)|^{p}dx \leq C\int_{A^{c}_{\xi}}V(x)|v|^{p}dx\leq C\int_{A^{c}_{\xi}}|v|^{ p}dx\leq C[\int_{A^{c}_{\xi}}|v|^{p^{*}}dx]^{p\over p^{*}}|A^{c}_{\xi}|^{1-{p \over p^{*}}}\] \[\leq C[\int|v|^{p^{*}}dx]^{p\over p^{*}}|A^{c}_{\xi}|^{1-{p \over p^{*}}}\leq C[\int|\nabla v|^{p}dx][\xi^{p^{*}}\int_{A^{c}_{\xi}}|v|^{p^ {*}}]^{p\over N}\] \[\leq C[\int|\nabla v|^{p}dx][\int_{\mathbb{R}^{N}}|v|^{p^{*}}]^{p \over N}\leq C[\int|\nabla v|^{p}dx]^{1+{p^{*}\over N}}\leq C[\int|\nabla v|^{ p}dx]^{p^{*}\over p}\] \[\leq C[\|\nabla v\|_{L^{p}(\mathbb{R}^{N})}^{p^{*}}+{1\over\xi}(1+ \int V(x)|f(\xi v)|^{p}dx)] \tag{16}\]
Thus, from (14)-(16) we can conclude that for all \(\xi>0\)
\[\int V(x)|f(v)|^{p}dx\leq C[\|\nabla v\|_{L^{p}(\mathbb{R}^{N})}^{p^{*}}+\frac{1}{\xi}(1+\int V(x)|f(\xi v)|^{p}dx)][1+(\int V(x)|f(v)|^{p}dx)^{\frac{p-1}{p}}]\]
which proves the first property. To prove the second property, if \(v_{n}\to v\) in \(X\) then from the first property we have,
\[\int V(x)|f(v_{n}-v)|^{p}dx\to 0\mbox{ as }n\to\infty.\]
There exists a nonnegative function \(h\in L^{1}(\mathbb{R}^{N})\) such that up to a subsequence \(v_{n}\to v\) a.e. in \(\mathbb{R}^{N}\) and \(V(x)|f(v_{n}-v)|^{p}\leq h\). Since \(|f|^{p}\) is convex and satisfies \(\Delta_{2}\) condition (see M.M Rao[27]) so \(V(x)|f(v_{n})|^{p}\leq CV(x)[|f(v_{n}-v)|^{p}+|f(v)|^{p}]\leq C[h+V(x)|f(v)|^{ p}]\). Moreover, Fatou's lemma ensures \(\int V(x)|f(v)|^{p}dx<\infty.\) Thus, by the Dominated Convergence Theorem, we can conclude
\[\int V(x)|f(v_{n})-f(v)|^{p}dx\to 0.\]
and
\[\int V(x)\big{|}|f(v_{n})|^{p}-|f(v)|^{p}\big{|}dx\to 0.\]
To prove the third part, note that \(\frac{f(t)}{t}\) is nonincreasing on \((0,\infty)\), so for \(\xi>1\) we obtain

\[\frac{1}{\xi}(1+\int V(x)|f(\xi(v_{n}-v))|^{p}dx)\leq\frac{1}{\xi}+\xi^{p-1}\int V(x)|f(v_{n}-v)|^{p}dx \tag{17}\]
For every \(\epsilon>0\), we can choose \(\xi_{0}>1\) such that \({1\over\xi_{0}}<{\epsilon\over 2}\). There exists a positive integer \(N_{0}\) such that \(\int V(x)|f(v_{n}-v)|^{p}dx<{\epsilon\over 2\xi_{0}^{p-1}}\), for all \(n\geq N_{0}.\) Thus, (17) yields
\[\inf_{\xi>0}\frac{1}{\xi}(1+\int V(x)|f(\xi(v_{n}-v))|^{p}dx)\leq\epsilon,\text{ for all }n\geq N_{0}\]
and the property (3) follows.
**Corollary 1.4**.: _\(u_{n}\to 0\) in \(X\) if and only if \(\int[H(Du_{n})^{p}+V(x)|f(u_{n})|^{p}]dx\to 0\) as \(n\to\infty\)._
Proof.: The proof is an immediate consequence of the above lemma.
Define \(E=\{u\in W^{1,p}(\mathbb{R}^{N})|\int V(x)|u|^{p}\ dx<\infty\}\) equipped with the norm
\[\|u\|^{p}=\int[|\nabla u|^{p}+V(x)|u|^{p}]\ dx\]
**Corollary 1.5**.: _The embedding \(E\hookrightarrow X\) is continuous._
Proof.: Using the second property in lemma 1.1 we have
\[\int V(x)|f(v_{n})|^{p}dx\leq\int V(x)|v_{n}|^{p}dx.\]
Thus, if \(v_{n}\to 0\) in \(E\) then
\[\int V(x)|f(v_{n})|^{p}dx\to 0.\]
Hence, lemma 1.3 ensures \(v_{n}\to 0\) in \(X\).
**Lemma 1.6**.:
* _The map_ \(v\to f(v)\) _is continuous from_ \(X\) _to_ \(L^{s}(\mathbb{R}^{N})\) _for_ \(p\leq s\leq 2\alpha p^{*}\)_. Moreover, the map is locally compact for_ \(p\leq s<2\alpha p^{*}\)_._
* _The map_ \(v\to f(v)\) _from_ \(X_{r}\) _to_ \(L^{s}(\mathbb{R}^{N})\) _is compact for_ \(p<s<2\alpha p^{*}\)_._
Proof.: It is clear that under the condition \((v_{1})\), the embedding \(E\hookrightarrow W^{1,p}(\mathbb{R}^{N})\) is continuous. Moreover, if \(v\in X\) then \(f(v)\in E\). There exists \(C>0\) such that for every \(v\in X\),
\[\|f(v)\|_{L^{p}(\mathbb{R}^{N})}\leq C\|f(v)\|_{E}\leq C[\int(|\nabla v|^{p}+V (x)|f(v)|^{p})dx]^{\frac{1}{p}}. \tag{18}\]
Using the property \((v)\) and \((ix)\) in lemma 1.1, we have
\[\int|f(v)|^{2\alpha p^{*}}dx\leq C[\int|\nabla f^{2\alpha}(v)|^{p}dx]^{\frac{p^{*}}{p}}\leq C[\int|\nabla v|^{p}dx]^{\frac{p^{*}}{p}} \tag{19}\]
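Here the first inequality is the Sobolev inequality applied to \(f^{2\alpha}(v)\), and the second follows from (ix), since

\[|\nabla f^{2\alpha}(v)|=2\alpha|f(v)|^{2\alpha-1}f^{\prime}(v)|\nabla v|\leq 2\alpha\Big{(}\frac{1}{2\alpha}\Big{)}^{\frac{p-1}{p}}|\nabla v|.\]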
Using (18), (19) and the interpolation inequality, we can conclude \(f(v)\in L^{s}(\mathbb{R}^{N})\) for all \(s\in[p,2\alpha p^{*}]\). Let \(v_{n}\to v\) in \(X\). Property (2) in Lemma 1.3 ensures
\[\int V(x)|f(v_{n})-f(v)|^{p}dx\to 0.\]
Furthermore, \(Dv_{n}\to Dv\) in \((L^{p}(\mathbb{R}^{N}))^{N}\). For every \(1\leq i\leq N\), without loss of generality we can assume that there exists \(h_{i}\in L^{p}(\mathbb{R}^{N})\) such that for almost every \(x\in\mathbb{R}^{N}\),
\[v_{n}(x)\to v(x)\ \ \text{as}\ \ n\to\infty \tag{20}\] \[\frac{\partial v_{n}}{\partial x_{i}}(x)\to\frac{\partial v}{\partial x_{i}}(x)\ \ \text{as}\ \ n\to\infty\] \[|\frac{\partial v_{n}}{\partial x_{i}}|,|\frac{\partial v}{\partial x_{i}}|\leq h_{i}\]
By the Dominated Convergence Theorem, we have
\[\int|\frac{\partial}{\partial x_{i}}(f(v_{n}))-\frac{\partial}{\partial x_{i}}(f(v ))|^{p}dx=\int|f^{\prime}(v_{n})\frac{\partial v_{n}}{\partial x_{i}}-f^{\prime }(v)\frac{\partial v}{\partial x_{i}}|^{p}dx\to 0.\]
Therefore, \(Df(v_{n})\to Df(v)\) in \(L^{p}(\mathbb{R}^{N})\). Consequently, \(f(v_{n})\to f(v)\) in \(E\). Since for all \(s\in[p,p^{*}]\),
\[E\hookrightarrow W^{1,p}(\mathbb{R}^{N})\hookrightarrow L^{s}(\mathbb{R}^{N})\]
so \(f(v_{n})\to f(v)\) in \(L^{s}(\mathbb{R}^{N})\). Interpolation inequality and Rellich's lemma complete the first part. The second part is easily deduced from Theorem 1.10.
**Lemma 1.7**.: \((X,\|\cdot\|)\) _is a Banach space._
Proof.: Let \(\{u_{n}\}\) be a Cauchy sequence in \(X\). Since \(X\hookrightarrow D^{1,p}(\mathbb{R}^{N})\), there exists \(u\in D^{1,p}(\mathbb{R}^{N})\) such that \(u_{n}\to u\) in \(D^{1,p}(\mathbb{R}^{N})\). By inequality (1) in Lemma 1.3, we observe
\[\int V|f(u_{n}-u_{m})|^{p}dx\to 0\text{ as }m,n\to\infty.\]
Under the assumption \((v_{1})\), we have
\[\int|f(u_{n}-u_{m})|^{p}dx\to 0\text{ as }m,n\to\infty \tag{21}\]
Using (19), (21) and Interpolation inequality, we get
\[\int|f(u_{n}-u_{m})|^{2ap}dx\to 0\text{ as }m,n\to\infty. \tag{22}\]
Using property (x) in Lemma 1.1 (so that \(|t|^{p}\leq C(|f(t)|^{p}+|f(t)|^{2\alpha p})\)), we have

\[\int|u_{n}-u_{m}|^{p}dx\leq C\big{[}\int|f(u_{n}-u_{m})|^{p}dx+\int|f(u_{n}-u_{m})|^{2\alpha p}dx\big{]}\to 0\text{ as }m,n\to\infty \tag{23}\]
which implies \(\{u_{n}\}\) is Cauchy in \(L^{p}(\mathbb{R}^{N})\). Completeness property allows us to assume the existence of \(w\in L^{p}(\mathbb{R}^{N})\) such that
\[u_{n}\to w\text{ in }L^{p}(\mathbb{R}^{N}),\;u_{n}\to w\text{ a.e. in }\mathbb{R}^{N}.\]
Since \(Du_{n}\to Du\) in \(L^{p}(\mathbb{R}^{N})\), we have \(Dw=Du\). Consequently, \(w\in W^{1,p}(\mathbb{R}^{N})\).
Our next claim is
\[u_{n}\to w\text{ in }X.\]
For every \(\epsilon>0\) there exists \(N_{0}\in\mathbb{N}\) such that
\[\int V|f(u_{n}-u_{m})|^{p}dx<\epsilon\text{ for all }m,n\geq N_{0}.\]
By Fatou's lemma, we have for \(n\geq N_{0}\),
\[\int V|f(u_{n}-w)|^{p}dx\leq\liminf_{m\to\infty}\int V|f(u_{n}-u_{m})|^{p}dx<\epsilon \tag{24}\]
Using property 3 in lemma 1.3, we can conclude \(u_{n}\to w\) in \(X\). Hence, \((X,\|\cdot\|)\) is a Banach space.
**Remark 1.8**.: _Under the condition \((v_{1})\), \(X=W^{1,p}(\mathbb{R}^{N})\) and \(\|\cdot\|\) is equivalent to the usual norm \(\|\cdot\|_{1,p,\mathbb{R}^{N}}\)._
Proof.: Let \(u\in W^{1,p}(\mathbb{R}^{N})\). By property (ii) in lemma 1.1, we have
\[\int V(x)|f(u)|^{p}dx\leq V(\infty)\int|u|^{p}\ dx<\infty.\]
Hence, \(X=W^{1,p}(\mathbb{R}^{N})\). We claim that the identity map \(Id:W^{1,p}(\mathbb{R}^{N})\to X\) is a bounded linear map. Using property (ii) in lemma 1.1 and \((v_{1})\), we have
\[\inf_{\xi>0}\frac{1}{\xi}[1+\int V(x)|f(\xi u)|^{p}\ dx]\leq\inf_{\xi>0}[\frac {1}{\xi}+(\xi)^{p-1}V(\infty)\int|u|^{p}\ dx] \tag{25}\]
Now, consider the function
\[g(\xi)=\frac{1}{\xi}+L\xi^{p-1}\ \text{for}\ \xi>0\]
where \(L=V(\infty)\int|u|^{p}\ dx\). One can directly find the global minimum of \(g\): \(g^{\prime}(\xi)=-\xi^{-2}+(p-1)L\xi^{p-2}\) vanishes only at \(\xi_{*}=[(p-1)L]^{-\frac{1}{p}}\), so the minimum equals \(g(\xi_{*})=[(p-1)L]^{\frac{1}{p}}+L[(p-1)L]^{-\frac{p-1}{p}}=[(p-1)^{\frac{1}{p}}+(p-1)^{\frac{1-p}{p}}]\ L^{\frac{1}{p}}\). Thus, there exists a constant \(C=[(p-1)^{\frac{1}{p}}+(p-1)^{\frac{1-p}{p}}]V(\infty)^{\frac{1}{p}}\) such that
\[\inf_{\xi>0}\frac{1}{\xi}[1+\int V(x)|f(\xi u)|^{p}dx]\leq C\|u\|_{L^{p}( \mathbb{R}^{N})}\]
which proves that the map \(Id\) is bounded. The conclusion follows from the Inverse Mapping Theorem.
The following compactness lemma is very useful; its proof is similar to that of Lemma 2.2 in [31].
**Lemma 1.9**.: _If \(\{v_{n}\}\) is a bounded sequence in \(X\) such that_
\[\sup_{x\in\mathbb{R}^{N}}\int_{B_{1}(x)}|f(v_{n})|^{p}dx\to 0\ \text{as}\ n\to\infty.\]
_Then \(f(v_{n})\to 0\) in \(L^{s}(\mathbb{R}^{N})\) for every \(s\in(p,2\alpha p^{*})\)._
**Theorem 1.10**.: _(Lions [18, Theorem II.1] ) The embedding \(W^{1,p}_{r}(\mathbb{R}^{N})\hookrightarrow L^{q}(\mathbb{R}^{N})\) is compact for \(p<q<p^{*}\)._
We will use the following slightly modified version of Jeanjean [15, Theorem 1.1]. The last part of the theorem follows from [15, Lemma 2.3].
**Theorem 1.11**.: _Let \((X,\|\cdot\|)\) be a Banach space and \(J\subset\mathbb{R}^{+}\) be an interval. Consider the family of \(C^{1}\) functionals_
\[I_{\delta}(u)=Au-\delta Bu,\ \delta\in J\]
_where \(B\) is a non-negative functional and either \(Au\to\infty\) or \(Bu\to\infty\) as \(\|u\|\to\infty\). If_
\[C_{\delta}:=\inf_{\gamma\in\Gamma_{\delta}}\max_{t\in[0,1]}I_{\delta}(\gamma(t ))>0\]
_where \(\Gamma_{\delta}=\{\gamma\in C([0,1];X):\gamma(0)=0,I_{\delta}(\gamma(1))<0\}\). Then for almost every \(\delta\in J\), there exists a sequence \(\{x_{n}\}\) such that_
* \(\{x_{n}\}\) _is bounded in_ \(X\)_._
* \(I_{\delta}(x_{n})\) _converges to_ \(C_{\delta}\)_._
* \(I^{\prime}_{\delta}(x_{n})\) _converges to 0 in_ \(X^{*}\)_._
_Moreover, the map \(\delta\mapsto C_{\delta}\) is continuous from the left._
Now, we are ready to reformulate our problem. We define another functional \(J:X\to\mathbb{R}\) by
\[J(v)=\frac{1}{p}\int[H(Dv)^{p}+V(x)|f(v)|^{p}]dx-\frac{\lambda}{q+1}\int|f(v)|^{q+1}dx.\]
which is a \(C^{1}\) functional, whose derivative is given by
\[\langle J^{\prime}(v),w\rangle=\int H(Dv)^{p-1}\nabla H(Dv)\cdot Dw\,dx+\int V(x)|f(v)|^{p-2}f(v)f^{\prime}(v)w\,dx-\lambda\int|f(v)|^{q-1}f(v)f^{\prime}(v)w\,dx \tag{26}\]
If \(v\) is a critical point of \(J\) then \(v\) satisfies the following equation in the weak sense
\[-\Delta_{H,p}v+V(x)|f(v)|^{p-2}f(v)f^{\prime}(v)-\lambda|f(v)|^{q-1}f(v)f^{\prime}(v)=0. \tag{27}\]
Thus \(u=f(v)\) is a solution of (1). Our aim is to find a critical point of \(J\) in \(X\). First, we present a Pohozaev-type identity corresponding to (27); for that, we need the following lemma.
**Lemma 1.12**.: _Let \(v\in W^{1,p}(\mathbb{R}^{N})\) be a solution of (27) with \(q\in\Pi\). Then \(v\in L^{\infty}(\mathbb{R}^{N})\)._
Proof.: For each \(m\in\mathbb{N}\) and \(s>1\), we define \(A_{m}=\{x\in\mathbb{R}^{N}:|v(x)|^{s-1}\leq m\}\), \(B_{m}=\mathbb{R}^{N}\setminus A_{m}\), and let us consider the two functions
\[v_{m}=\begin{cases}v|v|^{p(s-1)}\text{ if }x\in A_{m}\\ m^{p}v\text{ if }x\in B_{m}\end{cases} \tag{28}\]
and
\[w_{m}=\begin{cases}v|v|^{(s-1)}\text{ if }x\in A_{m}\\ mv\text{ if }x\in B_{m}\end{cases} \tag{29}\]
So we have that \(v_{m}\in W^{1,p}(\mathbb{R}^{N})\), \(|v_{m}|\leq|v|^{ps-p+1}\), \(\left||v|^{p-1}v_{m}\right|=|w_{m}|^{p}\leq m^{p}|v|^{p}\), \(|w_{m}|\leq|v|^{s}\),
\[\nabla v_{m}=\begin{cases}(ps-p+1)|v|^{p(s-1)}\nabla v\text{ if }x\in A_{m}\\ m^{p}\nabla v\text{ if }x\in B_{m}\end{cases} \tag{30}\]
and
\[\nabla w_{m}=\begin{cases}s|v|^{s-1}\nabla v\text{ if }x\in A_{m}\\ m\nabla v\text{ if }x\in B_{m}\end{cases} \tag{31}\]
We also have
\[\int H(Dw_{m})^{p}dx =\int_{A_{m}}H(s|v|^{s-1}Dv)^{p}dx+\int_{B_{m}}H(mDv)^{p}dx\] \[=\int_{A_{m}}s^{p}|v|^{p(s-1)}H(Dv)^{p}dx+m^{p}\int_{B_{m}}H(Dv)^{p }dx \tag{32}\]
and
\[\int H(Dv)^{p-1}\nabla H(Dv).Dv_{m}dx =(ps-p+1)\int_{A_{m}}|v|^{p(s-1)}H(Dv)^{p-1}\nabla H(Dv).Dvdx+m^{p} \int_{B_{m}}H(Dv)^{p}dx\] \[=(ps-p+1)\int_{A_{m}}|v|^{p(s-1)}H(Dv)^{p}dx+m^{p}\int_{B_{m}}H(Dv )^{p}dx \tag{33}\]
As a consequence of (33), we get
\[\int_{A_{m}}|v|^{p(s-1)}H(Dv)^{p}dx\leq\frac{1}{ps-p+1}\int H(Dv)^{p-1}\nabla H (Dv).Dv_{m}dx \tag{34}\]
Using (32) and (33), we derive
\[\int H(Dw_{m})^{p}dx=\int H(Dv)^{p-1}\nabla H(Dv).Dv_{m}dx+(s^{p}-ps+p-1)\int_ {A_{m}}|v|^{p(s-1)}H(Dv)^{p}dx \tag{35}\]
Consider \(v_{m}\) as a test function and using the definition of weak solution, we obtain
\[\int H(Dv)^{p-1}\nabla H(Dv).Dv_{m}=\int[\lambda|f(v)|^{q-1}-V(x)|f(v)|^{p-2}] f(v)f^{\prime}(v)v_{m}dx \tag{36}\]
Using (34)-(36), we obtain
\[\int H(Dw_{m})^{p}dx+s^{p}\int V(x)|f(v)|^{p-2}f(v)f^{\prime}(v)v _{m}dx=\int H(Dv)^{p-1}\nabla H(Dv).Dv_{m}dx+(s^{p}-ps+p-1)\] \[\int_{A_{m}}|v|^{p(s-1)}H(Dv)^{p}dx+s^{p}\int V(x)|f(v)|^{p-2}f(v )f^{\prime}(v)v_{m}dx\] \[\leq[\frac{s^{p}-ps+p-1}{ps-p+1}+1]\int H(Dv)^{p-1}\nabla H(Dv).Dv _{m}dx+s^{p}\int V(x)|f(v)|^{p-2}f(v)f^{\prime}(v)v_{m}dx\] \[\leq s^{p}[\int H(Dv)^{p-1}\nabla H(Dv).Dv_{m}dx+\int V(x)|f(v)|^{ p-2}f(v)f^{\prime}(v)v_{m}dx]\] \[=s^{p}\lambda\int|f(v)|^{q-1}f(v)f^{\prime}(v)v_{m}dx \tag{37}\]
Since \(f^{\prime}(t)\leq 1\), (37) implies
\[\int H(Dw_{m})^{p}dx\leq s^{p}\lambda\int|f(v)|^{q}|v_{m}|dx\]
By using the facts that \(|f(t)|\leq|t|\), \(|f(t)|\leq\frac{1}{2^{2\alpha p}}|t|^{\frac{1}{2\alpha}}\) and \(\left||v|^{p-1}v_{m}\right|=|w_{m}|^{p}\), we have
\[\int H(Dw_{m})^{p}dx\leq s^{p}\lambda\int|f(v)|^{p-1}|f(v)|^{q-p+1}|v_{m}|dx\leq s^{p}\lambda\int|v|^{p-1}|v_{m}||v|^{\frac{q-p+1}{2\alpha}}dx\leq s^{p}\lambda\int|w_{m}|^{p}|v|^{\frac{q-p+1}{2\alpha}}dx \tag{38}\]
Thus, it follows from the Hölder inequality and \(|w_{m}|\leq|v|^{s}\) that
\[\int H(Dw_{m})^{p}dx\leq s^{p}\lambda(\int|w_{m}|^{pr}dx)^{\frac{1}{r}}(\int|v|^{ \frac{(q-p+1)r^{\prime}}{2\alpha}})^{\frac{1}{r^{\prime}}}\leq s^{p}\lambda( \int|v|^{spr}dx)^{\frac{1}{r}}(\int|v|^{p^{*}}dx)^{\frac{1}{r^{\prime}}} \tag{39}\]
where we choose \(r>0\) such that
\[\frac{(q-p+1)r^{\prime}}{2\alpha}=p^{*}\]
Now, by applying Sobolev's inequality and (39), we deduce
\[(\int_{A_{m}}|w_{m}|^{p^{*}})^{\frac{p}{p^{*}}}\leq C\int H(Dw_{m})^{p}dx\leq s^{p}\lambda(\int|v|^{spr}dx)^{\frac{1}{r}}(\int|v|^{p^{*}}dx)^{\frac{q-p+1}{2\alpha p^{*}}} \tag{40}\]
Since \(|w_{m}|=|v|^{s}\) in \(A_{m}\), by using the monotone convergence theorem, we obtain
\[(\int|v|^{sp^{*}}dx)^{\frac{1}{sp^{*}}}\leq(C\lambda)^{\frac{1}{sp}}s^{\frac{1}{s}}(\int|v|^{spr}dx)^{\frac{1}{spr}}(\int|v|^{p^{*}}dx)^{\frac{q-p+1}{2\alpha p^{*}sp}} \tag{41}\]
that is,
\[\|v\|_{L^{sp^{*}}(\mathbb{R}^{N})}\leq(C\lambda)^{\frac{1}{sp}}s^{\frac{1}{s}}\|v\|_{L^{spr}(\mathbb{R}^{N})}\|v\|_{L^{p^{*}}(\mathbb{R}^{N})}^{\frac{\tilde{r}}{s}} \tag{42}\]
where \(\tilde{r}=\frac{q-p+1}{2\alpha p}\). We choose \(\sigma=\frac{p^{*}}{pr}\) and observe that \(\sigma>1\) if and only if \(q<2\alpha p^{*}-2\alpha p+p-1\).
By taking \(s=\sigma\) in (42), we obtain
\[\|v\|_{L^{\sigma p^{*}}(\mathbb{R}^{N})}\leq(C\lambda)^{\frac{1}{\sigma p}}\sigma^{\frac{1}{\sigma}}\|v\|_{L^{p^{*}}(\mathbb{R}^{N})}^{1+\frac{\tilde{r}}{\sigma}} \tag{43}\]
Putting \(s=\sigma^{2}\) and using (43), we have
\[\|v\|_{L^{\sigma^{2}p^{*}}(\mathbb{R}^{N})}\leq(C\lambda)^{\frac{1}{p}[\frac{1}{\sigma}+\frac{1}{\sigma^{2}}]}\sigma^{[\frac{1}{\sigma}+\frac{2}{\sigma^{2}}]}\|v\|_{L^{p^{*}}(\mathbb{R}^{N})}^{1+\frac{\tilde{r}}{\sigma}+\frac{\tilde{r}}{\sigma^{2}}} \tag{44}\]
Putting \(s=\sigma^{k}\) and continuing the above process, we obtain
\[\|v\|_{L^{\sigma^{k}p^{*}}(\mathbb{R}^{N})}\leq(C\lambda)^{\frac{1}{p}\sum_{i=1}^{k}\frac{1}{\sigma^{i}}}\sigma^{\sum_{i=1}^{k}\frac{i}{\sigma^{i}}}\|v\|_{L^{p^{*}}(\mathbb{R}^{N})}^{1+\tilde{r}\sum_{i=1}^{k}\frac{1}{\sigma^{i}}} \tag{45}\]
By taking \(k\to\infty\), we obtain
\[\|v\|_{\infty}<\infty \tag{46}\]
Thus, we proved that if \(p-1<q<2\alpha p^{*}-2\alpha p+p-1\) then \(v\in L^{\infty}(\mathbb{R}^{N})\).
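To justify the passage to the limit in (45)-(46), note that since \(\sigma>1\) the exponent sums converge:
\[\sum_{i=1}^{\infty}\frac{1}{\sigma^{i}}=\frac{1}{\sigma-1}\qquad\text{and}\qquad\sum_{i=1}^{\infty}\frac{i}{\sigma^{i}}=\frac{\sigma}{(\sigma-1)^{2}},\]
so the constants in (45) are bounded uniformly in \(k\). Hence \(\|v\|_{L^{\sigma^{k}p^{*}}(\mathbb{R}^{N})}\) is bounded independently of \(k\), and letting \(k\to\infty\) yields (46).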
Again, we can easily derive the following inequality from (37) by using the facts that \(f^{\prime}(t)\leq 1\), \(|f(t)|\leq\frac{1}{2^{2\alpha p}}|t|^{\frac{1}{2\alpha}}\) and \(\left||v|^{p-1}v_{m}\right|=|w_{m}|^{p}\):
\[\int H(Dw_{m})^{p}dx\leq s^{p}\lambda 2^{\frac{q}{2ap}}\int|w_{m}|^{p}|v|^{ \frac{q}{2\alpha}-p+1}dx \tag{47}\]
By a similar argument as before, we can prove \(v\in L^{\infty}(\mathbb{R}^{N})\) if \(2\alpha p-2\alpha<q<2\alpha p^{*}-2\alpha\). Since \(|f^{\prime}(t)|\leq\frac{C}{|f(t)|^{2\alpha-1}}\), (37) implies
\[\int H(Dw_{m})^{p}dx\leq Cs^{p}\lambda\int|f(v)|^{q-2\alpha+1}|v_{m}|dx \tag{48}\]
If \(q\) satisfies one of the following conditions
1. \(p+2\alpha-2<q<2\alpha p^{*}-2\alpha p+2\alpha-2\)
2. \(2\alpha p-1<q<2\alpha p^{*}-1\)
3. \(\frac{p-1}{2\alpha}+2\alpha-1<q<p^{*}-p+\frac{p-1}{2\alpha}+2\alpha-1\)
then by a similar argument one can prove that \(v\in L^{\infty}(\mathbb{R}^{N})\).
**Lemma 1.13**.: _If \(v\) is a critical point of the functional \(J\) then_
\[P_{V}(v)=\frac{N-p}{p}\int H(Dv)^{p}dx+\frac{N}{p}\int V(x)|f(v)|^{p}dx+\frac{1 }{p}\int\langle\nabla V(x),x\rangle|f(v)|^{p}dx-\frac{\lambda N}{q+1}\int|f(v) |^{q+1}dx=0.\]
_We refer to \(P_{V}(v)=0\) as the Pohozaev identity._
Proof.: The preceding lemma ensures that \(v\in L^{\infty}(\mathbb{R}^{N})\), and using [10, Proposition 4.3], we obtain \(v\in C^{1}(\mathbb{R}^{N})\). Hence, this result follows from [25, Theorem 1.3].
**Corollary 1.14**.: _If \(V\) is constant then the Pohozaev identity becomes_
\[P(v)=\frac{N-p}{p}\int H(Dv)^{p}dx+\frac{N}{p}\int V(x)|f(v)|^{p}dx-\frac{ \lambda N}{q+1}\int|f(v)|^{q+1}dx=0. \tag{49}\]
## 2 Auxiliary results
In this section, we prove a few auxiliary results that will be used repeatedly in what follows. To begin with, we state two important lemmas whose proofs can be found in Bal et al. [12] or in the references therein.
**Lemma 2.1**.: _([4, lemma 2.1 ]) Let \(x\in\mathbb{R}^{n}\setminus\{0\}\) and \(t\in\mathbb{R}\setminus\{0\}\) then_
1. \(x\cdot\nabla_{\xi}H(x)=H(x)\)_._
2. \(\nabla_{\xi}H(tx)=sign(t)\nabla_{\xi}H(x)\)_._
3. \(\|\nabla_{\xi}H(x)\|\leq C\)_, for some positive constant_ \(C\)_._
4. \(H\) _is strictly convex._
**Lemma 2.2**.: _([4, lemma 2.5 ]) Let \(2\leq p<\infty\). Then for \(x,y\in\mathbb{R}^{N}\), there exists a positive constant \(C\) such that_
\[\langle H(x)^{p-1}\nabla_{\eta}H(x)-H(y)^{p-1}\nabla_{\eta}H(y),x-y\rangle\geq C \ H(x-y)^{p}. \tag{50}\]
Next, we prove the following lemma, which will assist us in drawing conclusions regarding the pointwise convergence of the gradient of a Palais-Smale sequence of \(J\) in \(X\).
**Lemma 2.3**.: _Let \(p\geq 2\) and define_
\[T(t)=\begin{cases}t\text{ if }\ |t|\leq 1\\ \frac{t}{|t|}\text{ otherwise}\end{cases}\]
_and assume that \([H1]-[H5]\) hold. Let \(\{v_{n}\}\) be a sequence in \(D^{1,p}(\mathbb{R}^{N})\) such that \(v_{n}\rightharpoonup v\) in \(D^{1,p}(\mathbb{R}^{N})\) and for every \(\phi\in C_{c}^{\infty}(\mathbb{R}^{N})\),_
\[\int\phi(H(Dv_{n})^{p-1}\nabla H(Dv_{n})-H(Dv)^{p-1}\nabla H(Dv)).\nabla T(v_ {n}-v)dx\to 0.\]
_Then up to a subsequence, the following conclusions hold:_
1. \(Dv_{n}\to Dv\) _a.e. in_ \(\mathbb{R}^{N}\)_._
2. \(\lim_{n\to\infty}[\|H(Dv_{n})\|_{L^{p}}^{p}-\|H(Dv_{n}-Dv)\|_{L^{p}}^{p}]=\|H( Dv)\|_{L^{p}}^{p}\)_._
3. \(H(Dv_{n})^{p-1}\nabla H(Dv_{n})-H(Dv_{n}-Dv)^{p-1}\nabla H(Dv_{n}-Dv)\to H(Dv )^{p-1}\nabla H(Dv)\text{ in }\ L^{\frac{p}{p-1}}(\mathbb{R}^{N})\)_._
Proof.:
1. Let us define \(w_{n}=(H(Dv_{n})^{p-1}\nabla H(Dv_{n})-H(Dv)^{p-1}\nabla H(Dv)).(\nabla v_{n}-\nabla v)\geq 0\). Let \(\phi\in C_{c}^{\infty}(\mathbb{R}^{N})\) be a nonnegative function and \(\Omega=\text{supp}(\phi)\). Without loss of generality, we can assume \(v_{n}\to v\) in \(L^{p}(\Omega)\) and \(v_{n}\to v\) almost everywhere in \(\mathbb{R}^{N}\). Using the given condition and the Hölder inequality, we have for every \(s\in(0,1)\) \[0\leq\int(\phi w_{n})^{s}dx \leq\int_{K_{n}}(\phi w_{n})^{s}dx+\int_{L_{n}}(\phi w_{n})^{s}dx\] \[\leq|K_{n}|^{1-s}(\int_{K_{n}}\phi w_{n}dx)^{s}+|L_{n}|^{1-s}( \int_{L_{n}}\phi w_{n}dx)^{s}\] \[\leq|\Omega|^{1-s}o(1)+o(1)\] (51) where \(K_{n}=\{x\in\Omega:|v_{n}(x)-v(x)|\leq 1\}\) and \(L_{n}=\{x\in\Omega:|v_{n}(x)-v(x)|>1\}\). From (51) one has \(w_{n}\to 0\) a.e. in \(\mathbb{R}^{N}\). Hence, remark 0.3 and lemma 2.2 ensure \(Dv_{n}\to Dv\) almost everywhere in \(\mathbb{R}^{N}\).
2. It follows from the Brezis–Lieb lemma [7].
3. Define \(G:\mathbb{R}^{N}\to\mathbb{R}^{N}\) by \[G(x)=H(x)^{p-1}\nabla H(x)\] and let \[G_{i}(x)=H(x)^{p-2}H(x)H_{x_{i}}(x).\] By using lemma 2.1 and \((H4)\), we have \[|G(x+h)-G(x)|=\left|\int_{0}^{1}\frac{d}{dt}[G(x+th)]dt\right| \leq C\int_{0}^{1}H(x+th)^{p-2}|h|dt\] \[\leq C\int_{0}^{1}[H(x)^{p-2}+t^{p-2}H(h)^{p-2}]H(h)dt\] \[\leq C[H(x)^{p-2}H(h)+H(h)^{p-1}]\leq\epsilon H(x)^{p-1}+C_{\epsilon}H(h)^{p-1}\] where \(\epsilon>0\) is any real number and \(C_{\epsilon}>0\) is a constant. Finally, we get \[|G(x+h)-G(x)|<\epsilon H(x)^{p-1}+C_{\epsilon}H(h)^{p-1}\] (52)
We define
\[\Psi_{\epsilon,n}:=[|G(Dv_{n})-G(Dv_{n}-Dv)-G(Dv)|-\epsilon H(Dv_{n})^{p-1}]^{+}.\]
Clearly, \(\Psi_{\epsilon,n}\to 0\) as \(n\to\infty\) almost everywhere in \(\mathbb{R}^{N}\). Using (52) and Remark 0.3, we have
\[\Psi_{\epsilon,n}\leq G(Dv)+C_{\epsilon}H(Dv)^{p-1}\leq CH(Dv)^{p-1}.\]
By the Dominated Convergence Theorem, we get
\[\lim_{n\to\infty}\int|\Psi_{\epsilon,n}|^{\frac{p}{p-1}}dx=0. \tag{53}\]
From the definition of \(\Psi_{\epsilon,n}\),
\[|G(Dv_{n})-G(Dv_{n}-Dv)-G(Dv)|\leq\Psi_{\epsilon,n}+\epsilon H(Dv_{n})^{p-1}\]
Since \(\{v_{n}\}\) is bounded in \(D^{1,p}(\mathbb{R}^{N})\), using (53) we conclude
\[\limsup_{n\to\infty}\int|G(Dv_{n})-G(Dv_{n}-Dv)-G(Dv)|^{\frac{p}{p-1}}dx\leq\epsilon M\]
for some \(M>0\). Since \(\epsilon>0\) is arbitrary, (c) follows.
**Remark 2.4**.:
* _The condition (_H5_) is not required to conclude (_a_) and (_b_)._
* _The above result is true for_ \(1<p<\infty\) _if_ \(H=H_{2}\)_._
**Lemma 2.5**.: _Suppose that either \(p=2\) and \(2\alpha\leq p\), or \(p\geq 2\alpha+1\), holds, and let \(G:\mathbb{R}\to\mathbb{R}\) be given by \(G(t)=|f(t)|^{p-2}f(t)f^{\prime}(t)\). If \(v_{n}\rightharpoonup v\) in \(X\) then_
\[\lim_{n\to\infty}\int|G(v_{n})-G(v_{n}-v)-G(v)|^{\frac{p}{p-1}}dx=0.\]
Proof.: The proof is the same as that of the previous lemma, so we omit it here.
**Lemma 2.6**.: _Let us consider the function \(h:(0,\infty)\to\mathbb{R},\;\;h(t)=C_{1}t^{N-p}+t^{N}(C_{2}-\lambda C_{3})\), where \(C_{1},C_{2}\), and \(C_{3}\) are positive constants and \(N>p\). Then for large enough \(\lambda>0\), \(h\) has a unique critical point, which corresponds to its maximum._
Proof.: The proof is very simple and is omitted here.
**Remark 2.7**.: _Notice that if \(h\) has a critical point \(t_{0}>0\) then \(C_{2}-\lambda C_{3}<0\) and \(h(t_{0})=\max_{t>0}h(t)\)._
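The computation behind lemma 2.6 and remark 2.7 is elementary: setting \(h^{\prime}(t)=0\) gives
\[h^{\prime}(t)=C_{1}(N-p)t^{N-p-1}+N(C_{2}-\lambda C_{3})t^{N-1}=0\quad\Longleftrightarrow\quad t_{0}=\left[\frac{C_{1}(N-p)}{N(\lambda C_{3}-C_{2})}\right]^{\frac{1}{p}},\]
which admits a (unique) positive solution precisely when \(\lambda C_{3}>C_{2}\), i.e., for \(\lambda>C_{2}/C_{3}\). In this regime \(h(t)\to 0^{+}\) as \(t\to 0\) and \(h(t)\to-\infty\) as \(t\to\infty\), so \(t_{0}\) is the global maximum.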
We introduce the Pohozaev manifold
\[M=\{v\in X_{r}\setminus\{0\}:P(v)=0\}\]
where \(P\) is defined in (49).
**Lemma 2.8**.: _Let \(N\geq 2\), \((\alpha,p)\in D_{N}\) and \(p-1<q<2\alpha p^{*}-1\). Then_
* _For any_ \(v\in X_{r}\setminus\{0\}\)_, there exists a unique_ \(t_{0}=t_{0}(v)>0\) _such that_ \(v_{t_{0}}=v(\frac{\cdot}{t_{0}})\in M\)_. Moreover,_ \(J(v_{t_{0}})=\max_{t>0}J(v_{t})\)_._
* \(0\not\in\partial M\) _and_ \(\inf_{v\in M}J(v)>0\)_._
* _For any_ \(v\in M\)_,_ \(P^{\prime}(v)\neq 0\)_._
* \(M\) _is a natural constraint of_ \(J\)_._
Proof.:
* Let \(v\in X_{r}\setminus\{0\}\). For \(t>0\), we define \(v_{t}(x)=v(\frac{x}{t})\). Now, consider the function \[\phi:(0,\infty)\to\mathbb{R}\text{ by }\phi(t)=J(v_{t}).\] After simplification (see the scaling computation following this proof), we have \[\phi(t)=\frac{t^{N-p}}{p}\int H(Dv)^{p}dx+\frac{t^{N}}{p}\int V(x)|f(v)|^{p}dx -\frac{\lambda t^{N}}{q+1}\int|f(v)|^{q+1}dx.\] By lemma 2.6, for large \(\lambda>0\), there exists \(t_{0}>0\) such that \(\phi^{\prime}(t_{0})=0\) and \(\phi(t_{0})=\max_{t>0}\phi(t)\). We also notice that \(P(v_{t_{0}})=t_{0}\phi^{\prime}(t_{0})=0\). Hence, \(v_{t_{0}}\in M\) and \(J(v_{t_{0}})=\max_{t>0}J(v_{t})\).
* If \(v\in M\) then \[\frac{N-p}{p}\int[H(Dv)^{p}+V|f(v)|^{p}]dx-\frac{\lambda N}{q+1}\int|f(v)|^{q+1}dx\leq P(v)=0\] which ensures \[\frac{N-p}{p}\int[H(Dv)^{p}+V|f(v)|^{p}]dx\leq\frac{\lambda N}{q+1}\int|f(v)|^{q+1}dx \tag{54}\] Now, let \(\int[H(Dv)^{p}+V|f(v)|^{p}]dx=\beta^{p}\) and \(\gamma>0\) (to be determined later). By using the Hölder inequality and the Sobolev inequality, we obtain \[\int|f(v)|^{q+1}dx\leq\left[\int|f(v)|^{p}dx\right]^{\frac{\gamma(q+1)}{p}}\left[\int|f^{2\alpha}(v)|^{p^{*}}\right]^{1-\frac{\gamma(q+1)}{p}}\leq C[\int|f(v)|^{p}dx]^{\frac{\gamma(q+1)}{p}}[\int|Dv|^{p}dx]^{\frac{p^{*}}{p}(1-\frac{\gamma(q+1)}{p})}\leq C[\int|f(v)|^{p}dx]^{\frac{\gamma(q+1)}{p}}[\int H(Dv)^{p}dx]^{\frac{p^{*}}{p}(1-\frac{\gamma(q+1)}{p})}\leq C\beta^{m} \tag{55}\] where \(\gamma=\frac{p(2\alpha p^{*}-q-1)}{(q+1)(2\alpha p^{*}-p)}\) and \(m=\frac{2\alpha p^{*}-pq-p+p^{*}(q+1-p)}{2\alpha p^{*}-p}>p\). Since \(m>p\), by using the above inequality and (54), we get \(\beta\geq C\), for some positive constant \(C\). Hence, \(0\not\in\partial M\). Notice that if \(v\in M\) then \(NJ(v)-P(v)=\int H(Dv)^{p}dx>0\). So, \(J(v)>0\), for all \(v\in M\). We shall prove that \(\inf_{v\in M}J(v)>0\). If not, then there exists a sequence \(\{v_{n}\}\) in \(M\) such that \(J(v_{n})\to 0\). We can prove that the sequence \(\{v_{n}\}\) is bounded (see the proof of Theorem 0.5). Using (55) and \(\lim_{n\to\infty}\int H(Dv_{n})^{p}dx=\lim_{n\to\infty}NJ(v_{n})=0\), we get \[\int|f(v_{n})|^{q+1}dx\to 0\text{ as }n\to\infty \tag{56}\] Now, (56) and \(\lim_{n\to\infty}J(v_{n})=0\) imply \(0\in\partial M\), which is a contradiction.
3. Suppose, for contradiction, that \(P^{\prime}(v)=0\) for some \(v\in M\). Since \(v\in M\) and \(r=J(v)>0\), \[\frac{N-p}{p}\int H(Dv)^{p}dx+\frac{N}{p}\int V|f(v)|^{p}dx-\frac{ \lambda N}{q+1}\int|f(v)|^{q+1}dx=0\] (57) and \[\frac{1}{p}\int[H(Dv)^{p}+V|f(v)|^{p}]dx-\frac{\lambda}{q+1}\int|f(v)|^{q+1}dx=r\] (58) Since \(P^{\prime}(v)=0\), \(v\) satisfies the following equation in the weak sense \[-(N-p)\Delta_{H,p}v+NV|f(v)|^{p-2}f(v)f^{\prime}(v)-\lambda N|f(v)|^{q-1}f(v)f^ {\prime}(v)=0.\] Hence \(v\) satisfies the corresponding Pohozaev identity: \[\frac{(N-p)^{2}}{p}\int H(Dv)^{p}dx+\frac{N^{2}}{p}\int V|f(v)|^{p}dx-\frac{ \lambda N^{2}}{q+1}\int|f(v)|^{q+1}dx=0\] (59) For simplicity, we set \(a=\int H(Dv)^{p}dx\), \(b=\int V|f(v)|^{p}dx\), \(c=\int|f(v)|^{q+1}dx\). By using (57), (59) and \(p<N\), we have \(a=0\). From (57), we obtain \[\frac{b}{p}=\frac{\lambda c}{q+1}.\] By using the above result, (58) gives us \(r=0\), which is a contradiction as \(r>0\).
4. Let \(v\in M\) be such that \(J(v)=\inf_{u\in M}J(u)=r>0\) (say). Our claim is that \(J^{\prime}(v)=0\) in \(X^{*}\). By the Lagrange multiplier rule, there exists a \(\tau\in\mathbb{R}\) such that \(J^{\prime}(v)=\tau P^{\prime}(v)\). So, \(v\) satisfies the following equation in the weak sense \[-(1-\tau(N-p))\Delta_{H,p}v+(1-\tau N)V|f(v)|^{p-2}f(v)f^{\prime}(v)+\lambda( \tau N-1)|f(v)|^{q-1}f(v)f^{\prime}(v)=0. \tag{60}\] Hence, \(v\) satisfies the corresponding Pohozaev identity. Using the same notation as before, we have the following equations \[\frac{(N-p)(1-\tau(N-p))}{p}a+\frac{(1-\tau N)N}{p}b-\frac{N\lambda(1-\tau N)} {q+1}c=0. \tag{61}\] \[\frac{a}{p}+\frac{b}{p}-\frac{\lambda c}{q+1}=r. \tag{62}\] and \[\frac{N-p}{p}a+\frac{N}{p}b-\frac{\lambda N}{q+1}c=0. \tag{63}\] If \(\tau\neq 0\) then by using (61), (62) and (63), we have \(r=0\), which contradicts the fact that \(r>0\). Hence, \(\tau=0\). Consequently, \(J^{\prime}(v)=0\) in \(X^{*}\).
\(\Box\)
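The dilation identity used in item (i) of the proof above (and again in lemma 2.9 and in section 3) follows from the change of variables \(y=x/t\) together with the \(1\)-homogeneity of \(H\) (encoded in lemma 2.1): for \(v_{t}(x)=v(x/t)\),
\[\int H(Dv_{t})^{p}dx=\int t^{-p}H\big((Dv)(x/t)\big)^{p}dx=t^{N-p}\int H(Dv)^{p}dy,\]
while the zeroth-order terms scale as \(\int|f(v_{t})|^{p}dx=t^{N}\int|f(v)|^{p}dy\) and \(\int|f(v_{t})|^{q+1}dx=t^{N}\int|f(v)|^{q+1}dy\) (for constant \(V\), the potential term picks up the same factor \(t^{N}\)).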
Let us consider a collection of auxiliary functionals on \(X\), \(\{J_{\delta}\}_{\delta\in I}\), of the form
\[J_{\delta}(v)=\frac{1}{p}\int[H(Dv)^{p}+V(x)|f(v)|^{p}]dx-\frac{\lambda\delta}{q+1}\int|f(v)|^{q+1}dx \tag{64}\]
and we define
\[J_{\infty,\delta}(v)=\frac{1}{p}\int[H(Dv)^{p}+V(\infty)|f(v)|^{p}]dx-\frac{\lambda\delta}{q+1}\int|f(v)|^{q+1}dx. \tag{65}\]
**Lemma 2.9**.: _Assume the potential \(V\) satisfies \((v_{1})\). Then the set_
\[\Gamma_{\delta}=\{\gamma\in C([0,1];X):\gamma(0)=0,\,J_{\delta}(\gamma(1))<0\}\neq\emptyset,\,\,\text{for any}\,\,\delta\in I.\]
Proof.: For every \(v\in X\),
\[J_{\delta}(v)\leq J_{\infty,\frac{1}{2}}(v) \tag{66}\]
Now, let \(v\in X\setminus\{0\}\),
\[J_{\infty,\frac{1}{2}}(v_{t})=J_{\infty,\frac{1}{2}}(v(\frac{x}{t}))=\frac{t^{N-p}}{p}\int H(Dv)^{p}dx+\frac{t^{N}}{p}\int V(\infty)|f(v)|^{p}dx-\frac{\lambda t^{N}}{2(q+1)}\int|f(v)|^{q+1}dx\]
Since \(\lambda>0\) is large enough, \(J_{\infty,\frac{1}{2}}(v_{t})\to-\infty\) as \(t\to\infty\). Hence, there exists \(t_{0}>0\) such that \(J_{\infty,\frac{1}{2}}(v_{t_{0}})<0\). Consequently, \(J_{\delta}(v_{t_{0}})<0\), for all \(\delta\in I\). Define \(\gamma:[0,1]\to X\) as
\[\gamma(t)=\begin{cases}0,\,\,\text{if}\,\,t=0\\ (v_{t_{0}})_{t},\,\,\text{if}\,\,0<t\leq 1\end{cases}\]
It is easy to prove that \(\gamma\) is continuous. Hence, \(\gamma\) is the desired path.
**Lemma 2.10**.: _The above collection of functionals \(J_{\delta}\) satisfies all the hypotheses of Theorem 1.11._
Proof.: Here, \(Au=\frac{1}{p}\int[H(Du)^{p}+V(x)|f(u)|^{p}]\ dx\) and \(Bu=\frac{\lambda\delta}{q+1}\int|f(u)|^{q+1}\ dx\). Clearly, \(B\) is nonnegative and \(Au\to\infty\) as \(\|u\|\to\infty\).
Claim:
\[C_{\delta}=\inf_{\gamma\in\Gamma_{\delta}}\max_{t\in[0,1]}J_{\delta}(\gamma(t))>0. \tag{67}\]
Let \(u\in S(\beta):=\{u\in X:\int[H(Du)^{p}+V(x)|f(u)|^{p}]\ dx=\beta^{p}\}\). By using (55), we have
\[J_{\delta}(u)=\frac{1}{p}\int[H(Du)^{p}+V(x)|f(u)|^{p}]\ dx-\frac{\lambda\delta}{q+1}\int|f(u)|^{q+1}\ dx \tag{68}\] \[\geq\frac{1}{p}\beta^{p}-C\beta^{m} \tag{69}\]
where \(m>p\). If \(\beta>0\) is small enough then there exists \(r>0\) such that \(J_{\delta}(u)\geq r\) and hence \(C_{\delta}\geq r>0\).
Define the Pohozaev Manifold
\[M_{\infty,\delta}=\{v\in X_{r}\setminus\{0\}:P_{\infty,\delta}(v)=\frac{N-p}{p} \int H(Dv)^{p}\ dx+\frac{N}{p}\int V(\infty)|f(v)|^{p}\ dx-\frac{\lambda\delta N }{q+1}\int|f(v)|^{q+1}\ dx=0\}.\]
We have the following lemma:
**Lemma 2.11**.: _If \(N\geq 2\), \((\alpha,p)\in D_{N}\), \(p-1<q<2\alpha p^{*}-1\) and \(\delta\in I\), then for large \(\lambda>0\), there exists \(v_{\infty,\delta}\in M_{\infty,\delta}\) such that_
\[J_{\infty,\delta}(v_{\infty,\delta})=m_{\infty,\delta}:=\inf\{J_{\infty,\delta}(v):v\neq 0,\,J^{\prime}_{\infty,\delta}(v)=0\}.\]
_Moreover,_
\[J^{\prime}_{\infty,\delta}(v_{\infty,\delta})=0.\]
Proof.: This lemma is a simple consequence of Theorem 0.5 so we omit the proof.
Now we are going to prove the following lemma
**Lemma 2.12**.: _Suppose that the potential \(V\) satisfies \((v_{1})\) and \((v_{2})\), \(N\geq 2\), \((\alpha,p)\in D_{N}\) and \(p-1<q<2\alpha p^{*}-1\). Then for every \(\delta\in I\),_
\[C_{\delta}<m_{\infty,\delta}\]
Proof.: It is easy to see that
\[J_{\infty,\delta}(v_{\infty,\delta})=\max_{t>0}J_{\infty,\delta}(v_{\infty, \delta}(\frac{\cdot}{t}))\]
Let \(\gamma\) be the curve defined in lemma 2.9 for \(v=v_{\infty,\delta}\). By using \((v_{1})\), we have
\[C_{\delta}\leq\max_{t\in[0,1]}J_{\delta}(\gamma(t))\leq\max_{t\in[0,1]}J_{ \infty,\delta}(\gamma(t))\leq J_{\infty,\delta}(v_{\infty,\delta})=m_{\infty, \delta}.\]
If possible, let \(C_{\delta}=m_{\infty,\delta}\). Then \(\max_{t\in[0,1]}J_{\delta}(\gamma(t))=J_{\infty,\delta}(v_{\infty,\delta})\). As \(m_{\infty,\delta}>0\) and \(J_{\delta}\circ\gamma\) is a continuous map, there exists \(t^{*}\in(0,1)\) such that \(J_{\delta}(\gamma(t^{*}))=J_{\infty,\delta}(v_{\infty,\delta})=m_{\infty,\delta}\). Moreover, since \(t=1\) is the unique maximum of \(J_{\infty,\delta}\circ\gamma\), we get \(J_{\delta}(\gamma(t^{*}))>J_{\infty,\delta}(\gamma(t^{*}))\), which is not possible because of \((v_{1})\). Hence, \(C_{\delta}<m_{\infty,\delta}\).
Now, we present the most important lemma to prove our main result.
**Lemma 2.13**.: _(Global compactness lemma) Suppose that \((v_{1})\) and \((v_{2})\) hold, \(N\geq 3\), \((\alpha,p)\in D_{N}\), and \(2\alpha p-1\leq q<2\alpha p^{*}-1\). For every \(\delta\in I\), let \(\{u_{n}\}\) be a bounded \((PS)_{C_{\delta}}\) sequence for \(J_{\delta}\). Then there exist a subsequence of \(\{u_{n}\}\), still denoted by \(\{u_{n}\}\), \(u_{0}\in X\), an integer \(k\in\mathbb{N}\cup\{0\}\), \(w_{i}\in X\), and sequences \(\{x_{n}^{i}\}\subset\mathbb{R}^{N}\) for \(1\leq i\leq k\) such that_
* \(u_{n}\rightharpoonup u_{0}\) _in_ \(X\) _with_ \(J_{\delta}(u_{0})\geq 0\) _and_ \(J_{\delta}^{\prime}(u_{0})=0\)__
* \(|x_{n}^{i}|\to\infty\)_,_ \(|x_{n}^{i}-x_{n}^{j}|\to\infty\) _as_ \(n\to\infty\) _if_ \(i\neq j\)_._
* \(w_{i}\not\equiv 0\) _and_ \(J_{\infty,\delta}^{\prime}(w_{i})=0\)_, for_ \(1\leq i\leq k\)_._
* \(\|u_{n}-u_{0}-\sum_{i=1}^{k}w_{i}(.-x_{n}^{i})\|\to 0\) _as_ \(n\to\infty\)_._
* \(J_{\delta}(u_{n})\to J_{\delta}(u_{0})+\sum_{i=1}^{k}J_{\infty,\delta}(w_{i})\)_._
Proof.: The proof consists of several steps:
Step 1. Since \(\{u_{n}\}\) is bounded, without loss of generality we can assume that:
1. \(u_{n}\rightharpoonup u_{0}\;\;\text{in}\;\;X\). 2. \(f(u_{n})\to f(u_{0})\;\;\text{in}\;\;L^{s}_{loc}(\mathbb{R}^{N})\;\;\text{for all}\;p\leq s\leq 2\alpha p^{*}\). 3. \(u_{n}\rightharpoonup u_{0}\;\;\text{a.e. in}\;\;\mathbb{R}^{N}\). We show that for every \(\phi\in C^{\infty}_{c}(\mathbb{R}^{N})\), \[\langle J^{\prime}_{\delta}(u_{0}),\phi\rangle=\lim_{n\to\infty}\langle J^{ \prime}_{\delta}(u_{n}),\phi\rangle.\] That is, we only have to show the following identities: 1. \(\lim_{n\to\infty}\int H(Du_{n})^{p-1}\nabla H(Du_{n}).\nabla\phi\;dx=\int H(Du_ {0})^{p-1}\nabla H(Du_{0}).\nabla\phi\;dx\) 2. \(\lim_{n\to\infty}\int V(x)|f(u_{n})|^{p-2}f(u_{n})f^{\prime}(u_{n})\phi\;dx= \int V(x)|f(u_{0})|^{p-2}f(u_{0})f^{\prime}(u_{0})\phi\;dx\) 3. \(\lim_{n\to\infty}\int|f(u_{n})|^{q-1}f(u_{n})f^{\prime}(u_{n})\phi\;dx=\int V (x)|f(u_{0})|^{q-1}f(u_{0})f^{\prime}(u_{0})\phi\;dx\) Let \(K=\text{supp}(\phi)\). For every \(s\in[p,2\alpha p^{*})\), there exists \(h_{s}\in L^{s}(K)\) such that up to a subsequence \(|f(u_{n})|\), \(|f(u_{0})|\leq h\) and \(u_{n}\to u_{0}\) a.e. in \(\mathbb{R}^{N}\). The equalities (b) and (c) are two consequences of the Dominated Convergence Theorem. As for the first part by Egorov's theorem for every \(\epsilon>0\), there exists a measurable set \(E\subset K\) such that \(|E|<\epsilon\) and \(u_{n}\) converges to \(u\) uniformly on \(E^{c}\cap K\). So for large n, \(|u_{n}(x)-u(x)|\leq 1\). Using the fact that \(u_{n}\rightharpoonup u_{0}\), we have \[|\int\phi H(Du_{0})^{p-1}\nabla H(Du_{0}).\nabla T(u_{n}-u_{0}) \;dx|\leq |\int_{E}\phi H(Du_{0})^{p-1}\nabla H(Du_{0}).\nabla T(u_{n}-u_{0} )\;dx|\] \[+ |\int_{E^{c}\cap K}\phi H(Du_{0})^{p-1}\nabla H(Du_{0}).\nabla T (u_{n}-u_{0})\;dx|\] \[\leq \int_{E}|\phi H(Du_{0})^{p-1}\nabla H(Du_{0}).\nabla T(u_{n}-u_{0 })|\;dx\] \[+ |\int_{E^{c}\cap K}\phi H(Du_{0})^{p-1}\nabla H(Du_{0}).\nabla(u _{n}-u_{0})\;dx|\] \[\leq Me^{\frac{1}{p}}+o(1)\] Hence, \[\int\phi H(Du_{0})^{p-1}\nabla H(Du_{0}).\nabla T(u_{n}-u_{0})\;dx\to 0.\] (70) Since \(\{u_{n}\}\) is a bounded Palais-Smale sequence so \[\langle J^{\prime}_{\delta}(u_{n}),\phi.T(u_{n}-u_{0})\rangle=o(1)\] which implies \[\int H(Du_{n})^{p-1}\nabla H(Du_{n}).\nabla(\phi.T(u_{n}-u_{0}) )\;dx =-\int V|f(u_{n})|^{p-2}f(u_{n})f^{\prime}(u_{n})\phi T(u_{n}-u_{0})\;dx\] \[+ \lambda\delta\int|f(u_{n})|^{q-1}f(u_{n})f^{\prime}(u_{n})\phi T (u_{n}-u_{0})\;dx+o(1)\] (71)
Using (71), we have
\[|\int\phi H(Du_{n})^{p-1}\nabla H(Du_{n}).\nabla T(u_{n}-u_{0})\ dx| \leq|\int H(Du_{n})^{p-1}\nabla H(Du_{n}).\nabla(\phi.T(u_{n}-u_{0})) \ dx|\] \[+|\int H(Du_{n})^{p-1}(\nabla H(Du_{n}).\nabla\phi)T(u_{n}-u_{0}) \ dx|\] \[\leq\int V|f(u_{n})|^{p-1}|\phi T(u_{n}-u_{0})|\ dx\] \[+\lambda\delta\int|f(u_{n})|^{q}|f^{\prime}(u_{n})||\phi T(u_{n}- u_{0})|\ dx\] \[+|\int H(Du_{n})^{p-1}(\nabla H(Du_{n}).\nabla\phi)T(u_{n}-u_{0}) \ dx|+o(1)\] \[=o(1) \tag{72}\]
From (70)-(72), one has
\[\int\phi(H(Du_{n})^{p-1}\nabla H(Du_{n})-H(Du_{0})^{p-1}\nabla H(Du_{0})). \nabla T(u_{n}-u_{0})dx\to 0.\]
By lemma 2.3, we can conclude \(Du_{n}\to Du_{0}\) almost everywhere in \(\mathbb{R}^{N}\). Moreover,
* \(H(Du_{n})^{p-1}\nabla H(Du_{n})\) is bounded in \(L^{\frac{p}{p-1}}(\mathbb{R}^{N})\)
* \(H(Du_{n})^{p-1}\nabla H(Du_{n})\to H(Du_{0})^{p-1}\nabla H(Du_{0})\) a.e. in \(\mathbb{R}^{N}\).
Hence, \(H(Du_{n})^{p-1}\nabla H(Du_{n})\rightharpoonup H(Du_{0})^{p-1}\nabla H(Du_{0})\) in \(L^{\frac{p}{p-1}}(\mathbb{R}^{N})\). Consequently, (a) follows.
Step 2. Since \(J_{\delta}^{\prime}(u_{0})=0\), \(u_{0}\) satisfies the following Pohozaev identity:
\[\frac{N-p}{p}\int H(Du_{0})^{p}dx+\frac{1}{p}\int\left[NV(x)+\left<\nabla V(x),x\right>\right]|f(u_{0})|^{p}dx-\frac{\lambda\delta N}{q+1}\int|f(u_{0})|^{q+1}dx=0. \tag{73}\]
From (73) and \(\left<J_{\delta}^{\prime}(u_{0}),\frac{f(u_{0})}{f^{\prime}(u_{0})}\right>=0\) we deduce
\[\frac{N-p}{p}A+\frac{N}{p}\beta_{2}+\frac{1}{p}B-\frac{\lambda\delta N}{q+1} \beta_{3}=0. \tag{74}\]
and
\[J_{\delta}(u_{0})=(2\alpha-1)(\beta_{1}+\beta_{2})+\frac{\lambda\delta\beta_{ 3}(q+1-2\alpha p)}{2\alpha p(q+1)}. \tag{75}\]
where \(\beta_{1}=\int\frac{H(Du_{0})^{p}}{1+(2\alpha)^{p-1}|f(u_{0})|^{p(2\alpha-1)}}dx\), \(\beta_{2}=\int V(x)|f(u_{0})|^{p}dx\), \(\beta_{3}=\int|f(u_{0})|^{q+1}dx\), \(A=\int H(Du_{0})^{p}dx\), and \(B=\int\langle\nabla V(x),x\rangle|f(u_{0})|^{p}dx\). Using (74), (75) and \((v_{2})\), we get
\[NJ_{\delta}(u_{0})=(2\alpha-1)(\beta_{1}+\beta_{2})+\frac{(N-p)(q+1-2\alpha p )}{2\alpha p^{2}}A+\frac{q+1-2\alpha p}{2\alpha p^{2}}(N\beta_{2}+B)\geq 0. \tag{76}\]
Step 3. Since \(\{u_{n}\}\) is a Palais-Smale sequence and \(u_{0}\) is a critical point of \(J_{\delta}\), we have \[\left<J_{\delta}^{\prime}(u_{n}),\frac{f(u_{n})}{f^{\prime}(u_{n})}\right>= \int H(Du_{n})^{p}(1+G(u_{n}))dx+\int V|f(u_{n})|^{p}dx-\lambda\delta\int|f(u_{ n})|^{q+1}dx=o(1)\] (77)
\[\langle J^{\prime}_{\delta}(u_{0}),\frac{f(u_{0})}{f^{\prime}(u_{0})}\rangle=\int H(Du_{0})^{p}(1+G(u_{0}))dx+\int V|f(u_{0})|^{p}dx-\lambda\delta\int|f(u_{0})|^{q+1}dx=0 \tag{78}\]
where \(G(t)=(2\alpha-1)(2\alpha)^{p-1}|f(t)|^{p(2\alpha-1)}[1+(2\alpha)^{p-1}|f(t)|^{ p(2\alpha-1)}]^{-1}\).
Let
\[\rho=\limsup_{n\to\infty}\sup_{y\in\mathbb{R}^{N}}\int_{B_{1}(y)}|f(u_{n}^{1}) |^{p}dx\]
where \(u_{n}^{1}=u_{n}-u_{0}\rightharpoonup 0\) in \(X\).
1. **Vanishing Case:** If \(\rho=0\) then by lemma 1.9, we have \[f(u_{n}^{1})\to 0\text{ in }L^{q+1}(\mathbb{R}^{N}),\] which implies \[f(u_{n})\to f(u_{0})\text{ in }L^{q+1}(\mathbb{R}^{N}) \tag{79}\] Using (77), (78) and (79), we deduce \[\lim_{n\to\infty}\int\left[H(Du_{n})^{p}(1+G(u_{n}))+V|f(u_{n})|^{p}\right]dx=\int\left[H(Du_{0})^{p}(1+G(u_{0}))+V|f(u_{0})|^{p}\right]dx \tag{80}\] Fatou's lemma ensures that up to a subsequence \[\lim_{n\to\infty}\int H(Du_{n})^{p}\ dx=\int H(Du_{0})^{p}\ dx,\qquad\lim_{n\to\infty}\int V|f(u_{n})|^{p}\ dx=\int V|f(u_{0})|^{p}\ dx \tag{81}\] The Brezis-Lieb lemma and (81) imply \(u_{n}\to u_{0}\) in \(X\).
2. **Non-Vanishing Case:** If vanishing does not occur then there exists a sequence \(\{x_{n}^{1}\}\subset\mathbb{R}^{N}\) such that \[\int_{B_{1}(0)}|f(\bar{u}_{n}^{1})|^{p}\ dx\geq\frac{\rho}{2} \tag{82}\] where \(\bar{u}_{n}^{1}(x):=u_{n}^{1}(x+x_{n}^{1})\). Since the sequence \(\{\bar{u}_{n}^{1}\}\) is also bounded in \(X\), there exists \(w_{1}\in X\) such that \[\begin{cases}\bar{u}_{n}^{1}\rightharpoonup w_{1}\text{ in }X\\ f(\bar{u}_{n}^{1})\to f(w_{1})\text{ in }L^{p}(B_{1}(0)).\end{cases} \tag{83}\] The inequality (82) ensures \(w_{1}\not\equiv 0\). Moreover, \(\{x_{n}^{1}\}\) is unbounded. Our next goal is to show \(J^{\prime}_{\infty,\delta}(w_{1})=0\). For \(\phi\in C_{c}^{\infty}(\mathbb{R}^{N})\), one has \[\langle J^{\prime}_{\infty,\delta}(w_{1}),\phi\rangle =\lim_{n\to\infty}\langle J^{\prime}_{\infty,\delta}(\bar{u}_{n}^ {1}),\phi\rangle\] \[=\lim_{n\to\infty}\langle J^{\prime}_{\delta}(u_{n}^{1}),\phi( \cdot-x_{n}^{1})\rangle-\lim_{n\to\infty}\int\left(V(x+x_{n}^{1})-V(\infty) \right)|f(\bar{u}_{n}^{1})|^{p-2}f(\bar{u}_{n}^{1})f^{\prime}(\bar{u}_{n}^{1}) \phi\ dx\]
By Remark 2.5, we deduce
\[\langle J_{\delta}^{\prime}(u_{n}^{1}),\phi\rangle\to 0\text{ uniformly with respect to }\phi.\]
Since \(u_{n}^{1}\rightharpoonup 0\), one has that \(\lim_{n\to\infty}\langle J_{\delta}^{\prime}(u_{n}^{1}),\phi(\cdot-x_{n}^{1}) \rangle=0\) and the condition \((v_{1})\) implies
\[\lim_{n\to\infty}\int(V(x+x_{n}^{1})-V(\infty))|f(\bar{u}_{n}^{1})|^{p-2}f( \bar{u}_{n}^{1})f^{\prime}(\bar{u}_{n}^{1})\phi dx=0.\]
Thus, \(J_{\infty,\delta}^{\prime}(w_{1})=0\). Again by Brezis-Lieb lemma, one has,
\[\lim_{n\to\infty}\int[H(Du_{n}^{1})^{p}-H(Du_{n})^{p}+H(Du_{0})^{ p}]dx =0. \tag{84}\] \[\lim_{n\to\infty}\int[|f(u_{n}^{1})|^{q+1}-|f(u_{n})|^{q+1}+|f(u_ {0})|^{q+1}]dx =0.\] \[\lim_{n\to\infty}\int V[|f(u_{n}^{1})|^{p}-|f(u_{n})|^{p}+|f(u_{0 })|^{p}]dx =0.\]
Under the assumption \((v_{1})\), we have
\[\lim_{n\to\infty}\int(V(x)-V(\infty))|f(u_{n}^{1})|^{p}dx=0 \tag{85}\]
Using (84) and (85), one can easily conclude,
\[\begin{split}& J_{\delta}(u_{n}^{1})-J_{\delta}(u_{n})+J_{\delta}(u_ {0})\to 0\text{ as }n\to\infty.\\ & J_{\delta}(u_{n})-J_{\infty,\delta}(u_{n}^{1})-J_{\delta}(u_{0 })\to 0\text{ as }n\to\infty.\end{split} \tag{86}\]
Step 4. Now, we define
\[\rho_{1}=\limsup_{n\to\infty}\sup_{y\in\mathbb{R}^{N}}\int_{B_{1}(y)}|f(u_{n} ^{2})|^{p}\ dx,\text{where }u_{n}^{2}=u_{n}^{1}-w_{1}(\cdot-x_{n}^{1})\rightharpoonup 0\text{ in }X\]
If \(\rho_{1}=0\) then by a similar argument as **Step 3**, we obtain
\[\|u_{n}-u_{0}-w_{1}(\cdot-x_{n}^{1})\|\to 0\text{ in }X.\]
If \(\rho_{1}\neq 0\) then there exists a sequence \(\{x_{n}^{2}\}\) such that \(\bar{u}_{n}^{2}:=u_{n}^{2}(\cdot+x_{n}^{2})\rightharpoonup w_{2}\neq 0\). Moreover, \(|x_{n}^{1}-x_{n}^{2}|\to\infty\) as \(n\to\infty\). Arguing as above, we obtain the following:
\[\begin{split}&\|H(Du_{n}^{2})\|_{p}^{p}-\|H(Du_{n})\|_{p}^{p}+\|H( Du_{0})\|_{p}^{p}+\|H(Dw_{1}(\cdot-x_{n}^{1}))\|_{p}^{p}=o(1)\\ &\int[V|f(u_{n}^{2})|^{p}-V|f(u_{n})|^{p}+V|f(u_{0})|^{p}+V|f(w_{ 1}(\cdot-x_{n}^{1}))|^{p}]\ dx=o(1)\\ &\|f(u_{n}^{2})\|_{q+1}^{q+1}-\|f(u_{n})\|_{q+1}^{q+1}+\|f(u_{0}) \|_{q+1}^{q+1}+\|f(w_{1}(\cdot-x_{n}^{1}))\|_{q+1}^{q+1}=o(1)\end{split} \tag{87}\]
which helps us to obtain
\[\begin{split}& J_{\delta}(u_{n}^{2})=J_{\delta}(u_{n})-J_{\delta}(u_ {0})-J_{\infty,\delta}(w_{1})+o(1)\\ & J_{\infty,\delta}(u_{n}^{2})=J_{\infty,\delta}(u_{n}^{1})-J_{\infty, \delta}(w_{1})+o(1).\end{split} \tag{88}\]
Moreover, using the Brezis-Lieb lemma, we deduce
\[\langle J_{\delta}^{\prime}(u_{n}^{2}),\phi\rangle=\langle J_{\delta}^{\prime}(u_ {n}),\phi\rangle-\langle J_{\delta}^{\prime}(u_{0}),\phi\rangle-\langle J_{ \infty,\delta}^{\prime}(w_{1}),\phi(\cdot+x_{n}^{1})\rangle+o(1)=o(1) \tag{89}\]
and
\[\|\bar{u}_{n}^{1}-w_{1}\|_{1,p,\mathbb{R}^{N}}^{p} =\|\bar{u}_{n}^{1}\|_{1,p,\mathbb{R}^{N}}^{p}-\|w_{1}\|_{1,p, \mathbb{R}^{N}}^{p}+o(1)\] \[=\|u_{n}\|_{1,p,\mathbb{R}^{N}}^{p}-\|u_{0}\|_{1,p,\mathbb{R}^{N} }^{p}-\|w_{1}\|_{1,p,\mathbb{R}^{N}}^{p}+o(1)\] that is, \[\|u_{n}-u_{0}-w_{1}(\cdot-x_{n}^{1})\|_{1,p,\mathbb{R}^{N}}^{p} =\|u_{n}\|_{1,p,\mathbb{R}^{N}}^{p}-\|u_{0}\|_{1,p,\mathbb{R}^{N}}^{p}-\|w_{1} (\cdot-x_{n}^{1})\|_{1,p,\mathbb{R}^{N}}^{p}+o(1).\] Since \(u_{n}^{2}\rightharpoonup 0\), one has \(\langle J_{\delta}^{\prime}(u_{n}^{2}),\phi(\cdot-x_{n}^{2})\rangle\to 0\). Consequently, \(J_{\infty,\delta}^{\prime}(w_{2})=0\). Using (86) and (88), we have \[J_{\delta}(u_{n})=J_{\delta}(u_{0})+J_{\infty,\delta}(u_{n}^{1})+o(1)=J_{ \delta}(u_{0})+J_{\infty,\delta}(w_{1})+J_{\infty,\delta}(u_{n}^{2})+o(1). \tag{90}\] Iterating this process \(k\) times, we obtain \((k-1)\) sequences \(\{x_{n}^{j}\}\subset\mathbb{R}^{N}\) for \(j=1,2,\ldots,(k-1)\) and \((k-1)\) critical points \(w_{1},w_{2},\ldots,w_{k-1}\) of \(J_{\infty,\delta}\) such that \[\begin{split}\|u_{n}-u_{0}-\sum_{i=1}^{k-1}w_{i}(\cdot-x_{n}^{i}) \|_{1,p,\mathbb{R}^{N}}^{p}=\|u_{n}\|_{1,p,\mathbb{R}^{N}}^{p}-\|u_{0}\|_{1,p, \mathbb{R}^{N}}^{p}-\sum_{i=1}^{k-1}\|w_{i}(\cdot-x_{n}^{i})\|_{1,p,\mathbb{R }^{N}}^{p}+o(1)\\ J_{\delta}(u_{n})\to J_{\delta}(u_{0})+\sum_{i=1}^{k-1}J_{ \infty,\delta}(w_{i})+J_{\infty,\delta}(u_{n}^{k})\end{split} \tag{91}\] where \(u_{n}^{k}:=u_{n}-u_{0}-\sum_{i=1}^{k-1}w_{i}(\cdot-x_{n}^{i})\rightharpoonup 0\) in \(X\).
Step 5. Since \(J_{\infty,\delta}^{\prime}(w_{i})=0\), by property (ii) in Lemma 2.8 there exists a constant \(C>0\) such that \(\|w_{i}\|\geq C\). Using this fact together with (91), we can conclude that the iteration stops after some finite index \(k\in\mathbb{N}\).
**Lemma 2.14**.: _Suppose that \((v_{1})\) and \((v_{2})\) hold, \(N\geq 3\), \((\alpha,p)\in D_{N}\) and \(2\alpha p-1\leq q<2\alpha p^{*}-1\). For \(\delta\in I\), let \(\{v_{n}\}\) be a bounded \((PS)_{C_{\delta}}\) sequence for \(J_{\delta}\). Then there exists \(v_{\delta}\in X\) such that \(J_{\delta}^{\prime}(v_{\delta})=0\) and \(J_{\delta}(v_{\delta})=C_{\delta}\), where \(C_{\delta}\) is defined by (67)._
Proof.: By using lemma 2.13, there exist \(v_{\delta}\in X\), \(k\in\mathbb{N}\cup\{0\}\) and \(\{w_{1},w_{2},...w_{k}\}\subset X\) such that
1. \(v_{n}\rightharpoonup v_{\delta}\), \(J_{\delta}^{\prime}(v_{\delta})=0\) and \(J_{\delta}(v_{\delta})\geq 0\).
2. \(w_{i}\not\equiv 0\) and \(J_{\infty,\delta}^{\prime}(w_{i})=0\), for \(1\leq i\leq k\).
3. \(J_{\delta}(v_{n})\to J_{\delta}(v_{\delta})+\sum_{i=1}^{k}J_{\infty,\delta}(w_{ i})\).
Clearly, \(J_{\infty,\delta}(w_{i})\geq m_{\infty,\delta}\). If \(k\neq 0\) then \(C_{\delta}\geq m_{\infty,\delta}\), which contradicts the fact that \(C_{\delta}<m_{\infty,\delta}\). Hence, \(k=0\). By using lemma 2.13, we have \(v_{n}\to v_{\delta}\) in \(X\) and \(J_{\delta}(v_{\delta})=C_{\delta}\).
**Corollary 2.15**.: _Suppose all the assumptions of lemma 2.14 are satisfied. Then for almost every \(\delta\in I\), there exists \(v_{\delta}\in X\) such that \(J_{\delta}(v_{\delta})=C_{\delta}\) and \(J^{\prime}_{\delta}(v_{\delta})=0\)._
Proof.: Theorem 1.11 ensures that for almost every \(\delta\in I\), \(J_{\delta}\) has a bounded \((PS)_{C_{\delta}}\) sequence. Hence, by using the above lemma, we get the result.
## 3 Proof of Theorem 0.5
Let \(l=\inf\{J(v):v\in M\}\ (>0)\) and \(\{v_{n}\}\subset M\) be a minimizing sequence. Also,
\[NJ(v_{n})=NJ(v_{n})-P(v_{n})=\int H(Dv_{n})^{p}dx\]
So, \(\{H(Dv_{n})\}\) is bounded in \(L^{p}(\mathbb{R}^{N})\). We will prove that \(\{v_{n}\}\) is bounded in \(X\). From (55), we have
\[\int|f(v_{n})|^{q+1}dx \leq C[\int|f(v_{n})|^{p}dx]^{\frac{\gamma(q+1)}{p}}[\int H(Dv_{n} )^{p}dx]^{\frac{p^{*}}{p}(1-\frac{\gamma(q+1)}{p})}\] \[\leq\epsilon\int|f(v_{n})|^{p}+C_{\epsilon}(\int H(Dv_{n})^{p}dx )^{\frac{p^{*}}{p}} \tag{92}\]
where \(\epsilon>0\) and \(\gamma=\frac{p(2\alpha p^{*}-q-1)}{(q+1)(2\alpha p^{*}-p)}\). Now, by using the Pohozaev identity and (92), we get
\[\frac{N}{p}\int V|f(v_{n})|^{p}dx =\frac{\lambda N}{q+1}\int|f(v_{n})|^{q+1}dx-\frac{N-p}{p}\int H(Dv_{n})^ {p}dx\] \[\leq\frac{\lambda N\epsilon}{q+1}\int|f(v_{n})|^{p}+\frac{\lambda NC_{ \epsilon}}{q+1}(\int H(Dv_{n})^{p}dx)^{\frac{p^{*}}{p}}-\frac{N-p}{p}\int H(Dv_{n} )^{p}dx.\]
Choose \(\epsilon=\frac{V(q+1)}{2p\lambda}\); then, using the fact that \(\{H(Dv_{n})\}\) is bounded in \(L^{p}(\mathbb{R}^{N})\), we can conclude that \(\{\int V|f(v_{n})|^{p}dx\}\) is bounded. Finally, (13) ensures the boundedness of \(\{v_{n}\}\) in \(X\). By lemma 1.6, up to a subsequence, \(v_{n}\rightharpoonup v\) in \(X\) and \(f(v_{n})\to f(v)\) in \(L^{q+1}(\mathbb{R}^{N})\). We will now show that \(v\in M\) and \(l=J(v)\). Now,
\[P(v_{n})=\frac{N-p}{p}\int H(Dv_{n})^{p}dx+\frac{N}{p}\int V|f(v_{n})|^{p}dx- \frac{\lambda N}{q+1}\int|f(v_{n})|^{q+1}dx=0. \tag{93}\]
For simplicity, let
1. \(a_{n}=\int H(Dv_{n})^{p}\ dx\), \(a=\lim_{n\to\infty}a_{n}\) and \(\bar{a}=\int H(Dv)^{p}\ dx\).
2. \(b_{n}=\int V|f(v_{n})|^{p}\ dx\), \(b=\lim_{n\to\infty}b_{n}\) and \(\bar{b}=\int V|f(v)|^{p}\ dx\).
3. \(c_{n}=\int|f(v_{n})|^{q+1}\ dx\), \(c=\lim_{n\to\infty}c_{n}\) and \(\bar{c}=\int|f(v)|^{q+1}\ dx\).
Clearly, \(\bar{a}\leq a\), \(\bar{b}\leq b\) and \(c=\bar{c}\). Our claim is \(a=\bar{a}\) and \(b=\bar{b}\). For the time being, let us assume that the claim is true. Now,
\[P(v)=\frac{N-p}{p}\int H(Dv)^{p}dx+\frac{N}{p}\int V|f(v)|^{p}dx-\frac{ \lambda N}{q+1}\int|f(v)|^{q+1}dx=\lim_{n\to\infty}P(v_{n})=0 \tag{94}\]
\[J(v)=\lim_{n\to\infty}J(v_{n})=l>0 \tag{95}\]
So, \(v\in X_{r}\setminus\{0\}\). Hence, \(v\in M\) and \(J(v)=\inf_{u\in M}J(u)\). Moreover, by (iv) in lemma 2.8, we have \(J^{\prime}(v)=0\). Without loss of generality, we can assume \(v\) is non-negative, and [10, Proposition 4.3] ensures \(v\in C^{1}(\mathbb{R}^{N})\). The function \(u=f(v)\in C^{1}(\mathbb{R}^{N})\) is a non-trivial non-negative bounded ground state solution of (1).
Now, we will prove our claim. If the claim is not true then \(\bar{a}+\bar{b}<a+b\). Consider the following equations
\[\frac{1}{p}a+\frac{1}{p}b-\frac{\lambda}{q+1}c=l\]
and
\[\frac{N-p}{p}a+\frac{N}{p}b-\frac{\lambda N}{q+1}c=0\]
Clearly, \(c\neq 0\). Define two functions \(g_{1},g_{2}:(0,\infty)\to\mathbb{R}\) by
\[g_{1}(t)=\frac{1}{p}\bar{a}t^{N-p}+\frac{1}{p}\bar{b}t^{N}-\frac{\lambda}{q+1} \bar{c}t^{N}\]
and
\[g_{2}(t)=\frac{1}{p}at^{N-p}+\frac{1}{p}bt^{N}-\frac{\lambda}{q+1}ct^{N}\]
It is clear that \(g_{1}(t)<g_{2}(t)\), for all \(t>0\). Also, \(g_{2}^{\prime}(1)=0\) and \(g_{2}(1)=l\). Hence there exists \(t_{0}>0\) such that \(g_{1}(t_{0})=\max_{t>0}g_{1}(t)<l\). Now, consider the function \(v_{t_{0}}(x)=v(\frac{x}{t_{0}})\), which satisfies \(J(v_{t_{0}})=g_{1}(t_{0})<l\) and \(P(v_{t_{0}})=t_{0}g_{1}^{\prime}(t_{0})=0\). Hence, \(v_{t_{0}}\in M\) and \(J(v_{t_{0}})<l\), which is a contradiction.
## 4 Proof of Theorem 0.6
Now we are ready to prove our main theorem which we split into two steps:
1. In this step, our aim is to show the existence of a non-trivial critical point of the functional \(J\). By corollary 2.15, we are allowed to choose a sequence \(\delta_{n}\nearrow 1\) such that for any \(n\geq 1\), there exists \(v_{n}\in X\setminus\{0\}\) satisfying \[J_{\delta_{n}}(v_{n})=\frac{1}{p}\int[H(Dv_{n})^{p}+V(x)|f(v_{n})|^{p}]dx-\frac{\lambda\delta_{n}}{q+1}\int|f(v_{n})|^{q+1}dx=C_{\delta_{n}}\] (96) and \[J^{\prime}_{\delta_{n}}(v_{n})=0.\] By using lemma 2.1 and \(\langle J^{\prime}_{\delta_{n}}(v_{n}),\frac{f(v_{n})}{f^{\prime}(v_{n})} \rangle=0\), we deduce \[\int H(Dv_{n})^{p}(2\alpha-F(v_{n}))dx+\int V(x)|f(v_{n})|^{p}dx-\lambda\delta _{n}\int|f(v_{n})|^{q+1}dx=0\] (97)
where \(F(v_{n})=\frac{2\alpha-1}{1+(2\alpha)^{p-1}\left|f(v_{n})\right|^{p(2\alpha-1)}}\). Moreover, \(v_{n}\) satisfies the following Pohozaev identity,
\[\frac{N-p}{p}\int H(Dv_{n})^{p}dx+\frac{N}{p}\int V(x)|f(v_{n})|^{p }dx +\frac{1}{p}\int\langle\nabla V(x),x\rangle|f(v_{n})|^{p}dx\] \[-\frac{N\lambda\delta_{n}}{q+1}\int|f(v_{n})|^{q+1}dx=0. \tag{98}\]
Multiplying (96) and (98) by \(N\) and \(r=\frac{q-2\alpha p+1}{2\alpha p}\), respectively, and then adding the results, we get
\[[\frac{N}{p}+\frac{r(N-p)}{p}]\int H(Dv_{n})^{p}dx+\frac{N}{p} \int V(x)|f(v_{n})|^{p}dx+\frac{r}{p}\int[NV(x)+\langle\nabla V(x),x\rangle]|f( v_{n})|^{p}dx\] \[=\frac{N\lambda\delta_{n}}{2\alpha p}\int|f(v_{n})|^{q+1}dx+NC_{ \delta_{n}} \tag{99}\]
From (97) and (99), we deduce
\[\frac{r(N-p)}{p}\int H(Dv_{n})^{p}dx+\frac{N(2\alpha-1)}{2\alpha p }\int V(x)|f(v_{n})|^{p}dx+\frac{r}{p}\int[NV(x)+\langle\nabla V(x),x\rangle]|f (v_{n})|^{p}dx\] \[+\frac{N}{2\alpha p}\int H(Dv_{n})^{p}F(v_{n})dx=NC_{\delta_{n}} \tag{100}\]
Since \(V\) satisfies \((v_{2})\), \((\alpha,p)\in D_{N}\), and \(\{C_{\delta_{n}}\}\) is bounded, (100) ensures the boundedness of
\[\{\int H(Dv_{n})^{p}dx+\int V(x)|f(v_{n})|^{p}dx\}_{n}.\]
Hence \(\{v_{n}\}\) is bounded in \(X\). Now,
\[J(v_{n})=J_{\delta_{n}}(v_{n})+\frac{\lambda(\delta_{n}-1)}{q+1}\int|f(v_{n})| ^{q+1}dx=C_{\delta_{n}}+\frac{\lambda(\delta_{n}-1)}{q+1}\int|f(v_{n})|^{q+1}dx \tag{101}\]
and,
\[\langle J^{\prime}(v_{n}),w\rangle =\langle J^{\prime}_{\delta_{n}}(v_{n}),w\rangle+\frac{\lambda(\delta_{n}- 1)}{q+1}\int|f(v_{n})|^{q-1}f(v_{n})f^{\prime}(v_{n})wdx\] \[=\frac{\lambda(\delta_{n}-1)}{q+1}\int|f(v_{n})|^{q-1}f(v_{n})f^{ \prime}(v_{n})w\ dx\]
that is,
\[J^{\prime}(v_{n})=J^{\prime}_{\delta_{n}}(v_{n})+\frac{\lambda(\delta_{n}-1)}{ q+1}g(v_{n}) \tag{102}\]
where \(g(v_{n})=|f(v_{n})|^{q-1}f(v_{n})f^{\prime}(v_{n})\in X^{\prime}\). Since \(\{v_{n}\}\) is bounded in \(X\), by the Banach-Steinhaus theorem, \(\{g(v_{n})\}\) is bounded in \(X^{\prime}\). Using (101), (102), and the left continuity of the map \(\delta\to C_{\delta}\), we obtain
\[J(v_{n})\to C_{1}\text{ as }n\rightarrow\infty.\]
and,
\[J^{\prime}(v_{n})\to 0\text{ as }n\rightarrow\infty.\]
Hence \(\{v_{n}\}\) is a bounded \((PS)_{C_{1}}\) sequence for \(J\). By the lemma 2.14, there exists \(\bar{v}\in X\) such that \(J(\bar{v})=C_{1}\) and \(J^{\prime}(\bar{v})=0\).
2. Let \(E=\{v\in X\setminus\{0\}:J^{\prime}(v)=0\}\) and \(S=\inf_{v\in E}J(v)\). Clearly, \(E\) is nonempty and \(0\leq S\leq C_{1}<m_{\infty,1}\). Let \(\{v_{n}\}\subset E\) be a minimizing sequence. Therefore, \(J(v_{n})\to S\) as \(n\to\infty\) and \(J^{\prime}(v_{n})=0\), for all \(n\in\mathbb{N}\). Using a similar argument as **Step I**, we can prove that \(\{v_{n}\}\) is a bounded \((PS)_{S}\) sequence for \(J\). Using the argument introduced in the proof of lemma 2.12, there exists \(v_{0}\in X\) such that \(v_{n}\to v_{0}\) and \(J(v_{0})=S\). Without loss of generality, we can assume \(v_{0}\) is nonnegative. We want to prove \(v_{0}\not\equiv 0\). Define a map \(T:X\to\mathbb{R}\) as \[T(v):=\int[H(Dv)^{p}+V(x)|f(v)|^{p}]dx,\] which is continuous. If \(v_{n}\to 0\) in \(X\) then \(T(v_{n})\to 0\). Now, \[\begin{split}\langle J^{\prime}(v_{n}),\frac{f(v_{n})}{f^{\prime }(v_{n})}\rangle&=\int H(Dv_{n})^{p}(1+G(v_{n}))dx+\int V(x)|f(v_{ n})|^{p}dx-\lambda\int|f(v_{n})|^{q+1}dx\\ &\geq\int H(Dv_{n})^{p}dx+\int V(x)|f(v_{n})|^{p}dx-\lambda\int|f (v_{n})|^{q+1}dx\end{split} \tag{103}\] where \(G(v_{n})=\frac{(2\alpha-1)(2\alpha)^{p-1}|f(v_{n})|^{p(2\alpha-1)}}{1+(2 \alpha)^{p-1}|f(v_{n})|^{p(2\alpha-1)}}\geq 0\). Let \[v_{n}\in S(\beta_{n})=\{v\in X:\int[H(Dv)^{p}+V(x)|f(v)|^{p}]dx=\beta_{n}^{p}\}\] Using (55), (103) and \(J^{\prime}(v_{n})=0\), we deduce \[\begin{split}\beta_{n}^{p}=\int H(Dv_{n})^{p}dx+\int V(x)|f(v_{n })|^{p}dx&\leq\langle J^{\prime}(v_{n}),\frac{f(v_{n})}{f^{ \prime}(v_{n})}\rangle+\lambda\int|f(v_{n})|^{q+1}dx\\ &\leq C\beta_{n}^{m},\text{ where }m>p\end{split}\] Hence, the sequence \(\{\beta_{n}\}\) is bounded below by some positive constant, which contradicts the fact that \(T(v_{n})\to 0\). Hence \(v_{0}\not\equiv 0\), and \(\bar{v}=|v_{0}|\) is a non-trivial non-negative ground state solution of (27). By using lemma 1.12 and [10, Proposition 4.3], \(\bar{v}\) is bounded and belongs to \(C^{1}(\mathbb{R}^{N})\). Thus, \(u_{0}=f(\bar{v})\in C^{1}(\mathbb{R}^{N})\) is a non-trivial non-negative bounded ground state solution of (1).
## 5 Acknowledgement
We would like to thank Prof. Adimurthi for his invaluable advice and assistance. The first author was supported by MATRICS project no MTR/2020/000594.
|
2309.12235 | Distributed Conjugate Gradient Method via Conjugate Direction Tracking | We present a distributed conjugate gradient method for distributed
optimization problems, where each agent computes an optimal solution of the
problem locally without any central computation or coordination, while
communicating with its immediate, one-hop neighbors over a communication
network. Each agent updates its local problem variable using an estimate of the
average conjugate direction across the network, computed via a dynamic
consensus approach. Our algorithm enables the agents to use uncoordinated
step-sizes. We prove convergence of the local variable of each agent to the
optimal solution of the aggregate optimization problem, without requiring
decreasing step-sizes. In addition, we demonstrate the efficacy of our
algorithm in distributed state estimation problems, and its robust
counterparts, where we show its performance compared to existing distributed
first-order optimization methods. | Ola Shorinwa, Mac Schwager | 2023-09-21T16:30:22Z | http://arxiv.org/abs/2309.12235v2 | # Distributed Conjugate Gradient Method via Conjugate Direction Tracking
###### Abstract
We present a distributed conjugate gradient method for distributed optimization problems, where each agent computes an optimal solution of the problem locally _without_ any central computation or coordination, while communicating with its immediate, one-hop neighbors over a communication network. Each agent updates its local problem variable using an estimate of the _average conjugate direction_ across the network, computed via a dynamic consensus approach. Our algorithm enables the agents to use _uncoordinated_ step-sizes. We prove convergence of the local variable of each agent to the optimal solution of the aggregate optimization problem, without requiring decreasing step-sizes. In addition, we demonstrate the efficacy of our algorithm in distributed state estimation problems, and its robust counterparts, where we show its performance compared to existing distributed first-order optimization methods.
## I Introduction
A variety of problems in many disciplines can be formulated as distributed optimization problems, where a group of agents seek to compute the optimal estimate, action, or control that minimizes (or maximizes) a specified objective function. Examples of such problems include distributed target tracking [1, 2, 3, 4], pose/state/signal estimation in sensor/robotic networks [5, 6, 7]; machine learning and statistical modeling [8, 9, 10]; process control [11, 12, 13]; and multi-agent planning and control [14, 15, 16]. In these problems, the data is collected and stored locally by each agent, with the additional constraint that no individual agent has access to all the problem data across the network. In many situations, the limited availability of communication and data storage resources, in addition to privacy regulations, preclude the aggregation of the problem data at a central location or node, effectively rendering _centralized optimization_ methods infeasible.
_Distributed optimization_ enables each agent to compute an optimal solution via local computation procedures while communicating with its neighbors over a communication network. In essence, via distributed optimization, each agent _collaborates_ with its neighbors to compute an optimal solution without access to the aggregate problem data. Some distributed optimization methods require a central coordinator for execution or coordination of some of the update procedures. These methods are often used in machine learning for parallel processing on a cluster of computing nodes, especially in problems involving large datasets. In contrast, in this work, we focus on _fully-distributed_ algorithms that do not require a central node for coordination or computation.
We derive a distributed conjugate gradient algorithm, termed DC-Grad, for distributed optimization problems. In our algorithm, each agent utilizes _first-order_ information (i.e., gradients) of its local objective function to compute its local conjugate directions for updating its local estimate of the solution of the optimization problem at each iteration, and communicates with its one-hop neighbors over a point-to-point communication network. Each agent does not share its local problem data, including its objective function and gradients, with other agents, preserving the _privacy_ of the agents. For simplicity of exposition, we limit our analysis to distributed optimization problems with smooth, convex objective functions. We prove convergence of the local problem variables of all agents to the optimal solution of the aggregate optimization problem.
We examine the performance of our distributed algorithm in comparison to notable existing distributed optimization methods in distributed state estimation and robust-state-estimation problems. In both problems, we show that our algorithm converges with the least communication overhead in densely-connected communication networks, with some additional computation overhead in comparison to the best-competing distributed algorithm DIGing-ATC. On sparsely-connected graphs, our algorithm performs similarly to other first-order distributed optimization methods.
## II Related Work
Distributed optimization methods have received significant attention, with many such methods developed from their centralized counterparts. Distributed first-order methods leverage the local gradients (i.e., first-order information) of each agent to iteratively improve each agent's local estimate of the optimal solution of the optimization problem, bearing similarities to centralized first-order methods such as gradient descent. Distributed incremental (sub)gradient methods require a central node that receives the local gradient information from each agent and performs the associated update step [17]. As such, these methods require a hub-spoke communication model -- where all the agents are connected to the central node (hub) -- or a ring communication model (a cyclic network), which is quite restrictive.
Distributed (sub)gradient methods circumvent this limitation, enabling distributed optimization over arbitrary network topologies. At each iteration, each agent exchanges its local iterates and other auxiliary variables (such as estimates of the
average gradient of the joint (global) objective function) with other neighboring agents. In distributed (sub)gradient methods, each agent recursively updates its local estimate using its local (sub)gradient and _mixes_ its estimates with the estimates of its neighbors via a convex combination, where the _mixing_ step is achieved via average consensus [18, 19] or the push-sum technique [20, 21]. Generally, distributed (sub)gradient methods require a diminishing step-size for convergence to the optimal solution in convex problems [22, 23], which typically slows down convergence. With a constant step-size, these methods converge to a neighborhood of the optimal solution.
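To make this update concrete, the following minimal sketch (our illustration, not code from the cited works; it assumes a doubly-stochastic mixing matrix `W` compatible with the graph and callable local gradients) implements the distributed gradient descent iteration with a diminishing step-size:

```python
import numpy as np

def dgd(W, local_grads, x0, iters=1000):
    """Distributed (sub)gradient method: mix with neighbors, then descend locally.

    W           : (N, N) doubly-stochastic mixing matrix; W[i, j] = 0 if agent j
                  is not a neighbor of agent i.
    local_grads : list of N callables; local_grads[i](x) returns the gradient of f_i at x.
    x0          : (N, n) array; row i is agent i's initial estimate.
    """
    x = x0.astype(float).copy()
    for k in range(iters):
        alpha = 1.0 / np.sqrt(k + 1)  # diminishing step-size, needed for exact convergence
        g = np.stack([local_grads[i](x[i]) for i in range(len(x))])
        x = W @ x - alpha * g         # consensus (mixing) step plus local gradient step
    return x
```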
Distributed gradient-tracking methods were developed to eliminate the need for diminishing step-sizes [24, 25, 26]. In these methods, in addition to the local estimate of the optimal solution, each agent maintains an estimate of the _average gradient_ of the objective function and updates its local estimate of the optimal solution by taking a descent step in the direction of the estimated average gradient. Distributed gradient-tracking methods provide faster convergence guarantees with constant step-sizes. Further, diffusion-based distributed algorithms [27, 28, 29] converge to the optimal solution with constant step-sizes. Distributed first-order methods have been derived for undirected [24, 30] and directed [31, 32] networks, as well as static [33, 34] and time-varying [26] networks. In addition, acceleration schemes such as momentum and Nesterov acceleration have been applied to distributed first-order methods [35, 36, 37].
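For contrast with the diminishing-step-size scheme above, a gradient-tracking iteration of the DIGing type can be sketched as follows (again our illustrative code under the same assumptions, with a constant step-size `alpha` assumed sufficiently small): each agent maintains a tracker of the average gradient and descends along it.

```python
import numpy as np

def gradient_tracking(W, local_grads, x0, alpha=0.05, iters=1000):
    """DIGing-style gradient tracking with a constant step-size."""
    N = len(x0)
    x = x0.astype(float).copy()
    g = np.stack([local_grads[i](x[i]) for i in range(N)])
    y = g.copy()                       # y_i tracks the network-average gradient
    for _ in range(iters):
        x_new = W @ x - alpha * y      # descend along the tracked average gradient
        g_new = np.stack([local_grads[i](x_new[i]) for i in range(N)])
        y = W @ y + g_new - g          # dynamic-consensus update of the tracker
        x, g = x_new, g_new
    return x
```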
Distributed methods that leverage higher-order information have been developed for distributed optimization, including distributed quasi-Newton methods that approximate the inverse Hessian of the objective function [38, 39, 40]. Further, the alternating direction method of multipliers (ADMM) is amenable to consensus optimization problems. In ADMM, each agent maintains an estimate of the optimal solution in addition to dual variables associated with the consensus constraints between agents. However, ADMM, in its original form, requires a central node for computation of the dual update procedure. Fully-distributed variants of ADMM have been developed, addressing this limitation [41, 42, 7, 43]. In general, ADMM methods are amenable to static, undirected communication networks.
The conjugate gradient (CG) method was originally developed for computing the solution of a linear system of equations (i.e., \(Ax=b\)), where the matrix \(A\in\mathbb{R}^{n\times n}\) is square, symmetric, and positive-definite [44, 45]. More generally, the method applies to strongly-convex quadratic programming problems, where the conjugate gradient method is guaranteed to compute the optimal solution in at most \(n\) iterations, in the absence of roundoff errors. The conjugate gradient method has been extended to nonlinear optimization problems (which includes non-quadratic problems) [46, 47]. In general, the conjugate gradient method provides faster convergence compared to gradient descent methods [48, 49, 50]. Variants of the conjugate gradient method for parallel execution on multiple computing nodes (processors) have been developed [51, 52, 53, 54, 55]. These methods decompose the data matrix associated with the linear system of equations into individual components assigned to each processor, enabling parallelization of the matrix-vector operations arising in the conjugate gradient method, which constitute the major computational bottleneck in the CG method. However, these methods are only amenable to problems with hub-spoke communication models or all-to-all communication models. Some other distributed CG methods eliminate the need for a hub-spoke communication model [56], but, however, require a ring communication model, which does not support parallel execution of the update procedures, ultimately degrading the computational speed of the algorithm. The distributed variant [57] allows for more general communication networks. Nonetheless, these CG methods are limited to solving a linear system of equations and do not consider a more general optimization problem. Few distributed CG methods for nonlinear optimization problems exist. The work in [58] derives a distributed CG method for online optimization problems where mixing of information is achieved using the average consensus scheme. Like distributed (sub)gradient methods, this algorithm requires a diminishing step-size for convergence to the optimal solution, converging to a neighborhood of the optimal solution if a constant step-size is used.
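To ground the discussion, a minimal centralized conjugate gradient routine for \(Ax=b\) with a symmetric positive-definite \(A\) reads as follows (this is the textbook method referenced above, not the distributed algorithm developed in this paper):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10):
    """Textbook CG for Ax = b with A symmetric positive-definite."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # residual; also the negative gradient of the quadratic
    d = r.copy()                   # first search direction
    rs = r @ r
    for _ in range(len(b)):        # at most n iterations in exact arithmetic
        Ad = A @ d
        alpha = rs / (d @ Ad)      # exact line search along d
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d  # new direction, A-conjugate to the previous ones
        rs = rs_new
    return x
```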
In this paper, we derive a distributed conjugate gradient method for a more general class of optimization problems and prove convergence of the algorithm to the optimal solution in convex problems with Lipschitz-continuous gradients. Moreover, we note that, in our algorithm, each agent can use uncoordinated constant step-sizes.
## III Notation and Preliminaries
In this paper, we denote the gradient of a function \(f\) by \(\nabla f\) and \(g\), interchangeably. We denote the all-ones vector as \(\mathbf{1}_{n}\in\mathbb{R}^{n}\). We represent the inner-product of two matrices \(A\in\mathbb{R}^{m\times n}\) and \(B\in\mathbb{R}^{m\times n}\) as \(\langle A,B\rangle=\operatorname{trace}\big{(}A^{\mathsf{T}}B\big{)}\). We denote the standard scalar-vector product, matrix-vector product, and matrix-matrix product (composition) as \(A\cdot B\), depending on the mathematical context. For a given matrix \(A\in\mathbb{R}^{m\times n}\), we denote its spectral norm as \(\rho(A)=\left\|A\right\|_{2}\). Further, we denote its Frobenius norm by \(\left\|A\right\|_{F}\). Likewise, we define the mean of a matrix \(B\in\mathbb{R}^{N\times n}\), computed across its rows, as \(\overline{B}=\frac{1}{N}\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}B\in\mathbb{ R}^{N\times n}\), where each row of \(\overline{B}\) is the same. In addition, we define the consensus violation between the matrix \(B\in\mathbb{R}^{N\times n}\) and its mean \(\overline{B}\in\mathbb{R}^{N\times n}\) as \(\tilde{B}=B-\overline{B}\). We denote the domain of a function \(f\) as \(\operatorname{dom}(f)\), the non-negative orthant as \(\mathbb{R}_{+}\), and the strictly-positive orthant as \(\mathbb{R}_{++}\).
We introduce the following definitions that will be relevant to our discussion.
**Definition 1** (Conjugacy).: _Two vectors \(a,b\in\mathbb{R}^{n}\) are conjugate with respect to a symmetric positive-definite matrix \(C\in\mathbb{R}^{n\times n}\) if:_
\[a^{\mathsf{T}}Cb=\langle a,Cb\rangle=\langle Ca,b\rangle=0. \tag{1}\]
**Definition 2** (Convex Function).: _A function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is convex if for all \(x,y\in\operatorname{dom}(f)\) and all \(\zeta\in[0,1]\):_
\[f(\zeta x+(1-\zeta)y)\leq\zeta f(x)+(1-\zeta)f(y), \tag{2}\]
_and the domain of \(f\), \(\operatorname{dom}\left(f\right)\subseteq\mathbb{R}^{n}\), is convex._
**Definition 3** (Smoothness).: _A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is \(L\)-smooth if it is continuously differentiable over its domain and its gradients are \(L\)-Lipschitz continuous, i.e.:_
\[\left\|\nabla f(x)-\nabla f(y)\right\|_{2}\leq L\left\|x-y\right\|_{2},\ \forall x,y\in \operatorname{dom}\left(f\right), \tag{3}\]
_where \(L\in\mathbb{R}_{++}\) is the Lipschitz constant._
**Definition 4** (Coercive Function).: _A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is coercive if \(f(x)\rightarrow\infty\) as \(\left\|x\right\|_{2}\rightarrow\infty\)._
We represent the agents as nodes in an undirected, connected communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,\ldots,N\}\) denotes the set of vertices, representing the agents, and \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) represents the set of edges. An edge \((i,j)\) exists in \(\mathcal{E}\) if agents \(i\) and \(j\) share a communication link. Moreover, we denote the set of neighbors of agent \(i\) as \(\mathcal{N}_{i}\). We associate a _mixing matrix_\(W\in\mathbb{R}^{N\times N}\) with the underlying communication graph. A mixing matrix \(W\) is compatible with \(\mathcal{G}\) if \(w_{ij}=0\), \(\forall j\notin\mathcal{N}_{i}\), \(\forall i\in\mathcal{V}\). We denote the _degree_ of agent \(i\) as \(\deg(i)=\left|\mathcal{N}_{i}\right|\), representing the number of neighbors of agent \(i\), and the _adjacency matrix_ associated with \(\mathcal{G}\) as \(\mathcal{A}\in\mathbb{R}^{N\times N}\), where \(\mathcal{A}_{ij}=1\) if and only if \(j\in\mathcal{N}_{i}\), \(\forall i\in\mathcal{V}\). In addition, we denote the _graph Laplacian_ of \(\mathcal{G}\) as \(L=\operatorname{diag}(\deg(1),\ldots,\deg(N))-\mathcal{A}\).
## IV Problem Formulation And The Centralized Conjugate Gradient Method
We consider the distributed optimization problem:
\[\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\ \frac{1}{N}\sum_{i=1}^{N}f_{i}(x), \tag{4}\]
over \(N\) agents, where \(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) denotes the local objective function of agent \(i\) and \(x\in\mathbb{R}^{n}\) denotes the optimization variable. The objective function of the optimization problem (4) is a sum of \(N\) local components, making it _separable_, with each component associated with an agent. We assume that agent \(i\) only knows its local objective function \(f_{i}\) and has no knowledge of the objective functions of the other agents.
We begin with a description of the centralized nonlinear conjugate gradient method, before deriving our method in Section V. The nonlinear conjugate gradient method (a generalization of the conjugate gradient method to optimization problems beyond quadratic programs) is an iterative first-order optimization algorithm that utilizes the gradient of the objective function to generate iterates from the recurrence:
\[x^{(k+1)}=x^{(k)}+\alpha^{(k)}\cdot s^{(k)}, \tag{5}\]
where \(x^{(k)}\in\mathbb{R}^{n}\) denotes the estimate at iteration \(k\), \(\alpha^{(k)}\in\mathbb{R}_{+}\) denotes the step-size at iteration \(k\), and \(s^{(k)}\in\mathbb{R}^{n}\) denotes the conjugate direction at iteration \(k\). In the nonlinear conjugate gradient method, the conjugate direction is initialized as the negative gradient of the objective function at the initial estimate, with \(s^{(0)}=-g^{(0)}\). Further, the conjugate directions are generated from the recurrence:
\[s^{(k+1)}=-g^{(k+1)}+\beta^{(k)}\cdot s^{(k)}, \tag{6}\]
at iteration \(k\), where \(\beta^{(k)}\in\mathbb{R}\) denotes the conjugate gradient update parameter. Different schemes have been developed for updating this parameter; we provide a few of the possible schemes:
* _Hestenes-Stiefel Scheme_[44]: \[\beta_{HS}^{(k)}=\frac{\left(g^{(k+1)}-g^{(k)}\right)^{\mathsf{T}}g^{(k+1)}}{ \left(g^{(k+1)}-g^{(k)}\right)^{\mathsf{T}}s^{(k)}}\] (7)
* _Fletcher-Reeves Scheme_[59]: \[\beta_{FR}^{(k)}=\frac{\left\|g^{(k+1)}\right\|_{2}^{2}}{\left\|g^{(k)} \right\|_{2}^{2}}\] (8)
* _Polak-Ribière Scheme_[60, 61]: \[\beta_{PR}^{(k)}=\frac{\left(g^{(k+1)}-g^{(k)}\right)^{\mathsf{T}}g^{(k+1)}}{ \left\|g^{(k)}\right\|_{2}^{2}}\] (9)
We note that the update schemes are equivalent when \(f\) is a strongly-convex quadratic function. Moreover, when \(f\) is strongly-convex and quadratic, the search directions \(\{s^{(k)}\}_{\forall k}\) are conjugate. As a result, the iterate \(x^{(k)}\) converges to the optimal solution in at most \(n\) iterations. For non-quadratic problems, the search directions lose conjugacy, and convergence may require more than \(n\) iterations. In many practical problems, the value of the update parameter \(\beta\) is selected via a hybrid scheme, obtained by combining fundamental update schemes such as those above; a simple example is \(\beta^{(k)}=\max\{0,\beta_{PR}^{(k)}\}\).
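To make the recurrences (5)-(6) and the update schemes above concrete, the following minimal Python sketch implements the centralized nonlinear conjugate gradient method with the Fletcher-Reeves update; the backtracking line search, the steepest-descent restart safeguard, and the quadratic test problem are illustrative assumptions rather than part of the schemes above.

```python
import numpy as np

def nonlinear_cg(f, grad, x0, max_iter=500, tol=1e-10):
    # Nonlinear CG: iterates follow (5), conjugate directions follow (6),
    # with the Fletcher-Reeves update (8) and an Armijo backtracking search.
    x = x0.astype(float)
    g = grad(x)
    s = -g                                   # s^(0) = -g^(0)
    for _ in range(max_iter):
        if g @ s > 0:                        # safeguard: restart with steepest descent
            s = -g
        alpha, fx = 1.0, f(x)
        while f(x + alpha * s) > fx + 1e-4 * alpha * (g @ s):
            alpha *= 0.5                     # backtrack until sufficient decrease
        x = x + alpha * s                    # update (5)
        g_new = grad(x)
        if np.linalg.norm(g_new) < tol:
            break
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves scheme (8)
        s = -g_new + beta * s                # update (6)
        g = g_new
    return x

# Illustrative strongly-convex quadratic: f(x) = 0.5 x^T A x - b^T x.
rng = np.random.default_rng(0)
Q = rng.standard_normal((10, 10))
A, b = Q.T @ Q + 10.0 * np.eye(10), rng.standard_normal(10)
x_hat = nonlinear_cg(lambda x: 0.5 * x @ A @ x - b @ x,
                     lambda x: A @ x - b, np.zeros(10))
print(np.linalg.norm(A @ x_hat - b))         # near zero at the optimum
```

On the quadratic test problem, exact line searches would recover the finite-termination property discussed above; the inexact backtracking search trades that guarantee for simplicity.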
## V Distributed Conjugate Gradient Method
In this section, we derive a distributed optimization algorithm for (4) based on the nonlinear conjugate gradient method. We assign a local copy of \(x\) to each agent, representing its local estimate of the solution of the optimization problem, with each agent computing its conjugate directions locally. Agent \(i\) maintains the variables \(x_{i}\in\mathbb{R}^{n}\) and \(s_{i}\in\mathbb{R}^{n}\), along with \(\alpha_{i}\in\mathbb{R}\) and \(\beta_{i}\in\mathbb{R}\). In addition, we denote the gradient of \(f_{i}\) at \(x_{i}\) by \(g_{i}(x_{i})\).
Before proceeding with the derivation, we introduce the following notation:
\[\mathbf{x}=\begin{bmatrix}x_{1}^{\mathsf{T}}\\ \vdots\\ x_{N}^{\mathsf{T}}\end{bmatrix},\quad\mathbf{s}=\begin{bmatrix}s_{1}^{\mathsf{T}}\\ \vdots\\ s_{N}^{\mathsf{T}}\end{bmatrix},\quad\mathbf{g}(\mathbf{x})=\begin{bmatrix}(\nabla f_{1}(x_{1}))^{\mathsf{T}}\\ \vdots\\ (\nabla f_{N}(x_{N}))^{\mathsf{T}}\end{bmatrix},\]

\(\mathbf{\alpha}=\operatorname{diag}(\alpha_{1},\ldots,\alpha_{N})\), and \(\mathbf{\beta}=\operatorname{diag}(\beta_{1},\ldots,\beta_{N})\), where these variables are obtained by stacking the local variables of each agent, with \(\mathbf{x}\in\mathbb{R}^{N\times n}\), \(\mathbf{s}\in\mathbb{R}^{N\times n}\), \(\mathbf{g}(\mathbf{x})\in\mathbb{R}^{N\times n}\), \(\mathbf{\alpha}\in\mathbb{R}^{N\times N}\), and \(\mathbf{\beta}\in\mathbb{R}^{N\times N}\). To simplify notation, we denote \(\mathbf{g}(\mathbf{x}^{(k)})\) by \(\mathbf{g}^{(k)}\). In addition, we note that all agents achieve _consensus_ when all the rows of \(\mathbf{x}\) are the same. Moreover, _optimality_ is achieved when \(\mathbf{1}_{N}^{\mathsf{T}}\mathbf{g}(\mathbf{x})=\mathbf{0}_{n}^{\mathsf{T}}\), i.e., the first-order optimality condition is satisfied. Further, we define the _aggregate_ objective function considering the local variables of each agent as:
\[\mathbf{f}(\mathbf{x})=\frac{1}{N}\sum_{i=1}^{N}f_{i}(x_{i}). \tag{10}\]
To obtain a distributed variant of the centralized conjugate gradient method, one could utilize the average consensus technique to eliminate the need for centralized procedures, yielding the distributed algorithm:
\[\mathbf{x}^{(k+1)} =W\mathbf{x}^{(k)}+\mathbf{\alpha}^{(k)}\cdot\mathbf{s}^{(k)}, \tag{11}\] \[\mathbf{s}^{(k+1)} =-\mathbf{g}^{(k+1)}+\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}, \tag{12}\]
which simplifies to:
\[x_{i}^{(k+1)} =w_{ii}x_{i}^{(k)}+\sum_{j\in\mathcal{N}_{i}}w_{ij}x_{j}^{(k)}+ \alpha_{i}^{(k)}\cdot s_{i}^{(k)}, \tag{13}\] \[s_{i}^{(k+1)} =-g_{i}^{(k+1)}+\beta_{i}^{(k)}\cdot s_{i}^{(k)}, \tag{14}\]
when expressed with respect to agent \(i\), with initialization \(x_{i}^{(0)}\in\mathbb{R}^{n}\), \(s_{i}^{(0)}=-\nabla f_{i}(x_{i}^{(0)})\), and \(\alpha_{i}^{(0)}\in\mathbb{R}_{+}\).
One can show that the above distributed algorithm does not converge to the optimal solution with a non-diminishing step-size. Here, we provide a simple proof by contradiction showing that the optimal solution \(x^{\star}\) is not a fixed point of the distributed algorithm (11): Assume that \(x^{\star}\) is a fixed point of the algorithm. With this assumption, the first two terms on the right-hand side of (13) simplify to \(x^{\star}\). Further, the conjugate update parameter \(\beta_{i}^{(k-1)}\) simplifies to zero, where we define the ratio \(\frac{0}{0}\) to be zero if the Fletcher-Reeves scheme is utilized. However, in general, the local conjugate direction of agent \(i\), denoted by \(s_{i}^{(k)}\), may not be zero, since the critical point of the joint objective function \(\frac{1}{N}\sum_{i=1}^{N}f_{i}\) may not coincide with the critical point of \(f_{i}\), i.e., \(\nabla f_{i}(x^{\star})\) may not be zero. Consequently, the last term in (13) is not zero, in general, and as a result, agent \(i\)'s iterate \(x_{i}^{(k+1)}\) deviates from \(x^{\star}\), showing that \(x^{\star}\) is not a fixed point of the distributed algorithm given by (13). This property mirrors that of distributed (sub)gradient methods, where a diminishing step-size is required for convergence.
Further, we note that the last term in (13) is zero if agent \(i\) utilizes the average conjugate direction in place of its local conjugate direction. With this modified update procedure, the optimal solution \(x^{\star}\) represents a fixed point of the resulting, albeit non-distributed, algorithm. To address this challenge, we assign an auxiliary variable \(z\) to each agent, representing an estimate of the average conjugate direction, which is updated locally using dynamic average consensus [62], yielding the _Distributed Conjugate Gradient Method_ (DC-Grad), given by:
\[\mathbf{x}^{(k+1)} =W(\mathbf{x}^{(k)}+\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}), \tag{15}\] \[\mathbf{s}^{(k+1)} =-\mathbf{g}^{(k+1)}+\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)},\] (16) \[\mathbf{z}^{(k+1)} =W(\mathbf{z}^{(k)}+\mathbf{s}^{(k+1)}-\mathbf{s}^{(k)}), \tag{17}\]
which is initialized with \(x_{i}^{(0)}\in\mathbb{R}^{n}\), \(s_{i}^{(0)}=-\nabla f_{i}(x_{i}^{(0)})\), \(z_{i}^{(0)}=s_{i}^{(0)}\), and \(\alpha_{i}^{(0)}\in\mathbb{R}_{+}\), \(\forall i\in\mathcal{V}\). Using dynamic average consensus theory, we can show that the agents reach consensus with \(z_{i}^{(\infty)}=\bar{z}^{(\infty)}=\bar{s}^{(\infty)}\), \(\forall i\in\mathcal{V}\). The resulting distributed conjugate gradient method enables each agent to compute the optimal solution of the optimization problem using _uncoordinated_, _non-diminishing_ step-sizes.
Considering the update procedures in terms of each agent, at each iteration \(k\), agent \(i\) performs the following updates:
\[x_{i}^{(k+1)} =\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}\left(x_{j}^{(k)}+ \alpha_{j}^{(k)}\cdot z_{j}^{(k)}\right), \tag{18}\] \[s_{i}^{(k+1)} =-g_{i}^{(k+1)}+\beta_{i}^{(k)}\cdot s_{i}^{(k)},\] (19) \[z_{i}^{(k+1)} =\sum_{j\in\mathcal{N}_{i}\cup\{i\}}w_{ij}\left(z_{j}^{(k)}+s_{j} ^{(k+1)}-s_{j}^{(k)}\right), \tag{20}\]
where agent \(i\) communicates:
\[u_{i}^{(k)} =x_{i}^{(k)}+\alpha_{i}^{(k)}\cdot z_{i}^{(k)}, \tag{21}\] \[v_{i}^{(k)} =z_{i}^{(k)}+s_{i}^{(k+1)}-s_{i}^{(k)} \tag{22}\]
with its neighbors.
We summarize the distributed conjugate gradient algorithm in Algorithm 1.
```
Initialization: \(x_{i}^{(0)}\in\mathbb{R}^{n}\), \(s_{i}^{(0)}=-\nabla f_{i}(x_{i}^{(0)})\), \(z_{i}^{(0)}=s_{i}^{(0)}\), and \(\alpha_{i}^{(0)}\in\mathbb{R}_{+}\), \(\forall i\in\mathcal{V}\).
do in parallel \(\forall i\in\mathcal{V}\)
    \(x_{i}^{(k+1)}\leftarrow\) Procedure (18)
    \(s_{i}^{(k+1)}\leftarrow\) Procedure (19)
    \(z_{i}^{(k+1)}\leftarrow\) Procedure (20)
    \(k\gets k+1\)
while stopping criterion is not met;
```
**Algorithm 1**Distributed Conjugate Gradient Method (DC-Grad)
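The per-agent procedures (18)-(20) admit a compact NumPy sketch in the stacked notation of (15)-(17), shown below; the synchronous loop stands in for parallel execution across agents, and the clipped Fletcher-Reeves rule for \(\beta_{i}\) is one plausible choice, assumed here since the update scheme is left open above.

```python
import numpy as np

def dc_grad(grads, W, x0, alphas, max_iter=2000, eps=1e-12):
    # DC-Grad in stacked form: row i of X, S, Z holds agent i's local
    # estimate, conjugate direction, and direction-tracking variable.
    N, n = x0.shape
    X = x0.astype(float)
    G = np.stack([grads[i](X[i]) for i in range(N)])
    S = -G                                       # s_i^(0) = -grad f_i(x_i^(0))
    Z = S.copy()                                 # z_i^(0) = s_i^(0)
    A = np.diag(alphas)                          # uncoordinated constant step-sizes
    for _ in range(max_iter):
        X = W @ (X + A @ Z)                      # update (15)
        G_new = np.stack([grads[i](X[i]) for i in range(N)])
        num = np.einsum('ij,ij->i', G_new, G_new)
        den = np.einsum('ij,ij->i', G, G) + eps
        B = np.diag(np.minimum(num / den, 1.0))  # clipped Fletcher-Reeves (assumed)
        S_new = -G_new + B @ S                   # update (16)
        Z = W @ (Z + S_new - S)                  # update (17)
        S, G = S_new, G_new
    return X
```

In a typical usage, e.g., `X = dc_grad(grads, W, np.zeros((N, n)), 0.01 * np.ones(N))`, each row of the returned `X` holds one agent's estimate, and the rows coincide at consensus.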
We present some assumptions that will be relevant in analyzing the convergence properties of our algorithm.
**Assumption 1**.: _The local objective function of each agent, \(f_{i}\), is closed, proper, and convex. Moreover, \(f_{i}\) is \(L_{i}\)-smooth, i.e., its gradient is \(L_{i}\)-Lipschitz continuous._
**Remark 1**.: _From Assumption 1, we note that the aggregate objective function \(\mathbf{f}\) is closed, proper, convex, and \(L\)-smooth, with:_
\[\left\|\nabla\mathbf{f}(x)-\nabla\mathbf{f}(y)\right\|_{2}\leq L\left\|x-y\right\|_{2}, \ \forall x,y\in\mathbb{R}^{n}, \tag{23}\]
_where \(L=\max_{i\in\mathcal{V}}\{L_{i}\}\)._
**Assumption 2**.: _The local objective function of each agent \(f_{i}\) is coercive._
**Assumption 3**.: _The optimization problem (4) has a non-empty feasible set, and further, an optimal solution \(x^{\star}\) exists for the optimization problem._
In addition, we make the following assumption on the stochastic weight matrix.
**Assumption 4**.: _The mixing matrix \(W\) associated with the communication graph \(\mathcal{G}\) satisfies:_
1. (Double-Stochasticity)__\(W\mathbf{1}=\mathbf{1}\) _and_ \(\mathbf{1}^{\mathsf{T}}W=\mathbf{1}^{\mathsf{T}},\)__
2. (Spectral Property)__\(\lambda=\rho(W-\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N})<1.\)__
Part 2 of Assumption 4 specifies that the matrix \(M=W-\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N}\) has a spectral norm less than one. This assumption is necessary and sufficient for consensus, i.e.,
\[\lim_{k\to\infty}W^{k}=\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N}. \tag{24}\]
We note that Assumption 4 is not restrictive in undirected communication networks. We provide common choices for the mixing matrix \(W\) below; a code sketch constructing and numerically checking such a matrix follows the list:
1. _Metropolis-Hastings Weights_: \[w_{ij}=\begin{cases}\frac{1}{\max\{\deg(i),\deg(j)\}+\epsilon},&\text{if }(i,j)\in\mathcal{E},\\ 0&\text{if }(i,j)\notin\mathcal{E}\text{ and }i\neq j,\\ 1-\sum_{r\in\mathcal{N}_{i}}w_{ir}&\text{if }i=j,\end{cases}\] where \(\epsilon\in\mathbb{R}_{++}\) denotes a small positive constant, e.g., \(\epsilon=1\)[63].
2. _Laplacian-based Weights_: \[W=I-\frac{L}{\tau},\] where \(L\) denotes the Laplacian matrix of \(\mathcal{G}\), and \(\tau\in\mathbb{R}\) denotes a scaling parameter with \(\tau>\frac{1}{2}\lambda_{\max}(L)\). One can choose \(\tau=\max_{i\in\mathcal{V}}\{\deg(i)\}+\epsilon\), if computing \(\lambda_{\max}(L)\) is infeasible, where \(\epsilon\in\mathbb{R}_{++}\) represents a small positive constant [64].
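As a concrete illustration of the first scheme, the following sketch constructs a Metropolis-Hastings mixing matrix from a 0/1 adjacency matrix and numerically checks Assumption 4; the ring topology is only an illustrative example.

```python
import numpy as np

def metropolis_hastings_weights(adj, eps=1.0):
    # Metropolis-Hastings weights for an undirected graph given by a
    # symmetric 0/1 adjacency matrix; eps is the small constant above.
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (max(deg[i], deg[j]) + eps)
    W += np.diag(1.0 - W.sum(axis=1))   # w_ii = 1 - sum over neighbors
    return W

# Ring of N = 6 agents; verify double-stochasticity and the spectral property.
N = 6
adj = np.zeros((N, N), dtype=int)
for i in range(N):
    adj[i, (i + 1) % N] = adj[(i + 1) % N, i] = 1
W = metropolis_hastings_weights(adj)
J = np.ones((N, N)) / N
print(np.allclose(W @ np.ones(N), np.ones(N)))   # W 1 = 1
print(np.allclose(np.ones(N) @ W, np.ones(N)))   # 1^T W = 1^T
print(np.linalg.norm(W - J, 2) < 1.0)            # spectral property: lambda < 1
```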
The aforementioned assumptions are standard in convergence analysis of distributed optimization algorithms.
## VI Convergence Analysis
We analyze the convergence properties of our distributed algorithm. Before proceeding with the analysis, we consider the following sequences:

\[\overline{\mathbf{x}}^{(k+1)}=\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N}W\left(\mathbf{x}^{(k)}+\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}\right), \tag{25}\]
\[\overline{\mathbf{x}}^{(k+1)}=\overline{\mathbf{x}}^{(k)}+\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}, \tag{26}\]
\[\overline{\mathbf{s}}^{(k+1)}=\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N}\left(-\mathbf{g}^{(k+1)}+\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}\right), \tag{27}\]
\[\overline{\mathbf{s}}^{(k+1)}=-\overline{\mathbf{g}}^{(k+1)}+\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}, \tag{28}\]
\[\overline{\mathbf{z}}^{(k+1)}=\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N}W\left(\mathbf{z}^{(k)}+\mathbf{s}^{(k+1)}-\mathbf{s}^{(k)}\right), \tag{29}\]
\[\overline{\mathbf{z}}^{(k+1)}=\overline{\mathbf{z}}^{(k)}+\overline{\mathbf{s}}^{(k+1)}-\overline{\mathbf{s}}^{(k)}, \tag{30}\]
derived from the mean of the local iterates of each agent, where we have utilized the assumption that \(W\) is column-stochastic. From (30), we note that \(\overline{\mathbf{z}}^{(k)}=\overline{\mathbf{s}}^{(k)}\), \(\forall k\), given that \(\overline{\mathbf{z}}^{(0)}=\overline{\mathbf{s}}^{(0)}\).
In addition, we introduce the following definitions: \(\alpha_{\max}=\max_{k\in\mathbb{Z}_{+}}\{\left\|\mathbf{\alpha}^{(k)}\right\|_{2}\}\); \(\beta_{\max}=\max_{k\in\mathbb{Z}_{+}}\{\left\|\mathbf{\beta}^{(k)}\right\|_{2}\}\); \(r_{\alpha}=\alpha_{\max}\max_{k\in\mathbb{Z}_{+}}\frac{1}{\left\|\overline{\mathbf{\alpha}}^{(k)}\right\|_{2}}\); \(r_{\beta}=\beta_{\max}\max_{k\in\mathbb{Z}_{+}}\frac{1}{\left\|\overline{\mathbf{\beta}}^{(k)}\right\|_{2}}\). Likewise, we define \(\overline{\mathbf{\alpha}}^{(k)}=\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N}\mathbf{\alpha}^{(k)}\), in line with the mean defined in Section III, with a similar definition for \(\overline{\mathbf{\beta}}^{(k)}\). We state the following lemma, bounding the consensus violations of the sequences \(\{\mathbf{x}^{(k)}\}_{\forall k}\), \(\{\mathbf{z}^{(k)}\}_{\forall k}\), and \(\{\mathbf{s}^{(k)}\}_{\forall k}\).
**Lemma 1**.: _If the sequences \(\{\mathbf{x}^{(k)}\}_{\forall k}\), \(\{\mathbf{s}^{(k)}\}_{\forall k}\), and \(\{\mathbf{z}^{(k)}\}_{\forall k}\) are generated by the recurrences in (15), (16), and (17), the consensus-violation sequences \(\{\tilde{\mathbf{x}}^{(k)}\}_{\forall k}\), \(\{\tilde{\mathbf{s}}^{(k)}\}_{\forall k}\), and \(\{\tilde{\mathbf{z}}^{(k)}\}_{\forall k}\) satisfy the following bounds:_
\[\left\|\tilde{\mathbf{x}}^{(k+1)}\right\|_{2} \leq\lambda\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}+\lambda\alpha_{ \max}(1+r_{\alpha})\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}\] \[\quad+\lambda r_{\alpha}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot \mathbf{z}^{(k)}}\right\|_{2}, \tag{31}\] \[\left\|\tilde{\mathbf{s}}^{(k+1)}\right\|_{2} \leq\left\|\tilde{\mathbf{g}}^{(k+1)}\right\|_{2}+\beta_{\max}(1+r_{ \beta})\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2}\] \[\quad+(1+r_{\beta})\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s} ^{(k)}}\right\|_{2},\] (32) \[\left\|\tilde{\mathbf{z}}^{(k+1)}\right\|_{2} \leq(\lambda+\lambda^{2}L\alpha_{\max}(1+r_{\alpha}))\left\|\tilde{ \mathbf{z}}^{(k)}\right\|_{2}\] \[\quad+\lambda L(\lambda+1)\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}\] \[\quad+\lambda L(\lambda r_{\alpha}+1)\left\|\overline{\mathbf{\alpha}^{ (k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}\] \[\quad+\lambda\left(\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)} \right\|_{2}+\left\|\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}\right\|_{2}\right). \tag{33}\]
Proof.: Please refer to Appendix A for the proof.
Further, all agents reach agreement on their local iterates, which we state in the following theorem.
**Theorem 1** (Agreement).: _Given the recurrences (15), (16), and (17), the local iterates of agent \(i\), \(\left(x_{i}^{(k)},s_{i}^{(k)},z_{i}^{(k)}\right)\), converge to the corresponding means, \(\forall i\in\mathcal{V}\), i.e., each agent reaches agreement with all other agents for sufficiently large \(k\). In particular:_
\[\lim_{k\to\infty}\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2}=0,\ \lim_{k\to\infty}\left\|\tilde{\mathbf{x}}^{(k)} \right\|_{2}=0,\ \lim_{k\to\infty}\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}=0. \tag{34}\]
_Further, the local iterates of each agent converge to a limit point, as \(k\to\infty\), with:_
\[\lim_{k\to\infty}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}= \lim_{k\to\infty}\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}\right\|_{2}=0. \tag{35}\]
_Moreover, the mean of the agents' direction-tracking variables and the mean of their conjugate directions converge to zero in norm, as does the average gradient evaluated at the agents' local iterates, for sufficiently large \(k\). Specifically, the following holds:_
\[\lim_{k\to\infty}\left\|\overline{\mathbf{s}}^{(k)}\right\|_{2}=0,\ \lim_{k\to\infty}\left\|\overline{\mathbf{z}}^{(k)}\right\|_{2}=0,\ \lim_{k\to\infty}\left\|\overline{\mathbf{g}}^{(k)}\right\|_{2}=0. \tag{36}\]
Proof.: We refer readers to Appendix B for the proof.
Theorem 1 indicates that the local iterates of all agents, \(\{x_{i}^{(k)}\}_{\forall i\in\mathcal{V}}\), converge to a common limit point \(x^{(\infty)}\), given by the mean \(\frac{1}{N}\sum_{i\in\mathcal{V}}x_{i}^{(k)}\), as \(k\rightarrow\infty\). Further:
\[\begin{split}\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{g}}^{(k) }\right\|_{2}&=\lim_{k\rightarrow\infty}\left\|\frac{\mathbf{1}_{ N}\mathbf{1}_{N}^{\mathsf{T}}}{N}\mathbf{g}^{(k)}\right\|_{2},\\ &=\left\|\frac{\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}}{N}\mathbf{g} (\mathbf{x}^{(\infty)})\right\|_{2},\\ &=\left\|\mathbf{1}_{N}\left(\nabla f(x^{(\infty)})\right)^{ \mathsf{T}}\right\|_{2},\\ &=\left\|\mathbf{1}_{N}\right\|_{2}\left\|\nabla f(x^{(\infty)}) \right\|_{2},\\ &=\sqrt{N}\left\|\nabla f(x^{(\infty)})\right\|_{2},\end{split} \tag{37}\]
where \(\mathbf{x}^{(\infty)}=\mathbf{1}_{N}\left(x^{(\infty)}\right)^{\mathsf{T}}\). From (36) and (37), we note that \(\left\|\nabla f(x^{(\infty)})\right\|_{2}=0\). Hence, the limit point of the distributed algorithm represents a critical point of the optimization problem (4).
**Theorem 2** (Convergence of the Objective Value).: _The value of the objective function \(\mathbf{f}\) evaluated at the mean of the local iterates of all agents converges to the optimal objective value. Moreover, the value of \(\mathbf{f}\) evaluated at the agents' local iterates converges to the optimal objective value, for sufficiently large \(k\). Particularly:_
\[\lim_{k\rightarrow\infty}\mathbf{f}(\mathbf{x}^{(k)})=\lim_{k\rightarrow\infty}\mathbf{f}(\overline{\mathbf{x}}^{(k)})=f^{\star}. \tag{38}\]
Proof.: We provide the proof in Appendix C.
## VII Simulations
In this section, we examine the performance of our distributed conjugate gradient method (DC-Grad) in comparison to other existing distributed optimization algorithms, namely: DIGing-ATC [26], C-ADMM [7], \(AB\)/Push-Pull [65], and \(ABm\)[66], which utilizes _momentum_ acceleration to achieve faster convergence. We note that \(AB\)/Push-Pull reduces to DIGing-CTA when the matrices \(A\) and \(B\) are selected to be doubly-stochastic. We assess the convergence rate of our algorithm across a range of communication networks, with varying degrees of connectivity, described by the connectivity ratio \(\kappa=\frac{2|\mathcal{E}|}{N(N-1)}\). We consider a _state estimation problem_, formulated as a least-squares optimization problem, in addition to its robust variant derived with the Huber loss function. In each problem, we utilize _Metropolis-Hastings_ weights for the mixing matrix \(W\). Since Metropolis-Hastings weights yield doubly-stochastic (DS) mixing matrices, we use the terms \(ABm\) and \(ABm\)-DS interchangeably. We compute the convergence error of the local iterate of each agent to the optimal solution, in terms of the _relative-squared error_ (RSE) given by:
\[\mathrm{RSE}=\frac{\left\|x_{i}-x^{\star}\right\|_{2}}{\left\|x^{\star} \right\|_{2}}, \tag{39}\]
where \(x_{i}\) denotes the local iterate of agent \(i\) and \(x^{\star}\) denotes the optimal solution, computed from the aggregate optimization problem. We set the threshold for convergence at \(10^{-13}\). To enable a fair comparison of the computation and communication overhead incurred by each method, we selected a convergence threshold attainable by all methods. In our simulation study, we note that DIGing-ATC and DC-Grad yield higher-accuracy solutions compared to the other methods, with \(AB\)/Push-Pull yielding solutions with the least accuracy.
We utilize the _golden-section search_ to select an optimal step-size for our distributed conjugate gradient method, DIGing-ATC, \(AB\)/Push-Pull, and \(ABm\). Likewise, we select an optimal value for the penalty parameter \(\rho\) in C-ADMM using golden-section search. Further, we assume that each scalar component in the agents' iterates is represented in the double-precision floating-point format.
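A minimal sketch of the golden-section search used for this tuning is given below; the bracket \([0,1]\) and the tuning objective \(h\) (e.g., the number of iterations needed to reach the convergence threshold as a function of the step-size, assumed unimodal over the bracket) are illustrative assumptions.

```python
import numpy as np

def golden_section(h, lo, hi, tol=1e-6):
    # Minimize a unimodal scalar function h over [lo, hi]; each iteration
    # shrinks the bracket by the inverse golden ratio, reusing one evaluation.
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    a, b = lo, hi
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    hc, hd = h(c), h(d)
    while b - a > tol:
        if hc < hd:                         # minimizer lies in [a, d]
            b, d, hd = d, c, hc
            c = b - inv_phi * (b - a)
            hc = h(c)
        else:                               # minimizer lies in [c, b]
            a, c, hc = c, d, hd
            d = a + inv_phi * (b - a)
            hd = h(d)
    return 0.5 * (a + b)

# E.g., tuning a common step-size by minimizing a performance proxy h(alpha):
print(golden_section(lambda t: (t - 0.3) ** 2, 0.0, 1.0))   # ~0.3
```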
### _Distributed State Estimation_
In the state estimation problem, we seek to compute an estimate of a parameter (_state_) given a set of observations (_measurements_). In many situations (e.g., in robotics, process control, and finance), the observations are collected by a network of sensors, resulting in decentralization of the problem data, giving rise to the distributed state estimation problem. Here, we consider the distributed state estimation problem over a network of \(N\) agents, where the agents estimate the state \(x\in\mathbb{R}^{n}\), representing the parameter of interest, such as the location of a target. Each agent makes noisy observations of the state, given by the model: \(y_{i}=C_{i}x+w_{i}\), where \(y_{i}\in\mathbb{R}^{m_{i}}\) denotes the observations of agent \(i\), \(C_{i}\in\mathbb{R}^{m_{i}\times n}\) denotes the observation (measurement) matrix, and \(w_{i}\) denotes random noise. We can formulate the state estimation problem as a least-squares optimization problem, given by:
\[\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\ \frac{1}{N}\sum_{i=1}^{N} \left\|C_{i}x-y_{i}\right\|_{2}^{2}. \tag{40}\]
We determine the number of local observations for each agent randomly by sampling from the uniform distribution over the closed interval \([5,30]\). We randomly generate the problem data: \(C_{i}\) and \(y_{i}\), \(\forall i\in\mathcal{V}\), with \(N=50\) and \(n=10\). We examine the convergence rate of the distributed optimization algorithms over randomly-generated connected communication graphs. We update the conjugate gradient parameter \(\beta\) using a modified _Fletcher-Reeves Scheme_ (8).
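The data generation for this experiment can be sketched as follows; the noise scale and random seed are assumptions for illustration, and the resulting `grads` list can be fed to the `dc_grad` sketch from Section V.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 50, 10
x_true = rng.standard_normal(n)

C, y, grads = [], [], []
for i in range(N):
    m_i = int(rng.integers(5, 31))           # m_i ~ Uniform{5, ..., 30}
    C_i = rng.standard_normal((m_i, n))
    y_i = C_i @ x_true + 0.01 * rng.standard_normal(m_i)
    C.append(C_i)
    y.append(y_i)
    # Gradient of the local cost ||C_i x - y_i||_2^2 (defaults freeze C_i, y_i).
    grads.append(lambda x, C_i=C_i, y_i=y_i: 2.0 * C_i.T @ (C_i @ x - y_i))

# Aggregate optimum via the normal equations, used for the RSE in (39).
A = sum(C_i.T @ C_i for C_i in C)
b = sum(C_i.T @ y_i for C_i, y_i in zip(C, y))
x_star = np.linalg.solve(A, b)

def rse(x_i):
    # Convergence error of a local iterate relative to the optimal solution.
    return np.linalg.norm(x_i - x_star) / np.linalg.norm(x_star)
```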
In Table I, we present the mean and standard deviation of the cumulative computation time per agent, in seconds, required for convergence by each distributed algorithm, over \(20\) randomly-generated problems for each communication network. We utilize a closed-form solution for the primal update procedure arising in C-ADMM, making it competitive with other distributed optimization methods in terms of computation time. From Table I, we note that DIGing-ATC requires the shortest computation time, closely followed by DC-Grad, on densely-connected communication graphs, i.e., on graphs with \(\kappa\) close to one, where we note that DC-Grad requires an update procedure for \(\beta\), increasing its computation time. However, on more sparsely-connected communication graphs, C-ADMM requires the shortest computation time.
Moreover, we provide the mean and standard deviation of the cumulative size of messages exchanged per agent, in Megabytes (MB), for each distributed algorithm in Table II. We note that C-ADMM requires agents to communicate fewer
variables by a factor of 2, compared to \(AB\)/Push-Pull, \(ABm\), DIGing-ATC, and DC-Grad. Table II shows that DC-Grad incurs the least communication overhead for convergence on more-densely-connected graphs, closely followed by DIGing-ATC. This finding reveals that DC-Grad requires fewer iterations for convergence on these graphs, compared to the other algorithms. On more-sparsely-connected graphs, C-ADMM incurs the least communication overhead.
In Figure 1, we show the convergence error of the agents' iterates, per iteration, on a fully-connected communication network. Figure 1 highlights that DC-Grad requires the least number of iterations for convergence, closely followed by DIGing-ATC. In addition, \(ABm\) and C-ADMM converge at relatively the same rate. Similarly, we show the convergence error of the iterates of each agent on a randomly-generated connected communication graph with \(\kappa=0.48\) in Figure 2. We note that C-ADMM converges the fastest in Figure 2. In addition, we note that the convergence plot of DIGing-ATC overlays that of DC-Grad, with both algorithms exhibiting a similar performance.
### _Distributed Robust Least-Squares_
We consider the robust least-squares formulation of the state estimation problem, presented in Section VII-A. We replace the \(\ell_{2}^{2}\)-loss function in (40) with the Huber loss function, given by:
\[f_{\mathrm{hub},\xi}(u)=\begin{cases}\frac{1}{2}u^{2},&\text{if }|u|\leq\xi\ ( \ell_{2}^{2}\text{-zone}),\\ \xi(|u|-\frac{1}{2}\xi),&\text{otherwise }(\ell_{1}\text{-zone}).\end{cases} \tag{41}\]
We note that the Huber loss function is less sensitive to outliers, since the penalty function \(f_{\mathrm{hub},\xi}\) grows only linearly for large values of \(u\). The corresponding robust least-squares optimization problem is given by:
\[\underset{x\in\mathbb{R}^{n}}{\text{minimize}}\ \frac{1}{N}\sum_{i=1}^{N}f_{ \mathrm{hub},\xi}(C_{i}x-y_{i}). \tag{42}\]
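For scalar observations (\(m_{i}=1\), as assumed below), the Huber penalty (41) and the gradient of a local term in (42) can be sketched as follows; `huber_grad_local` is a hypothetical helper name, and the gradient uses the chain rule with \(f^{\prime}_{\mathrm{hub},\xi}(u)=u\) in the \(\ell_{2}^{2}\)-zone and \(\xi\operatorname{sign}(u)\) in the \(\ell_{1}\)-zone.

```python
import numpy as np

def huber(u, xi=1.0):
    # Huber penalty (41): quadratic near zero, linear in the l1-zone.
    return np.where(np.abs(u) <= xi,
                    0.5 * u ** 2,
                    xi * (np.abs(u) - 0.5 * xi))

def huber_grad_local(x, c_i, y_i, xi=1.0):
    # Gradient of f_hub(c_i^T x - y_i) for one scalar observation (m_i = 1);
    # hypothetical helper, assumed as agent i's local gradient oracle.
    u = float(c_i @ x - y_i)
    du = u if abs(u) <= xi else xi * np.sign(u)
    return du * c_i
```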
We assume each agent has a single observation, i.e., \(m_{i}=1\), \(\forall i\in\mathcal{V}\), and assess the convergence rate of the distributed algorithms on randomly-generated connected communication graphs, with \(N=50\) and \(n=10\). We randomly initialize \(x_{i}\) such that \(x_{i}\) lies in the \(\ell_{1}\)-zone, \(\forall i\in\mathcal{V}\). Further, we randomly generate the problem data such that the optimal solution \(x^{\star}\) lies in the \(\ell_{2}^{2}\)-zone. We set the maximum number of iterations to \(3000\). We note that a closed-form solution does not exist for the primal update procedure of C-ADMM in this problem. Consequently, we do not include C-ADMM in this study, noting that solving the primal update procedure with iterative solvers would negatively impact the computation time of C-ADMM, effectively limiting its competitiveness. Further, we update the conjugate gradient parameter \(\beta\) using a modified _Polak-Ribière Scheme_ (9).
Fig. 1: Convergence error of all agents per iteration in the distributed state estimation problem on a fully-connected communication graph. DC-Grad converges the fastest, closely followed by DIGing-ATC.
Fig. 2: Convergence error of all agents per iteration in the distributed state estimation problem on a randomly-generated connected communication graph with \(\kappa=0.48\). C-ADMM attains the fastest convergence rate. The convergence plot of DIGing-ATC overlays that of DC-Grad, with both algorithms converging at the same rate.
We provide the mean computation time per agent, in seconds, required for convergence of each algorithm, along with the standard deviation in Table III, over \(20\) randomly-generated problems for each communication network. From Table III, we note that \(ABm\) requires the shortest computation time for convergence on more-sparsely-connected communication graphs. However, on more-densely-connected communication graphs, DIGing-ATC achieves the shortest computation time, followed by DC-Grad.
In Table IV, we show the mean and standard deviation of the cumulative size of messages exchanged by each agent (in MB), in each distributed algorithm. Generally, on more-sparsely-connected graphs, \(ABm\) converges the fastest, in terms of the number of iterations, and as a result, incurs the least communication overhead, closely followed by DIGing-ATC and DC-Grad. On the other hand, on more-densely-connected communication graphs, DC-Grad incurs the least communication overhead.
We show the convergence error of each agent's iterate \(x_{i}\), per iteration, on a fully-connected communication network in Figure 3. We note that DC-Grad converges within the fewest number of iterations, closely followed by DIGing-ATC. In addition, \(AB\)/Push-Pull requires the greatest number of iterations for convergence. We note that \(AB\)/Push-Pull (which is equivalent to DIGing-CTA) utilizes the _combine-then-adapt_ update scheme, which results in slower convergence, generally [26]. Moreover, the objective function in (41) is not strongly-convex over its entire domain, particularly in the \(\ell_{1}\)-zone. In addition, gradient-tracking methods, in general, require (_restricted_) strong convexity for linear convergence. As a result, all the algorithms exhibit sublinear convergence initially, since all the algorithms are initialized with \(x_{i}\) in the \(\ell_{1}\)-zone, \(\forall i\in\mathcal{V}\). The algorithms exhibit linear convergence when the iterates enter the \(\ell_{2}^{2}\)-zone, as depicted in Figure 3. In addition, Figure 4 shows the convergence error of each agent's iterates on a randomly-generated communication network with \(\kappa=0.42\). On these graphs, \(ABm\) requires the least number of iterations for convergence. We note that the convergence plot for DIGing-ATC overlays that of DC-Grad, with both algorithms exhibiting relatively the same performance.
Preliminary convergence analysis suggests that our algorithm converges linearly; in future work, we seek to characterize the convergence rate of our method. Further, in our simulation studies, our algorithm exhibits notably similar performance to DIGing-ATC. We intend to examine this similarity in future work.
### _Proof of Lemma 1_
Before proceeding with the proof, we state the following relation:
\[\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}=\left\|\overline{\mathbf{\alpha}}^{(k)}\cdot\overline{\mathbf{z}}^{(k)}+\overline{\tilde{\mathbf{\alpha}}^{(k)}\cdot\tilde{\mathbf{z}}^{(k)}}\right\|_{2}, \tag{43}\]
\[\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}\geq\left\|\overline{\mathbf{\alpha}}^{(k)}\right\|_{2}\cdot\left\|\overline{\mathbf{z}}^{(k)}\right\|_{2}-\left\|\tilde{\mathbf{\alpha}}^{(k)}\right\|_{2}\cdot\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}, \tag{44}\]
where we have used the fact that \(\overline{\alpha}^{(k)}=\left\|\overline{\mathbf{\alpha}}^{(k)}\right\|_{2}\) and \(\left\|\mathbf{1}_{N}\mathbf{1}_{N}^{\mathrm{T}}\right\|_{2}=\sqrt{N}\sqrt{N}=N\).
Considering the recurrence in (30):
\[\mathbf{z}^{(k+1)}-\overline{\mathbf{z}}^{(k+1)} =W\mathbf{z}^{(k)}-\overline{\mathbf{z}}^{(k)}+W(\mathbf{s}^{(k+1)}-\mathbf{s}^{( k)})\] \[\quad-(\overline{\mathbf{s}}^{(k+1)}-\overline{\mathbf{s}}^{(k)}), \tag{45}\] \[\tilde{\mathbf{z}}^{(k+1)} =M\tilde{\mathbf{z}}^{(k)}+M(\mathbf{s}^{(k+1)}-\mathbf{s}^{(k)}),\]
where we have utilized the relation: \(M\overline{\mathbf{z}}^{(k)}=0\). From (45):
\[\left\|\tilde{\mathbf{z}}^{(k+1)}\right\|_{2} =\left\|M\tilde{\mathbf{z}}^{(k)}+M(\mathbf{s}^{(k+1)}-\mathbf{s}^{(k)})\right\| _{2}, \tag{46}\] \[\leq\left\|M\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\left\|M(\mathbf{s}^{(k+1 )}-\mathbf{s}^{(k)})\right\|_{2},\] \[\leq\lambda\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda\left\| \mathbf{s}^{(k+1)}-\mathbf{s}^{(k)}\right\|_{2}.\]
Using the recurrence (16):
\[\left\|\tilde{\mathbf{z}}^{(k+1)}\right\|_{2} \leq\lambda\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda\left\| \mathbf{g}^{(k+1)}-\mathbf{g}^{(k)}\right\|_{2} \tag{47}\] \[\quad+\lambda\left(\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)} \right\|_{2}+\left\|\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}\right\|_{2}\right),\] \[\leq\lambda\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda L \left\|\mathbf{x}^{(k+1)}-\mathbf{x}^{(k)}\right\|_{2}\] \[\quad+\lambda\left(\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)} \right\|_{2}+\left\|\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}\right\|_{2}\right),\]
from Lipschitz continuity of \(\nabla\mathbf{f}\), with:
\[\begin{split}\left\|\tilde{\mathbf{z}}^{(k+1)}\right\|_{2}&\leq\lambda\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda L\left\|\tilde{\mathbf{x}}^{(k+1)}+\overline{\mathbf{x}}^{(k+1)}-\tilde{\mathbf{x}}^{(k)}-\overline{\mathbf{x}}^{(k)}\right\|_{2}\\ &\quad+\lambda\left(\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}\right\|_{2}+\left\|\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}\right\|_{2}\right),\\ &\leq\lambda\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda L\left(\left\|\tilde{\mathbf{x}}^{(k+1)}\right\|_{2}+\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}+\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}\right)\\ &\quad+\lambda\left(\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}\right\|_{2}+\left\|\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}\right\|_{2}\right),\end{split}\tag{48}\]
using (26).
In addition:
\[\begin{split}\left\|\tilde{\mathbf{x}}^{(k+1)}\right\|_{2}&=\left\|\mathbf{x}^{(k+1)}-\overline{\mathbf{x}}^{(k+1)}\right\|_{2},\\ &=\left\|W\left(\mathbf{x}^{(k)}+\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}\right)-\overline{\mathbf{x}}^{(k)}-\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2},\\ &=\left\|M\tilde{\mathbf{x}}^{(k)}+M\left(\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}-\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right)\right\|_{2},\\ &=\left\|M\tilde{\mathbf{x}}^{(k)}+M\left(\mathbf{\alpha}^{(k)}\cdot\tilde{\mathbf{z}}^{(k)}+\tilde{\mathbf{\alpha}}^{(k)}\cdot\overline{\mathbf{z}}^{(k)}\right)\right\|_{2},\end{split}\tag{49}\]

where we have utilized the relation \(M\overline{\mathbf{v}}=0\) for any mean matrix \(\overline{\mathbf{v}}\). Thus:
\[\begin{split}\left\|\tilde{\mathbf{x}}^{(k+1)}\right\|_{2}&\leq\lambda\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}+\lambda\left\|\mathbf{\alpha}^{(k)}\cdot\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda\left\|\tilde{\mathbf{\alpha}}^{(k)}\cdot\overline{\mathbf{z}}^{(k)}\right\|_{2},\\ &\leq\lambda\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}+\lambda\alpha_{\max}\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}\\ &\quad+\lambda\frac{\left\|\tilde{\mathbf{\alpha}}^{(k)}\right\|_{2}}{\left\|\overline{\mathbf{\alpha}}^{(k)}\right\|_{2}}\cdot\left(\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}+\left\|\tilde{\mathbf{\alpha}}^{(k)}\right\|_{2}\cdot\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}\right),\\ &\leq\lambda\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}+\lambda\alpha_{\max}(1+r_{\alpha})\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda r_{\alpha}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2},\end{split}\tag{50}\]
using (44) in the second inequality and the fact \(\left\|\tilde{\mathbf{\alpha}}^{(k)}\right\|_{2}\leq\alpha_{\max}\).
Likewise, from (16) and (28):
\[\tilde{\mathbf{s}}^{(k+1)}=-\tilde{\mathbf{g}}^{(k+1)}+\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}-\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}. \tag{51}\]
Considering the second term in (51):
\[\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}\right\|_{2} =\left\|\mathbf{\beta}^{(k)}\cdot\left(\tilde{\mathbf{s}}^{(k)}+\overline {\mathbf{s}}^{(k)}\right)\right\|_{2}, \tag{52}\] \[\leq\beta_{\max}\left\|\tilde{\mathbf{s}}^{(k)}+\overline{\mathbf{s}}^{( k)}\right\|_{2},\] \[\leq\beta_{\max}\left(\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2}+ \left\|\overline{\mathbf{s}}^{(k)}\right\|_{2}\right).\]
Further, considering the third term in (51):
\[\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}\right\|_{2}\geq\left\|\overline{\mathbf{\beta}}^{(k)}\right\|_{2}\cdot\left\|\overline{\mathbf{s}}^{(k)}\right\|_{2}-\left\|\tilde{\mathbf{\beta}}^{(k)}\right\|_{2}\cdot\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2}; \tag{53}\]
hence:
\[\left\|\overline{\mathbf{s}}^{(k)}\right\|_{2}\leq\frac{1}{\left\|\overline{\mathbf{ \beta}}^{(k)}\right\|_{2}}\left(\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s} ^{(k)}}\right\|_{2}+\left\|\tilde{\mathbf{\beta}}^{(k)}\right\|_{2}\cdot\left\| \tilde{\mathbf{s}}^{(k)}\right\|_{2}\right), \tag{54}\]
which yields:
\[\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}\right\|_{2} \leq\beta_{\max}(1+r_{\beta})\left\|\tilde{\mathbf{s}}^{(k)}\right\|_ {2} \tag{55}\] \[\quad+\frac{\beta_{\max}}{\left\|\overline{\mathbf{\beta}}^{(k)} \right\|_{2}}\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}\right\|_{2}.\]
Hence, from (51):
\[\left\|\tilde{\mathbf{s}}^{(k+1)}\right\|_{2} \leq\left\|\tilde{\mathbf{g}}^{(k+1)}\right\|_{2}+\beta_{\max}(1+r_{ \beta})\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2} \tag{56}\] \[\quad+\left(1+\frac{\beta_{\max}}{\left\|\overline{\mathbf{\beta}}^{ (k)}\right\|_{2}}\right)\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}} \right\|_{2},\] \[\leq\left\|\tilde{\mathbf{g}}^{(k+1)}\right\|_{2}+\beta_{\max}(1+r_{ \beta})\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2}\] \[\quad+\left(1+r_{\beta}\right)\left\|\overline{\mathbf{\beta}^{(k)} \cdot\mathbf{s}^{(k)}}\right\|_{2}.\]
In addition, from (48) and (50):
\[\begin{split}\left\|\tilde{\mathbf{z}}^{(k+1)}\right\|_{2}&\leq(\lambda+\lambda^{2}L\alpha_{\max}(1+r_{\alpha}))\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda L(\lambda+1)\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}\\ &\quad+\lambda L(\lambda r_{\alpha}+1)\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}\\ &\quad+\lambda\left(\left\|\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}\right\|_{2}+\left\|\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}\right\|_{2}\right).\end{split}\tag{57}\]
### _Proof of Theorem 1_
We introduce the following sequences:
\[X^{(k)}=\sqrt{\sum_{l=0}^{k}\left\|\tilde{\mathbf{x}}^{(l)}\right\|_{2}^{2}},\quad S^{(k)}=\sqrt{\sum_{l=0}^{k}\left\|\tilde{\mathbf{s}}^{(l)}\right\|_{2}^{2}}, \tag{58}\]
\[Z^{(k)}=\sqrt{\sum_{l=0}^{k}\left\|\tilde{\mathbf{z}}^{(l)}\right\|_{2}^{2}}, \tag{59}\]
\[R^{(k)}=\sqrt{\sum_{l=0}^{k}\left(\left\|\overline{\mathbf{\alpha}^{(l)}\cdot\mathbf{z}^{(l)}}\right\|_{2}^{2}+\left\|\overline{\mathbf{\beta}^{(l)}\cdot\mathbf{s}^{(l)}}\right\|_{2}^{2}+\left\|\tilde{\mathbf{g}}^{(l+1)}\right\|_{2}^{2}\right)}. \tag{60}\]
We state the following lemma, and refer readers to [30] for its proof.
**Lemma 2**.: _Given the non-negative scalar sequence \(\{\nu^{(k)}\}_{\forall k>0}\), defined by:_
\[\nu^{(k+1)}\leq\lambda\nu^{(k)}+\omega^{(k)}, \tag{61}\]
_where \(\lambda\in(0,1)\), the following relation holds:_
\[V^{(k+1)}\leq\gamma\Omega^{(k)}+\epsilon, \tag{62}\]
_where \(V^{(k)}=\sqrt{\sum_{l=0}^{k}\left\|\nu^{(l)}\right\|_{2}^{2}}\), \(\Omega^{(k)}=\sqrt{\sum_{l=0}^{k}\left\|\omega^{(l)}\right\|_{2}^{2}}\), \(\gamma=\frac{\sqrt{2}}{1-\lambda}\), and \(\epsilon=\nu^{(0)}\sqrt{\frac{2}{1-\lambda^{2}}}\)._
From (50), let:
\[\omega^{(k)}=\lambda\alpha_{\max}(1+r_{\alpha})\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}+\lambda r_{\alpha}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}, \tag{63}\]
which yields:
\[X^{(k)}\leq\rho_{xz}Z^{(k)}+\rho_{xr}R^{(k)}+\epsilon_{x}, \tag{64}\]
where \(\rho_{xz}=\frac{\sqrt{2}}{1-\lambda}\lambda\alpha_{\max}(1+r_{\alpha})\), \(\rho_{xr}=\frac{\sqrt{2}}{1-\lambda}\lambda r_{\alpha}\), and \(\epsilon_{x}=\left\|\tilde{\mathbf{x}}^{(0)}\right\|_{2}\sqrt{\frac{2}{1-\lambda^{2 }}}\). Likewise, from (56), assuming \(\lambda_{s}=\beta_{\max}(1+r_{\beta})<1\):
\[S^{(k)}\leq\mu_{sr}R^{(k)}+\mu_{sc}, \tag{65}\]

where \(\mu_{sr}=\frac{\sqrt{2}}{1-\lambda_{s}}(2+r_{\beta})\) and \(\mu_{sc}=\left\|\tilde{\mathbf{s}}^{(0)}\right\|_{2}\sqrt{\frac{2}{1-\lambda_{s}^{2}}}\). Similarly, applying Lemma 2 to (57), assuming \(\lambda_{z}=\lambda+\lambda^{2}L\alpha_{\max}(1+r_{\alpha})<1\), yields a bound of the form:

\[Z^{(k)}\leq\rho_{zx}X^{(k)}+\rho_{zr}R^{(k)}+\epsilon_{z}, \tag{66}\]

for suitable non-negative constants \(\rho_{zx}\), \(\rho_{zr}\), and \(\epsilon_{z}\).
From (64) and (66):
\[X^{(k)}\leq\mu_{xr}R^{(k)}+\mu_{xc}, \tag{67}\]
where \(\mu_{xr}=\frac{\rho_{xz}\rho_{zr}+\rho_{xr}}{1-\rho_{xz}\rho_{zx}}\) and \(\mu_{xc}=\frac{\rho_{xz}\epsilon_{z}+\epsilon_{x}}{1-\rho_{xz}\rho_{zx}}\); likewise,
\[Z^{(k)}\leq\mu_{zr}R^{(k)}+\mu_{zc}, \tag{68}\]
where \(\mu_{zr}=\frac{\rho_{zx}\rho_{xr}+\rho_{zr}}{1-\rho_{xz}\rho_{zx}}\) and \(\mu_{zc}=\frac{\rho_{zx}\epsilon_{x}+\epsilon_{z}}{1-\rho_{xz}\rho_{zx}}\).
From \(L\)-Lipschitz continuity of the gradient:
\[f_{i}(y)\leq f_{i}(x)+g_{i}(x)^{\mathsf{T}}(y-x)+\frac{L_{i}}{2}\left\|y-x\right\|_{2}^{2},\ \forall i\in\mathcal{V}. \tag{69}\]
Hence:
\[\mathbf{f}(\overline{\mathbf{x}}^{(k+1)}) \leq\mathbf{f}(\overline{\mathbf{x}}^{(k)}) \tag{70}\] \[\quad+\frac{1}{N}\mathrm{trace}\left(\mathbf{g}(\overline{\mathbf{x}}^{( k)})\cdot(\overline{\mathbf{x}}^{(k+1)}-\overline{\mathbf{x}}^{(k)})^{\mathsf{T}}\right)\] \[\quad+\frac{1}{N}\cdot\frac{L}{2}\left\|\overline{\mathbf{x}}^{(k+1) }-\overline{\mathbf{x}}^{(k)}\right\|_{F}^{2},\] \[\leq\mathbf{f}(\overline{\mathbf{x}}^{(k)})+\frac{1}{N}\left\|\overline {\mathbf{g}}^{(k)}\right\|_{F}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)} }\right\|_{F}\] \[\quad+\frac{1}{N}\cdot\frac{L}{2}\left\|\overline{\mathbf{\alpha}^{( k)}\cdot\mathbf{z}^{(k)}}\right\|_{F}^{2}\] \[\quad+\frac{1}{N}\left\|\mathbf{g}(\overline{\mathbf{x}}^{(k)})-\mathbf{g}( \mathbf{x}^{(k)})\right\|_{F}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}} \right\|_{F}.\]
Considering the second term in (70):
\[\left\|\overline{\mathbf{g}}^{(k)}\right\|_{F} =\left\|\overline{\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}}- \overline{\mathbf{s}}^{(k)}\right\|_{F}, \tag{71}\] \[\leq\left\|\overline{\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}}\right\| _{F}+\left\|\overline{\mathbf{s}}^{(k)}\right\|_{F},\]
from (28). Further:
\[\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}=\overline{\mathbf{\beta}}^{(k)}\cdot\overline{\mathbf{s}}^{(k)}+\overline{\tilde{\mathbf{\beta}}^{(k)}\cdot\tilde{\mathbf{s}}^{(k)}}, \tag{72}\]
which shows that:
\[\overline{\mathbf{s}}^{(k)}=\overline{\mathbf{\beta}}^{(k)^{\dagger}}\left(\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}-\overline{\tilde{\mathbf{\beta}}^{(k)}\cdot\tilde{\mathbf{s}}^{(k)}}\right), \tag{73}\]
where \(\overline{\mathbf{\beta}}^{(k)^{\dagger}}\) denotes the inverse of \(\overline{\mathbf{\beta}}^{(k)}\). Hence, from (71) and (73):
\[\begin{split}\left\|\overline{\mathbf{g}}^{(k)}\right\|_{F}&\leq\left\|\overline{\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}}\right\|_{F}\\ &\quad+\frac{1}{\left\|\overline{\mathbf{\beta}}^{(k)}\right\|_{F}}\left(\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}\right\|_{F}+\left\|\overline{\tilde{\mathbf{\beta}}^{(k)}\cdot\tilde{\mathbf{s}}^{(k)}}\right\|_{F}\right),\end{split}\tag{74}\]
where \(\left\|\overline{\mathbf{\beta}}^{(k)^{\dagger}}\right\|_{F}=\frac{\sqrt{N}}{\left\| \overline{\beta}^{(k)}\right\|_{2}}\). Further:
\[\begin{split}\left\|\overline{\tilde{\mathbf{\beta}}^{(k)}\cdot\tilde{\mathbf{s}}^{(k)}}\right\|_{F}&=\left\|\frac{1}{N}\mathbf{1}_{N}\mathbf{1}_{N}^{\mathsf{T}}\left(\tilde{\mathbf{\beta}}^{(k)}\cdot\tilde{\mathbf{s}}^{(k)}\right)\right\|_{F},\\ &\leq\left\|\tilde{\mathbf{\beta}}^{(k)}\right\|_{F}\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{F}.\end{split}\tag{75}\]
In addition, we note that:
\[\left\|\overline{\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}}\right\|_{F} \left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{F} \leq\frac{1}{2}\left\|\overline{\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{( k-1)}}\right\|_{F}^{2} \tag{76}\] \[\quad+\frac{1}{2}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{( k)}}\right\|_{F}^{2},\]
from the relation: \(a\cdot b\leq\frac{1}{2}(a^{2}+b^{2})\).
Hence, from (70):
\[\mathbf{f}(\overline{\mathbf{x}}^{(k+1)}) \leq\mathbf{f}(\overline{\mathbf{x}}^{(k)}) \tag{77}\] \[\quad+\frac{1}{2}\left(\left\|\overline{\mathbf{\beta}}^{(k-1)} \cdot\mathbf{s}^{(k-1)}\right\|_{2}^{2}+\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z }^{(k)}}\right\|_{2}^{2}\right)\] \[\quad+\frac{1}{2}\left\|\overline{\mathbf{\beta}}^{(k)}\right\|_{2} \left(\left\|\overline{\mathbf{\beta}}^{(k)}\cdot\mathbf{s}^{(k)}\right\|_{2}^{2}+ \left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}^{2}\right)\] \[\quad+\frac{\left\|\tilde{\mathbf{\beta}}^{(k)}\right\|_{2}}{\left\| \overline{\mathbf{\beta}}^{(k)}\right\|_{2}}\left\|\overline{\mathbf{\beta}}^{(k)} \right\|_{2}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}\] \[\quad+\frac{L}{2}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{( k)}}\right\|_{2}^{2}\] \[\quad+L\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}\left\|\overline{ \mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2},\]
where the last term results from Lipschitz continuity of \(\nabla f\). Summing (77) over \(k\) from \(0\) to \(t\):
\[\mathbf{f}(\overline{\mathbf{x}}^{(t+1)}) \leq\mathbf{f}(\overline{\mathbf{x}}^{(0)})+\frac{1}{2}\left(1+\frac{1}{ \sqrt{N}\beta_{\max}}+L\right)\left(R^{(t)}\right)^{2} \tag{78}\] \[\quad+r_{\beta}S^{(t)}R^{(t)}+LX^{(t)}R^{(t)}\]
where \(\mathbf{\beta}^{(-1)}=\mathbf{s}^{(-1)}=\mathbf{0}\), and we have added the term \(\frac{1}{2}\left\|\overline{\mathbf{\beta}^{(t)}\cdot\mathbf{s}^{(t)}}\right\|_{2}^{2}\).
Given (65) and (67):
\[\mathbf{f}(\overline{\mathbf{x}}^{(t+1)})\leq\mathbf{f}(\overline{\mathbf{x}}^{(0)})+a_{1} \left(R^{(t)}\right)^{2}+a_{2}R^{(t)}, \tag{79}\]
where \(a_{1}=\frac{1}{2}\left(1+\frac{1}{\sqrt{N}\beta_{\max}}+L\right)+r_{\beta} \mu_{sr}+L\mu_{xr}\) and \(a_{2}=r_{\beta}\mu_{sc}+L\mu_{xc}\).
Subtracting \(f^{\star}=f(\mathbf{x}^{\star})\) from both sides in (79) yields:
\[\mathbf{f}(\overline{\mathbf{x}}^{(t+1)})-f^{\star}\leq\mathbf{f}(\overline{\mathbf{x}}^{(0)})-f ^{\star}+a_{1}\left(R^{(t)}\right)^{2}+a_{2}R^{(t)}, \tag{80}\]
showing that:
\[0\leq\mathbf{f}(\overline{\mathbf{x}}^{(t+1)})-f^{\star}\leq\mathbf{f}(\overline{\mathbf{x}}^{(0)})-f^{\star}+a_{1}\left(R^{(t)}\right)^{2}+a_{2}R^{(t)}. \tag{81}\]
Using the monotone convergence theorem, \(R^{(t)}\rightarrow\Sigma\) for some finite limit \(\Sigma\), and:
\[\lim_{k\rightarrow\infty}\left(\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k )}}\right\|_{2}^{2}+\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}\right\|_ {2}^{2}+\left\|\tilde{\mathbf{g}}^{(k+1)}\right\|_{2}^{2}\right)=0, \tag{86}\]
which shows that:
\[\begin{split}\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{ \alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}^{2}&=\lim_{k \rightarrow\infty}\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}} \right\|_{2}^{2}\\ &=\lim_{k\rightarrow\infty}\left\|\tilde{\mathbf{g}}^{(k+1)}\right\|_ {2}^{2}\\ &=0.\end{split} \tag{87}\]
From (65):
\[\lim_{k\rightarrow\infty}S^{(k)}\leq\lim_{k\rightarrow\infty}(\mu_{sr}R^{(k) }+\mu_{sc})\leq\mu_{sr}\Sigma+\mu_{sc}<\infty. \tag{88}\]
Similarly, from the monotone convergence theorem:
\[\lim_{k\rightarrow\infty}\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2}=0. \tag{89}\]
Likewise, from (67):
\[\lim_{k\rightarrow\infty}X^{(k)}\leq\lim_{k\rightarrow\infty}(\mu_{xr}R^{(k )}+\mu_{xc})\leq\mu_{xr}\Sigma+\mu_{xc}<\infty; \tag{90}\]
\[\lim_{k\rightarrow\infty}\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}=0. \tag{91}\]
Similarly, from (68):
\[\lim_{k\rightarrow\infty}Z^{(k)}\leq\lim_{k\rightarrow\infty}(\mu_{zr}R^{(k)}+\mu_{zc})\leq\mu_{zr}\Sigma+\mu_{zc}<\infty, \tag{92}\]
showing that:
\[\lim_{k\rightarrow\infty}\left\|\tilde{\mathbf{z}}^{(k)}\right\|_{2}=0. \tag{93}\]
From (89), (91), and (93), we note that the agents reach _agreement_ or _consensus_, with the local iterate of each agent converging to the mean as \(k\rightarrow\infty\).
Moreover, from (44):
\[\left\|\overline{\mathbf{z}}^{(k)}\right\|_{2}\leq\frac{1}{\left\|\overline{\mathbf{ \alpha}}^{(k)}\right\|_{2}}\left\|\overline{\mathbf{\alpha}^{(k)}\cdot\mathbf{z}^{(k) }}\right\|_{2}+\frac{\left\|\tilde{\mathbf{\alpha}}^{(k)}\right\|_{2}}{\left\| \overline{\mathbf{\alpha}}^{(k)}\right\|_{2}}\cdot\left\|\tilde{\mathbf{z}}^{(k)} \right\|_{2}; \tag{94}\]
hence:
\[\begin{split}\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{z}}^ {(k)}\right\|_{2}&\leq\lim_{k\rightarrow\infty}\left(\frac{1}{ \left\|\overline{\mathbf{\alpha}}^{(k)}\right\|_{2}}\cdot\left\|\overline{\mathbf{ \alpha}^{(k)}\cdot\mathbf{z}^{(k)}}\right\|_{2}\right.\\ &\qquad\qquad\qquad\left.+\frac{\left\|\tilde{\mathbf{\alpha}}^{(k)} \right\|_{2}}{\left\|\overline{\mathbf{\alpha}}^{(k)}\right\|_{2}}\cdot\left\| \tilde{\mathbf{z}}^{(k)}\right\|_{2}\right),\\ &=0,\end{split} \tag{95}\]
yielding:
\[\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{z}}^{(k)}\right\|_{2}=0. \tag{96}\]
Likewise, from (54):
\[\begin{split}\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{s}}^{(k)}\right\|_{2}&\leq\lim_{k\rightarrow\infty}\left(\frac{1}{\left\|\overline{\mathbf{\beta}}^{(k)}\right\|_{2}}\cdot\left\|\overline{\mathbf{\beta}^{(k)}\cdot\mathbf{s}^{(k)}}\right\|_{2}+\frac{\left\|\tilde{\mathbf{\beta}}^{(k)}\right\|_{2}}{\left\|\overline{\mathbf{\beta}}^{(k)}\right\|_{2}}\cdot\left\|\tilde{\mathbf{s}}^{(k)}\right\|_{2}\right),\\ &=0,\end{split} \tag{97}\]
giving the result:
\[\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{s}}^{(k)}\right\|_{2}=0. \tag{98}\]
Further, from (28):
\[\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{g}}^{(k)}\right\|_{2}\leq\lim_{k\rightarrow\infty}\left(\left\|\overline{\mathbf{s}}^{(k)}\right\|_{2}+\left\|\overline{\mathbf{\beta}^{(k-1)}\cdot\mathbf{s}^{(k-1)}}\right\|_{2}\right)=0, \tag{99}\]
yielding:
\[\lim_{k\rightarrow\infty}\left\|\overline{\mathbf{g}}^{(k)}\right\|_{2}=0. \tag{100}\]
### _Proof of Theorem 2_
Since \(\mathbf{f}\) is convex:
\[\begin{split}\mathbf{f}(\overline{\mathbf{x}}^{(k)})-f^{\star}& \leq\frac{1}{N}\mathrm{trace}\left(\mathbf{g}(\overline{\mathbf{x}}^{(k)}) \cdot(\overline{\mathbf{x}}^{(k)}-\mathbf{x}^{\star})^{\mathsf{T}}\right),\\ &\leq\frac{1}{N}\left\|\overline{\mathbf{g}}(\overline{\mathbf{x}}^{(k)}) \right\|_{F}\left\|\overline{\mathbf{x}}^{(k)}-\mathbf{x}^{\star}\right\|_{F}\\ &\quad+\frac{1}{N}\left\|\mathbf{g}(\overline{\mathbf{x}}^{(k)})-\mathbf{g}( \mathbf{x}^{(k)})\right\|_{F}\left\|\overline{\mathbf{x}}^{(k)}-\mathbf{x}^{\star}\right\|_{F },\\ &\leq\left\|\overline{\mathbf{g}}(\mathbf{x}^{(k)})\right\|_{2}\left\| \overline{\mathbf{x}}^{(k)}-\mathbf{x}^{\star}\right\|_{2}\\ &\quad+\frac{L}{2}\left\|\overline{\mathbf{x}}^{(k)}-\mathbf{x}^{(k)} \right\|_{2}\left\|\overline{\mathbf{x}}^{(k)}-\mathbf{x}^{\star}\right\|_{2},\end{split} \tag{101}\]
where \(\mathbf{x}^{\star}=\mathbf{1}_{N}\left(x^{\star}\right)^{\mathsf{T}}\).
Since \(\mathbf{f}\) is coercive by assumption and \(\mathbf{f}(\overline{\mathbf{x}}^{(k)})\) is bounded from (79), \(\left\|\overline{\mathbf{x}}^{(k)}\right\|_{2}\) is bounded, and thus, \(\left\|\overline{\mathbf{x}}^{(k)}-\mathbf{x}^{\star}\right\|_{2}\leq\left\|\overline{ \mathbf{x}}^{(k)}\right\|_{2}+\left\|\mathbf{x}^{\star}\right\|_{2}\) is bounded. Hence:
\[\lim_{k\rightarrow\infty}\left(\mathbf{f}(\overline{\mathbf{x}}^{(k)})-f^{\star} \right)\leq 0, \tag{102}\]
which indicates that:
\[\lim_{k\rightarrow\infty}\mathbf{f}(\overline{\mathbf{x}}^{(k)})=f^{\star}. \tag{103}\]
From the mean-value theorem:
\[\mathbf{f}(\mathbf{x}^{(k)})=\mathbf{f}(\overline{\mathbf{x}}^{(k)})+\frac{1}{N}\mathrm{trace} \left(\mathbf{g}(\overline{\mathbf{x}}^{(k)}+\xi\tilde{\mathbf{x}}^{(k)})\cdot\left( \tilde{\mathbf{x}}^{(k)}\right)^{\mathsf{T}}\right), \tag{104}\]
where \(0\leq\xi\leq 1\).
In addition, \(\left\|\overline{\mathbf{x}}^{(k)}+\xi\tilde{\mathbf{x}}^{(k)}\right\|_{2}\leq\left\| \overline{\mathbf{x}}^{(k)}\right\|_{2}+\xi\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}\) is bounded, as well as \(\left\|\mathbf{g}(\overline{\mathbf{x}}^{(k)}+\xi\tilde{\mathbf{x}}^{(k)})\right\|_{2}\), since \(\mathbf{g}\) is Lipschitz-continuous. As a result, from (104),
\[\begin{split}\left|\mathbf{f}(\mathbf{x}^{(k)})-\mathbf{f}(\overline{\mathbf{x}} ^{(k)})\right|\\ &\quad=\frac{1}{N}\left|\mathrm{trace}\left(\mathbf{g}(\overline{\mathbf{x}} ^{(k)}+\xi\tilde{\mathbf{x}}^{(k)})\cdot\left(\tilde{\mathbf{x}}^{(k)}\right)^{ \mathsf{T}}\right)\right|,\\ \leq\left\|\mathbf{g}(\overline{\mathbf{x}}^{(k)}+\xi\tilde{\mathbf{x}}^{( k)})\right\|_{2}\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}.\end{split} \tag{105}\]
Hence:
\[\begin{split}\lim_{k\to\infty}\left|\mathbf{f}(\mathbf{x}^{(k)})-\mathbf{f}( \overline{\mathbf{x}}^{(k)})\right|&\leq\lim_{k\to\infty}\left(\left\| \mathbf{g}(\overline{\mathbf{x}}^{(k)}+\xi\tilde{\mathbf{x}}^{(k)})\right\|_{2}\right.\\ &\qquad\qquad\left.\left\|\tilde{\mathbf{x}}^{(k)}\right\|_{2}\right), \end{split} \tag{106}\] \[=0,\]
from (91). As a result:
\[\lim_{k\to\infty}\mathbf{f}(\mathbf{x}^{(k)})=\lim_{k\to\infty}\mathbf{f}(\overline{\mathbf{x }}^{(k)})=f^{\star}, \tag{107}\]
from (103), proving convergence to the optimal objective value.
|
2303.18025 | Short vs. long range exchange interactions in twisted bilayer graphene | We discuss the effect of long-range interactions within the self-consistent
Hartree-Fock (HF) approximation in comparison to short-range atomic Hubbard
interactions on the band structure of twisted bilayer graphene (TBG) at charge
neutrality for various twist angles. Starting from atomistic calculations, we
determine the quasi-particle band structure of TBG with Hubbard interactions
for various magnetic orderings: modulated anti-ferromagnetic (MAFM), nodal
anti-ferromagnetic (NAFM) and hexagonal anti-ferromagnetic (HAFM). Then, we
develop an approach to incorporate these magnetic orderings along with the HF
potential in the continuum approximation. Away from the magic angle, we observe
a drastic effect of the magnetic order on the band structure of TBG compared to
the influence of the HF potential. Near the magic angle, however, the HF
potential seems to play a major role on the band structure compared to the
magnetic order. These findings suggest that the spin-valley degenerate broken
symmetry state often found in HF calculations of charge neutral TBG near the
magic angle should favour magnetic order, since the atomistic Hubbard
interaction will break this symmetry in favour of spin polarization. | Alejandro Jimeno-Pozo, Zachary A. H. Goodwin, Pierre A. Pantaleón, Valerio Vitale, Lennart Klebl, Dante M. Kennes, Arash Mostofi, Johannes Lischner, Francisco Guinea | 2023-03-31T13:03:16Z | http://arxiv.org/abs/2303.18025v1 | # Short vs. long range exchange interactions in twisted bilayer graphene
###### Abstract
We discuss the effect of long-range interactions within the self-consistent Hartree-Fock (HF) approximation in comparison to short-range atomic Hubbard interactions on the band structure of twisted bilayer graphene (TBG) at charge neutrality for various twist angles. Starting from atomistic calculations, we determine the quasi-particle band structure of TBG with Hubbard interactions for various magnetic orderings: modulated anti-ferromagnetic (MAFM), nodal anti-ferromagnetic (NAFM) and hexagonal anti-ferromagnetic (HAFM). Then, we develop an approach to incorporate these magnetic orderings along with the HF potential in the continuum approximation. Away from the magic angle, we observe a drastic effect of the magnetic order on the band structure of TBG compared to the influence of the HF potential. Near the magic angle, however, the HF potential seems to play a major role on the band structure compared to the magnetic order. These findings suggest that the spin-valley degenerate broken symmetry state often found in HF calculations of charge neutral TBG near the magic angle should favour magnetic order, since the atomistic Hubbard interaction will break this symmetry in favour of spin polarization.
## I Introduction
Magic-angle twisted bilayer graphene (TBG) has generated tremendous interest in twistronics [1; 2; 3] since the discovery of correlated insulating states and superconductivity in the \(\sim\)1.1\({}^{\circ}\) moire superlattice [4; 5]. The initial reports [4; 5; 6] indicated strong electron-electron correlations in TBG, which give rise to unconventional superconductivity [5]. While this is not unanimously agreed upon [7; 8; 9; 10; 11], TBG has also been found to host strange metallic behaviour [12; 13], nematic order [14; 15; 16; 17], Dirac revivals [18; 19], the Pomeranchuk effect [20; 21] and Chern insulators [22; 23; 24; 25; 26], amongst other effects and phases [27; 28; 29; 30; 31; 32; 33].
To understand these phases, given that the moire unit cell of magic-angle TBG contains \(\sim\)12,000 atoms [34; 35; 36; 37], it is typical to utilise the low-energy continuum model [38; 39], based on that of Bistritzer and MacDonald [40]. This theory couples states of the Dirac cones of each layer and valley at different moire crystal momenta, which causes the onset of flat bands at \(\sim\)1.1\({}^{\circ}\)[8]. The continuum model of TBG can naturally be extended to include long-ranged Hartree-Fock interactions [41; 42; 43; 44; 45; 46; 47; 48; 49], since it is an expansion in the moire crystal momenta (which are very small, corresponding to large length scales). This interacting theory has provided some understanding of the phase diagram of TBG [7], in terms of, for example, the superconducting phase, correlated insulating states, Dirac revivals, and the pinning of van Hove singularities [41; 45; 48].
Including short-ranged interactions in this continuum model has proven more difficult, however. Short-ranged interactions can be included in Wannier-orbital Hamiltonians of the flat bands [50; 51; 52; 53; 54; 55; 56; 57; 58], which provide a reduced Hamiltonian matrix that can be solved with strongly correlated methods [52; 59; 60; 61; 62], although this approach also retains long-ranged interactions. It is more natural, however, to include short-ranged interactions in atomistic models, such as DFT [34; 35] or tight-binding (TB) [36; 37], since the atomic-scale information is retained in such approaches. For example, Klebl and Honerkamp [63] studied the magnetic phase diagram of TBG based on on-site Hubbard interactions from RPA spin-susceptibility calculations [64; 65]. Moreover, these atomistic approaches can also handle long-ranged interactions, such as self-consistent Hartree interactions [66; 67; 68; 69; 70; 71].
A significant limiting factor of self-consistent atomistic approaches for broken symmetry phases is their computational cost [69; 72; 73]. Some examples exist in the literature [69; 70; 71; 74], but a full phase diagram - over twist angle and doping level, amongst other experimental variables - has not yet been achieved. For example, Stauber and
Gonzalez [70; 71] developed a theory based on Green's functions, in which only some of the states were retained. They were able to study long-ranged interactions, and also the interplay between long- and short-ranged interactions, but only at \(1.16^{\circ}\) and either at charge neutrality or at \(-2\) electrons per moire unit cell. Moreover, Vahedi _et al._[69] investigated several twist angles (\(1.08^{\circ}\), \(1.30^{\circ}\) and \(1.47^{\circ}\)) but only focused on charge neutrality. Usually, the cost has been reduced by re-scaling the TB parameters [75; 76; 77; 69] or by applying hydrostatic pressure [78; 79; 80], such that flat bands can be created with unit cells containing only a few hundred to a few thousand atoms.
Here we develop an approach which can include short-ranged interactions, such as the on-site Hubbard interaction of the p\({}_{z}\) orbitals, in the continuum model. Starting from the RPA spin-susceptibility calculations of Klebl and Honerkamp [63], we perform self-consistent atomistic Hubbard calculations to obtain the mean-field magnetic order parameters for different ordering tendencies (at a large twist angle and charge neutrality). We develop analytical forms for the magnetic ordering in real space, which we are able to include in the continuum model as an effective scalar sublattice potential. In order to elucidate the angle dependence of the interplay between the magnetic orderings and the Hartree-Fock potential, we perform self-consistent Hartree-Fock calculations at charge neutrality, to which we later add the effective magnetic potential at different twist angles. Overall, it is found that the long-range contribution dominates at the magic angle, but away from the magic angle these magnetic orders become more significant. We discuss the competition between these long- and short-range exchange interactions in detail, and finish with a discussion of future directions.
## II Methods
### Atomistic Calculations
We study commensurate moire unit cells of TBG [36], starting from AA stacked bilayers and rotating the top layer anticlockwise about an axis perpendicular to the graphene sheets that passes through a carbon atom in each layer. The moire lattice vectors of the commensurate structures are \(\mathbf{R}_{1}=n\mathbf{a}_{1}+m\mathbf{a}_{2}\) and \(\mathbf{R}_{2}=-m\mathbf{a}_{1}+(n+m)\mathbf{a}_{2}\), where \(n\) and \(m\) are integers which define the commensurate TBG structure, and \(\mathbf{a}_{1}=(\sqrt{3}/2,-1/2)a_{0}\) and \(\mathbf{a}_{2}=(\sqrt{3}/2,1/2)a_{0}\) are the lattice vectors of graphene, where \(a_{0}=2.46\) A is the lattice constant of graphene.
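For concreteness, the commensuration condition can be evaluated directly. Below is a minimal Python sketch using the conventions above; the cosine relation for the commensurate angle is the standard one for \((n,m)\) cells, and the atom count follows from the \(n^{2}+nm+m^{2}\) graphene cells per layer.

```python
import numpy as np

A0 = 2.46  # graphene lattice constant (Angstrom)
a1 = A0 * np.array([np.sqrt(3) / 2, -0.5])
a2 = A0 * np.array([np.sqrt(3) / 2, 0.5])

def commensurate_cell(n, m):
    """Twist angle, moire lattice vectors and atom count of an (n, m) cell."""
    cos_t = (n**2 + 4 * n * m + m**2) / (2.0 * (n**2 + n * m + m**2))
    theta = np.degrees(np.arccos(cos_t))
    R1 = n * a1 + m * a2
    R2 = -m * a1 + (n + m) * a2
    n_atoms = 4 * (n**2 + n * m + m**2)  # 2 atoms/cell x 2 layers
    return theta, R1, R2, n_atoms

theta, R1, R2, n_atoms = commensurate_cell(21, 22)
print(f"{theta:.2f} deg, {n_atoms} atoms")  # ~1.54 deg, 5548 atoms (cf. Sec. III.1)
```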
At small twist angles, TBG undergoes significant atomic relaxations [81; 82; 83; 34; 35; 36; 84; 85]. We calculate these relaxations using a classical force field implemented in the LAMMPS software package [86]. The intralayer interactions are modelled using the AIREBO-Morse potential [87], while the interlayer interactions are described with the Kolmogorov-Crespi potential [88].
To investigate the electronic structure of TBG, we use the Hamiltonian
\[\hat{\mathcal{H}}=\sum_{i\sigma}\varepsilon_{i\sigma}\hat{c}_{i\sigma}^{ \dagger}\hat{c}_{i\sigma}+\sum_{ij\sigma}[t(\mathbf{r}_{i}-\mathbf{r}_{j}) \hat{c}_{j\sigma}^{\dagger}\hat{c}_{i\sigma}+\text{H.c.}], \tag{1}\]
where \(\varepsilon_{i\sigma}\) and \(\hat{c}_{i\sigma}^{\dagger}\) (\(\hat{c}_{i\sigma}\)) denote the on-site energy of atom \(i\) with spin \(\sigma\) and the electron creation (annihilation) operator associated with atom \(i\) and spin \(\sigma\), respectively. The hopping parameters between atoms \(i\) and \(j\), \(t(\mathbf{r}_{i}-\mathbf{r}_{j})\), are calculated using the Slater-Koster rules [89]
\[t(\mathbf{r})=V_{pp\sigma}(\mathbf{r})\bigg{(}\frac{\mathbf{r}\cdot\mathbf{e}_{z}}{|\mathbf{r}|}\bigg{)}^{2}+V_{pp\pi}(\mathbf{r})\bigg{(}1-\Big{(}\frac{\mathbf{r}\cdot\mathbf{e}_{z}}{|\mathbf{r}|}\Big{)}^{2}\bigg{)}, \tag{2}\]
where \(V_{pp\sigma}(\mathbf{r})=V_{pp\sigma}^{0}\exp\{q_{\sigma}(1-|\mathbf{r}|/d_{ AB})\}\Theta(R_{c}-|\mathbf{r}|)\) and \(V_{pp\pi}(\mathbf{r})=V_{pp\pi}^{0}\exp\{q_{\pi}(1-|\mathbf{r}|/a)\}\Theta(R_ {c}-|\mathbf{r}|)\). We take the pre-factor for the \(pp\sigma\)-hopping and \(pp\pi\)-hopping to be \(V_{pp\sigma}^{0}=0.48\) eV and \(V_{pp\pi}^{0}=-2.7\) eV, respectively. The carbon-carbon bond length is given by \(a=a_{0}/\sqrt{3}\) and the interlayer separation is taken to be \(d_{\text{AB}}=3.35\) A. We take the decay parameters of the Slater-Koster rules to be \(q_{\sigma}=d_{\text{AB}}/0.184a_{0}\) and \(q_{\pi}=1/0.184\sqrt{3}\)[36; 37]. Hoppings between carbon atoms separated by more than \(R_{c}=10\) A are neglected [90].
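A minimal sketch of Eq. (2) with the parameters above; we read the decay parameter as \(q_{\pi}=1/(0.184\sqrt{3})\), and the helper name is ours:

```python
import numpy as np

A0 = 2.46                      # graphene lattice constant (Angstrom)
A_CC = A0 / np.sqrt(3)         # carbon-carbon bond length a
D_AB = 3.35                    # interlayer separation (Angstrom)
R_C = 10.0                     # hopping cutoff (Angstrom)
V_SIGMA0, V_PI0 = 0.48, -2.7   # eV
Q_SIGMA = D_AB / (0.184 * A0)
Q_PI = 1.0 / (0.184 * np.sqrt(3))

def hopping(r):
    """Slater-Koster hopping t(r) between two pz orbitals at separation vector r."""
    d = np.linalg.norm(r)
    if d < 1e-6 or d > R_C:
        return 0.0
    cos2 = (r[2] / d) ** 2     # squared direction cosine with e_z
    v_sigma = V_SIGMA0 * np.exp(Q_SIGMA * (1.0 - d / D_AB))
    v_pi = V_PI0 * np.exp(Q_PI * (1.0 - d / A_CC))
    return v_sigma * cos2 + v_pi * (1.0 - cos2)

# nearest in-plane neighbour gives ~ -2.7 eV; vertically stacked atoms ~ 0.48 eV
print(hopping([A_CC, 0.0, 0.0]), hopping([0.0, 0.0, D_AB]))
```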
To include the effects of short-range Hubbard interactions, the on-site energy is determined by the mean-field Hubbard interaction
\[\varepsilon_{i\sigma}=U[n_{i\sigma^{\prime}}-\text{const.}], \tag{3}\]
where \(U\) is the Hubbard parameter of the carbon p\({}_{z}\) orbital, and \(n_{i\sigma^{\prime}}\) is the mean-field electron density on atom \(i\) with the spin \(\sigma^{\prime}\) being the opposite to \(\sigma\).
The electron density can be determined from the Bloch eigenstates \(\psi_{n\mathbf{k}\sigma}(\mathbf{r})\) (with subscripts \(n\) and \(\mathbf{k}\) denoting a band index and the crystal momentum, respectively) according to
\[\begin{split} n_{\sigma}(\mathbf{r})&=\sum_{n\mathbf{ k}}f_{n\mathbf{k}\sigma}|\psi_{n\mathbf{k}\sigma}(\mathbf{r})|^{2}\\ &=\sum_{j}n_{j\sigma}\chi_{j}(\mathbf{r}),\end{split} \tag{4}\]
where \(f_{n\mathbf{k}\sigma}=\Theta(\varepsilon_{F}-\varepsilon_{n\mathbf{k}\sigma})\) is the occupancy of state \(\psi_{n\mathbf{k}\sigma}\) with eigenvalue \(\varepsilon_{n\mathbf{k}\sigma}\) (where \(\varepsilon_{F}\) is the Fermi energy), \(\chi_{j}(\mathbf{r})=\sum_{\mathbf{R}}\phi_{z}^{2}(\mathbf{r}-\mathbf{t}_{j}-\mathbf{R})\) (with \(\mathbf{R}\) denoting the moire lattice vectors) and \(n_{j\sigma}\) is the total number of electrons in the \(j\)-th orbital with spin \(\sigma\). To characterise the magnetic ordering, we calculate the spin polarisation
\[\zeta=\frac{n_{\uparrow}-n_{\downarrow}}{n_{\uparrow}+n_{\downarrow}}. \tag{5}\]
To start the mean-field calculations, instead of guessing random magnetic configurations, we perform RPA spin-susceptibility calculations, following the methods outlined in Refs. [63; 64]. The eigenvalues of these calculations provide the critical interaction strength of an instability (\(U_{c}\)), and the eigenvector gives the form of the
magnetic order. By using these eigenvectors as an initial on-site interaction, we can induce spin polarisation, which can then be used to perform self-consistent calculations.
To obtain a self-consistent solution of the atomistic Hubbard equation, we use a simple mixing scheme, typically with a mixing parameter of \(0.1\) (\(0.1\) of the new electron spin density is mixed into the old density of the same spin). When determining the Fermi energy, the total electron number is again forced to be \(N+\nu\), but this does not restrict the two spin densities to be equal. We mix the up and down spin densities by the same amount, instead of working with the total electron density and the magnetic order parameter, as we find this is sometimes more stable.
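The mixing scheme can be summarised in a short loop; a minimal sketch, in which `update_density` is a hypothetical stand-in for one diagonalise-and-refill step of Eqs. (1)-(4):

```python
import numpy as np

def scf_hubbard(n_up0, n_dn0, update_density, mix=0.1, tol=1e-6, max_iter=500):
    """Linear-mixing self-consistency loop for the mean-field Hubbard problem.

    `update_density(n_up, n_dn)` builds H_sigma with on-site energies
    U * n_{i, -sigma}, fills N + nu electrons, and returns new spin densities.
    """
    n_up = np.asarray(n_up0, dtype=float)
    n_dn = np.asarray(n_dn0, dtype=float)
    for _ in range(max_iter):
        new_up, new_dn = update_density(n_up, n_dn)
        err = max(np.abs(new_up - n_up).max(), np.abs(new_dn - n_dn).max())
        # mix a fraction `mix` of the new density into the old one, per spin
        n_up = (1.0 - mix) * n_up + mix * new_up
        n_dn = (1.0 - mix) * n_dn + mix * new_dn
        if err < tol:
            break
    return n_up, n_dn
```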
Only the leading instability could be stabilised with this method [91]. Therefore, we also perform constrained calculations. The constrained calculations require an analytical form for the magnetic order to be specified (as outlined in Section III.1), say \(\zeta^{(j)}\), and we self-consistently determine the magnitude of this magnetic order by projecting onto the magnetic order parameter, \(\zeta\). The on-site energy term is then given by
\[\varepsilon_{i\sigma}=\pm\frac{U}{2}\zeta_{i}^{(j)}, \tag{6}\]
where the sign depends on the spin. A self-consistent solution is obtained through a linear mixing of the spin densities.
### Continuum Model Calculations
The mini-Brillouin Zone (mBZ) of the continuum model is spanned by the two reciprocal lattice vectors \(\mathbf{G}_{1}=\frac{2\pi}{L_{m}}\left(\frac{1}{\sqrt{3}},1\right)\) and \(\mathbf{G}_{2}=\frac{4\pi}{L_{m}}\left(-\frac{1}{\sqrt{3}},0\right)\), where \(L_{m}=\frac{a_{0}}{2\sin\left(\theta/2\right)}\) is the moire period and \(a_{0}\) is the lattice constant of graphene. These vectors form the basis for any reciprocal lattice vector, \(\mathbf{G}_{i}=n\mathbf{G}_{1}+m\mathbf{G}_{2}\) with \(n,m\in\mathbb{Z}\), \(i\in\mathbb{N}\). The first star of reciprocal lattice vectors, i.e. the six shortest non-zero \(\mathbf{G}_{i}\), is defined by \(n,m\in[-1,1]\).
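A minimal sketch constructing \(\mathbf{G}_{1}\), \(\mathbf{G}_{2}\) and the first star as defined above:

```python
import numpy as np

def mbz_vectors(theta_deg, a0=2.46):
    """Moire reciprocal lattice vectors G1, G2 and the first star defined above."""
    Lm = a0 / (2.0 * np.sin(np.radians(theta_deg) / 2.0))  # moire period
    G1 = (2.0 * np.pi / Lm) * np.array([1.0 / np.sqrt(3), 1.0])
    G2 = (4.0 * np.pi / Lm) * np.array([-1.0 / np.sqrt(3), 0.0])
    # keep the six shortest non-zero combinations n*G1 + m*G2, n, m in [-1, 1]
    star = [n * G1 + m * G2
            for n in (-1, 0, 1) for m in (-1, 0, 1)
            if (n, m) != (0, 0)
            and np.isclose(np.linalg.norm(n * G1 + m * G2), np.linalg.norm(G1))]
    return G1, G2, star
```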
The non-interacting continuum model is 4-fold degenerate, since it accounts for valley and spin quantum numbers, with the Hamiltonian at crystal momentum \(\mathbf{k}\) being written as
\[\hat{\mathcal{H}}_{TBG,\xi}\left(\mathbf{k}\right)=\begin{pmatrix}\hat{H}_{1,\xi}\left(\mathbf{k}\right)&\hat{T}\\ \hat{T}^{\dagger}&\hat{H}_{2,\xi}\left(\mathbf{k}\right)\end{pmatrix}, \tag{7}\]
where \(\hat{H}_{l,\xi}\) is the continuum single layer graphene Hamiltonian of valley \(\xi\), given by
\[\hat{H}_{l,\xi}\left(\mathbf{k}\right)=\xi\hbar v_{F}\left(\mathbf{k}-\xi\mathbf{K}_{l}\right)\cdot\boldsymbol{\tau}_{\theta,l}, \tag{8}\]
with \(v_{F}=\sqrt{3}|V_{pp\pi}^{0}|a_{0}/(2\hbar)\) denoting the Fermi velocity, \(\mathbf{K}_{l}\) the position of the Dirac point of layer \(l\), and \(\boldsymbol{\tau}_{\theta,l}=e^{\mathrm{i}\xi\tau_{z}\theta/2}\big{(}\tau_{x},\xi\tau_{y}\big{)}e^{-\mathrm{i}\xi\tau_{z}\theta/2}\), with \(\tau_{i}\) the Pauli matrices acting on the sublattice degree of freedom. The matrix \(\hat{T}\) is a periodic function in the moire unit cell that hybridises the layers. For small angles, the main contribution comes from the first three reciprocal lattice vectors, \(\mathbf{G}=(0,0)\), \(\mathbf{G}=\mathbf{G}_{1}\) and \(\mathbf{G}=\mathbf{G}_{1}+\mathbf{G}_{2}\)[38]
\[\hat{T}=\sum_{\mathbf{G}}\hat{T}(\mathbf{G})=\begin{pmatrix}u_{1}&u_{2}\\ u_{2}&u_{1}\end{pmatrix}+\begin{pmatrix}u_{1}&u_{2}e^{-2\pi\mathrm{i}\xi/3}\\ u_{2}e^{2\pi\mathrm{i}\xi/3}&u_{1}\end{pmatrix}+\begin{pmatrix}u_{1}&u_{2}e^{2\pi\mathrm{i}\xi/3}\\ u_{2}e^{-2\pi\mathrm{i}\xi/3}&u_{1}\end{pmatrix}, \tag{9}\]
where \(u_{1}=0.0797\) eV and \(u_{2}=0.0975\) eV [92] are, respectively, the hopping amplitudes for AA and AB/BA stacking, which take into account the atomic relaxation in the continuum model.
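A minimal sketch of the three dominant tunnelling blocks of Eq. (9), assuming the phase carries the valley index \(\xi\) as written above:

```python
import numpy as np

def tunneling_matrices(u1=0.0797, u2=0.0975, xi=+1):
    """The three dominant interlayer blocks T(G) of Eq. (9) for valley xi."""
    phi = 2.0 * np.pi * xi / 3.0
    T1 = np.array([[u1, u2], [u2, u1]], dtype=complex)
    T2 = np.array([[u1, u2 * np.exp(-1j * phi)],
                   [u2 * np.exp(1j * phi), u1]], dtype=complex)
    T3 = np.array([[u1, u2 * np.exp(1j * phi)],
                   [u2 * np.exp(-1j * phi), u1]], dtype=complex)
    return T1, T2, T3
```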
To account for electron-electron interactions, we add the mean-field Hartree and Fock terms to the Hamiltonian. The Hartree contribution to the Hamiltonian is given by
\[\hat{\mathcal{H}}_{H}=\sum_{i,\xi,\sigma}\int_{\Omega}\mathrm{d}^{2}\mathbf{r} \psi_{\xi,\sigma}^{i,\dagger}\left(\mathbf{r}\right)\psi_{\xi,\sigma}^{i} \left(\mathbf{r}\right)V_{H}\left(\mathbf{r}\right), \tag{10}\]
where \(i\in[1,4]\) labels the combined sublattice and layer degree of freedom, \(\sigma\) accounts for the spin, \(\Omega\) is the moire unit cell over which the real-space integrals run, and the local Hartree potential is given by
\[V_{H}\left(\mathbf{r}\right)=\int_{\Omega}\mathrm{d}^{2}\mathbf{r}^{\prime}v_ {C}\left(\mathbf{r}-\mathbf{r}^{\prime}\right)\left\langle\delta\rho\left( \mathbf{r}^{\prime}\right)\right\rangle. \tag{11}\]
Here \(\delta\rho\left(\mathbf{r}\right)\equiv\rho\left(\mathbf{r}\right)-\rho_{CN} \left(\mathbf{r}\right)\) denotes the fluctuation in charge density, with \(\rho\left(\mathbf{r}\right)=\sum_{\xi,\sigma}\psi_{\xi,\sigma}^{\dagger}\left( \mathbf{r}\right)\psi_{\xi,\sigma}\left(\mathbf{r}\right)\) corresponding to the charge density and \(\rho_{CN}\left(\mathbf{r}\right)\) is the average density corresponding to the non-interacting TBG at charge neutrality point. We assume that the Coulomb interaction is screened by a double-metallic gate [45]
\[v_{C}\left(\mathbf{q}\right)=\frac{2\pi e^{2}}{\epsilon}\frac{\tanh\left(d| \mathbf{q}|\right)}{|\mathbf{q}|}, \tag{12}\]
where \(d=40\) nm is the distance to the metallic gates and \(\epsilon=10\) is the dielectric constant [49; 58; 67].
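A minimal sketch of Eq. (12), assuming Gaussian units with \(e^{2}=14.399\) eV\(\cdot\)Å:

```python
import numpy as np

E2 = 14.399  # e^2 in eV * Angstrom (Gaussian units)

def v_coulomb(q, d=400.0, eps=10.0):
    """Double-gate screened Coulomb interaction of Eq. (12).

    q in 1/Angstrom, d = 40 nm = 400 Angstrom; result in eV * Angstrom^2.
    """
    q = np.asarray(q, dtype=float)
    safe_q = np.maximum(q, 1e-12)
    # tanh(d q)/q -> d in the q -> 0 limit
    return np.where(q > 1e-12,
                    2.0 * np.pi * E2 / eps * np.tanh(d * safe_q) / safe_q,
                    2.0 * np.pi * E2 * d / eps)
```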
The Fock contribution to the Hamiltonian is given by
\[\hat{\mathcal{H}}_{F}=\sum_{i,j,\xi,\sigma}\int_{\Omega}\mathrm{d}^{2}\mathbf{r }\mathrm{d}^{2}\mathbf{r}^{\prime}\psi_{\xi,\sigma}^{i,\dagger}\left(\mathbf{r }\right)V_{F}^{ij}\left(\mathbf{r},\mathbf{r}^{\prime}\right)\psi_{\xi,\sigma}^{j }\left(\mathbf{r}^{\prime}\right), \tag{13}\]
where \(i,j\) run over the sublattice and layer indices. The non-local Fock potential is given by

\[V_{F}^{ij}\left(\mathbf{r},\mathbf{r}^{\prime}\right)=-\left\langle\psi_{\xi,\sigma}^{j,\dagger}\left(\mathbf{r}^{\prime}\right)\psi_{\xi,\sigma}^{i}\left(\mathbf{r}\right)\right\rangle v_{C}\left(\mathbf{r}-\mathbf{r}^{\prime}\right). \tag{14}\]
To evaluate the matrix elements of \(\hat{\mathcal{H}}_{F}\), defined in Eq. (13), in reciprocal space, we transform the non-local Fock potential into Fourier space. By this procedure we compute the Fock matrix elements as

\[\langle\mathbf{k}+\mathbf{G},\xi,\sigma,i|\hat{\mathcal{H}}_{F}|\mathbf{k}+\mathbf{G}^{\prime},\xi,\sigma,j\rangle=-\sum_{n}\sum_{\mathbf{k}^{\prime\prime},\mathbf{G}^{\prime\prime}}\psi_{n,\xi,\sigma}^{i}\left(\mathbf{k}^{\prime\prime}+\mathbf{G}+\mathbf{G}^{\prime\prime}\right)\psi_{n,\xi,\sigma}^{j,*}\left(\mathbf{k}^{\prime\prime}+\mathbf{G}^{\prime}+\mathbf{G}^{\prime\prime}\right)v_{C}\left(\mathbf{k}-\mathbf{k}^{\prime\prime}+\mathbf{G}^{\prime\prime}\right), \tag{15}\]
where the index \(n\) runs over the occupied bands at a given Fermi energy. For the Hartree-Fock calculations we work with a continuum model of TBG expanded up to the third star. We use a density of points of \(2\)-\(6\times 10^{5}\) Å\({}^{2}\) in the mBZ, depending on the twist angle. The Hartree-Fock potential normally converges after \(5\)-\(6\) self-consistency steps, and convergence has been verified with respect to increasing density of points.
To include the \(\alpha\)-magnetic potential (\(\alpha=\text{M, N, H}\)) in the continuum model, we use a scalar, sublattice- and spin-dependent potential expressed through its harmonic decomposition in the first star of reciprocal lattice vectors,

\[\hat{H}^{\alpha}(\mathbf{G}_{i},\mathbf{G}_{j})=U_{\delta}^{\alpha}(\mathbf{G}_{i}-\mathbf{G}_{j}),\qquad i,j=0,\ldots,6, \tag{16}\]

where \(\mathbf{G}_{0}=\mathbf{0}\) and \(\mathbf{G}_{1},\ldots,\mathbf{G}_{6}\) are the first-star vectors. The full form of \(U_{\delta}^{\alpha}(\mathbf{G}_{i}-\mathbf{G}_{j})\) is discussed in Section III.3. The final Hamiltonian, which combines the effective magnetic potential derived from atomistic calculations with the Hartree-Fock potential at half-filling, is given by
\[\hat{\mathcal{H}}\left(\mathbf{k}\right)=\hat{\mathcal{H}}_{TBG}\left(\mathbf{k}\right)+\hat{\mathcal{H}}_{F}\left(\mathbf{k}\right)+\hat{\mathcal{H}}^{\alpha}. \tag{17}\]

Note that this final Hamiltonian is not treated self-consistently: the effective magnetic Hamiltonian is simply added to the converged self-consistent Hartree-Fock Hamiltonian, which is equivalent to treating the magnetic orderings at first order in perturbation theory.
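As a concrete illustration of Eq. (16), the sketch below assembles the plane-wave blocks of the effective magnetic potential over the basis \(\{\mathbf{G}_{0}=\mathbf{0},\mathbf{G}_{1},\ldots,\mathbf{G}_{6}\}\), using the first-star decomposition discussed in Section III.3. The function names are ours, and `delta2_of_G` is a hypothetical callable so that a sign- or phase-dependent modulation (as for HAFM) can be supplied:

```python
import numpy as np

TAU_Z = np.diag([1.0, -1.0])  # Pauli matrix in sublattice space

def magnetic_potential(stars, delta1, delta2_of_G):
    """Blocks H^alpha_{ij} = U_delta(G_i - G_j) over the basis {G_0=0, G_1..G_6}.

    `stars` is the list [G_0, G_1, ..., G_6]; each block is 2x2 in sublattice
    space; `delta2_of_G` maps a first-star vector to its (possibly complex)
    weight.
    """
    n = len(stars)
    H = np.zeros((2 * n, 2 * n), dtype=complex)
    g1 = np.linalg.norm(stars[1])  # first-star radius
    for i in range(n):
        for j in range(n):
            dG = np.asarray(stars[i]) - np.asarray(stars[j])
            if i == j:
                block = delta1 * TAU_Z                    # constant term
            elif np.isclose(np.linalg.norm(dG), g1):
                block = (delta2_of_G(dG) / 6.0) * TAU_Z   # modulated term
            else:
                block = np.zeros((2, 2))
            H[2 * i:2 * i + 2, 2 * j:2 * j + 2] = block
    return H
```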
## III Results
### Short Range Atomistic Hubbard Interactions
From the RPA (\(\mathbf{q}=0\)) spin-susceptibility calculations [63; 64], a number of leading antiferromagnetic instabilities are found: modulated antiferromagnetic order (MAFM), nodal antiferromagnetic order (NAFM) and hexagonal antiferromagnetic order (HAFM). To find which instability is the ground state at a given twist angle and doping level, (mean-field) atomistic Hubbard calculations must be performed. As these atomistic calculations are extremely computationally expensive at the magic angle, we focus on \(1.54^{\circ}\) at charge neutrality. At this twist angle and doping level, TBG's leading instability was found to be MAFM (with a critical Hubbard interaction of \(U_{c}\approx 5.1\) eV), with NAFM and HAFM having slightly larger critical interaction strengths (of \(U_{c}\approx 5.4\) eV). For these latter instabilities, we perform unconstrained and constrained atomistic Hubbard calculations, as outlined in Section II.1.
The MAFM instability, as seen in Fig. 1(b) and (c), is characterised by a sub-lattice oscillation in the magnetic order parameter \(\zeta=(n_{\uparrow}-n_{\downarrow})/(n_{\uparrow}+n_{\downarrow})\) that is modulated throughout the moire supercell [63]. In Fig. 1(b), which plots the magnetic structure along the diagonal of the moire superlattice, sublattice A is shown in black and sublattice B in grey for the top graphene layer. To show its real-space structure more clearly, Fig. 1(c) plots the magnetic order parameter on sublattice B of the top layer over the moire superlattice. The constant contribution of the MAFM order is larger than the moire-scale variation, which means the sub-lattice polarisation has the same sign throughout the moire unit cell. The moire-scale variation of the magnetic order peaks in the AA regions, as might be expected from the LDOS of TBG peaking in the AA regions [36]. The MAFM order can be approximated with the following analytical form
\[\zeta^{M}(\mathbf{r})\approx\zeta_{s}^{\prime}+\frac{\zeta_{s}}{6}\sum_{i=1}^ {6}\cos(\mathbf{G}_{i}\cdot\mathbf{r}), \tag{18}\]
where \(\mathbf{G}_{i}\) are the first-star reciprocal lattice vectors, \(\zeta_{s}^{\prime}\) is the constant sublattice polarisation, and \(\zeta_{s}\) describes how this sublattice polarisation varies on the moire scale. For sublattice \(A_{l}\) (\(B_{l}\)), where \(l=1,2\) is the layer index, the sign of the polarisation is \(-\) (\(+\)), or vice versa. Note this equation assumes the AA region is located at the origin of the moire unit cell.
In Fig. 1(b) and (c) the unconstrained, self-consistent \(\zeta\) for \(1.54^{\circ}\) at charge neutrality and \(U=5.4\) eV is shown. The peaks of \(\zeta\approx 0.1\) reside in the AA regions. If one electron were delocalised across all atoms in the moire unit cell, which contains \(5548\) atoms at \(1.54^{\circ}\), then \(\zeta\approx 10^{-4}\). Therefore, the magnetic structures from these atomistic calculations involve many more electrons than just the 4 flat-band electrons, in agreement with other works [69; 70; 71].
In Fig. 1(a), we show the corresponding self-consistent quasi-particle band structure. The two valleys, K and K', have been identified by applying the valley operator to the states (see Refs. [76; 93; 77] for details of this calculation), and are shown in solid black and dotted grey, respectively. Since the MAFM order breaks C\({}_{2}\) symmetry, it opens a gap at the Dirac cones at the K/K' points of the moire Brillouin zone of TBG. This instability could not be stabilised at \(U=5.1\) eV, but for \(U\) larger than \(5.4\) eV the constant contribution dominates and the order becomes graphene-like (\(\zeta_{s}^{\prime}\gg\zeta_{s}\), such that there is only a constant sub-lattice polarisation in each graphene sheet), with the gap at the K/K' points becoming very large (hundreds of meV) [91].
Similarly, NAFM also has a moire-scale peak in the magnetic order in the AA regions, but it does not possess a constant contribution to \(\zeta\), as seen in the self-consistent
values plotted in Fig. 1(e). The corresponding real-space structure is shown in Fig. 1(f), where nodes in the magnetic order around the AA region separate regions in which the spin polarisation \(\zeta\) has opposite sign. This magnetic order is referred to as nodal anti-ferromagnetic order because \(\zeta\) passes through 0 between the AA and AB/BA regions, causing the sign of \(\zeta\) on each sub-lattice to change between these types of stacking [63]. Therefore, it can be described by
\[\zeta^{N}(\mathbf{r})\approx\frac{\zeta_{s}}{6}\sum_{i=1}^{6}\cos(\mathbf{G}_{i}\cdot\mathbf{r}). \tag{19}\]

This was not the leading instability [63], and we found that unconstrained calculations could never stabilise this order, as it would always eventually revert to MAFM order. Therefore, we performed constrained mean-field Hubbard calculations to find the "excited" magnetic order, as explained in Section II.1. The quasiparticle band structure is shown in Fig. 1(d), and again it is a Mott insulator. A large gap opens at the Dirac cone because the Hubbard interaction parameter is taken as \(U=5.94\) eV. This instability could also be stabilised with \(U=5.67\) eV and with larger \(U\)'s in the constrained method.
Finally, the HAFM instability is similar to NAFM, but with the magnetic order parameter peaking on the AB/BA regions instead of the AA regions. The self-consistent HAFM magnetic structure is shown in
Figure 1: (a,d,g) Self-consistent quasi-particle band structures for the studied magnetic orders of TBG at \(1.54^{\circ}\) and charge neutrality along the high symmetry path. For MAFM, \(U=5.4\) eV and unconstrained calculations were performed; whereas, for NAFM and HAFM, \(U=5.94\) eV and constrained calculations were performed. (b,e,h) Corresponding self-consistent magnetic order parameter plotted along the diagonal of the moiré supperlattice, where \(\mathbf{R}_{1}\) and \(\mathbf{R}_{2}\) are the moiré lattice vectors. Sublattice A is shown in black and sublattice B is shown in grey for the top graphene layer (bottom layer not shown). (c,f,i) Corresponding plots in real space for a single layer and sublattice, where only sublattice B of the top layer is shown. Note, the \(U\)’s were chosen to be slightly above the critical values. For MAFM, if \(U=5.94\) eV is used, the gap at the Dirac point is extremely large [91].
Fig. 1(h) and (i). The sign of \(\zeta\) changes between the AB and BA regions of the moire unit cell. As the peaks of \(\zeta\) occur on the AB/BA regions, which form a hexagonal lattice on the moire scale, this ordering is referred to as hexagonal anti-ferromagnetic order. It can be described by the analytical form
\[\zeta^{H}(\mathbf{r})\approx\frac{\zeta_{s}}{6}\sum_{i=1}^{6}\sin(\mathbf{G}_ {i}\cdot\mathbf{r}). \tag{20}\]
This instability never appears to be the leading one, but its critical Hubbard interaction of \(\sim 5.4\) eV is only slightly higher than that of the leading instability. Therefore, we again found that constrained calculations were required to obtain mean-field values of its order parameter, as explained in Section II.1. In Fig. 1(g) we show the mean-field quasi-particle band structure for \(U=5.94\) eV (the smallest \(U\) which could stabilise the order in constrained calculations), with the two valleys coloured solid black and dotted grey. We find that this magnetic order does not create a gap at the K/K' point, despite it having a sublattice oscillation. This is because of the moire-scale sine nature of its variation, which means it does not break C\({}_{2}\) symmetry on the moire scale. It does, however, split the valleys at the K/K' points, pushing one valley higher in energy and the other lower. This is analogous to the effect a perpendicular electric field has on the electronic structure.
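The analytic profiles of Eqs. (18)-(20) are straightforward to evaluate numerically; a minimal sketch is given below. Note that a sine summed over all six first-star vectors cancels pairwise (the star contains \(\pm\mathbf{G}\) pairs), so for the HAFM profile we assume the intended sum runs over three vectors satisfying \(\mathbf{g}_{1}+\mathbf{g}_{2}+\mathbf{g}_{3}=0\):

```python
import numpy as np

def order_profiles(r, G1, G2, zeta_s, zeta_s_const=0.0):
    """Evaluate the MAFM/NAFM/HAFM profiles of Eqs. (18)-(20) at points r (N x 2)."""
    r = np.atleast_2d(np.asarray(r, dtype=float))
    star6 = np.array([G1, G2, G1 + G2, -G1, -G2, -(G1 + G2)])  # full first star
    ph6 = r @ star6.T                       # N x 6 phases G_i . r
    nafm = (zeta_s / 6.0) * np.cos(ph6).sum(axis=1)
    mafm = zeta_s_const + nafm              # Eq. (18) = constant + Eq. (19)
    star3 = np.array([G1, G2, -(G1 + G2)])  # g1 + g2 + g3 = 0
    hafm = (zeta_s / 3.0) * np.sin(r @ star3.T).sum(axis=1)
    return mafm, nafm, hafm
```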
These calculations give some insight into the magnetic order from Hubbard interactions in TBG. However, we performed these well away from the magic angle and for Hubbard interaction parameters that are large (\(U\approx 5.5\) eV for these calculations, but the value is thought to be \(\sim 4\) eV [75]). Therefore, to understand the role of these instabilities close to the magic angle, we aim to include these magnetic states in the continuum model.
### Long Range Interactions in the Continuum Model
In Fig. 2 we show the Hartree-Fock quasi-particle band structure (solid blue lines in all subplots) for a number of twist angles at charge neutrality, in addition to the non-interacting band structure (solid red lines). At \(1.54^{\circ}\), Figs. 2(a), (d) and (g), we find that the Hartree-Fock potential only slightly modifies the non-interacting band structure, indicating that this twist angle is too large for the formation of an insulating state. At a twist angle of \(1.25^{\circ}\), Figs. 2(b), (e) and (h), the non-interacting bands are flat enough for the Hartree-Fock potential to open a small gap, significantly smaller than the bandwidth, at the Dirac cones (the K and K' points of the moire Brillouin zone). Right at the magic angle of \(1.05^{\circ}\), Figs. 2(c), (f) and (i), the Hartree-Fock interactions induce a large gap at the K and K' points, on a similar scale to the bandwidth. At half filling (charge neutrality) these insulating states are characterised by a broken sublattice symmetry and a preserved spin and valley symmetry with respect to the non-interacting picture. This implies that the mean-field ground state associated with the Hartree-Fock band structure is actually a linear combination of 4 states with different spin and valley indices. These calculations are in good agreement with a large body of literature which investigates these long-ranged interactions...
### Short Range Interactions in the Continuum Model
In Section III.1, we described the leading anti-ferromagnetic instabilities obtained from RPA calculations of the atomistic model, performed mean-field Hubbard calculations of these, and found analytical forms which approximate the spin polarisation well, with the magnetic potential they create simply being \(U\zeta/2\). By analogy with how the Hartree potential is transferred from the atomistic model to the continuum model, a general expression for the potential induced by these anti-ferromagnetic instabilities is
\[U_{\delta}(\mathbf{r})=\delta_{1}\tau_{z}+\frac{1}{6}\sum_{i=1}^{6}\delta_{2}(\mathbf{G}_{i})\tau_{z}e^{\mathrm{i}\mathbf{G}_{i}\cdot\mathbf{r}}, \tag{21}\]
where \(\delta_{1}\) corresponds to the constant sublattice polarisation strength, \(\delta_{2}(\mathbf{G}_{i})\) is the moire-modulated part of the potential, i.e. the weight associated with the expansion of the potential in the \(i\)-th reciprocal lattice vector, and \(\tau_{z}\) is the Pauli matrix acting on the sublattice degree of freedom.
For the MAFM instability both \(\delta_{1}\) and \(\delta_{2}\) are real valued. In the case of NAFM order there is no constant contribution to \(\zeta^{N}\), so \(\delta_{1}=0\) and \(\delta_{2}\) is a real number. Similarly, for the HAFM instability \(\delta_{1}=0\), but \(\delta_{2}\) is a purely imaginary number, so the modulation is a sine function. Both the MAFM and NAFM orderings are degenerate in the valley index, but the HAFM is not, and the modulated contribution to the potential must be complex conjugated when exchanging valleys. The values of the parameters \(\delta_{1}\) and \(\delta_{2}\) are obtained numerically within the self-consistent atomistic Hubbard calculation at a twist angle of \(1.54^{\circ}\) and at charge neutrality, as described in
\begin{table}
\begin{tabular}{l c c} \hline \hline Order & \(|\delta_{1}|\) & \(|\delta_{2}|\) \\ \hline MAFM (\(U=5.4\) eV) & 12.47 & 12.54 \\ NAFM (\(U=5.94\) eV) & 0 & 237.09 \\ HAFM (\(U=5.94\) eV) & 0 & 211.89 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameters \(|\delta_{1}|\) and \(|\delta_{2}|\) extracted from the atomistic Hubbard calculations for the different orderings considered in this work. All values are expressed in meV.
Section III.1. Their values for the different magnetic orders are summarised in Tab. 1.
These parameters should be linear in the interaction strength and the spin-polarised electron density, meaning an overall non-linear dependence on \(U\), but we shall not seek self-consistent solutions to this potential in the continuum model here. Instead, these potentials are included in the continuum model through perturbation theory on the converged self-consistent Hartree-Fock calculations, as explained in Section II.2. Note that the sublattice is polarised in the Hartree-Fock calculations, which means the overall sign of the potential can be chosen in two different ways, but we always choose the sign such that the sublattice polarisation matches, as this should increase any gaps, and further lower the energy. In the Appendix, we show the band structures in the continuum model (without the Hartree-Fock contribution) at \(1.54^{\circ}\) with these perturbed potentials, and find good agreement with the atomistic calculations.
For twist angles smaller than \(1.54^{\circ}\), the Hubbard interaction in the continuum model should scale as \(\left|\delta_{1/2}\left(\theta\right)\right|=\left|\delta_{1/2}\left(1.54^{\circ}\right)\right|\left(\frac{\theta}{1.54^{\circ}}\right)^{2}\)[41]. However, extending the parameters to the magic angle in this way does not yield very physical results, as the gaps in the flat bands are much larger than is ever found in experiments, as shown in Fig. A2(c), (f) and (i). This is because obtaining a mean-field solution of the magnetic order at \(1.54^{\circ}\) required \(U=5.4\)-\(6\) eV, whereas a more physical value is \(U=4\) eV or smaller [75]. Moreover, the large interaction strength gives rise to large polarised spin densities, as seen in Section III.1, where the \(\zeta\) values indicated that not just the flat-band electrons were being polarised. Thus, we rescale these parameters to obtain more suitable estimates of the changes to the electronic structure. In Fig. 2 we show the results with a scaling factor of 3 in the main text, with other scaling factors shown in the Appendix. As these perturbed calculations are not self-consistent, we cannot say which state is the ground state. However, we can interpret the changes to the electronic structure, which provides information about where these interactions might be significant.
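Numerically, the quadratic rescaling and the empirical reduction amount to the following; the helper name, and the assumption that the factor of 3 divides the bare \(|\delta|\) values, are ours:

```python
def rescale_delta(delta_ref_meV, theta_deg, theta_ref=1.54, reduction=3.0):
    """Quadratic angle rescaling of |delta_{1,2}| plus the empirical factor."""
    return delta_ref_meV * (theta_deg / theta_ref) ** 2 / reduction

# NAFM |delta2| = 237.09 meV at 1.54 deg -> roughly 37 meV at the magic angle
print(round(rescale_delta(237.09, 1.05), 1))
```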
First we discuss the MAFM order, as shown in Figs. 2(a), (b) and (c). At a twist angle of \(1.54^{\circ}\), Fig. 2(a), we find a small gap at the K/K' points, which is more significant than the Hartree-Fock distortions. For the smaller twist angle of \(1.25^{\circ}\), Fig. 2(b), a similar situation is found: the magnetic order opens a gap at the K/K' points while the Hartree-Fock potential is responsible for minor adjustments to the band structure. At the magic angle of \(1.05^{\circ}\), Fig. 2(c), the situation changes completely:
Figure 2: Comparison of the low-energy band structure, along the path K \(\rightarrow\Gamma\rightarrow\) M \(\rightarrow\) K’ in the mBZ, of the non-interacting case in red, HF in blue, and HF+MAFM (a, b, c), HF+NAFM (d, e, f), HF+HAFM (g, h, i) in orange, for \(\theta=1.54^{\circ}\) (a, d, g), \(\theta=1.25^{\circ}\) (b, e, h) and \(\theta=1.05^{\circ}\) (c, f, i). Dashed lines correspond to the opposite valley.
now the magnetic potential only slightly modifies the gap at the K/K' points, while the Hartree-Fock contribution dominates the deformations to the electronic structure.
Next, we move on to the NAFM ordering, as seen in Figs. 2(d), (e) and (f). At the largest twist angle of \(1.54^{\circ}\), Fig. 2(d), this magnetic order creates a significant gap at the K/K' points. This could be attributed to the non-self-consistent nature of the calculations, and from using large values of the parameters. These large band deformations persist at the smaller twist angles of \(1.25^{\circ}\), Fig. 2(e), and \(1.05^{\circ}\), Fig. 2(f). Even if smaller values of the parameters are utilised, the NAFM order induces large band deformations, lowering the energy of the occupied valence band. Therefore, it appears that this magnetic order can compete with the Hartree-Fock contribution.
Finally, we describe the effect of HAFM. As can be seen in Figs. 2(g), (h) and (i), the HAFM order does not create a gap at the K/K' points. Instead, it causes the Dirac cones at K and K' to shift up and down, respectively, for a single spin and valley channel. At \(1.54^{\circ}\), Fig. 2(g), this effect is almost imperceptible but stronger than that induced by the Hartree-Fock interactions. For the smaller twist angle of \(1.25^{\circ}\), Fig. 2(h), the situation is similar but the energy splitting between the K and K' points increases. At the magic angle, Fig. 2(i), it contributes only slightly to reshaping the band structure, which is instead heavily affected by the Hartree-Fock contribution, so we can safely say that HAFM is a secondary effect in this case.
## IV Discussion
Overall, it appears that these magnetic potentials are more significant away from the magic angle, but at angles close enough to it that broken symmetry phases can still occur [64; 26]. For example, \(1.25^{\circ}\) seems to be the angle most significantly affected by these Hubbard potentials relative to the Hartree-Fock contribution. At the magic angle, the Hartree-Fock contribution dominates, and at large twist angles the effects are small relative to the bandwidth, suggesting that these magnetic orders are not significant there. This twist angle dependence could explain why the predictions of Klebl _et al._[64], in terms of the twist angle and doping dependence of magnetic states, agreed well with subsequent experiments [26]: these Hubbard interactions are important close to the onset of broken symmetry phases, but close to the magic angle they are dominated by long-ranged Hartree-Fock interactions [41].
The NAFM order appears to affect the electronic structure most significantly, substantially lowering the eigenvalues of the occupied valence band, and it is therefore a possible candidate for magnetic order in TBG. In contrast, the MAFM order affects the electronic structure more weakly. Finally, the HAFM appears to affect the electronic structure only slightly. This magnetic order should, however, couple to perpendicular electric fields [91; 75], which could make this ordering tendency more important. These perturbative calculations are worth performing because they offer a very natural explanation for the correlated insulating states in TBG [4]. From the Hartree-Fock calculations, a spin-valley degenerate insulating state is obtained. The atomistic Hubbard interaction, however, should break this symmetry and cause the onset of magnetic order.
We have focused on TBG here, but many more moire materials comprised of graphene exist [94; 95]. Perhaps the most promising are those with a \(\pm\theta\) twist between each adjacent pair of graphene layers [96; 97; 98; 93; 99; 100; 101; 102]. These moire graphene multilayers have been shown to host highly tunable superconducting phases [103; 104; 105; 106; 107], and as the number of layers increases, the superconducting phase occurs over wider and wider doping ranges [108; 109]. Fischer _et al._[100] have shown that similar types of magnetic order occur in these systems, which means it will be possible to apply the method developed here. Another example of moire structures is graphene twisted on a graphene multilayer, such as twisted monolayer-bilayer graphene [110; 111]. The magnetic structure of these systems was shown to be more complex by Goodwin _et al._[112], which suggests the approach described here could be more difficult to utilise. Finally, another class of moire graphene multilayers is twisted bilayers composed of graphene multilayers, such as twisted double bilayer graphene [113; 114; 115; 116; 117; 118]. Further investigation of these systems would be of interest.
## V Conclusion
In summary, starting from atomistic methods, we studied several leading magnetic instabilities of charge neutral TBG at a large twist angle. These calculations permitted analytical forms for the magnetic orderings to be constructed. The corresponding Hubbard potentials were then investigated perturbatively on top of self-consistent Hartree-Fock calculations in the continuum model, allowing a comparison of long- and short-range exchange interactions. From these calculations, our take-home conclusions are:
1. These atomistic Hubbard interactions break the spin-valley degeneracy of the insulating state at charge neutrality of TBG obtained from self-consistent Hartree-Fock calculations. Therefore, these insulating states are likely to have some Mottness.
2. These magnetic orders are most significant for intermediate twist angles between the magic-angle and angles where non-interacting physics is sufficient. At the magic-angle, the Hartree-Fock contribution dominates, and at large angles the bandwidth dominates.
3. Out of the studied magnetic orders, nodal antiferromagnetic order appears to be the most significant for changes to the electronic structure.
It is hoped that these results further motivate the inclusion of atomistic effects in the continuum model. Moreover, performing self-consistent magnetic calculations should also be possible, and investigating such ordering tendencies in other moire graphene multilayers is now within reach.
## VI Acknowledgments
A.J.P., P.A.P. and F.G. acknowledge support from the Severo Ochoa programme for centres of excellence in R&D (Grant No. SEV-2016-0686, Ministerio de Ciencia e Innovacion, Spain); from the European Commission, within the Graphene Flagship, Core 3, grant number 881603 and from grants NMAT2D (Comunidad de Madrid, Spain) and SprQuMat (Ministerio de Ciencia e Innovacion, Spain). ZG was supported through a studentship in the Centre for Doctoral Training on Theory and Simulation of Materials at Imperial College London funded by the EPSRC (EP/L015579/1). We acknowledge funding from EPSRC grant EP/S025324/1 and the Thomas Young Centre under grant number TYC-101. We acknowledge the Imperial College London Research Computing Service (DOI:10.14469/hpc/2232) for the computational resources used in carrying out this work. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101067977. The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) is acknowledged for support through RTG 1995, within the Priority Program SPP 2244 "2DMP" and under Germany's Excellence Strategy-Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC2004/1 - 390534769. We acknowledge support from the Max Planck-New York City Center for Non-Equilibrium Quantum Phenomena. Spin susceptibility calculations were performed with computing resources granted by RWTH Aachen University under projects rwth0496 and rwth0589.
|
2309.14877 | Explainable Sustainability for AI in the Arts | AI is becoming increasingly popular in artistic practices, but the tools for
informing practitioners about the environmental impact (and other
sustainability implications) of AI are adapted for other contexts than creative
practices -- making the tools and sustainability implications of AI not
accessible for artists and creative practitioners. In this position paper, I
describe two empirical studies that aim to develop environmental sustainability
reflection systems for AI Arts, and discuss and introduce Explainable
Sustainability in for AI Arts. | Petra Jääskeläinen | 2023-09-26T12:20:18Z | http://arxiv.org/abs/2309.14877v2 | # Explainable Sustainability for AI in the Arts
###### Abstract.
AI is becoming increasingly popular in artistic practices, but the tools for informing practitioners about the environmental impact (and other sustainability implications) of AI are adapted for other contexts than creative practices - making the tools and sustainability implications of AI not accessible for artists and creative practitioners. In this position paper, I describe two empirical studies that aim to develop environmental sustainability reflection systems for AI Arts, and discuss and introduce Explainable Sustainability in for AI Arts.
Keywords: Artificial intelligence, AI, Explainable Sustainability
solutionism is one of the large problems from a sustainability point-of-view) and 2) that the complexity of the climate crisis and of sustainability phenomena cannot be captured solely by explanations of energy consumption, CO2, or other quantified measures relating to the immediate use of the system. However, these quantifications can provide a feasible starting point for practical research explorations in Explainable Sustainability for AI arts. As two cases of our ongoing work, I present 1) the Green Notebook and 2) a Visual Feedback System for Environmental Impact, as exploratory and preliminary cases of explainable environmental sustainability for AI in the arts.
## 2. Sustainability of AI Art and Explainable Sustainability for AI Arts
While there is a limited amount of prior research on the sustainability of AI Art, one of the central papers on the topic (Gomez et al., 2017) proposed a research agenda for sustainability and AI Arts, while also providing a conceptual framework that takes into account the different stages of the artist's process and the different AI hardware used in these stages. The paper also highlights how this type of quantified analysis has its limitations, and that in the absence of political steering (or, to add, of efforts to facilitate cultural change) the environmental impact of AI arts is likely to be a concern (Gomez et al., 2017). In this paper, I approach environmental sustainability specifically from the perspective of facilitating cultural change, and build on the assumption that in order to create such change, the users of these systems should first understand what the environmental impact of their actions might be. This is where _Explainable Sustainability for AI in the Arts_ comes in.
As (Bogorian et al., 2016) note, there is a significant lack of explainable AI research for the arts. While explainable AI is a well-established approach for designing AI systems that aims to provide good and relevant explanations of the AI system's reasoning to the user, Explainable Sustainability raises questions of its own, and for environmental sustainability in particular: what explanations are good for users in different contexts? We propose this as one of the central questions for _Explainable Sustainability for AI in the Arts_. Environmental sustainability can in some cases be approached as CO2 emissions or energy impact, but sustainability as a phenomenon is more complex than that. It involves, for example, the relation to planetary boundaries (Kurz et al., 2017) and users' attitudes towards sustainability, which are harder to explain in and by the system. Nevertheless, we propose that explainable sustainability for AI Arts should entail the immediate measures of the AI Art technology itself (energy consumption, CO2, life cycle) but also aim to inform users about these more abstract concepts that relate to sustainability.
## 3. Case 1: The Green Notebook
The Green Notebook (Bogorian et al., 2016) is a design artifact that was developed in an exploratory RtD study to explore notebook-based environmental sustainability reflection. Essentially, the diary resembled a workbook/notebook where artists could write down their ideas and the Notebook would provide them information about the environmental sustainability of their choices (energy consumption that relates to choices of GPU, hardware, AI training, etc.) and prompt questions that would facilitate guided sustainability reflection (see Fig. 1 for image of the prototype). In this study, many insights emerged (which we can not cover sufficiently in the scope of this paper but rather highlight the key findings). One of them being varying conversational strategies used by artists. In (Bogorian et al., 2016), we propose two dimensions for conversational strategies that artists used when engaging in such sustainability reflection: command vs. conversation based and abstract vs. specific. These identified strategies can be taken into consideration when developing conversational explainable sustainability systems for AI arts. A particular challenge in artists' communication about their work was translating the abstract and conversation-based commands into concrete information about the environmental impact. Furthermore, design trade-offs found in the context of the Notebook study entailed efficiency vs. politeness, and focus vs. integration. The efficient conversational strategies were more information based, whereas polite conversation strategies involved
more filler words, and artists seemed to have differing preferences regarding them. The Notebook itself prioritized focus (being detached from the artistic process), while our other prototype in Case 2 (see Section 4) explored an integrated tool design.
## 4. Case 2: Integrated Environmental Impact Feedback System
In Case 2 (Kumar et al., 2017), we explored a system design that integrated environmental sustainability into the system interface itself. This design approach was thus integrative, in contrast to Case 1 (see Section 3, focus vs. integration). The prototype included color-coded and graphic feedback on the energy consumption of one specific browser-based generative AI system (see Fig. 1). The system changed the visual feedback based on the choices that the user made, so that the environmental sustainability feedback was presented to the users in real time. With increasing energy consumption, the prototype turned red, while intermediate consumption was represented by orange and light consumption by green. The visual feedback was informed by a survey study of associations regarding energy consumption and environmental sustainability, including colors, symbols, and graphs. Such feedback can prove effective, but as our initial results suggest, caution should be practiced in utilizing it. For example, providing green feedback might contribute to greenwashing by giving the impression that users are working in a sustainable manner, while the overarching consumption of all users across the world may still have a significant environmental impact. Thus, these design strategies may be useful in prompting individual users about their consumption in specific tasks, but designers of such systems need to be mindful of their definition of consumption and of the notion of sustainability in a wider context.
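As a rough, purely illustrative sketch of the kind of real-time mapping described above (the thresholds, per-action energy costs, and function names here are hypothetical and not taken from the actual prototype, which derived its visual language from the survey study), the following Python snippet turns a cumulative energy reading into a traffic-light color:

```python
# Purely illustrative sketch of a traffic-light energy mapping; the
# thresholds, per-action costs, and function names are hypothetical.

def color_for_consumption(watt_hours, low=5.0, high=20.0):
    """Map a cumulative energy reading (Wh) to a feedback color."""
    if watt_hours < low:
        return "green"   # light consumption
    if watt_hours < high:
        return "orange"  # intermediate consumption
    return "red"         # heavy consumption

# Re-color the interface after each user action in a session:
session_energy = 0.0
for action_cost in [1.2, 4.7, 9.3, 11.0]:   # hypothetical per-action costs (Wh)
    session_energy += action_cost
    print(f"{session_energy:5.1f} Wh -> {color_for_consumption(session_energy)}")
```

Even such a simple mapping already embeds a normative choice of thresholds, which connects to the greenwashing caveat raised above.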
## 5. Discussion and Conclusion
In this position paper, we have introduced two RtD studies on explainable environmental sustainability, with a focus on informing AI artists about the energy consumption of their practices. We have also discussed dimensions that would be relevant to include in future efforts towards explainable sustainability that go past the immediate energy usage of the system. We suggest that, while simultaneously exploring the design space and concrete strategies for changing user behavior towards a reduction of computing resources and CO2, these systems should also aim to explain matters that are difficult to quantify but nevertheless contribute to the state of sustainability. For example, (Beng et al., 2017) has explored earlier cases in which users of AI Art systems are not aware of the complex structures regarding the environmental exploitation that lie behind the technologies (such as mineral mining and capitalism-related exploitation of labourers
Figure 1. Case 1 and Case 2: Two different types of sustainability reflection systems, prioritizing focus vs. integration
in low-wage countries) and suggested exploring how these distanced aspects could be brought closer to the users of the technologies. We anticipate that the core challenge of Explainable Sustainability for AI arts will lie in how these complex causalities, impacts, and processes can be brought to the users of the reductionist systems in a manner that facilitates knowledge-building, behavior change, and a shift in attitudes and values towards sustainability.
|
2306.17523 | Nonlinear Yang-Mills black holes | This paper is devoted to investigating the nonlinear non-abelian Yang-Mills
black holes. We consider three Born-Infeld, exponential, and logarithmic
nonlinear Yang-Mills theories with $SO(n-1)$ and $SO(n-2,1)$ semi-simple
groups, which n is the dimension of spacetime, and obtain a new class of
nonlinear Yang-Mills (NYM) black hole solutions. Depending on the values of
dimension $n$, Yang-Mills charge $e$ and the mass $m$ and nonlinear parameters
$\beta$, our solutions can lead to a naked singularity, a black hole with two
horizons, an extreme or a Schwarzschild-type black hole. We also investigate
the thermodynamic behaviors of the NYM black holes. For small charge values,
the NYM solutions may be thermally stable in the canonical ensemble, if we
consider an AdS spacetime with spherical $k=+1$ and hyperbolic $k=-1$
coordinates or a flat one with $k=+1$. However, there are no stable regions in
the grand canonical ensemble in higher dimensions. For the NYM black hole, we
observe a reentrant phase transition between large and small black holes in the
BI-branch with small $\beta$, which cannot be visible for the nonlinear
Reissner-Nordstrom AdS black hole in the higher dimension. For the limit
$\beta\rightarrow\infty$, the critical ratio $\frac{P_{c} v_{c}}{T_{c}}$ tends
to the constant value $3/8$ for each dimension $n$, while it depends on the
dimension for the case of nonlinear electrodynamics black holes. | Fatemeh Masoumi Jahromi, Behrouz Mirza, Fatemeh Naeimipour, Soudabe Nasirimoghadam | 2023-06-30T10:21:01Z | http://arxiv.org/abs/2306.17523v1 | # Nonlinear Yang-Mills black holes
###### Abstract
This paper is devoted to investigating nonlinear non-abelian Yang-Mills black holes. We consider three nonlinear Yang-Mills theories, of Born-Infeld, exponential, and logarithmic type, with \(SO(n-1)\) and \(SO(n-2,1)\) semi-simple groups, where \(n\) is the dimension of spacetime, and obtain a new class of nonlinear Yang-Mills (NYM) black hole solutions. Depending on the values of the dimension \(n\), the Yang-Mills charge \(e\), the mass \(m\), and the nonlinear parameter \(\beta\), our solutions can lead to a naked singularity, a black hole with two horizons, an extremal black hole, or a Schwarzschild-type black hole. We also investigate the thermodynamic behaviors of the NYM black holes. For small charge values, the NYM solutions may be thermally stable in the canonical ensemble, if we consider an AdS spacetime with spherical \(k=+1\) and hyperbolic \(k=-1\) coordinates or a flat one with \(k=+1\). However, there are no stable regions in the grand canonical ensemble in higher dimensions. For the NYM black hole, we observe a reentrant phase transition between large and small black holes in the BI branch with small \(\beta\), which is not visible for the nonlinear Reissner-Nordstrom AdS black hole in higher dimensions. In the limit \(\beta\rightarrow\infty\), the critical ratio \(\frac{P_{c}v_{c}}{T_{c}}\) tends to the constant value \(3/8\) for each dimension \(n\), while it depends on the dimension for the case of nonlinear electrodynamics black holes.
## I Introduction
The idea of nonlinear electrodynamics has been known as a powerful tool to modify the classical Maxwell theory. The first nonlinear model was introduced by Born and Infeld to remove the self-energy singularity of point-like particles [1] by imposing an upper bound on the electric field. They proposed a Lagrangian with a nonlinear parameter \(\beta\) that has a dimension of mass and measures the strength of the nonlinearity. As the Born-Infeld (BI) action describes the low energy dynamics of open superstrings and D-branes [2; 3; 4; 5; 6], nonlinear electrodynamics can be significant in the framework of string theory. Heisenberg and Euler concluded that vacuum polarization effects are obtained only when the Maxwell equations are substituted by more fundamental \(U(1)\) gauge theories of nonlinear electrodynamics [7]. Nonlinear electrodynamics reduces to the linear Maxwell theory in the weak field limit. For \(\beta=0\), the theory is in its strongest nonlinear regime, while for large \(\beta\), it reduces to the linear Maxwell theory plus some correction terms proportional to \(1/\beta^{2}\). The Born-Infeld Lagrangian could also solve the problems caused by infinite self-energy in the formulation of a quantum theory of electrodynamics [8]. This theory also has well-behaved shock wave characteristics [9] and does not predict vacuum birefringence. In recent times, the nonlinear electrodynamics theory has attracted great attention. For instance, the BI action as a specific model naturally appears in D-branes and open superstrings [6]. This theory also plays an effective role in constructing a regular black hole solution [10] and in avoiding the singularity problem in the early universe [11]. It can also influence critical quantities such as the phase transition point and the gap frequency [12]. In addition to the BI theory, some other types of nonlinear electrodynamics, such as the exponential and logarithmic forms [13; 14], can remove or reduce the singularity of the electric point charge field as well. Within the framework of the exponential \(U(1)\) gauge theory [15], the results indicate a finite value for the total electrostatic field energy, while the electric field at the location of the elementary point charge is not finite. The authors have proved that the BI, logarithmic, and exponential \(U(1)\) gauge theories can lead to a finite self-energy in arbitrary dimensions [15]. In Refs. [16; 17], the general properties of nonlinear electrodynamics are reviewed.
The linear abelian Maxwell theory is a subset of a larger class of non-abelian theories such as the Yang-Mills theory. The Maxwell equations can be regarded as the Yang-Mills ones with the gauge group \(U(1)\). There are many motivations to consider the non-abelian Yang-Mills theory. Spin currents of ferromagnets correspond to the SU(2) gauge fields in the dual gravitational theory, which opens new perspectives for condensed matter systems dual to non-abelian Yang-Mills gauge theories [18]. The Yang-Mills equations may also appear in the low energy limit of some string models. Furthermore, the Yang-Mills theory may also be important for characterizing quark confinement through magnetic monopoles and their condensation in a dual superconductor picture. Due to the partial gauge fixing, the abelian projection method explicitly breaks both the local and global gauge symmetry
[19; 20]. However, using a new gauge-invariant procedure in SU(N) Yang-Mills theory, the gauge-invariant non-abelian magnetic monopole has been successfully introduced [21]. It has also been shown that these non-abelian magnetic monopoles contribute significantly to the confinement of the fundamental quarks in the SU(3) Yang-Mills theory. Moreover, non-abelian excitations like Majorana fermions can also be used in topologically protected quantum computations [22; 23; 24]. Therefore, in order to obtain a broader class of black hole solutions and study the effects of the non-abelian gauge fields on gravity, it is worthwhile to consider the Einstein-Yang-Mills (EYM) theory. Bartnik and McKinnon were the first to find a static and spherically symmetric solitonic solution for the \(SU(2)\) gauge group in EYM theory [25]. Some black hole solutions have also been found in this theory. The black hole solution in EYM theory with SO(3) gauge group has been studied in [26], and the colored black holes with SU(2) gauge group have been studied in [27; 28].
Naturally, considering a non-abelian theory as a matter field may lead to a set of complicated field equations. In this regard, most of the black hole solutions in EYM theory are numerical [29; 30; 31]. However, some authors have considered particular ansatzes in order to find analytical solutions [32; 33; 34]. Yasskin was the first to obtain black hole solutions of the EYM equations using the Wu-Yang ansatz [26]. The Wu-Yang ansatz [35] has been used in many papers to obtain black hole solutions [36; 37; 38; 39; 40; 41; 42]. Topological black holes in Gauss-Bonnet-Yang-Mills gravity have been studied in Ref. [43]. Thermodynamic behaviors of the EYM black hole in the presence of massive gravity have also been probed [44]. In Ref. [45], the static spherically symmetric Einstein-Maxwell-Yang-Mills-dilaton black hole and the related thermodynamics have been investigated. Recently, we have obtained Yang-Mills black hole solutions in the modified quasitopological gravity [46]. A set of numerical non-abelian Einstein-Born-Infeld solutions has also been studied in Ref. [47]. Analytic solutions for the non-abelian BI theory in the presence of the Einstein and Gauss-Bonnet gravities have already been obtained [48]. Some studies of different non-abelian Yang-Mills black holes were carried out in Refs. [49; 50; 51; 52; 53] as well. Now, we aim to use the Wu-Yang ansatz to obtain a vast class of \(n\)-dimensional Yang-Mills black hole solutions in the presence of the three Born-Infeld, exponential, and logarithmic nonlinear forms. We also investigate the physical and thermodynamic properties, such as thermal stability and critical behavior, as well as the dynamical stability of the obtained solutions.
This paper is organized as follows: In Sec. II, we define the main structure of the Einstein gravity coupled to the nonlinear non-abelian Yang-Mills gauge fields and then obtain the related black hole solutions. We also investigate the physical structures of the solutions in Sec. III. Thermodynamic properties and thermal stability of the obtained black hole solutions are probed in Secs. IV and V, respectively. In Sec. VI, we study the critical behavior of the related black holes in the extended phase space. We also study the dynamical stability of the NYM black holes in Sec. VI E. Finally, we have a conclusion of the whole paper in Sec. VII.
## II Main structure of the NYM black hole solutions
The theory of the \(n\)-dimensional (\(n\geq 4\)) nonlinear-Yang-Mills (NYM) black hole solutions originate from the action
\[I=\frac{1}{16\pi}\int_{\mathcal{M}}d^{n}x\sqrt{-g}(R-2\Lambda+L(F)), \tag{1}\]
where \(R\) and \(\Lambda\) are respectively the Ricci scalar and the cosmological constant. We classify the nonlinear Lagrangian \(L(F)\) for three Born-Infeld, exponential and logarithmic cases [54]
\[L(F)=\left\{\begin{array}{ll}4\beta^{2}\bigg{[}1-\sqrt{1+\frac{F^{2}}{2\beta ^{2}}}\bigg{]},&BI\\ \\ 4\beta^{2}\bigg{[}e^{-\frac{F^{2}}{4\beta^{2}}}-1\bigg{]},&EN\\ \\ -8\beta^{2}{\rm ln}\bigg{[}1+\frac{F^{2}}{8\beta^{2}}\bigg{]},&LN\end{array}\right. \tag{2}\]
where we have abbreviated the exponential and logarithmic nonlinear cases to EN and LN, respectively. If we consider a gauge group with \(N\) parameters, then
\[F^{2}=\gamma_{ab}F^{(a)}_{\mu\nu}F^{(b)\mu\nu},\ \ \gamma_{ab}\equiv-\frac{ \Gamma_{ab}}{|{\rm det}\Gamma_{ab}|^{1/N}}, \tag{3}\]
where \(\Gamma_{ab}=C^{c}_{ad}C^{d}_{bc}\) is the metric tensor of the gauge group and \({\rm det}\Gamma_{ab}\) is the related determinant. The indices \(a,b,c\) take values from 1 to \(N\) and \(C^{a}_{bc}\)'s are the structure constants of the gauge group theory. The gauge field tensor is
defined as
\[F^{(a)}_{\mu\nu}=\partial_{\mu}A^{(a)}_{\nu}-\partial_{\nu}A^{(a)}_{\mu}+\frac{1} {e}C^{a}_{bc}A^{(b)}_{\mu}A^{(c)}_{\nu}, \tag{4}\]
where \(e\) is the coupling constant of the non-abelian theory and \(A^{(a)}_{\mu}\)'s represent the gauge potentials. For simplicity, we use the redefinition \(L(F)=\zeta_{1}\beta^{2}{\cal L}(Y)\), where
\[{\cal L}(Y)=\left\{\begin{array}{ll}1-\sqrt{1+Y},&BI\\ \\ e^{-Y}-1,&EN\\ \\ \ln(1+Y),&LN\end{array}\right. \tag{5}\]
and \(\zeta_{1}=+4,+4,-8\) and \(Y=\frac{F^{2}}{2\beta^{2}},\frac{F^{2}}{4\beta^{2}},\frac{F^{2}}{8\beta^{2}}\) are described for BI, EN and LN Yang-Mills theories, respectively. If we vary the action (1) with respect to the metric \(g_{\mu\nu}\) and the gauge potential \(A^{(a)}_{\mu}\), then the gravitational and gauge field equations are obtained as follows
\[R_{\mu\nu}=\frac{2\Lambda}{n-2}g_{\mu\nu}+\zeta_{2}\gamma_{ab}\partial_{Y}{ \cal L}(Y)F^{(a)\lambda}_{\mu}F^{(b)}_{\nu\lambda}+\frac{\zeta_{3}\beta^{2}}{ n-2}[2Y\partial_{Y}{\cal L}(Y)-{\cal L}(Y)]g_{\mu\nu}, \tag{6}\]
\[\nabla_{\nu}(\partial_{Y}{\cal L}(Y)F^{(a)\mu\nu})=\frac{1}{e}\partial_{Y}{ \cal L}(Y)C^{a}_{bc}A^{(b)}_{\nu}F^{(c)\nu\mu}, \tag{7}\]
where \(\zeta_{2}=-4,-2,+2\) and \(\zeta_{3}=+4,+4,-8\) have been specified for \(BI\), \(EN\) and \(LN\) theories, respectively. To obtain a set of static and spherically symmetric non-abelian black hole solutions with spherical and hyperbolic horizons, we consider the following metric
\[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega_{k}^{2}, \tag{8}\]
where \(d\Omega_{k}^{2}\) denotes the line element of an \((n-2)\)-dimensional hypersurface \(\Sigma\) with the constant curvature \((n-2)(n-3)k\). It is determined as
\[d\Omega_{k}^{2}=d\theta^{2}+k^{-1}{\rm sin}^{2}(\sqrt{k}\theta)\biggl{(}d \phi_{1}^{2}+\sum_{i=2}^{n-3}\Pi_{j=1}^{i-1}{\rm sin}^{2}\phi_{j}d\phi_{i}^{2} \biggr{)}, \tag{9}\]
where \(k=1,-1\) are devoted to the spherical and hyperbolic geometries, respectively, and \(\theta\in[0,\frac{\pi}{2}]\). If we introduce the coordinates \(x_{i}\)'s
\[x_{1}=\frac{r}{\sqrt{k}}\sin(\sqrt{k}\,\theta)\,\Pi_{j=1}^{n-3} \sin\phi_{j},\] \[x_{i}=\frac{r}{\sqrt{k}}\sin(\sqrt{k}\,\theta)\cos\phi_{n-i-1}\, \Pi_{j=1}^{n-i-2}\sin(\phi_{j})\ \,\ i=2,...,n-2\] \[x_{n-1}=r\cos(\sqrt{k}\,\theta), \tag{10}\]
and employ the Wu-Yang ansatz, then the gauge potentials are obtained from
\[A^{(a)} = \frac{e}{r^{2}}(x_{a}dx_{n-1}-x_{n-1}dx_{a})\,\,\,{\rm for}\,\,\, a=1,...,n-2\] \[A^{(b)} = \frac{e}{r^{2}}(x_{i}dx_{j}-x_{j}dx_{i})\,\,\,{\rm for}\,\,\,i=1,...,n-3\,,\,j=2,...,n-2,\,{\rm and}\,i<j \tag{11}\]
where \(b\) goes from \((n-1)\) to \((n-1)(n-2)/2\). The Lie algebra of the gauge potentials with \(k=+1\) and \(-1\) in Eq. (11) is isomorphic to \(SO(n-1)\) and \(SO(n-2,1)\) gauge groups, respectively. It should be noted that \(n\) is equal to the spacetime dimension. We redefine \(\gamma_{ab}\) in Eq. (3) as
\[\gamma_{ab}=\epsilon_{a}\delta_{ab},\quad{\rm no\,\,sum\,\,on\,\,a}, \tag{12}\]
where for \(SO(n-1)\) gauge group,
\[\epsilon_{a}=1\quad\mathrm{for}\ a=1,\,\ldots,\frac{(n-1)(n-2)}{2}, \tag{13}\]
and for \(SO(n-2,1)\) gauge group
\[\epsilon_{a}=\left\{\begin{array}{ll}-1&\qquad 1\leq a\leq n-2\\ \\ 1&\qquad n-1\leq a\leq\frac{(n-1)(n-2)}{2}.\end{array}\right. \tag{14}\]
For a better understanding, we have written the gauge potentials of the groups \(SO(3)\), \(SO(2,1)\), \(SO(4)\) and \(SO(3,1)\) in appendix (VIII.1). The gauge potentials (11) can satisfy the gauge field equation (7). So, if we substitute these gauge potentials (11) and the metric (8) in Eq. (6), we can obtain the NYM black hole solutions
\[f(r) = k-\frac{m}{r^{n-3}}-\frac{2\Lambda r^{2}}{(n-1)(n-2)} \tag{15}\] \[+ \left\{\begin{array}{ll}\frac{4\beta^{2}r^{2}}{(n-1)(n-2)} \bigg{[}1-\frac{n-1}{r^{n-1}}\int r^{n-2}\sqrt{1+\frac{\eta}{2}}dr\bigg{]},& BI\\ \\ -\frac{4\beta^{2}r^{2}}{(n-1)(n-2)}\bigg{[}1-\frac{n-1}{r^{n-1}}\int r^{n-2} \mathrm{exp}\big{(}-\frac{\eta}{4}\big{)}dr\bigg{]},&EN\\ \\ -\frac{8\beta^{2}}{(n-2)r^{n-3}}\int r^{n-2}\mathrm{ln}[1+\frac{\eta}{8}]dr,& LN\end{array}\right.\]
where \(\eta=\frac{(n-2)(n-3)e^{2}}{\beta^{2}r^{4}}\). The parameter \(m\) is an integration constant related to the mass of the NYM black hole. It should be noted that for \(n=4z+1\), where \(z\in N\), the metric function in Eq. (15) can be written in terms of elementary functions. We have shown the function \(f(r)\) for \(n=5\) and \(n=9\) dimensions in appendix (VIII.2). For \(n\neq 4z+1\), \(f(r)\) may be written as
\[f(r) = k-\frac{m}{r^{n-3}}-\frac{2\Lambda r^{2}}{(n-1)(n-2)} \tag{16}\] \[+ \left\{\begin{array}{ll}\frac{4\beta^{2}r^{2}}{(n-1)(n-2)}\big{[}1-{}_{2}F_{1}\big{(}\big{[}\tfrac{-1}{2},\tfrac{1-n}{4}\big{]}\,,\big{[}\tfrac{5-n}{4}\big{]}\,,-\tfrac{\eta}{2}\big{)}\big{]},&BI\\ \\ -\frac{4\beta^{2}r^{2}}{(n-1)(n-2)}\big{[}1-{}_{2}F_{1}\big{(}\big{[}\tfrac{1-n}{4}\big{]}\,,\big{[}\tfrac{5-n}{4}\big{]}\,,-\tfrac{\eta}{4}\big{)}\big{]},&EN\\ \\ -\frac{8\beta^{2}r^{2}}{(n-1)(n-2)}\mathrm{ln}\big{[}1+\frac{\eta}{8}\big{]}-\frac{4(n-3)e^{2}}{(n-1)(n-5)r^{2}}\,{}_{2}F_{1}\big{(}\big{[}1,\tfrac{5-n}{4}\big{]}\,,\big{[}\tfrac{9-n}{4}\big{]}\,,-\tfrac{\eta}{8}\big{)},&LN\end{array}\right.\]
Eq. (16) shows that there is an equivalence between the four-dimensional NYM black hole solutions with \(SO(3)\) and \(SO(2,1)\) gauge groups and a set of topological black hole solutions with \(k=1\) and \(k=-1\) in nonlinear electrodynamics theory [54; 55]. Therefore, we can deduce that there is a transformation between the non-abelian gauge fields and a set of abelian ones in \(n=4\) which satisfies the Yasskin theorem [26]. However, for \(n>4\), we obtain a new class of solutions for the NYM black hole which is different from the nonlinear electrodynamics one [54; 55]. So, the NYM solutions with \(n>4\) do not respect the Yasskin theorem.
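As a quick numerical cross-check of Eq. (16), the following minimal Python sketch (our own illustration, not part of the original analysis; the parameter values \(n=6\), \(k=1\), \(e=\beta=l=1\), \(m=2\) are arbitrary) evaluates the BI metric function with SciPy's Gauss hypergeometric routine and brackets the horizon:

```python
# Minimal, illustrative sketch: evaluate the BI branch of Eq. (16) for n = 6
# and locate the event horizon by bracketing f(r_+) = 0.
from scipy.special import hyp2f1
from scipy.optimize import brentq

n, k, e, beta, m, l = 6, 1, 1.0, 1.0, 2.0, 1.0   # illustrative values
Lam = -(n - 1) * (n - 2) / (2 * l**2)            # AdS cosmological constant

def f_BI(r):
    eta = (n - 2) * (n - 3) * e**2 / (beta**2 * r**4)
    hyp = hyp2f1(-0.5, (1 - n) / 4, (5 - n) / 4, -eta / 2)
    return (k - m / r**(n - 3)
            - 2 * Lam * r**2 / ((n - 1) * (n - 2))
            + 4 * beta**2 * r**2 / ((n - 1) * (n - 2)) * (1 - hyp))

# For n = 6 and m > 0 the solution is Schw-type (see Sec. III), so f changes
# sign exactly once between a small and a large radius:
r_h = brentq(f_BI, 0.1, 5.0)
print(f"horizon radius r_+ = {r_h:.4f}")
```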
For large \(\beta\), we assume that all three types of the metric functions in Eq. (15) reduce to the Einstein-Yang-Mills black hole solution as follows
\[f(r) = k-\frac{m}{r^{n-3}}-\frac{2\Lambda r^{2}}{(n-1)(n-2)}+\left\{ \begin{array}{ll}-\frac{(n-3)e^{2}}{(n-5)r^{2}}+\mathcal{O}\big{(}\frac{1}{ \beta^{2}}\big{)},&n\neq 5\\ \\ -\frac{2e^{2}\mathrm{ln}(r/r_{0})}{r^{2}}+\mathcal{O}\big{(}\frac{1}{\beta^{2} }\big{)},&n=5\end{array}\right. \tag{17}\]
We choose \(r_{0}=1\) for simplicity.
## III Physical Behaviors of the NYM black hole solutions
In this section, we aim to study the physical structures of the NYM black hole solutions. If we calculate the Kretschmann scalar, \(R_{abcd}R^{abcd}\), it goes to infinity as \(r\to 0\). Therefore, we can deduce an essential singularity
located at \(r=0\) for the NYM black holes.
In the previous section, we concluded that the NYM and nonlinear electrodynamics black hole solutions are the same in \(n=4\). In Refs. [56; 57; 58; 59], the authors have discussed the horizon structure of the nonlinear electrodynamics black holes in \(n=4\) and higher dimensions. To find the possible horizons, we investigate the behavior of the metric function \(f(r)\) near \(r=0\). We have shown the horizon structure for the Born-Infeld Yang-Mills (BIYM) black hole in \(n=5,6\) and for \(k=1\) in Fig. (1). Considering \(k=1\) and solving Eq. (15) for \(n=5\), the metric function is
\[f(r)=1-\frac{m}{r^{2}}-\frac{\Lambda r^{2}}{6}+\frac{\beta^{2}r^{2}}{3}\bigg{[}1-\sqrt{1+\frac{\eta}{2}}\,\bigg{]}-\frac{e^{2}}{r^{2}}\ln\bigg{[}r^{2}\bigg{(}1+\sqrt{1+\frac{\eta}{2}}\bigg{)}\bigg{]}-\frac{4\,\beta^{2}\,C_{5}}{3\,r^{2}}, \tag{18}\]
where \(C_{5}\) is the integration constant for \(n=5\), which is related to the integral in Eq. (15). The integral in Eq. (15) is indefinite, and so we have considered the integration constant. We assume that the expansion of \(f(r)\) at large values of \(\beta\) (\(\beta\to\infty\)) reduces to the Yang-Mills (YM) solution in Eq. (17) and so we find
\[C_{5}=-\frac{3\,e^{2}}{8\,\beta^{2}}\big{(}1+2\ln(2)\big{)}. \tag{19}\]
Now, the expansion of \(f(r)\) close to \(r=0\) in Eq. (18) takes the following form
\[f(r)\ =\ 1-\frac{m-A_{5}}{r^{2}}-\frac{2\sqrt{3}}{3}\beta e+{\cal O}(r), \ \ A_{5}=\frac{1}{2}e^{2}\big{(}1+\ln\big{(}\frac{4\beta^{2}}{3e^{2}}\big{)} \big{)}\ \ {\rm for}\ \,n=5, \tag{20}\]
where \(A_{5}\) is the 'marginal' mass for \(n=5\), which depends on the values of the parameters \(\beta\) and \(e\). As we observe in Fig. (1(a)), independent of the parameters, \(f(r)\) goes to \(\infty\) as \(r\to\infty\). However, for \(r\to 0\), we have the following cases:
For the marginal case which is characterized by \(m=A_{5}\), the function \(f(r)\) in Eq. (20) has a finite value at \(r=0\) which is
\[f(r)\ =\ 1-\frac{2\sqrt{3}}{3}\beta e\ \ {\rm for}\ \,n=5. \tag{21}\]
For \(m>A_{5}\), the function \(f(r)\) goes to \(-\infty\) as \(r\to 0\). Therefore, there is just one horizon and the AdS-BIYM black hole behavior is analogous to the Schwarzschild black hole (we abbreviate it to Schw-type).
For \(m<A_{5}\), the solution goes to \(\infty\) in the limit \(r\to 0\). Thus, the BIYM black hole behaves similarly to the 'Reissner-Nordstrom' black hole (we abbreviate it to RN-type). In this case, the black hole may have zero (naked singularity), one (extremal black hole) or two horizons. Horizons (\(r_{+}\)) are the roots of the equation \(f(r_{+})=0\). If the finite value of the metric function in Eq. (21) is positive (i.e., for \(\beta e<\frac{\sqrt{3}}{2}\)), then the solution with \(m<A_{5}\) leads to a naked singularity and so the only solution is the Schw-type with \(m>A_{5}\). However, when the finite value in Eq. (21) is negative (\(\beta e>\frac{\sqrt{3}}{2}\)), the solution with \(m<A_{5}\) can describe a black hole with horizons for \(m_{ex}<m<A_{5}\). The parameter \(m_{ex}\) is the mass of the extremal black hole, which is determined from the conditions \(f(r=r_{ex})=0\) and \(f^{{}^{\prime}}(r=r_{ex})=0\). For \(n=5\), it is given by
\[\left(\frac{4}{l^{2}}+\frac{4\beta^{2}}{3}\right)r_{ex}^{3}+\left(2-\frac{4\beta}{3}\sqrt{\beta^{2}r_{ex}^{4}+3e^{2}}\right)r_{ex}=0,\ \ \ \ {\rm for}\ \,n=5 \tag{22}\]
We have probed the horizon structure of the BIYM black hole for \(n=6\) in Fig. (1(b)). For \(n=6\), the expansion of the metric function in Eq. (15) around \(r=0\) becomes
\[f(r)\ =\ 1-\frac{m-A_{6}}{r^{3}}-\frac{\sqrt{6}}{3}\beta e+{\cal O}(r), \ \ A_{6}=-\frac{12}{5}\sqrt[4]{\frac{6}{\pi^{2}\beta^{2}}}e^{5/2}\,\Gamma\big{(} \frac{3}{4}\big{)}^{2},\ \ {\rm for}\ \,n=6 \tag{23}\]
This shows that the marginal case in \(n=6\) happens only for \(m<0\). Therefore, the six-dimensional BIYM black hole has only one horizon when \(m>0\), which is the Schw-type. This is a general feature of the nonlinear Yang-Mills black holes in some higher dimensions: the marginal mass is negative, so there can be only one horizon and the Schw-type is the only solution.
In Figs. (2(a)) and (2(b)), we have investigated the horizon structures of the exponential and logarithmic nonlinear Yang-Mills black holes (we abbreviate them to ENYM and LNYM, respectively) in five dimensions. We observe the same behavior as the BIYM case when \(r\to 0\). Expanding the metric function, one can examine the horizon structure near the origin for ENYM and LNYM cases in the same way as the BIYM case. The expansions of the function \(f(r)\) in Eq. (15) around \(r=0\) for the ENYM and LNYM cases in \(n=5\) and for \(k=1\) are given by
\[f(r)\ =\ 1-\frac{m-A_{5}}{r^{2}}+{\cal O}(r),\ \ \ A_{5}=\frac{1}{2}e^{2}\big{(}1- \gamma+\ln\big{(}\frac{2\beta^{2}}{3e^{2}}\big{)}\big{)}\ \ {\rm for}\ \,n=5, \tag{24}\]
and
\[f(r)\ =\ 1-\frac{m-A_{5}}{r^{2}}+\mathcal{O}(r),\ \ \ A_{5}=\frac{1}{2}e^{2}\big{(}1+ \ln\!\big{(}\frac{4\beta^{2}}{3e^{2}}\big{)}\big{)}\ \ \text{for}\ \ n=5, \tag{25}\]
respectively, where \(\gamma\) is the Euler-Mascheroni constant. As we observe in Figs. (2(a)) and (2(b)), when \(m=A_{5}\) we have \(f(r)=1\) near the origin. For \(m>A_{5}\) the behavior of the metric function is Schw-type. For \(m<A_{5}\), depending on the parameters \(\beta\) and \(e\), we may have zero (naked singularity), one (extremal black hole) or two horizons. In fact, the marginal mass, which depends on both \(e\) and \(\beta\), is a boundary (or a margin) between two qualitatively different kinds of solutions shown in Fig. (1(a)) and Fig. (2). If the constant of integration \(m\) is larger than the marginal mass, then the exact solution in Eq. (15) has only one horizon, similar to the Schwarzschild black hole behavior, despite the fact that the black hole is charged. If \(m\) is smaller than the marginal mass, then the exact solution (15) has two horizons, which is the same as the Reissner-Nordstrom behavior. The marginal mass forms the boundary between these two cases for \(n=5\) and some higher dimensions.
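This classification is easy to verify numerically. A minimal Python sketch (our own illustration; the values \(k=1\), \(l=1\), \(e=1\), \(\beta=2\), so that \(\beta e>\sqrt{3}/2\), are arbitrary) counts the sign changes of the \(n=5\) BI metric function, Eq. (18) with \(C_{5}\) from Eq. (19), for sample masses on either side of the marginal mass \(A_{5}\):

```python
# Minimal, illustrative sketch: horizon counting for the 5D BI case, Eq. (18)
# with C_5 from Eq. (19).  Parameters k = 1, l = 1, e = 1, beta = 2 are
# illustrative; with beta*e > sqrt(3)/2 the marginal mass A_5 of Eq. (20)
# separates Schw-type from RN-type behavior.
import numpy as np

e, beta, l = 1.0, 2.0, 1.0

def f5(r, m):
    W = np.sqrt(1.0 + 3.0 * e**2 / (beta**2 * r**4))     # sqrt(1 + eta/2)
    C5 = -3.0 * e**2 / (8.0 * beta**2) * (1.0 + 2.0 * np.log(2.0))
    return (1.0 - m / r**2 + r**2 / l**2
            + beta**2 * r**2 / 3.0 * (1.0 - W)
            - e**2 / r**2 * np.log(r**2 * (1.0 + W))
            - 4.0 * beta**2 * C5 / (3.0 * r**2))

A5 = 0.5 * e**2 * (1.0 + np.log(4.0 * beta**2 / (3.0 * e**2)))
r = np.linspace(0.05, 3.0, 20000)
for m in (A5 + 0.5, A5 - 0.09):        # Schw-type vs RN-type sample masses
    sign_changes = np.count_nonzero(np.diff(np.sign(f5(r, m))))
    print(f"m = {m:.3f}: {sign_changes} horizon(s) on the grid")
```

For these sample values the first mass yields a single Schw-type horizon and the second yields two horizons, consistent with the discussion above.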
We can also investigate the horizon structure of the solutions in higher dimensions. In general, the expansion of the metric function \(f(r)\) around \(r=0\) in Eq. (15) for the BIYM case is given by
\[f(r)\ =\ k-\frac{m}{r^{n-3}}+\frac{4\beta^{2}\,C_{n}}{(n-2)\,r^{n-3}}-\frac{2 \sqrt{2(n-2)(n-3)}}{(n-2)(n-3)}\beta e+\mathcal{O}(r), \tag{26}\]
and for the ENYM and LNYM cases are
\[f(r)\ =\ k-\frac{m}{r^{n-3}}+\frac{4\beta^{2}\,C_{n}}{(n-2)\,r^{n-3}}+ \mathcal{O}(r), \tag{27}\]
and
\[f(r)\ =\ k-\frac{m}{r^{n-3}}-\frac{8\beta^{2}\,C_{n}}{(n-2)\,r^{n-3}}+ \mathcal{O}(r), \tag{28}\]
respectively, where \(C_{n}\) is the integration constant for dimension \(n\). One may obtain a value for \(C_{n}\) by assuming that for large values of \(\beta\), \(f(r)\) tends to the Yang-Mills solution.
## IV Thermodynamic quantities of the NYM black hole solutions
According to the gauge/gravity duality, one can provide a relation between the strongly coupled gauge theories and the related weakly coupled string theories. From the holography viewpoint, the bulk string theory may inform
Figure 1: Horizon structure for AdS-BJYM black hole solutions in 5 and 6 dimensions. In 5 dimensions, we have the marginal case with red dash-dot line (for \(m=A_{5}\)), the Schw-type case with blue solid lines (for \(m>A_{5}\)) and the RN-type case with black dash lines (for \(m<A_{5}\)). The black dash RN-type lines are defined for the naked singularity, extremal black hole and a black hole with two horizons from top to down. In 6 dimensions, there is only the Schw-type black hole. We have set \(e=1\), \(k=1\) and \(l=1\).
the boundary gauge theory. Using the AdS/CFT correspondence [60; 61], which maps the conformal field theory to an asymptotically AdS spacetime of one higher dimension, the thermodynamic properties of a black hole may reveal the properties of the dual physical system. For instance, the horizon of a black hole in the asymptotically AdS spacetime can give information about the finite temperature of its dual field theory.
In this part, we would like to investigate the thermodynamic quantities of the NYM black hole and then check the first law of thermodynamics. The Hawking temperature of the NYM black hole may be obtained from
\[T_{+} = \frac{\kappa}{2\pi}=\frac{f^{{}^{\prime}}(r_{+})}{4\pi}=\frac{k( n-3)}{4\pi r_{+}}-\frac{\Lambda}{2\pi(n-2)}r_{+}\] \[+ \Bigg{\{}\begin{array}{ll}+\frac{\beta^{2}r_{+}}{\pi(n-2)} \bigg{(}1-\sqrt{1+\frac{(n-2)(n-3)e^{2}}{2\beta^{2}r_{+}^{4}}}\bigg{)},&BI\\ \\ -\frac{\beta^{2}r_{+}}{\pi(n-2)}\bigg{[}1-\exp\bigg{(}-\frac{(n-2)(n-3)e^{2} }{4\beta^{2}r_{+}^{4}}\bigg{)}\bigg{]},&EN\\ \\ -\frac{2\beta^{2}r_{+}}{\pi(n-2)}{\rm ln}\bigg{(}1+\frac{(n-2)(n-3)e^{2}}{8 \beta^{2}r_{+}^{4}}\bigg{)},&LN\end{array} \tag{29}\]
where we have used \(f(r_{+})=0\). From the so-called area law (which states that the entropy is one-quarter of the event horizon area of the black hole [62]), the entropy can be calculated as below
\[S=\frac{A}{4}=\frac{r_{+}^{n-2}}{4}\omega_{n-2}, \tag{30}\]
where \(\omega_{n-2}\) represents the volume of an \((n-2)\)-dimensional unit sphere for \(k=1\) and of an \((n-2)\)-dimensional hypersurface with constant negative curvature for \(k=-1\), respectively. To obtain the mass, we use the subtraction method of Brown and York [63]. For this purpose, let us write the metric (8) in the following form
\[ds^{2}=\lambda_{ab}dx^{a}dx^{b}=-V(r)dt^{2}+\frac{dr^{2}}{V(r)}+r^{2}d\Omega_{ n-2}^{2}, \tag{31}\]
and choose a background metric with
\[V_{0}(r)=\left\{\begin{array}{ll}k-\frac{2\Lambda}{(n-1)(n-2)}r^{2},&n\neq 5 \\ \\ k-\frac{\Lambda}{6}r^{2}-\frac{2e^{2}\ln(r/r_{0})}{r^{2}}&n=5.\end{array}\right. \tag{32}\]
\(V_{0}(r)\) is an arbitrary function that determines the zero of the energy to avoid the infinities of the mass. We again choose \(r_{0}=1\). If we characterize \(\sigma_{ab}\) as the metric of the spacelike surface \(\Sigma\) in \(\partial\mathcal{M}\), and \(n^{a}\) and \(\xi^{b}\) as the unit normal and timelike Killing vectors of this boundary, respectively, the mass of this black hole is calculated by
\[M=\frac{1}{8\pi}\int_{\Sigma}d^{n-2}x\sqrt{\sigma}\{(K_{ab}-K\lambda_{ab})-(K_{ab}^{0}-K^{0}\lambda_{ab}^{0})\}n^{a}\xi^{b}, \tag{33}\]
Figure 2: Horizon structure for AdS-LNYM and AdS-ENYM black hole solutions in 5 dimensions, respectively. We have set \(e=1\), \(k=1\) and \(l=1\).
where \(\sigma\) is the determinant of the metric \(\sigma_{ab}\) and \(K^{0}_{ab}\) is the extrinsic curvature tensor of the background metric. Taking the limit \(r\rightarrow\infty\), the mass of the NYM black hole yields
\[M=\frac{(n-2)\omega_{n-2}}{16\pi}m, \tag{34}\]
where \(m\) is found from the equation \(f(r_{+})=0\) in Eq. (15).
The global Yang-Mills charge of the NYM black hole is obtained from the Gauss law
\[Q=\frac{1}{4\pi}\int d^{n-2}x\sqrt{Tr(F^{(a)}_{\mu\nu}F^{(a)\mu\nu})}=\frac{ \sqrt{(n-2)(n-3)}\,\omega_{n-2}}{4\pi}e. \tag{35}\]
We consider the mass \(M\) in Eq. (34) as a function of the entropy (30) and the charge (35), so the first law of thermodynamics is specified by
\[dM=TdS+\Phi dQ, \tag{36}\]
where \(T=\left(\frac{\partial M}{\partial S}\right)_{Q}\) and \(\Phi\) is the gauge potential, \(\Phi=\left(\frac{\partial M}{\partial Q}\right)_{S}\). Numerical calculations demonstrate that the evaluated \(T\) is in agreement with \(T_{+}\) in Eq. (29). So, the first law of the black hole thermodynamics is satisfied, if we obtain the gauge potential of the solutions with \(n\neq 4z+1\) as below
\[\Phi=\left(\frac{\partial M}{\partial Q}\right)_{S}=\left\{\begin{array}{ll}- \frac{2\pi Q(n-2)(n-3)r_{+}^{n-5}}{(n-5)}{}_{2}F_{1}\big{(}\big{[}\frac{5-n}{ 4}\big{]}\,,\big{[}\frac{9-n}{4}\big{]}\,,-8\xi_{+}\big{)},&BI\\ \\ -\frac{2\pi Q(n-2)(n-3)r_{+}^{n-5}}{(n-5)}{}_{2}F_{1}\big{(}\big{[}\frac{5-n}{ 4}\big{]}\,,\big{[}\frac{9-n}{4}\big{]}\,,-4\xi_{+}\big{)},&EN\\ \\ -\frac{2\pi Q(n-2)(n-3)r_{+}^{n-5}}{(n-1)(1+2\xi_{+})}-\frac{8\pi Q(n-2)(n-3)r_{ +}^{n-5}}{(n-1)(n-5)}{}_{2}F_{1}\big{(}\big{[}1,\frac{5-n}{4}\big{]}\,,\big{[} \frac{9-n}{4}\big{]}\,,-2\xi_{+}\big{)}+\\ \frac{16\pi Q\xi_{+}(n-2)(n-3)r_{+}^{n-5}}{(n-1)(n-9)}{}_{2}F_{1}\big{(}\big{[} 2,\frac{9-n}{4}\big{]}\,,\big{[}\frac{13-n}{4}\big{]}\,,-2\xi_{+}\big{)}.&LN \end{array}\right. \tag{37}\]
where \(\xi_{+}=\frac{(n-2)(n-3)\pi^{2}Q^{2}}{\beta^{2}r_{+}^{2}}\). The gauge potential of the solutions with \(n=4z+1\) can be derived exactly for each dimension, however it does not have a general form.
## V Thermal stability of the NYM black hole solutions
The thermal stability of a black hole is determined by analyzing the behavior of the entropy \(S\) or the energy \(M\) with respect to small variations of the thermodynamic coordinates around the equilibrium. In this section, we consider \(S\) and \(Q\) as a set of thermodynamic variables and probe the NYM black hole thermal stability in the canonical and grand canonical ensembles. In the canonical ensemble, the charge \(Q\) is fixed, and so the black hole is thermally stable if the heat capacity
\[C_{Q}=T_{+}\big{(}\frac{\partial S}{\partial T_{+}}\big{)}_{Q}=T_{+}\bigg{(} \frac{\partial^{2}M}{\partial S^{2}}\bigg{)}_{Q}^{-1}, \tag{38}\]
is positive. It should be noted that in order to have physical solutions, the temperature should also be positive. So, a physically stable NYM black hole in the canonical ensemble is obtained if the conditions \(T_{+}>0\) and \(C_{Q}>0\) are satisfied. In the grand canonical ensemble, both parameters \(S\) and \(Q\) are variables, and so a positive value of the Hessian matrix determinant may lead to stable solutions. The Hessian matrix is defined as
\[H=\left[\begin{array}{cc}\bigg{(}\frac{\partial^{2}M}{\partial S^{2}}\bigg{)} _{Q}&\Big{(}\frac{\partial^{2}M}{\partial S\partial Q}\bigg{)}\\ \bigg{(}\frac{\partial^{2}M}{\partial Q\partial S}\bigg{)}&\Big{(}\frac{ \partial^{2}M}{\partial Q^{2}}\bigg{)}_{S}\end{array}\right], \tag{39}\]
where we abbreviate the related determinant to \(det(H)\). In this ensemble, the two conditions \(\big{(}\frac{\partial^{2}M}{\partial S^{2}}\big{)}_{Q}>0\) and \(\big{(}\frac{\partial^{2}M}{\partial Q^{2}}\big{)}_{S}>0\) should be satisfied. If all three quantities, the temperature \(T_{+}\), the heat capacity \(C_{Q}\) and \(det(H)\), are positive, then these two conditions follow automatically from Eqs. (38) and (39).
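The canonical-ensemble scan is straightforward to reproduce numerically. A minimal Python sketch (our own illustration; BI branch, \(n=5\), \(k=1\), AdS with \(l=1\), and an arbitrary small charge \(e=0.1\)) computes \(T_{+}\) from Eq. (29) and the sign of \(C_{Q}=T_{+}\,(dS/dr_{+})/(dT_{+}/dr_{+})\) on a grid:

```python
# Minimal, illustrative sketch of the canonical-ensemble stability scan behind
# Figs. 3-6: T_+ from Eq. (29) (BI branch) and C_Q from Eq. (38), using
# S ~ r_+^{n-2} from Eq. (30).  Only signs matter, so constant prefactors of
# the entropy are dropped.
import numpy as np

n, k, e, beta, l = 5, 1, 0.1, 1.0, 1.0     # illustrative small-charge values
Lam = -(n - 1) * (n - 2) / (2 * l**2)

def T_plus(r):
    eta2 = (n - 2) * (n - 3) * e**2 / (2 * beta**2 * r**4)
    return (k * (n - 3) / (4 * np.pi * r)
            - Lam * r / (2 * np.pi * (n - 2))
            + beta**2 * r / (np.pi * (n - 2)) * (1 - np.sqrt(1 + eta2)))

r = np.linspace(0.05, 2.0, 4000)
T = T_plus(r)
dT = np.gradient(T, r)
dS = (n - 2) * r**(n - 3)                  # dS/dr_+ up to a positive constant
CQ = T * dS / dT                           # heat capacity, Eq. (38), up to sign
stable = (T > 0) & (CQ > 0)
print(f"physically stable (T_+>0, C_Q>0) on this grid for r_+ >= {r[stable].min():.3f}")
```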
To find a physical stable region for the NYM black hole in both canonical and grand canonical ensembles, we have
plotted the temperature \(T_{+}\), heat capacity \(C_{Q}\) and Hessian-matrix determinant \(det(H)\) versus \(r_{+}\) in Figs. (3-6). As the third term in Eq. (29) is negative for all three BIYM, ENYM, and LNYM cases, a positive value for the temperature depends on the values of the first two terms. To reduce the effect of the third term in Eq. (29), we choose a small value for the parameter \(Q\). We can also find from Eq. (29) that the temperature is positive just for the AdS (\(\Lambda<0\)) solutions with \(k=\pm 1\), and also for dS (\(\Lambda>0\)) and flat (\(\Lambda=0\)) solutions with \(k=1\). We have probed the solutions with these features in Figs. (3-6). We do not investigate the thermal stability of the dS solutions, since our results have shown that it is not possible to obtain a positive region for both quantities \(T_{+}\) and \(C_{Q}\).
For brevity, we have studied just the stability of the AdS-BIYM solutions with \(k=1\) in Fig. (3), the AdS-ENYM solutions with \(k=-1\) in Fig. (4), and the flat LNYM solutions with \(k=1\) in Figs. (5) and (6).
In Fig. (3), the temperature is positive for all values of \(r_{+}\) in dimensions \(n=4,5,6\). So, the thermal stability of these solutions in the canonical ensemble depends only on the heat capacity value. There is an \(r_{+\text{min1}}\) such that the heat capacity is positive for \(r_{+}>r_{+\text{min1}}\), and it increases as the dimension \(n\) increases. To obtain a physically stable region in the grand canonical ensemble, all the quantities \(T_{+}\), \(C_{Q}\) and \(det(H)\) should be positive. We can conclude from Fig. (3) that the stable region becomes smaller as the dimension \(n\) increases. For example, the four-dimensional BIYM solutions are stable for \(r_{+}>r_{+\text{min1}}\), while there are no stable regions for \(n=6\). For \(n=5\), there is just a small stable region for the BIYM black hole. Obviously, the obtained stable regions are different in these two ensembles, which is expected. However, one may choose the cosmological constant as a thermodynamic variable. It is argued in [64] that considering the cosmological constant as a variable may lead to the same stable regions for the two ensembles.
For the AdS-ENYM black hole in Fig. (4), there is an \(r_{+\text{min2}}\) for each dimension \(n=4,5,6\) such that both \(T_{+}\) and \(C_{Q}\) are positive for \(r_{+}>r_{+\text{min2}}\). By increasing the dimension \(n\), the positive value of \(det(H)\) decreases. The four-dimensional AdS-ENYM solutions with \(k=-1\) are thermally stable for \(r_{+}>r_{+\text{min2}}\) in the grand canonical ensemble; however, there is no thermal stability for \(n=6\).
In Fig. (5), we have probed the stability of the flat LNYM solutions with \(k=1\). As the heat capacity behavior is not clear in \(n=4,5,6\) for the range of \(0\leq r_{+}\leq 1\), we have magnified it in Fig. (6). Figs. (5) and (6) show that there are two values \(r_{+\text{min3}}\) and \(r_{+\text{max}}\) such that both \(T_{+}\) and \(C_{Q}\) are positive for \(r_{+\text{min3}}\leq r_{+}\leq r_{+\text{max}}\). As the dimension \(n\) increases, the values of \(r_{+\text{min3}}\) and \(r_{+\text{max}}\) decrease. As the quantity \(det(H)\) is negative for the range \(r_{+\text{min3}}\leq r_{+}\leq r_{+\text{max}}\), we cannot find a stable region for the flat solutions in the grand canonical ensemble.
## VI Critical behavior of the NYM black hole solutions
In this section, we would like to study the critical behavior and phase transitions of the NYM black holes in the extended phase space. One can enlarge the thermodynamic phase space and consider the cosmological constant as a thermodynamic pressure [65; 66; 67]. Hawking and Page were the first to show a phase transition for the Schwarzschild AdS black hole [68]. Recently, many studies of the critical behavior and phase transitions of black holes have been carried out [69; 70; 71; 72]. The critical behavior of the BI Maxwell (BIM) black hole in the AdS spacetime has also been investigated [56; 58; 73]. In Ref. [74], the critical behavior of the BI-dilaton black holes has been discussed. In the following, we first obtain a Smarr relation; then we derive an equation of state and a Gibbs energy to examine the phase transition of the NYM black hole. We also obtain the critical exponents of this black hole and compare
them with the Van der Waals fluid.
### Smarr relation
To obtain a Smarr relation, we should investigate the thermodynamics of the black hole in the extended phase space. For the NYM black hole, we consider the quantities \(S\) and \(Q\), the dimensionful parameters \(\Lambda\) and \(\beta\), and their conjugates as thermodynamic variables. In this way, we can write the first law of thermodynamics in the extended phase space
\[dM=TdS+\Phi dQ+VdP+Bd\beta, \tag{40}\]
where the pressure \(P\) is defined as
\[P=-\frac{\Lambda}{8\pi}. \tag{41}\]
If we consider the specific volume \(v=\frac{4r_{+}}{n-2}\) and use Eq. (34), then the conjugate quantity of \(P\) is
\[V=\frac{\omega_{n-2}}{n-1}r_{+}^{n-1}, \tag{42}\]
and the related conjugate of \(\beta\) in the \(n\neq 4z+1\) case is defined as
\[B = \frac{\partial M}{\partial\beta} \tag{43}\] \[= \left\{\begin{array}{ll}\frac{e^{2}(n-2)(n-3)r^{n-5}}{8\pi\beta (n-5)}F_{1}\big{(}\big{[}\frac{5-n}{4}\big{]}\,,\big{[}\frac{9-n}{4}\big{]}\,,- \frac{\eta_{+}}{4}\big{)}+\frac{\beta r^{n-1}}{2\pi(n-1)}\big{(}{}_{2}F_{1} \big{(}\big{[}\frac{1-n}{4}\big{]}\,,\big{[}\frac{5-n}{4}\big{]}\,,-\frac{\eta _{+}}{4}\big{)}-1\big{)},&EN\\ \\ \frac{(n-2)(n-3)e^{2}r^{n-5}}{8\pi\beta(n-1)}(1+\frac{\eta_{+}}{8})^{-1}-\frac {\beta r^{n-1}}{\pi(n-1)}{\rm ln}(1+\frac{\eta_{+}}{8})-\frac{\eta_{+}(n-2)(n -3)e^{2}r^{n-5}}{16\pi(n-1)(n-9)\beta}{}_{2}F_{1}\big{(}\big{[}2,\frac{9-n}{4 }\big{]}\,,\big{[}\frac{13-n}{4}\big{]}\,,-\frac{\eta_{+}}{8}\big{)}.&LN\end{array}\right.\]
By using Eqs. (29), (30), (34), (35), (37), (41), (42) and (43), the Smarr relation of the NYM black hole for \(n\neq 4z+1\) can be derived as
\[M=\frac{1}{n-3}[\Phi Q-\beta B-2PV+(n-2)TS],\quad\mbox{ for }\,n\neq 4z+1. \tag{44}\]
It should be noted that the Smarr relation is not satisfied for the case \(n=4z+1\). This is not unexpected, as there are some other black hole solutions in the context of nonlinear electrodynamics for which the Smarr relation is not satisfied [75; 76; 77]. It was argued that the reason is that the trace of the energy-momentum tensor is not zero. The trace of the energy-momentum tensor is not zero for the NYM black holes either. However, we have a Smarr relation for \(n\neq 4z+1\). We guess it may originate from some properties of the hypergeometric functions, which have no elementary explicit form for the case \(n\neq 4z+1\). We hope to investigate this issue in the future and find a physical reason for the Smarr relation not being satisfied.
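Since \(M\) in Eq. (34) is an explicit function of \((S,Q,P,\beta)\) through \(f(r_{+})=0\), the relation (44) can be checked numerically. A minimal Python sketch (our own illustration for the BI branch with \(n=6\); the chosen state values are arbitrary) computes all four conjugate quantities by central finite differences and compares both sides of Eq. (44):

```python
# Minimal, illustrative numerical check of the Smarr relation (44) for the BI
# branch with n = 6 (so n != 4z+1), using Eq. (16) for m(r_+) and finite
# differences for the conjugate quantities T, Phi, V, B.
import numpy as np
from scipy.special import hyp2f1

n, k = 6, 1
w = 8 * np.pi**2 / 3                           # volume of the unit 4-sphere

def M(S, Q, P, beta):
    rp = (4 * S / w)**(1.0 / (n - 2))                      # from Eq. (30)
    e = 4 * np.pi * Q / (w * np.sqrt((n - 2) * (n - 3)))   # from Eq. (35)
    Lam = -8 * np.pi * P                                   # from Eq. (41)
    eta = (n - 2) * (n - 3) * e**2 / (beta**2 * rp**4)
    hyp = hyp2f1(-0.5, (1 - n) / 4, (5 - n) / 4, -eta / 2)
    m = rp**(n - 3) * (k - 2 * Lam * rp**2 / ((n - 1) * (n - 2))
                       + 4 * beta**2 * rp**2 / ((n - 1) * (n - 2)) * (1 - hyp))
    return (n - 2) * w * m / (16 * np.pi)                  # Eq. (34)

def d(f, x, h=1e-6):                           # central difference
    return (f(x + h) - f(x - h)) / (2 * h)

S, Q, P, beta = 2.0, 0.3, 0.05, 1.5            # arbitrary illustrative state
T   = d(lambda s: M(s, Q, P, beta), S)
Phi = d(lambda q: M(S, q, P, beta), Q)
V   = d(lambda p: M(S, Q, p, beta), P)
B   = d(lambda b: M(S, Q, P, b), beta)
lhs = (n - 3) * M(S, Q, P, beta)
rhs = Phi * Q - beta * B - 2 * P * V + (n - 2) * T * S
print(f"(n-3)M = {lhs:.6f},  Smarr r.h.s. = {rhs:.6f}")    # should agree
```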
### Equation of state
To compare the critical behavior of the NYM black hole with that of the Van der Waals fluid, we first obtain the equation of state \(P(T,v,\beta)\equiv P\), using equation (29). The critical points may be determined by using the following
conditions
\[\frac{\partial P}{\partial v}=0\,\,\,\,,\,\,\,\frac{\partial^{2}P}{\partial v^{2}}=0. \tag{45}\]
We denote the volume, temperature and pressure of the critical points by \(v_{c}\), \(T_{c}\) and \(P_{c}\), respectively. In the following, we will discuss the critical behavior of the NYM black hole for three types BIYM, ENYM, and LNYM, separately:
**Critical behavior of the BIYM solutions**
By substituting the relation (41) in Eq. (29), one can find the equation of state for the BIYM black hole,
\[P=\frac{T}{v}-k\frac{n-3}{\pi(n-2)v^{2}}-\frac{\beta^{2}}{4\pi}\bigg{(}1-\sqrt {1+\frac{128(n-3)e^{2}}{(n-2)^{3}\beta^{2}v^{4}}}\bigg{)}. \tag{46}\]
The critical behaviors of the NYM and nonlinear electrodynamics black holes are the same in four dimensions. The critical behavior of the BIM black hole in four and higher dimensions is discussed in Refs. [56; 57]. In this section, we study the critical behavior of NYM black holes in higher dimensions. If we consider the spherical case with \(k=1\) and impose the conditions (45) on equation (46), we arrive at a cubic equation for the critical points
\[x^{3}+px+q=0\,, \tag{47}\]
where \(x\), \(p\) and \(q\) are given by
\[x=\bigg{[}v_{c}^{4}+\frac{128(n-3)e^{2}}{(n-2)^{3}\,\beta^{2}} \bigg{]}^{-\frac{1}{2}}\,\,\,,\,\,\,p=-\frac{3\,(n-2)^{3}\,\beta^{2}}{256\,(n -3)\,e^{2}}\,\,\,,\,\,\,q=\frac{(n-2)^{5}\,\,\beta^{2}}{8192\,(n-3)\,\,e^{4}}. \tag{48}\]
\(x\) is expressed in terms of \(v_{c}\); therefore, to have a positive value for \(v_{c}\), we have the condition
\[|x|\leq\frac{(n-2)\,\sqrt{2\,(n-2)(n-3)}\,\beta}{16\,(n-3)\,e}. \tag{49}\]
To obtain an expression for the critical volume, we have to find the roots of equation (47). For equation (47), with \(p<0\) and real \(q\), one can find one or three real roots. It has three real roots when \(4p^{3}+27q^{2}\leq 0\), which leads to
\[\beta\geq\beta_{0}=\frac{\sqrt{(n-2)(n-3)}}{4\,e}. \tag{50}\]
We can write the roots in trigonometric form as
\[x_{k^{\prime}}=2\,\sqrt{\frac{-p}{3}}\cos\left(\frac{1}{3}\,\arccos\left(\frac{3\,q}{2\,p}\sqrt{\frac{-3}{p}}\right)-\frac{2\pi k^{{}^{\prime}}}{3}\right)\ \,\ \ k^{{}^{\prime}}=0,1,2, \tag{51}\]
where just \(x_{0}\) and \(x_{1}\) give a physical value for the critical volume \(v_{c}\) in Eq. (48). \(x_{0}\) and \(x_{1}\) also satisfy the condition (49), which provides
\[\beta_{0}=\frac{\sqrt{(n-2)(n-3)}}{4\,e}\leq\beta\leq\beta_{2}, \tag{52}\]
where \(\beta_{2}=\frac{\sqrt{2\,(n-2)(n-3)}}{4\,e}\). There is also one real root when \(\beta<\beta_{0}\), for which \(v_{c}\) is negative, and so the critical point can exist only for \(\beta\geq\beta_{0}\). In the range of \(\beta_{0}\leq\beta\leq\beta_{2}\), which we call the 'BI regime', there are two critical points corresponding to \(x=x_{0,1}\) in equation (51). Although the critical temperature \(T_{c}\) is positive for both critical points \(v_{c}\), one can find that for \(\beta\geq\beta_{1}=\frac{\sqrt{(n-2)(n-3)\left(6+4\,\sqrt{3}\right)}}{12\,e}\) one of the critical points in the BI regime has negative pressure and is thus unphysical. We have shown the P-v isotherms in the ranges of \(\beta_{0}\leq\beta\leq\beta_{1}\) and \(\beta_{1}\leq\beta\leq\beta_{2}\) for \(n=5\) in Figs. (8) and (9), respectively. For values of \(\beta\) greater than \(\beta_{2}\), there is also one critical point, corresponding to \(x=x_{1}\) in equation (51). In this range (\(\beta>\beta_{2}\)), which we call the 'YM regime', the critical behavior is identical to that of the YM-AdS black hole. In other words, there is just one inflection point for \(T<T_{c}\) and the P-v isotherms are qualitatively identical to those of a Van der Waals fluid. We have depicted the critical behavior in this range (\(\beta>\beta_{2}\)) in Fig. (7) for \(n=5\). For a better understanding, we have collected the above results for \(n=5\) in Table (1).
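Numerically, the critical points follow from a single scalar equation: writing \(P=T/v+g(v)\), the conditions (45) give \(T_{c}=v_{c}^{2}\,g^{\prime}(v_{c})\) together with \(2g^{\prime}(v)+v\,g^{\prime\prime}(v)=0\). A minimal Python sketch (our own illustration, with \(e=1\) and \(\beta=0.7\) as in Fig. 8) recovers the two critical points of the BI regime:

```python
# Minimal, illustrative sketch: critical points of the BI equation of state
# (46) for n = 5, k = 1, obtained from 2 g'(v) + v g''(v) = 0 with
# P = T/v + g(v); then T_c = v_c^2 g'(v_c) and P_c = T_c/v_c + g(v_c).
import numpy as np
from scipy.optimize import brentq

n, k, e, beta = 5, 1, 1.0, 0.7             # beta in the BI regime, as in Fig. 8

def g(v):
    return (-k * (n - 3) / (np.pi * (n - 2) * v**2)
            - beta**2 / (4 * np.pi)
            * (1 - np.sqrt(1 + 128 * (n - 3) * e**2
                           / ((n - 2)**3 * beta**2 * v**4))))

def F(v, h=1e-4):                          # 2 g' + v g'' via central differences
    g1 = (g(v + h) - g(v - h)) / (2 * h)
    g2 = (g(v + h) - 2 * g(v) + g(v - h)) / h**2
    return 2 * g1 + v * g2

v = np.linspace(0.3, 6.0, 2000)
Fv = F(v)
roots = [brentq(F, v[i], v[i + 1]) for i in np.nonzero(np.diff(np.sign(Fv)))[0]]
for vc in roots:                           # two critical points in the BI regime
    h = 1e-4
    Tc = vc**2 * (g(vc + h) - g(vc - h)) / (2 * h)
    print(f"v_c = {vc:.4f},  T_c = {Tc:.6f},  P_c = {Tc / vc + g(vc):.6f}")
```

Swapping \(g(v)\) for the corresponding terms of Eqs. (58) and (59) reproduces the EN and LN numerics behind Table (2).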
We can show that as \(\beta\rightarrow\infty\), the critical volume determined from \(x_{1}\) reduces to the critical volume of the YM-AdS
black hole. So, we name the branch determined from \(x_{1}\) as the YM branch and the branch determined from \(x_{0}\) as the BI branch,
\[v_{c}=\left[\frac{1}{x^{2}}-\frac{128\left(n-3\right)e^{2}}{\left(n-2\right)^{3} \beta^{2}}\right]^{\frac{1}{4}},\,x=\left\{\begin{array}{ll}x_{1},&\beta\geq \beta_{0}\ \ \ \ (YM-branch)\\ \\ x_{0},&\beta\in(\beta_{0},\beta_{2})\ \ \ \ (BI-branch).\end{array}\right. \tag{53}\]
The behaviors of the critical values, \(T_{c}\), \(v_{c}\) and \(P_{c}\) with respect to the nonlinear parameter \(\beta\) are depicted in Fig. (10) and Fig. (11). For large \(\beta\), the critical values expand as
\[v_{c} = \frac{4\sqrt{6}\,e}{n-2}-\frac{7\sqrt{6}\left(n-3\right)}{216e \beta^{2}}+\mathcal{O}\bigg{(}\frac{1}{\beta^{3}}\bigg{)}, \tag{54}\] \[T_{c} = \frac{\sqrt{6}\left(n-3\right)}{18\pi e}+\frac{\sqrt{6}\left(n-2 \right)\left(n-3\right)^{2}}{5184e^{3}\pi\beta^{2}}+\mathcal{O}\bigg{(}\frac{ 1}{\beta^{3}}\bigg{)},\] (55) \[P_{c} = \frac{(n-2)(n-3)}{192\pi e^{2}}+\frac{7(n-2)^{2}(n-3)^{2}}{16588 8\pi e^{4}\beta^{2}}+\mathcal{O}\bigg{(}\frac{1}{\beta^{3}}\bigg{)},\] (56) \[\rho_{c} = \frac{P_{c}V_{c}}{T_{c}}=\frac{3}{8}-\frac{(n-2)(n-3)}{768e^{2} \beta^{2}}+\mathcal{O}\bigg{(}\frac{1}{\beta^{3}}\bigg{)}. \tag{57}\]
We can observe that in the limit \(\beta\rightarrow\infty\), the critical values asymptote to those of the YM-AdS black hole. In this limit, the critical ratio tends to \(\rho_{c}\to 3/8\), independent of the dimension \(n\). This is in contrast to the abelian Maxwell theory, in which the critical ratio \(\rho_{c}=\frac{P_{c}v_{c}}{T_{c}}\) depends on the dimension \(n\) [57].
**Critical behaviors of the ENYM and LNYM solutions**
The equations of state for the ENYM and LNYM solutions are defined respectively as
\[P=\frac{T}{v}-k\frac{(n-3)}{\pi(n-2)v^{2}}+\frac{\beta^{2}}{4\pi}\bigg{[}1-\exp\bigg{(}-\frac{64(n-3)e^{2}}{(n-2)^{3}\beta^{2}v^{4}}\bigg{)}\bigg{]},\qquad\text{EN}, \tag{58}\]
and
\[P =\frac{T}{v}-k\frac{(n-3)}{\pi(n-2)v^{2}}+\frac{\beta^{2}}{2\pi}\text {ln}\bigg{(}1+\frac{32(n-3)e^{2}}{(n-2)^{3}\beta^{2}v^{4}}\bigg{)}, \text{LN}. \tag{59}\]
The critical behaviors of the ENYM and LNYM black holes are qualitatively the same as the BIYM case. However, the critical points, and thus the branches mentioned in relation (53), cannot be determined analytically. Therefore, we solved Eq. (45) numerically in order to obtain the critical points. In the following, we will discuss the critical behavior just for the LNYM case. We have obtained the critical values of the six-dimensional LNYM solutions for different values of \(\beta\) in Table (2). In 6 dimensions, \(\beta_{0}=\frac{\sqrt{3}}{2e}\approx\frac{0.86}{e}\), \(\beta_{1}=\frac{\sqrt{6(3+2\sqrt{3})}}{6e}\approx\frac{1.03}{e}\) and \(\beta_{2}=\frac{\sqrt{6}}{2e}\approx\frac{1.22}{e}\). The P-v isotherms for \(n=6\) and \(e=k=1\) are displayed in Figs. (12(a)) and (12(b)). In the range of \(\beta_{0}\leq\beta\leq\beta_{2}\), there are two critical points, as we see in Fig. (12(a)), but for \(\beta>\beta_{2}\) there is only one critical point in Fig. (12(b)) and the P-v isotherm is analogous to the Van der Waals fluid.
Figure 8: P-v diagram of BIYM theory for different values of temperature \(T\) with \(\beta\in(\beta_{0},\beta_{1})\) and \(n=5\). We have set \(e=1\) and \(\beta=0.7\), for which \(T_{c1}\approx 0.091626\) and \(T_{c2}\approx 0.079242\).
Figure 9: P-v diagram of BIYM theory for \(\beta\in(\beta_{1},\beta_{2})\) and \(n=5\). We have set \(e=1\) and \(\beta=0.77\), for which \(T_{c1}\approx 0.090607\) and \(T_{c2}\approx 0.061157\).
### Gibbs Energy
To get more information about the phase transition of the NYM black holes, we analyze the behavior of the Gibbs free energy, \(G\). It can be achieved from the relation \(G(T,P)=M-TS\). We have plotted \(G\) versus \(T\) for the five-dimensional BIYM solution in Figs. (13) and (14). We can observe that the Gibbs free energy depends on the value of the nonlinear parameter \(\beta\), as in the case of critical behavior. For \(\beta>\beta_{2}\), which we called the YM-branch, the phase transition is the same as the one for YM-AdS black holes or the RN-AdS black holes [56]. Namely, there is one critical point that corresponds to a phase transition from a small black hole to a large black hole when \(P<P_{c1}\), see Fig. (13).
We have also shown the behavior of \(G\) in the BI branch in Figs. (14(a)) and (14(b)). In Fig. (14(a)), with the range of \(\beta\in(\beta_{0},\beta_{1})\), there are two physical (positive-pressure) critical points described by \(T_{c1}\) and \(T_{c2}\). The phase transition at \(T=T_{c2}\) is not physical, since the Gibbs energy is not globally minimized at this point. However, there is a first-order phase transition from a small to a large black hole for \(T<T_{c1}\), which ends at \(T=T_{t}\). On the other hand, in a specific range \(T\in(T_{t},T_{z})\), one can observe a discontinuity in the global minimum of the Gibbs energy, which shows a reentrant LBH/SBH/LBH (large black hole/small black hole/large black hole) phase transition. Interestingly, this is of special significance for the higher-dimensional NYM solutions, since there is no reentrant phase transition for the BI RN-AdS black holes in higher dimensions [56].
For the range of \(\beta_{1}\leq\beta\leq\beta_{2}\) in Fig. (14(b)), there is only one critical point with positive pressure at \(T=T_{c1}\) and a
Figure 11: The critical ratio \(\rho_{c}\) with respect to \(\beta\) for \(n=5\) and \(e=1\). For the limit \(\beta\to\infty\), the critical ratio asymptotes to the YM-AdS one with \(\rho_{c,YM}=\frac{3}{8}\). It has a unique value for the YM-AdS black hole in each dimension.
Figure 10: Critical values \(T_{c}\), \(v_{c}\) and \(P_{c}\) versus \(\beta\) for \(n=5\) and \(e=1\). The black and blue solid lines indicate the YM and BI branches, respectively. The BI branch corresponds to \(\beta\in(\beta_{0},\beta_{1})\). The dashed horizontal lines are the critical values for the YM-AdS black hole, for which \(T_{c,YM}=\sqrt{6}/9\pi e\), \(v_{c,YM}=4\sqrt{6}e/3\) and \(P_{c}=1/32\pi e^{2}\). They can be obtained from the limit \(\beta\to\infty\).
first order SBH/LBH phase transition occurs for \(T_{t}\leq T<T_{c1}\). Similar to the previous case \(\beta\in(\beta_{0},\beta_{1})\), in a specific range of \(T\in(T_{t},T_{z})\), the global minimum of G is not continuous, which represents a reentrant LBH/SBH/LBH phase transition.
For the LNYM and ENYM cases, the Gibbs energy behavior is qualitatively the same as the BIYM case, so we do not probe them further.
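The G-T curves themselves are built parametrically in \(r_{+}\) at fixed pressure. A minimal Python sketch (our own illustration for the 5D BI case, with \(e=1\) and \(\beta=2\) as in Fig. 13, and an arbitrary fixed pressure \(P=0.008\), below \(P_{c1}\approx 0.01\) for these parameters):

```python
# Minimal, illustrative sketch of the parametric construction behind
# Figs. 13-14: sweep the horizon radius at fixed pressure and collect
# (T(r_+), G(r_+)) for the 5D BI case, with G = M - T S, M = 3*pi*m/8 and
# S = pi^2 r_+^3 / 2 (omega_3 = 2 pi^2).
import numpy as np

e, beta = 1.0, 2.0
P = 0.008                                   # a fixed pressure below P_c1
Lam = -8 * np.pi * P                        # from Eq. (41)

def W(r):
    return np.sqrt(1 + 3 * e**2 / (beta**2 * r**4))

C5 = -3 * e**2 / (8 * beta**2) * (1 + 2 * np.log(2))

def m_of(r):                                # from f(r_+) = 0, Eq. (18)
    return (r**2 * (1 - Lam * r**2 / 6 + beta**2 * r**2 / 3 * (1 - W(r)))
            - e**2 * np.log(r**2 * (1 + W(r))) - 4 * beta**2 * C5 / 3)

def T_of(r):                                # Eq. (29), BI branch, n = 5
    return (1 / (2 * np.pi * r) - Lam * r / (6 * np.pi)
            + beta**2 * r / (3 * np.pi) * (1 - W(r)))

r = np.linspace(0.2, 4.0, 2000)
T = T_of(r)
G = 3 * np.pi * m_of(r) / 8 - T * np.pi**2 * r**3 / 2
# Plotting G against T traces the characteristic swallowtail when P < P_c1.
print(f"T in [{T.min():.4f}, {T.max():.4f}],  G in [{G.min():.4f}, {G.max():.4f}]")
```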
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \(\beta\) & \(v_{c1}\) & \(v_{c2}\) & \(T_{c1}\) & \(T_{c2}\) & \(P_{c1}\) & \(P_{c2}\) & \(\rho_{c1}\) & \(\rho_{c2}\) \\ \hline
0.9 & 2.2598 & 1.2866 & 0.1340 & 0.1035 & 0.0214 & 0.0028 & 0.3608 & 0.0350 \\
1 & 2.3052 & 1.1734 & 0.1331 & 0.0769 & 0.0210 & -0.0150 & 0.3645 & -0.2295 \\
5 & 2.4446 & 0.4079 & 0.1300 & -3.2717 & 0.0199 & -4.8685 & 0.3746 & 0.6070 \\
10 & 2.4482 & 0.2777 & 0.1299 & -10.9403 & 0.0199 & 22.4528 & 0.3749 & 0.5699 \\ \hline \end{tabular}
\end{table}
Table 2: Critical values of the six-dimensional LNYM black hole for different values of \(\beta\). We have set \(e=k=1\). As \(\beta\) increases, the second critical volume \(v_{c2}\) decreases and for \(\beta\rightarrow\infty\), the second critical volume disappears and the first one approaches to the critical values of 6D YM-AdS black hole (\(T_{c,YM}=\sqrt{6}/6\pi e\approx 0.1300\), \(v_{c,YM}=\sqrt{6}e\approx 2.4494\), \(P_{c,YM}=1/16\pi e^{2}\approx 0.0199\) and \(\rho_{c,YM}=\frac{3}{8}\approx 0.375\)).
Figure 12: P-v diagram of LNYM theory in \(n=6\). We have set \(e=1\) and \(k=1\).
Figure 13: \(\beta>\beta_{2}\). We have set \(\beta=2\) and \(e=1\). For \(\beta>\beta_{2}\) the behavior is the same as YM-AdS black hole. There is only one critical point and the corresponding phase transition occurs for \(P<P_{c1}\).
### Critical exponents
Let us now calculate the critical exponents of the NYM black hole. These values describe the behavior of the physical quantities in the vicinity of the critical point. It should be noted that we use just the physical critical points identified in the previous sections. There is only one physical critical point in each of the ranges \(\beta>\beta_{2}\), \(\beta_{0}\leq\beta\leq\beta_{1}\) and \(\beta_{1}\leq\beta\leq\beta_{2}\). To calculate the exponent \(\alpha\), we should study the behavior of the heat capacity at constant volume. So, we write the entropy \(S\) as a function of \(T\) and \(V\). If we use Eqs. (30) and (42), we can obtain
\[S(T,V)=\frac{1}{4}\omega_{n-2}^{\frac{1}{n-2}}\bigg{(}V(n-1)\bigg{)}^{\frac{n- 2}{n-1}}, \tag{60}\]
which shows that the entropy is independent of the temperature \(T\), and
\[C_{V}=T\bigg{(}\frac{\partial S}{\partial T}\bigg{)}_{V}=0\ \Rightarrow\ \alpha=0. \tag{61}\]
Defining the variables \(p\), \(\nu\) and \(\tau\) as below,
\[p=\frac{P}{P_{c}}\ \,\ \ \nu=\frac{v}{v_{c}}\ \,\ \ \tau=\frac{T}{T_{c}}, \tag{62}\]
Eqs. (46), (58) and (59) can be rewritten as
\[p\ =\ \frac{\tau}{\nu\rho_{c}}+h(\nu), \tag{63}\]
where \(\rho_{c}=\frac{P_{c}v_{c}}{T_{c}}\) and
\[h(\nu) = -k\frac{n-3}{\pi P_{c}(n-2)\nu^{2}v_{c}^{2}} \tag{64}\] \[+ \frac{1}{P_{c}}\times\left\{\begin{array}{ll}-\frac{\beta^{2}} {4\pi}\bigg{(}1-\sqrt{1+\frac{128(n-3)e^{2}}{(n-2)^{3}\beta^{2}\nu^{4}v_{c}^{2} }}\bigg{)},&BI\\ \\ \frac{\beta^{2}}{4\pi}\bigg{[}1-\exp\bigg{(}-\frac{64(n-3)e^{2}}{(n-2)^{3} \beta^{2}\nu^{4}v_{c}^{4}}\bigg{)}\bigg{]},&EN\\ \\ \frac{\beta^{2}}{2\pi}\mathrm{ln}\bigg{(}1+\frac{32(n-3)e^{2}}{(n-2)^{3}\beta^{2 }\nu^{4}v_{c}^{2}}\bigg{)}.&LN\end{array}\right.\]
Figure 14: G-T diagram of BIYM theory in \(n=5\). We have set \(e=k=1\). In the range of \(\beta_{0}\leq\beta\leq\beta_{1}\), there are two critical points with positive pressure. The Gibbs energy is not minimized at \(P=P_{c2}\), and so the first order phase transition only occurs at \(P=P_{c1}\). There is also a reentrant LBH/SBH/LBH transition for \(P_{t}\leq P\leq P_{z}\). For \(\beta\in(\beta_{1},\beta_{2})\), there is only one physical critical point and so a first order small-large phase transition occurs for \(P_{t}\leq P\leq P_{c1}\). We also observe a reentrant phase transition for the range of \(P\in(P_{t},P_{z})\).
Now, expanding Eq. (63) near the critical point with \(\tau=1+t\) and \(\nu=(1+\omega)^{1/z}\), we obtain
\[p=1+At-Bt\omega-C\omega^{3}+\mathcal{O}(t\omega^{2},\omega^{4}), \tag{65}\]
where \(B=\frac{1}{z\rho_{c}}\), \(C=\frac{1}{z^{3}}\bigg{(}\frac{1}{\rho_{c}}-\frac{h^{(3)}|_{\nu=1}}{6}\bigg{)}\) and \(z=3\) for all dimensions. Differentiating Eq. (65) with respect to \(\omega\) at fixed \(t<0\) gives
\[dP=-P_{c}(Bt+3C\omega^{2})d\omega. \tag{66}\]
Imposing Maxwell's equal-area law, with the pressure held constant during the transition, we conclude that
\[p=1+At-Bt\omega_{l}-C\omega_{l}^{3}=1+At-Bt\omega_{s}-C\omega_{s }^{3},\] \[0=\int_{\omega_{l}}^{\omega_{s}}\omega dP, \tag{67}\]
where \(\omega_{s}\) and \(\omega_{l}\) denote the reduced volumes of the small and large black holes, respectively. The non-trivial solution of Eq. (67) exists only for \(BCt<0\) and reads
\[\omega_{l}=-\omega_{s}=\sqrt{-\frac{Bt}{C}}. \tag{68}\]
Since it is not possible to find an analytic expression for the quantity \(C\), we gather numerical results for \(A\), \(B\), and \(C\) in the BI case in Table 3. They show that \(\eta=V_{c}(\omega_{l}-\omega_{s})=2V_{c}\omega_{l}\propto\sqrt{-t}\), which yields
\[\beta^{{}^{\prime}}=\frac{1}{2}. \tag{69}\]
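To make the last step explicit, the two conditions in Eq. (67) can be solved symbolically. The following minimal sketch (Python/sympy; all symbol names are ours) reproduces the non-trivial branch \(\omega_{l}=-\omega_{s}=\sqrt{-Bt/C}\) and hence the mean-field scaling \(\eta\propto\sqrt{-t}\).

```python
import sympy as sp

# Expansion near the critical point, Eq. (65):
# p = 1 + A t - B t w - C w^3, with w the reduced-volume deviation
A, B, C, t, w, wl, ws = sp.symbols('A B C t w omega_l omega_s', real=True)
p = 1 + A*t - B*t*w - C*w**3

# (i) equal pressure at the coexisting small/large volumes, Eq. (67)
eq_pressure = sp.Eq(p.subs(w, wl), p.subs(w, ws))

# (ii) Maxwell equal-area law: 0 = int w dP with dP = -P_c (B t + 3 C w^2) dw;
# the constant prefactor -P_c drops out of the condition
eq_area = sp.Eq(sp.integrate(w*(B*t + 3*C*w**2), (w, wl, ws)), 0)

# Besides the trivial branch wl = ws, sympy returns
# wl = -ws = +-sqrt(-B t / C), i.e. eta ~ sqrt(-t) and beta' = 1/2
print(sp.solve([eq_pressure, eq_area], [wl, ws], dict=True))
```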
Differentiating Eq. (65) with respect to \(V\) gives the isothermal compressibility, \(\kappa_{T}=-\frac{1}{V}\frac{\partial V}{\partial P}\big{|}_{T}\propto-\frac{V_{c}}{BP_{c}}\frac{1}{t}\). This leads to the critical exponent
\[\gamma=1. \tag{70}\]
On the critical isotherm \(t=0\), Eq. (65) reduces to \(p-1=-C\omega^{3}\), which gives
\[\delta=3. \tag{71}\]
The obtained results for \(\alpha\), \(\beta^{{}^{\prime}}\), \(\gamma\), and \(\delta\) indicate that the critical exponents of the NYM black hole are exactly the same as those of the Van der Waals fluid [69].
### Dynamical stability of the NYM black hole solutions
In this section, we investigate the dynamical stability of the nonlinear Yang-Mills black hole. Regge and Wheeler were the first to study stability through the behavior of perturbation modes [78]. They decomposed the perturbations of a 4-dimensional static and spherically symmetric background into odd- and even-parity sectors under a two-dimensional rotation transformation. The dynamical stability of nonlinear black hole solutions in Einstein gravity has been studied in Ref. [79]. We follow that work to study the dynamical stability of the NYM black holes and derive it for the four-dimensional NYM solutions. If one substitutes \(\hat{F}\equiv\frac{1}{4}F^{2}\) in
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Parameters & \multicolumn{3}{c|}{\(n=5\)} & \multicolumn{3}{c|}{\(n=6\)} \\ \hline \(\beta\) & 0.9 & 1 & 2 & 1.5 & 2 & 3 \\ A & 2.7603 & 2.7372 & 2.6813 & 2.7274 & 2.6976 & 2.6795 \\ B & -0.9201 & -0.9124 & -0.8937 & -0.9091 & -0.8992 & -0.8931 \\ C & -0.0386 & -0.0410 & -0.0475 & -0.0421 & -0.0455 & -0.0477 \\ \hline \end{tabular}
\end{table}
Table 3: The parameters A, B and C for different values of \(\beta\) in BIYM theory for \(n=5\), \(n=6\) and \(e=1\).
the Lagrangian (2) and also defines \({\cal L}(\hat{F})\equiv-L(\hat{F})/4\), then the Hamiltonian of the NYM Lagrangian is specified by \({\cal H}\equiv 2{\cal L}_{\hat{F}}\hat{F}-{\cal L}\), where \({\cal L}_{\hat{F}}\equiv\frac{\partial{\cal L}}{\partial\hat{F}}\). It is convenient to study the dynamical stability in the so-called P frame, where \(P\equiv{\cal L}_{\hat{F}}^{2}\hat{F}\). If \({\cal H}_{P}\) (where \({\cal H}_{P}\equiv\frac{\partial{\cal H}}{\partial P}\)) vanishes nowhere outside the horizon, then the solutions are dynamically stable under odd-parity perturbations. We obtain
\[{\cal H}_{P}=\left\{\begin{array}{ll}\sqrt{1+\frac{2\hat{F}}{ \beta^{2}}},&BI\\ e^{\frac{\hat{F}}{\beta^{2}}},&EN\\ 1+\frac{\hat{F}}{2\beta^{2}}.&LN\end{array}\right. \tag{72}\]
We evaluate \({\cal H}_{P}\) in four dimensions. Using Eq. (3) in four dimensions, we get \(\hat{F}=\frac{e^{2}}{2r^{4}}\), which results in a positive value of \({\cal H}_{P}\). We conclude that the NYM solutions are dynamically stable under odd-type perturbations, since \({\cal H}_{P}\) vanishes nowhere outside the horizon. For even-type perturbations, the solutions are dynamically unstable if the condition \({\cal H}_{xx}>0\) is satisfied, where \(x=\sqrt{-2Q^{2}P}\) (\(Q\) is the Yang-Mills charge in Eq. (35)) and \({\cal H}_{xx}=\frac{\partial^{2}{\cal H}}{\partial x^{2}}\). For the NYM black holes, \({\cal H}_{xx}\) is obtained as follows
\[{\cal H}_{xx}=-\frac{1}{Q^{2}}\times\left\{\begin{array}{ll}(1+ \frac{2\hat{F}}{\beta^{2}})^{\frac{3}{2}},&BI\\ \\ e^{\frac{\hat{F}}{\beta^{2}}}(1-\frac{2\hat{F}}{\beta^{2}})^{-1},&EN\\ \\ (1+\frac{\hat{F}}{2\beta^{2}})^{2}(1-\frac{\hat{F}}{2\beta^{2}})^{-1}.&LN\end{array}\right. \tag{73}\]
We note that the ENYM and LNYM black holes are unstable under even-type perturbations in the regions \(\beta^{2}<e^{2}/r^{4}\) and \(\beta^{2}<e^{2}/(4r^{4})\), respectively.
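These stability criteria are straightforward to check numerically. The sketch below (Python; the parameter values are illustrative, not taken from the paper) evaluates Eqs. (72) and (73) with the four-dimensional value \(\hat{F}=e^{2}/(2r^{4})\) and locates the radii where \({\cal H}_{xx}\) changes sign.

```python
import numpy as np

def H_P(F, beta, model):
    """Eq. (72); odd-parity stability requires H_P nonzero outside the horizon."""
    return {'BI': np.sqrt(1 + 2*F/beta**2),
            'EN': np.exp(F/beta**2),
            'LN': 1 + F/(2*beta**2)}[model]

def H_xx(F, beta, Q, model):
    """Eq. (73); even-parity instability occurs where H_xx > 0."""
    return {'BI': -(1 + 2*F/beta**2)**1.5/Q**2,
            'EN': -np.exp(F/beta**2)/(1 - 2*F/beta**2)/Q**2,
            'LN': -(1 + F/(2*beta**2))**2/(1 - F/(2*beta**2))/Q**2}[model]

e, Q, beta = 1.0, 1.0, 0.8          # illustrative values only
r = np.linspace(0.5, 5.0, 2001)
F = e**2/(2*r**4)                   # four-dimensional Wu-Yang value of F-hat

for model in ('BI', 'EN', 'LN'):
    unstable = r[H_xx(F, beta, Q, model) > 0]
    print(model, '| H_P > 0 everywhere:',
          bool(np.all(H_P(F, beta, model) > 0)),
          '| unstable up to r =', unstable.max() if unstable.size else None)
```

With these numbers, the BI branch shows no even-parity instability, while the EN and LN branches are unstable below \(r=(e^{2}/\beta^{2})^{1/4}\) and \(r=(e^{2}/4\beta^{2})^{1/4}\), respectively, in line with the analytic conditions above.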
## VII Concluding Results
In this paper, we obtained new \(n\)-dimensional analytic black hole solutions of non-abelian Yang-Mills gauge theory in the presence of three nonlinear Lagrangians in the action: the Born-Infeld, exponential and logarithmic Lagrangians. Using the Wu-Yang ansatz, we chose a set of gauge potentials with \(SO(n-1)\) and \(SO(n-2,1)\) gauge symmetry groups and obtained the spherical and hyperbolic solutions. In four dimensions, the nonlinear Yang-Mills (NYM) solutions are similar to the nonlinear electrodynamics black hole solutions in Maxwell theory, so the Yasskin theorem holds. We can also conclude that there is a transformation from the non-abelian gauge fields to a set of abelian ones in \(n=4\). For \(n\neq 4\), however, we obtained a genuinely new set of nonlinear non-abelian Yang-Mills solutions. As expected, the nonlinear Yang-Mills solutions reduce to the Einstein-Yang-Mills ones when \(\beta\rightarrow\infty\).
We examined the physical structure of the NYM black holes and observed an essential singularity located at \(r=0\). Based on the behavior of the metric function in the limit \(r\to 0\), we found two different types of horizon structure (Schw-type and RN-type) in \(n=5\). In some higher dimensions, the marginal mass is negative, so the Schw-type is the only solution.
We also calculated the thermodynamic quantities of the NYM black hole and verified the first law of thermodynamics. We analyzed the thermal stability of the solutions in both the canonical and the grand canonical ensembles. Stable regions occur for the NYM-AdS solutions with \(k=-1,1\) and for the flat ones with \(k=1\); the physically stable regions shrink as the dimension \(n\) increases. In the grand canonical ensemble with the mentioned parameters, stability emerges only for \(n=4\), and there are no stable regions for \(n=6\).
Furthermore, we investigated the critical behavior of the NYM black holes. We obtained a Smarr-type formula for the dimensions \(n\neq 4z+1\), where \(z\) is an integer parameter. For the spherical solution with \(k=1\), we found the exact critical points of the BIYM black hole, while we used a numerical method to obtain the critical points of the ENYM and LNYM black holes. There is an interesting range \(\beta\in(\beta_{0},\beta_{2})\), in which the critical behavior differs. One can find a discontinuity in the Gibbs energy for \(\beta\in(\beta_{0},\beta_{2})\), which indicates a reentrant phase transition, known as a zeroth-order phase transition. This reentrant phase transition occurs only in four dimensions for the nonlinear black hole in Maxwell theory[56]. However, in the case of NYM black holes, the reentrant phase transition occurs in four and higher dimensions. For \(\beta\rightarrow\infty\), the critical ratio goes to the constant value \(3/8\), independent of the dimension \(n\). This is one of the differences between the NYM black hole and the nonlinear electrodynamics one,
in which the critical ratio depends on the dimension \(n\)[57]. We also calculated the critical exponents for the NYM black holes and found that the critical exponents are the same as those in the Van der Waals system.
We also probed the dynamical stability of the NYM black holes. For odd-type perturbations, the NYM black holes are dynamically stable. Under even-type perturbations, we face instability for the ENYM and LNYM black holes in the regions \(\beta^{2}<e^{2}/r^{4}\) and \(\beta^{2}<e^{2}/4r^{4}\), respectively.
In this paper, we obtained nonlinear Yang-Mills solutions in higher dimensions. The compactification of higher-dimensional Yang-Mills theories is a very interesting subject, and we expect compactified Yang-Mills theories to be useful for a better understanding of phenomenological and theoretical aspects of fundamental physics. We hope to study gravitational aspects of compactified Yang-Mills theories in the future.
###### Acknowledgements.
This work is supported by Isfahan University of Technology (IUT).
## VIII Appendix
### The gauge potentials for \(SO(3)\), \(SO(2,1)\), \(SO(4)\) and \(SO(3,1)\) gauge groups
The structure constants and the gauge potentials for some gauge groups are defined as follows:
For the \(SO(3)\) gauge group with \(k=1\) and \(n=4\), the coupling constants and the gauge potentials are defined as
\[C^{1}_{23}\ =\ C^{2}_{31}=C^{3}_{12}=-1\,\ \gamma_{ab}=\mbox{diag}(1,1,1) \tag{74}\]
and
\[A^{(i)}_{\mu}\ =\ A^{(i)}_{\theta}\,d\theta+A^{(i)}_{\phi}\,d\phi\,,\qquad i=1,2,3, \tag{75}\]
where
\[\left[\begin{array}{c}A^{(1)}_{\mu}\\ A^{(2)}_{\mu}\\ A^{(3)}_{\mu}\end{array}\right]\ =\ e\left[\begin{array}{cc}-\cos\phi&\sin \theta\cos\theta\sin\phi\\ -\sin\phi&-\sin\theta\cos\theta\cos\phi\\ 0&\sin^{2}\theta\end{array}\right]\left[\begin{array}{c}d\theta\\ d\phi\end{array}\right]. \tag{76}\]
For the \(SO(2,1)\) gauge group with \(k=-1\) and \(n=4\), the coupling constants are
\[C^{1}_{23}\ =\ C^{2}_{31}=-C^{3}_{12}=1\,\ \gamma_{ab}=\mbox{diag}(-1,-1,1), \tag{77}\]
where the gauge potentials are given by Eq. (75) with
\[\left[\begin{array}{c}A^{(1)}_{\mu}\\ A^{(2)}_{\mu}\\ A^{(3)}_{\mu}\end{array}\right]\ =\ e\left[\begin{array}{cc}-\cos\phi&\sinh \theta\cosh\theta\sin\phi\\ -\sin\phi&-\sinh\theta\cosh\theta\cos\phi\\ 0&\sinh^{2}\theta\end{array}\right]\left[\begin{array}{c}d\theta\\ d\phi\end{array}\right]. \tag{78}\]
The coupling constants for the \(SO(4)\) gauge group with \(k=1\) and \(n=5\) are
\[C^{1}_{24} = C^{1}_{35}=C^{2}_{41}=C^{2}_{36}=C^{3}_{51}=C^{3}_{62}=1,\] \[C^{4}_{56} = -C^{4}_{21}=C^{5}_{64}=-C^{5}_{31}=C^{6}_{45}=-C^{6}_{32}=1,\] \[\gamma_{ab} = \mbox{diag}(1,1,1,1,1,1), \tag{79}\]
where the gauge potentials are
\[A^{(i)}_{\mu}\ =\ A^{(i)}_{\theta}\,d\theta+A^{(i)}_{\phi}\,d\phi+A^{(i)}_{\psi}\,d\psi\,,\qquad i=1,\ldots,6, \tag{80}\]
with definitions
\[\left[\begin{array}{c}A^{(1)}_{\mu}\\ A^{(2)}_{\mu}\\ A^{(3)}_{\mu}\\ A^{(4)}_{\mu}\\ A^{(5)}_{\mu}\\ A^{(6)}_{\mu}\end{array}\right]\ =\ e\left[\begin{array}{ccc}-\sin\phi\cos\psi&-\sin\theta\cos\theta\cos\phi\cos\psi&\sin\theta\cos\theta\sin\phi\sin\psi\\ -\sin\phi\sin\psi&-\sin\theta\cos\theta\cos\phi\sin\psi&-\sin\theta\cos\theta\sin\phi\cos\psi\\ -\cos\phi&\sin\theta\cos\theta\sin\phi&0\\ 0&0&-\sin^{2}\theta\sin^{2}\phi\\ 0&\sin^{2}\theta\cos\psi&-\sin^{2}\theta\sin\phi\cos\phi\,\sin\psi\\ 0&\sin^{2}\theta\sin\psi&\sin^{2}\theta\sin\phi\cos\phi\,\cos\psi\end{array}\right]\left[\begin{array}{c}d\theta\\ d\phi\\ d\psi\end{array}\right]. \tag{81}\]
For the \(SO(3,1)\) gauge group with \(k=-1\) and \(n=5\), we have
\[C_{24}^{1} = C_{35}^{1}=C_{41}^{2}=C_{36}^{2}=C_{51}^{3}=C_{62}^{3}=1\] \[C_{56}^{4} = C_{21}^{4}=C_{64}^{5}=C_{31}^{5}=C_{45}^{6}=C_{32}^{6}=1\] \[\gamma_{ab} = {\rm diag}(-1,-1,-1,1,1,1), \tag{82}\]
and
\[\left[\begin{array}{c}A_{\mu}^{(1)}\\ A_{\mu}^{(2)}\\ A_{\mu}^{(3)}\\ A_{\mu}^{(4)}\\ A_{\mu}^{(5)}\\ A_{\mu}^{(6)}\end{array}\right] = e\left[\begin{array}{ccc}-\sin\phi\cos\psi&-\sinh\theta\cosh\theta\cos\phi\,\cos\psi&\sinh\theta\cosh\theta\sin\phi\sin\psi\\ -\sin\phi\sin\psi&-\sinh\theta\cosh\theta\cos\phi\sin\psi&-\sinh\theta\cosh\theta\sin\phi\cos\psi\\ -\cos\phi&\sinh\theta\cosh\theta\sin\phi&0\\ 0&0&\sinh^{2}\theta\sin^{2}\phi\\ 0&-\sinh^{2}\theta\cos\psi&\sinh^{2}\theta\sin\phi\cos\phi\sin\psi\\ 0&-\sinh^{2}\theta\sin\psi&-\sinh^{2}\theta\sin\phi\cos\phi\cos\psi\end{array}\right]\left[\begin{array}{c}d\theta\\ d\phi\\ d\psi\end{array}\right], \tag{83}\]
where the gauge potentials are given by Eq. (80).
### The metric function \(f(r)\) for \(n=5\) and \(n=9\)
For \(n=5\), the solution \(f(r)\) in Eq. (15) is obtained as follows
\[f(r) = k-\frac{m}{r^{2}}-\frac{\Lambda r^{2}}{6}+\left\{\begin{array}{ll}\frac{\beta^{2}r^{2}}{3}\big{[}1-\sqrt{1+\frac{\eta}{2}}\big{]}-\frac{e^{2}}{r^{2}}\big{(}{\rm ln}[\frac{r^{2}}{2}(1+\sqrt{1+\frac{\eta}{2}})]-\frac{1}{2}\big{)},&BI\\ -\frac{\beta^{2}r^{2}}{3}\big{[}1-\exp\big{(}-\frac{\eta}{4}\big{)}\big{]}-\frac{e^{2}}{2r^{2}}\big{[}E_{i}(1,\frac{\eta}{4})-1+{\rm ln}\big{(}\frac{3e^{2}}{2\beta^{2}}\big{)}+\gamma\big{]},&EN\\ \frac{e^{2}}{2r^{2}}[1-4\ln(r)]-\frac{2\beta^{2}r^{2}}{3}(1+\frac{\eta}{8}){\rm ln}(1+\frac{\eta}{8}),&LN\end{array}\right. \tag{84}\]
where \(E_{i}(a,z)=z^{a-1}\Gamma(1-a,z)\), and \(\Gamma(a,x)\) and \(\gamma\) are the incomplete gamma function and the Euler-Mascheroni constant, respectively.
The solution for \(n=9\) is
\[f(r) = k-\frac{m}{r^{6}}-\frac{\Lambda r^{2}}{28} \tag{85}\] \[+ \left\{\begin{array}{ll}\frac{\beta^{2}r^{2}}{14}\big{[}1-\sqrt {1+\frac{21\eta}{6}}\big{]}-\frac{3e^{2}}{4r^{2}}\sqrt{1+\frac{21\eta}{6}}+ \frac{21\eta e^{2}}{8r^{2}}\big{(}{\rm ln}[\frac{r^{2}}{2}(1+\sqrt{1+\frac{21 \eta}{6}})]+\frac{1}{4}\big{)},&BI\\ -\frac{\beta^{2}r^{2}}{14}\big{[}1-\big{(}1-\frac{7\eta}{8}\big{)}{\rm exp} \big{(}-\frac{7\eta}{8}\big{)}\big{]}+\frac{21\eta e^{2}}{16r^{2}}\big{[}E_{i}( 1,\frac{7\eta}{4})-\frac{3}{2}+{\rm ln}\big{(}\frac{21e^{2}}{2\beta^{2}}\big{)} +\gamma\big{]},&EN\\ -\frac{21\eta e^{2}}{64r^{2}}(1-8{\rm ln}(r))-\frac{3e^{2}}{4r^{2}}-\frac{\beta ^{2}r^{2}}{7}\big{[}1-\big{(}\frac{49\eta^{2}}{64}\big{)}\big{]}{\rm ln}(1+ \frac{7\eta}{8}).&LN\end{array}\right.\]
It should be noted that we have determined the integration constants in Eq. (15) for \(n=5\) and \(n=9\) by assuming correspondence of the BIYM, ENYM and LNYM theories with Yang-Mills theory for large values of \(\beta\).
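As a practical illustration, the horizon radii follow from the zeros of \(f(r)\). A minimal numerical sketch for the \(n=5\) BIYM branch of Eq. (84) is given below (Python with scipy assumed available; the parameter values are illustrative, and the definition \(\eta=6e^{2}/(\beta^{2}r^{4})\) used here is our reading of the dimensionless combination appearing in Eq. (84), not a quoted definition).

```python
import numpy as np
from scipy.optimize import brentq

k, m, Lam, beta, e = 1.0, 1.0, -1.0, 1.0, 1.0    # illustrative AdS parameters

def f_BI(r):
    """n = 5 BIYM branch of Eq. (84); eta = 6 e^2/(beta^2 r^4) assumed."""
    eta = 6*e**2/(beta**2*r**4)
    s = np.sqrt(1 + eta/2)
    return (k - m/r**2 - Lam*r**2/6
            + beta**2*r**2/3*(1 - s)
            - e**2/r**2*(np.log(r**2/2*(1 + s)) - 0.5))

# bracket sign changes of f(r) on a grid and refine each root with brentq
r = np.linspace(0.1, 10.0, 5000)
fr = f_BI(r)
roots = [brentq(f_BI, r[i], r[i + 1]) for i in range(len(r) - 1)
         if fr[i]*fr[i + 1] < 0]
print('horizon radii:', roots)
```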
|
2309.08286 | Quartz as an Accurate High-Field Low-Cost THz Helicity Detector | The advent of high-field THz sources has opened the field of nonlinear THz
physics and unlocked access to fundamental low energy excitations for ultrafast
material control. Recent advances towards controlling and employing chiral
excitations, or generally angular momentum of light, not only rely on the
measurement of undistorted intense THz fields, but also on the precise
knowledge about sophisticated THz helicity states. A recently reported and
promising detector material is $\alpha$-quartz. However, its electrooptic
response function and contributing nonlinear effects have remained elusive.
Here, we establish z-cut $\alpha$-quartz as a precise electrooptic THz detector
for full amplitude, phase and polarization measurement of intense THz fields,
all at a fraction of costs of conventional THz detectors. We experimentally
determine its complex detector response function, which is in good agreement
with our model based on predominantly known literature values. It also explains
previously observed thickness-dependent waveforms. These insights allow us to
develop a swift and reliable protocol to precisely measure arbitrary THz
polarization and helicity states. This two-dimensional electrooptic sampling
(2D-EOS) in $\alpha$-quartz fosters rapid and cost-efficient THz time-domain
ellipsometry, and enables the characterization of polarization-tailored fields
for driving chiral or other helicity-sensitive quasiparticles and topologies. | Maximilian Frenzel, Joanna M. Urban, Leona Nest, Tobias Kampfrath, Michael S. Spencer, Sebastian F. Maehrlein | 2023-09-15T09:59:16Z | http://arxiv.org/abs/2309.08286v1 | # Quartz as an Accurate High-Field Low-Cost THz Helicity Detector
###### Abstract
The advent of high-field THz sources has opened the field of nonlinear THz physics and unlocked access to fundamental low energy excitations for ultrafast material control. Recent advances towards controlling and employing chiral excitations, or generally angular momentum of light, not only rely on the measurement of undistorted intense THz fields, but also on the precise knowledge about sophisticated THz helicity states. A recently reported and promising detector material is \(\alpha\)-quartz. However, its electrooptic response function and contributing nonlinear effects have remained elusive. Here, we establish z-cut \(\alpha\)-quartz as a precise electrooptic THz detector for full amplitude, phase and polarization measurement of intense THz fields, all at a fraction of costs of conventional THz detectors. We experimentally determine its complex detector response function, which is in good agreement with our model based on predominantly known literature values. It also explains previously observed thickness-dependent waveforms. These insights allow us to develop a swift and reliable protocol to precisely measure arbitrary THz polarization and helicity states. This two-dimensional electrooptic sampling (2D-EOS) in \(\alpha\)-quartz fosters rapid and cost-efficient THz time-domain ellipsometry, and enables the characterization of polarization-tailored fields for driving chiral or other helicity-sensitive quasiparticles and topologies.
## Introduction
THz sources with peak field strengths in the ~1 MV/cm regime, employing optical rectification in LiNbO\({}_{3}\)[1], difference frequency generation[2], large-area spintronic emitters[3] and accelerator-based facilities[4], are becoming more widely accessible. This development has enabled the selective drive of low-energy excitations such as phonons[5, 6], magnons[7] or other quasiparticles, thereby allowing for ultrafast control over material properties and non-equilibrium material design towards light-induced superconductivity[8], ferroelectricity[9], ferromagnetism[10] and spin-dynamics[7, 11]. However, despite large improvements in THz generation, the detection of intense single-cycle THz fields without distortions has remained challenging[12, 13].
Field-resolved THz detection provides precise frequency resolution of amplitude and phase of the light field. This feature is crucial for, e.g., THz time-domain spectroscopy (THz-TDS)[14], THz emission spectroscopy[13] and state-of-the-art experiments involving THz high-harmonic generation in topological insulators[15], graphene[16] or superconducting cuprates[17]. Moreover, emerging field-driven effects, e.g., for ultrafast control of topological[18] or chiral[19, 20, 21] material properties, are inherently sensitive to the carrier-envelope phase (CEP) and polarization (e.g., helicity) of the driving THz pulse. Full vectorial THz-field characterization is required for the precise detection of arbitrary THz polarization states. This information constitutes the basis for THz time-domain ellipsometry, which allows for the characterization of tensorial dielectric properties in opaque[22], anisotropic materials[23], and transient metamaterials[24], where traditional THz-TDS faces limitations. Another application is THz circular-dichroism spectroscopy, which has been applied in chiral nanostructures and molecular assemblies[25], thermoelectric solids[26], or bio-relevant systems such as DNA[27], and living cancer cells[28]. However, partly due to the difficulty of precise polarization-resolved THz detection, THz time-domain ellipsometry and circular-dichroism spectroscopy have not been widely adopted yet.
The common technique to detect phase-stable THz fields is electrooptic sampling (EOS)[29]. Here, the incident THz pulse induces a change in birefringence proportional to the THz electric field in a nonlinear crystal like ZnTe[30] or GaP[31], which can be stroboscopically sampled by a visible (VIS) or near-infrared (NIR) sampling pulse as a function of time delay \(t\). However, the measured instantaneous signal \(S(t)\) is, in general, not simply proportional to the instantaneous field \(E(t)\) because of noninstantaneous features such as phonon resonances and velocity mismatch of the THz and sampling pulse. Within linear response theory and in frequency space, a response function \(h\) connects \(S\) and \(E\) at THz frequencies \(\Omega\) via \(S(\Omega)=h(\Omega)E(\Omega)\) and captures the frequency dependence of the nonlinear susceptibility \(\chi^{(2)}\), which can be strongly modulated by phonons[29], and non-local effects, such as phase mismatch between THz- and sampling pulse[32, 33].
For (110)-oriented zincblende-type electrooptic crystals such as ZnTe (110), resolving the polarization state of THz pulses typically requires rotation of the detector crystal and sampling pulse polarization[34]. Unfortunately, such measurements can be easily polluted by inhomogeneities of the detector crystal, birefringence effects, or inaccurate rotation axes. On the other hand, (111)-oriented zincblende crystals enable polarization state retrieval by simply modulating the sampling pulse polarization by using, e.g., a photoelastic modulator[35] or employing a dual detection scheme based on two balanced detections[36]. Nonetheless, the specific detector requirements and additional experimental effort have limited the application of polarization-resolved EOS so far.
Extending these concepts to highly intense THz fields poses extra challenges, since they can lead to distorted signals in conventional EOS crystals such as ZnTe or GaP; these distortions include over-rotation[12] and higher-order nonlinearities such as the THz Kerr effect[37, 38]. This aspect means that the amplitude and phase of intense THz fields cannot be reliably extracted within the linear response. However, attenuating the THz fields by using, e.g., wiregrid polarizers or filters might induce additional spectral distortions[39].
Here, we focus on z-cut \(\alpha\)-quartz, which is a widely used substrate material for THz-TDS due to its high THz transparency[14] and in-plane optical isotropy. It recently attracted attention as a promising nonlinear THz material[40], i.e., as a broadband THz emitter via optical rectification[41] or as a THz detector via EOS[42]. In fact, its electrooptic coefficient, \(r_{11}=0.1-0.3\) pm/V[43], is about an order of magnitude smaller than \(r_{41}=4\) pm/V of ZnTe[30], thereby moving nonlinear EOS responses to much higher THz field amplitudes. Its large bandgap and optical transparency allow for a broad dynamic range and high damage threshold. Moreover, \(\alpha\)-quartz is widely available at 2 orders of magnitude lower cost than typical EOS crystals. However, there are significant drawbacks that prevented the reliable use of quartz for THz detection so far. In particular, the response function \(h\) has been unknown, and its peculiar thickness dependence led to the open question regarding bulk versus surface \(\chi^{(2)}\) contributions[42]. Likewise, the polarization sensitivity has remained mostly unexplored.
In this work, we experimentally measure the quartz response function and model it predominantly based on known literature values. We show that arbitrary THz polarization states can be measured by a simple and time-efficient method utilizing only two EOS measurements with different sampling pulse polarizations. The latter is achieved by a simple rotation of a half-waveplate (HWP) in the VIS spectral range. As a textbook example for time-domain ellipsometry, we determine the birefringence of y-cut quartz as commonly used for commercial THz waveplates. We find that the transmitted single-cycle pulses exhibit complex polarization states in the highly polychromatic regime[44], which cannot be described by a single polarization ellipse, Jones vector or set of Stokes parameters. Our study establishes z-cut \(\alpha\)-quartz as a reference detector for amplitude, phase, and arbitrary polarization states of THz fields exceeding 100 kV/cm, fostering cost-efficient high-field THz time-domain ellipsometry and tailoring helical THz driving fields for ultrafast material control.
## Results
### Experimental Setup
Intense single-cycle THz fields (1.3 THz center frequency, 1.5 THz full width at half maximum (FWHM)) with peak fields exceeding 1 MV/cm are generated by tilted-pulse-front optical rectification in LiNbO\({}_{3}\) (Ref. 1). The THz field strength and its linear polarization angle \(\psi\) relative to the vertical direction in the lab frame are adjusted using a THz polarizer pair, P1 and P2 in **Fig. 1a**.
Figure 1: **Electrooptic sampling in quartz and its THz fluence dependence.****a** Experimental setup: THz pulses are generated via optical rectification (OR) in LiNbO\({}_{3}\). The THz pulse induces a refractive index change in quartz, leading the sampling pulse to acquire ellipticity. This ellipticity is read out as signal \(S(t)\) as a function of time delay \(t\) in a balanced detection scheme. \(S\) is related to the incident THz field \(E_{\mathrm{THz}}\) via the complex detector response function \(h_{\mathrm{Q}}\). **b** EOS in quartz (z-cut, 50 μm thickness) for different THz fluences, normalized to the \(t=0\) peak EOS values. Inset: Linear dependence of peak \(S(t)\) on peak \(E_{\mathrm{THz}}\). **c**\(S(\Omega)\) amplitude spectrum via Fourier transform of EOS signals \(S(t)\) in (b) normalized to spectral peak amplitude.
The THz field-induced birefringence in the EOS crystal is probed by synchronized VIS sampling pulses (800 nm center wavelength, ~20 fs duration) using a balanced detection scheme. The sampling pulse's incident linear polarization is set to arbitrary angles \(\theta\) by a broadband VIS HWP. We measure EOS in a ZnTe (110) crystal (10 \(\upmu\)m thickness) and various \(z\)-cut \(\alpha\)-quartz plates with thicknesses of 35, 50, 70, and 150 \(\upmu\)m as a function of sampling pulse polarization \(\theta\), THz polarization \(\psi\), and the crystal's azimuthal angle \(\phi\) at normal incidence (see **Fig. 1a**). Finally, we also trace the THz field after collimated transmission through highly birefringent y-cut \(\alpha\)-quartz (700 \(\upmu\)m thickness), which corresponds to a commercial quarter-wave plate (QWP) for 2.2 THz.
### Electrooptic Response Function
We first confirm the linear response function relation. **Fig. 1b** shows the measured THz-induced birefringence signals \(S(t)/S_{\mathrm{max}}\) in 50 \(\upmu\)m quartz for different THz peak fields. The induced birefringence scales linearly with the THz electric field strength (see inset of **Fig. 1b**), confirming a linear electrooptic effect as recently observed by Balos _et al.[42]_. The normalized time- and frequency-domain shapes (see **Fig. 1c**) do not change substantially for different THz fluences, ruling out over-rotation effects and demonstrating that the higher-order nonlinearities[40] (e.g. <1.5 THz) are small for THz fields on the order of 1 MV/cm. This finding confirms that quartz can reliably sample THz electric fields \(\geq\)0.1 MV/cm within the linear-response regime.
To experimentally extract the linear response function of 50 \(\upmu\)m quartz, we compare the quartz EOS signal \(S_{\mathrm{Q}}\) with the signal \(S_{\mathrm{ZnTe}}\) from 10 \(\upmu\)m ZnTe, whose response function \(h_{\mathrm{ZnTe}}\) is known[32] (see **Fig 2a**). To avoid nonlinear distortions, the THz power for ZnTe was attenuated by the THz polarizer pair by a factor of ~40. We Fourier transform these traces and extract the quartz response using \(h_{\mathrm{Q}}=h_{\mathrm{ZnTe}}(S_{\mathrm{Q}}/S_{\mathrm{ZnTe}})\) in the frequency-domain. The amplitude and phase of \(h_{\mathrm{Q}}\) are shown as blue dots in **Figs. 2b** and 2d, respectively, demonstrating that the quartz response covers the full 0.1-4 THz bandwidth of the LiNbO\({}_{3}\) source without gaps. However, it contains a substantial frequency dependence in the form of modulations with a frequency spacing of ~1.4 THz as well as an enhancement at low frequencies <0.9 THz and at around 3.9 THz.
### Modelling
To understand the experimental response function \(h_{\rm Q,exp}(\Omega)\) of 50 \(\upmu\)m quartz, we model the response \(h_{\rm calc}\) as a function of THz frequency \(\Omega\) by extending the formalism of Ref. [32] and use:
\[h(\Omega)=\chi_{\rm eff}^{(2)}(\Omega)t_{\rm F}(\Omega)\int_{\omega\gg\Omega}{ \rm d}\omega\frac{\omega^{2}}{c^{2}k(\omega)}T_{\rm S}(\omega,\Omega)E_{\rm S}^{ *}(\omega)E_{\rm S}(\omega-\Omega)\;G(\omega,\Omega),\] ( 1 )
where \(h_{\rm calc}(\Omega)=[h(\Omega)+h^{*}(-\Omega)]/(I_{1}+I_{2})\) and \(I_{1}+I_{2}=\int{\rm d}\omega\;T_{\rm S}(\omega,\Omega=0)E_{\rm S}^{*}(\omega) E_{\rm S}(\omega)\). Here, \(E_{\rm S}\) is the incident sampling pulse with optical frequency \(\omega\) and wavenumber \(k(\omega)=n(\omega)\omega/c\), where \(n(\omega)\) is the corresponding refractive index. \(T_{\rm S}(\omega,\Omega)\) accounts for the sampling pulse transmission \(T_{\rm S}(\omega,\Omega)=t_{12}^{*}(\omega)t_{21}^{*}(\omega)t_{12}(\omega- \Omega)t_{21}(\omega-\Omega)\), where \(t_{12}(\omega)\) and \(t_{21}(\omega)\) are the Fresnel transmission coefficients for propagating from air into quartz and quartz into air, respectively. \(\chi_{\rm eff}^{(2)}(\Omega)=\chi_{\rm eff}^{(2)}(\omega_{c},\Omega)\) is the effective nonlinear susceptibility of the detection crystal under the assumption that \(\chi_{\rm eff}^{(2)}(\omega-\Omega,\Omega)\approx\chi_{\rm eff}^{(2)}(\omega_ {c},\Omega)\), where \(\omega_{c}\) is the sampling-pulse center frequency. The field transmission coefficient \(t_{\rm F}(\Omega)\) accounts for the transmitted THz field including its multiple reflections (see **Methods**). The phase-matching factor, \(G(\omega,\Omega)=[\exp(i\Delta k(\omega,\Omega)d)-1]/i\Delta k(\omega,\Omega)\) between THz and sampling pulse includes \(\Delta k(\omega,\Omega)=k(\omega-\Omega)+k(\Omega)-k(\omega)\) and the sample thickness \(d\).
To calculate \(h_{\rm Q}\), we use the known quartz refractive indices in the THz[14] and optical region[45]. However, the nonlinear susceptibility \(\chi^{(2)}(\Omega)\) is not known and we therefore model it by:
\[\chi_{\rm eff}^{(2)}(\Omega)=\chi_{e}^{(2)}\left[1+B(1-i\Omega t_{\rm D})^{-1 }+C\left(1-\frac{\Omega^{2}}{\Omega_{\rm TO}^{2}}-\frac{i\Omega\Gamma}{\Omega _{\rm TO}^{2}}\right)^{-1}\right],\] ( 2 )
Figure 2: **Experimentally measured and calculated detector response.****a** Normalized EOS signal \(S(t)\) in quartz (50 μm thickness) and ZnTe (10 μm thickness). **b**, **d** Complex quartz response function \(h_{\rm Q}\) for 50 μm is experimentally extracted using known ZnTe response (blue) and modeled (red) in amplitude and phase. **c**, **e** Calculated \(\chi^{(2)}\), transmitted field coefficient \(t_{\rm F}\), and phase matching factor \(G\) in amplitude and phase, showing how these factors contribute to the quartz response function.
where \(\chi_{e}^{(2)}\) is the pure electronic susceptibility. The last term corresponds to the ionic contribution, with \(\Omega_{\rm TO}\) being the frequency and \(\Gamma\) the damping of the respective transverse-optical (TO) phonon, while the Faust-Henry coefficient \(C\) defines the ratio between the lattice-induced and electronic contributions [29, 46]. We take the phonon parameters \(\Omega_{\rm TO}/(2\pi)=3.9\) THz and \(\Gamma/(2\pi)=0.09\) THz from Davies _et al._[14] and find \(C=0.15\) to provide good agreement with our experimental values (see red curves in **Figs. 2b,d**). We assume that the striking low-frequency enhancement of \(h_{\rm Q}(\Omega)\) (see **Fig. 2b**) arises from \(\chi^{(2)}\) and model it by a phenomenological Debye-type relaxation contribution \(B\) with characteristic time scale \(\tau_{\rm D}\) (second term in **Eq. (2)**). Choosing \(B=0.7\) and \(\tau_{\rm D}=0.5\) ps provides nearly perfect agreement with the 0.1-0.9 THz range in \(h_{\rm Q,exp}\). We will discuss possible physical origins of such a contribution below. Thus, by analytic modeling, we find dominant contributions from the phase-matching factor \(G\), the field transmission coefficient \(t_{\rm F}\), and the nonlinear susceptibility \(\chi^{(2)}\), disentangled in **Fig. 2c,e**.
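For reproducibility, the ingredients of the modeled response can be evaluated numerically. The sketch below (Python) implements Eq. (2) together with the Fabry-Perot transmission of Eq. (3) (see **Methods**) and the phase-matching factor \(G\) evaluated at the sampling-pulse center frequency; the constant refractive-index values are rough placeholders for the dispersive literature data[14, 45] used in Fig. 2.

```python
import numpy as np

THz, c = 1e12, 3e8
W = 2*np.pi*np.linspace(0.1, 4.0, 400)*THz       # THz angular frequencies
d = 50e-6                                        # crystal thickness

# chi^(2) model, Eq. (2), in units of the electronic part chi_e^(2)
B, tau_D = 0.7, 0.5e-12                          # Debye term
C, W_TO, Gam = 0.15, 2*np.pi*3.9*THz, 2*np.pi*0.09*THz   # Faust-Henry term
chi2 = 1 + B/(1 - 1j*W*tau_D) + C/(1 - (W/W_TO)**2 - 1j*W*Gam/W_TO**2)

# THz field transmission with Fabry-Perot reflections, Eq. (3)
n_THz = 2.1                                      # placeholder for n(Omega)
t12, r21 = 2/(1 + n_THz), (n_THz - 1)/(n_THz + 1)
tF = t12/(1 - r21**2*np.exp(2j*W*n_THz*d/c))

# phase-matching factor G at the sampling-pulse center frequency
n_g = 1.55                                       # 800-nm group index of quartz
dk = W*(n_THz - n_g)/c
G = (np.exp(1j*dk*d) - 1)/(1j*dk)

h = chi2*tF*G                                    # unnormalized response shape
```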
We apply the response function to calculate the exact THz electric field (red) from the quantitative EOS signal in 50 \(\upmu\)m quartz (blue) in **Figs. 3a** and 3b in the time- and frequency-domain, respectively. To determine the absolute field strength, we use the measured THz pulse energy and focal size (see **Supplementary Note 1**). We obtain a peak field strength of 1.04 MV/cm. We can therefore estimate the effective electrooptic coefficient \(r_{\rm eff}\) of z-cut quartz, which equals the \(r_{11}\) tensor component, to be 0.1 pm/V (see **Supplementary Note 2**). This value agrees well with previous reports of \(r_{11}\) at optical frequencies ranging between 0.1 and 0.3 pm/V in z-cut quartz [43, 47].
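Numerically, this retrieval is a regularized deconvolution in frequency space; a minimal sketch follows (Python; the regularization threshold is our choice, not from the paper).

```python
import numpy as np

def retrieve_field(S_t, h_Q, eps=1e-3):
    """Invert S(Omega) = h_Q(Omega) E(Omega).

    S_t : EOS trace on a uniform time grid of step dt
    h_Q : complex response sampled on np.fft.rfftfreq(len(S_t), dt)
    eps : relative floor below which h_Q is treated as zero
    """
    S_w = np.fft.rfft(S_t)
    E_w = np.zeros_like(S_w)
    good = np.abs(h_Q) > eps*np.abs(h_Q).max()   # avoid dividing by ~0
    E_w[good] = S_w[good]/h_Q[good]
    return np.fft.irfft(E_w, n=len(S_t))
```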
### Thickness Dependence and Nonlinear Origin
The response function also depends on the crystal thickness, which typically presents a trade-off between sensitivity and bandwidth. **Fig. 3c** shows the measured dependence of the maximum EOS signal on the quartz crystal thickness between 35 and 150 \(\upmu\)m (blue dots), which clearly deviates from an ideal phase-matched behavior, i.e., a linear scaling with the crystal thickness. We also observe a noticeable thickness dependence of the time-domain EOS shapes in **Fig. 3d**, which is even clearer in the spectral bandwidth in **Fig. 3e**. **Fig. 3f** displays the calculated response function for each thickness in amplitude (red) and phase (grey), which explains the measured features. For instance, the effective bandwidth is significantly lower for 150 \(\upmu\)m quartz due to the zero in the phase-matching factor \(G(\Omega,\omega)\), while the thickness-dependent frequency spacing of the modulations generally arises from Fabry-Perot fringes in the field transmission coefficient \(t_{\rm F}(\Omega)\). The calculated response function, thus, also explains
the experimentally observed EOS thickness dependence in **Fig. 3c** (red line), mainly by the phase mismatch \(G(\Omega,\omega)\) of THz and sampling pulse.
The first report of EOS in quartz suggested a strong surface \(\chi^{(2)}\) contribution[42]. Indeed, the surface and bulk \(\chi^{(2)}\) have a similar order of magnitude[48]. However, since the surface response originates from a depth of \(\sim\)1 nm (Ref. [48]), its contribution is small compared to the bulk one for a quartz crystal with a thickness >10 \(\upmu\)m. The response functions presented here (**Figs. 2b,d** and **3f**) strongly indicate a pure bulk \(\chi^{(2)}\) effect and provide a reasonable estimate of \(r_{11}\), both sufficient to explain the experimental observations.
We suggest the low-frequency (0.1-0.9 THz) enhancement in \(\chi^{(2)}\) to be caused by disorder. In fact, the frequency region 0.1-1.2 THz of fused silica and other glasses is often associated with the so-called Boson-peak behavior corresponding to low frequency vibrational modes[49, 50]. Its nature and origin remain debated, but it is known to affect the Raman, neutron, and linear dielectric responses of quartz and related glasses[49, 51, 52]. Our finding, thus, motivates further research into the nonlinear susceptibility in the sub-0.9 THz region. In addition, there is considerable variability of the reported values for the 3.9 THz phonon damping parameter \(\Gamma/2\pi\) between 0.09 THz (Ref. [14]) and 0.39 THz (Ref. [51]). This variation indicates that the \(\chi^{(2)}\) model
Figure 3: **| Thickness dependence and extracted THz electric fields.****a** Absolute THz electric field \(E_{\text{THz}}\) extracted by applying the response function \(h_{\text{q}}\) to the measured EOS signal \(S\) with 50 \(\upmu\)m quartz and **b** corresponding Fourier amplitude spectrum. **c** Maximum EOS signal \(S(t)\) as a function of quartz thickness (blue markers) and calculated quartz response (red curve). **d** EOS signal for four different quartz thicknesses below 150 \(\upmu\)m with **e** respective Fourier amplitude spectra. **f** Modulus and phase of calculated detector response \(h_{\text{q}}\) of quartz for the respective thicknesses. The small oscillatory variations below 4 THz are Fabry-Perot resonances. The zero in (e) for 150 \(\upmu\)m is dictated by the phase matching factor \(G\). The peak at 3.9 THz stems from the phonon contribution to \(\chi^{(2)}\).
parameters are highly sensitive to the sample quality and may be fine-tuned for better agreement.
### Polarization-Resolved EOS
So far, we have treated both \(h_{\mathrm{Q}}\) and \(E_{\mathrm{THz}}\) as scalars and only considered the specific case in which the THz-pulse and sampling-pulse polarizations are parallel and the quartz azimuthal angle is optimized for maximum \(S(t)\), i.e., oriented parallel to one of the in-plane crystalline axes. However, the THz electric field is a vectorial observable and can have an arbitrary (and thus even helical) polarization state and \(h_{\mathrm{Q}}\) is generally dependent on the azimuthal angle \(\phi\) and sampling pulse polarization \(\theta\). Nonetheless, we can assume the same frequency-dependence of the allowed \(\chi^{(2)}\) tensor elements and any corresponding linear combination of them, because of the in-plane symmetry of the 3.9 THz phonon. Since the other quantities in **Eq. (1)**, such as \(t_{\mathrm{F}}\) or \(G\), refer to linear optical properties, they are also in-plane isotropic in z-cut quartz. We can, therefore, assume the same frequency evolution of the response function for all \(\phi\) and \(\theta\), but the absolute sensitivity will be rescaled by the global symmetry of \(\chi_{e}^{(2)}(\phi,\theta)\), which ultimately allows for polarization-sensitive THz EOS.
To explore the sensitivity of 50 \(\upmu\)m quartz to different THz field polarization components, **Fig. 4a** shows the measured peak EOS signal \(S\) (blue dots) as a function of quartz azimuthal angle \(\phi\) for three different probe polarizations \(\theta=0^{\circ},45^{\circ},90^{\circ}\) with respect to the THz field (\(\psi=0^{\circ}\), linearly polarized along the y-axis). Each azimuthal dependence \(S(\phi)\) exhibits a perfect 3-fold symmetry, in agreement with the first reported quartz EOS [42].
Figure 4: **Polarization and azimuthal angle dependence for 2D-EOS.****a** Measured azimuthal angle \(\phi\) dependence of maximum quartz \(S(t)\) for different sampling pulse polarizations (\(\theta\)) with THz pulse polarized along y (blue dots). Blue and red lines are the calculated azimuthal angle dependence for the respective sampling pulse polarizations and THz polarized along y (blue line) and x (red line). **b** The arctan of the peak EOS signals measured at \(\theta=45^{\circ}\) (\(S_{45}\)) and \(\theta=0^{\circ}\) (\(S_{0}\)) perfectly matches the THz polarizer angle \(\psi\), demonstrating that the full THz polarization state can be extracted by measuring \(S_{0}(t)\) and \(S_{45}(t)\). **c** 2D-EOS: \(E_{x}^{\mathrm{THz}}(t)\) and \(E_{y}^{\mathrm{THz}}(t)\) for selected \(\psi\) between \(0^{\circ}\) and \(90^{\circ}\), which were extracted from \(S_{45}(t)\) and \(S_{0}(t)\) by applying the quartz response function \(h_{\mathrm{Q}}\).
We therefore calculate the expected dependence of \(S(\psi,\theta)\) for a THz field \(\mathbf{E}_{\mathrm{THz}}\) linearly polarized at an arbitrary angle \(\psi\), and a sampling field \(\mathbf{E}_{\mathrm{s}}\) linearly polarized at angle \(\theta\) in the x-y plane (see **Fig. 1a**). We use the 2\({}^{\mathrm{nd}}\)-order nonlinear polarization \(P_{i}^{(2)}=\epsilon_{0}\chi_{ijk}^{(2)}E_{j}^{\mathrm{THz}}E_{k}^{\mathrm{s}}\), which we can rewrite using the nonlinear susceptibility tensor in contracted notation \(d_{il}\), with only non-zero \(d_{11}\) and \(d_{14}\) terms due to quartz's \(D_{3}\) point group, evaluated for the z-cut plane [52] (see **Methods**). The blue line in **Fig. 4a** shows the expected sensitivity for a vertically polarized THz field \(E_{y}^{\mathrm{THz}}\) (i.e. \(\psi=0^{\circ}\)), in perfect agreement with the measured azimuthal dependence. The expected peak signal for a horizontally polarized THz field \(E_{x}^{\mathrm{THz}}\) (i.e. \(\psi=90^{\circ}\)), shown as a red line, features the same 3-fold symmetry but shifted by 30\({}^{\circ}\). These opposite EOS sensitivities for the x- and y-projections of the THz field allow for a full THz polarization determination by simply measuring EOS for two different sampling pulse polarizations, e.g. \(\theta=0^{\circ}\) for obtaining \(E_{y}^{\mathrm{THz}}\) and \(\theta=45^{\circ}\) for obtaining \(E_{x}^{\mathrm{THz}}\) at azimuth \(\phi=0\) (see square markers in **Fig. 4a**).
To prove this concept, we rotate the linear polarization of the THz pulse by setting polarizer P1 to 45\({}^{\circ}\) and scanning P2 by angle \(\psi\). Next, we measure \(S(t)\) for sampling pulse polarization \(\theta=0^{\circ}\) (\(S_{0}\)) and 45\({}^{\circ}\) (\(S_{45}\)) for a set of THz polarizer angles \(\psi\). **Fig. 4b** shows that \(\arctan(S_{45}/S_{0})\) is identical to the THz polarizer angle \(\psi\) and, thus, precisely measures the THz polarization by only two EOS measurements at different sampling pulse polarizations. After applying the calculated response function \(h_{\mathrm{Q}}\) to \(S_{0}\) and \(S_{45}\), the full vectorial THz field \(\mathbf{E}_{\mathrm{THz}}(t)\) can be extracted as shown in the 2D EOS traces for selected \(\psi\) between 0\({}^{\circ}\) and 90\({}^{\circ}\) in **Fig. 4c**. We note that the perfect 3-fold symmetry is not found in the common ZnTe (110) or GaP (110) EOS crystals, where this convenient procedure cannot be used [36].
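The resulting measurement protocol is compact enough to state as code. The sketch below (Python; helper and variable names are ours, equal detector sensitivity for the two geometries is assumed as implied by the 3-fold symmetry, and the deconvolution helper from the sketch above is reused) turns the two traces \(S_{0}(t)\) and \(S_{45}(t)\) into \(E_{x}^{\mathrm{THz}}(t)\), \(E_{y}^{\mathrm{THz}}(t)\) and the polarization angle \(\psi\):

```python
import numpy as np

def eos_2d(S0_t, S45_t, h_Q):
    """Vectorial 2D-EOS retrieval at crystal azimuth phi = 0.

    theta = 0  deg trace S0  -> E_y component (field along y)
    theta = 45 deg trace S45 -> E_x component
    """
    E_y = retrieve_field(S0_t, h_Q)     # deconvolution helper defined above
    E_x = retrieve_field(S45_t, h_Q)
    i0 = np.argmax(np.hypot(E_x, E_y))  # sample at the field-trajectory peak
    psi = np.degrees(np.arctan2(E_x[i0], E_y[i0]))
    return E_x, E_y, psi
```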
### Broadband THz Helicity Measurement
For driving chiral or, generally, helicity-dependent excitations, e.g., for ultrafast control of phonon angular momentum [20, 21, 53] or topology modulation [18], CEP-stable table-top THz sources are beneficial due to their inherent synchronization with sub-cycle probing pulses. Nevertheless, to reach the required peak fields, the energy has to be squeezed into few- or single-cycle pulses at low repetition rates. Therefore, the lack of broadband THz waveplates leads to complicated polarization states when aiming for THz pulses with specific helicities. In contrast to conventional multi-cycle optical light, helical few- or single-cycle THz pulses are highly polychromatic and, generally, cannot be described by a single polarization state, i.e., neither by a pair of ellipticity angles (\(\vartheta\), \(\eta\)) nor by one fixed Jones or Stokes vector [44]. Instead,
the polarization state must be generally described as an evolution in frequency space or, equivalently, by the full temporal trajectory of the light's electric field vector \(\mathbf{E}_{\mathrm{THz}}(t)\).
To demonstrate the complete detection of arbitrary polarization states in quartz, in particular for complicated helical fields, we characterize the polarization state of single-cycle THz pulses following collimated traversal of the textbook birefringent y-cut quartz (see **Supplementary Fig. S1**), which is nearly identical to commercially available THz waveplates. **Fig. 5a** shows the transmitted electric field of a collimated THz beam (\(\psi=45^{\circ}\)) through 0.7 mm crystalline y-cut quartz for three different crystal orientations, which is detected in 50 \(\mathrm{\SIUnitSymbolMicro m}\) z-cut quartz. The transmitted THz polarizations for \(0^{\circ}\) and \(90^{\circ}\) orientations appear highly elliptical, which is when the incident THz pulse polarization is at 45\({}^{\circ}\) to the in-plane crystal axes and therefore experiences maximum birefringence. This form of time-domain ellipsometry permits the direct measurement of the birefringence \(\Delta n(\Omega)\) using \(\arg(E_{x}^{\mathrm{THz}})-\arg(E_{y}^{\mathrm{THz}})=\Delta n\Omega d/c\) as shown in **Fig. 5b**. We find an approximately constant \(\Delta n(\Omega)\) of about 0.05 at 0.4-3.5 THz, in good agreement with literature values[51, 54].
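The birefringence extraction itself reduces to a spectral phase comparison of the two retrieved field components; a minimal sketch follows (Python; the unwrapped-phase convention and variable names are ours).

```python
import numpy as np

def birefringence(E_x, E_y, dt, d, c=3e8):
    """Delta n(Omega) from arg(E_x) - arg(E_y) = Delta n * Omega * d / c."""
    W = 2*np.pi*np.fft.rfftfreq(len(E_x), dt)            # angular frequencies
    dphi = np.unwrap(np.angle(np.fft.rfft(E_x)) - np.angle(np.fft.rfft(E_y)))
    with np.errstate(divide='ignore', invalid='ignore'):
        dn = c*dphi/(W*d)
    dn[0] = np.nan                                       # DC bin is undefined
    return dn
```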
As seen from **Fig. 5a**, the transmitted THz polarization is neither a simple polarization ellipse nor purely left- or right-handed circularly polarized. Its sophisticated electric-field trajectory can be described by a frequency-dependent rotation \(\vartheta(\Omega)\) and ellipticity \(\eta(\Omega)\) (see inset **Fig. 5a**), or any other ellipsometric set of parameters as a function of frequency. **Figs. 5c-e** show \(\vartheta(\Omega)\) and \(\eta(\Omega)\) for \(0^{\circ}\), \(45^{\circ}\), and \(90^{\circ}\) orientation of the y-cut quartz plate, respectively. For \(0^{\circ}\) and \(90^{\circ}\) orientation (see **Figs. 5c,e**), the THz pulse acquires a maximum of frequency-dependent ellipticity \(\Delta n\Omega d/c\). Since \(\Delta n(\Omega)\) is roughly constant, the transmitted THz pulse for \(0^{\circ}\) and \(90^{\circ}\) orientation is, respectively, perfectly right- and left-handed circularly polarized only at frequency \(c/(4\Delta nd)\approx 2.37\) THz and 1.96 THz, where \(\eta\) reaches -45\({}^{\circ}\) and 45\({}^{\circ}\). In other words, the y-cut quartz plate acts as a THz quarter waveplate (QWP) for only a very narrow frequency range and leads to drastically different polarization states for all other frequency components within the THz pulse (see top row of **Figs. 5c,e**). In contrast, the incident THz pulse acquires a small ellipticity for the 45\({}^{\circ}\) orientation (see **Fig. 5d**) only at higher frequencies, which are more sensitive to a small \(\Delta n\).
Usually, broadband QWPs create opposite helicities for \(\pm 45^{\circ}\) rotation. This behavior is evidently not the case here, as the two \(\mathbf{E}(t)\) trajectories in **Fig. 5f** are not perfectly opposite. We project the polarization state from a linear into a circular basis to resolve the frequency-dependent helicity (see **Methods**). **Figs. 5g,h** depict the full frequency-dependent right-handed (\(E_{\mathrm{RCP}}\)) and left-handed (\(E_{\mathrm{LCP}}\)) circularly polarized intensity components for the \(0^{\circ}\) (red) and \(90^{\circ}\) (blue) orientations, normalized for every frequency component (see **Fig. 5g**) and as absolute intensity spectra (see **Fig. 5h**). **Fig. 5g** highlights that the helicity changes quite drastically across the single THz pulse spectrum and that a circular polarization is achieved at
slightly different frequencies for opposite QWP angles (0\({}^{\circ}\) vs. 90\({}^{\circ}\)), in agreement with the ellipticity parameters \(\eta(\Omega)\) in **Figs. 5c,e**. The latter can be related to a slightly tilted axis of rotation with respect to the quartz plate's y-axis, which highlights the challenges of helicity-dependent measurements in the THz spectral range.
**Fig. 5 | Detection of arbitrary THz polarization states and their helicity.****a** 2D-EOS of the THz electric field transmitted through a 0.7 mm y-cut quartz plate for three different y-cut quartz orientations, detected in 50 \(\upmu\)m z-cut quartz. The y-cut quartz plate was aligned with one of its facets parallel to the y-axis (corresponds to 0\({}^{\circ}\)). The incident THz field was linearly polarized at 45\({}^{\circ}\). **b** Extracted birefringence for the three different y-cut quartz azimuthal angles, demonstrating that the THz field experiences the largest birefringence for 0\({}^{\circ}\) and 90\({}^{\circ}\) quartz-plate orientations. **c, d, e** Corresponding frequency-resolved THz polarization states expressed in polarization ellipse rotation \(\vartheta(\Omega)\) and ellipticity \(\eta(\Omega)\) for 0\({}^{\circ}\), 45\({}^{\circ}\) and 90\({}^{\circ}\) y-cut plate azimuthal angle, respectively. **f** Projection of \(E_{x}^{\text{THz}}(t)\) and \(E_{y}^{\text{THz}}(t)\) into the \((E_{x}^{\text{THz}},E_{y}^{\text{THz}})\) plane for 0\({}^{\circ}\) and 90\({}^{\circ}\) y-cut plate azimuthal angles, unveiling the different, but not exactly opposite helicity states. **g** Corresponding LCP and RCP intensity spectra normalized for every frequency \(\Omega\) to \(|E^{\text{THz}}(\Omega)|^{2}\) and **h** corresponding absolute intensities.
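The helicity decomposition of Figs. 5g,h amounts to projecting the complex spectra onto circular basis states. A sketch follows (Python); note that the sign convention \(E_{\mathrm{RCP/LCP}}=(E_{x}\mp iE_{y})/\sqrt{2}\) is taken as an assumption here, since handedness conventions differ.

```python
import numpy as np

def circular_decomposition(E_x, E_y):
    """LCP/RCP intensity spectra of the transmitted field.

    Sign convention E_RCP/LCP = (E_x -/+ i E_y)/sqrt(2) is assumed.
    Returns absolute and per-frequency-normalized intensities (Figs. 5h, 5g).
    """
    Ex_w, Ey_w = np.fft.rfft(E_x), np.fft.rfft(E_y)
    I_rcp = 0.5*np.abs(Ex_w - 1j*Ey_w)**2
    I_lcp = 0.5*np.abs(Ex_w + 1j*Ey_w)**2
    tot = I_rcp + I_lcp + 1e-30
    return I_rcp, I_lcp, I_rcp/tot, I_lcp/tot
```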
## Discussion
We now discuss the detector performance of \(\alpha\)-quartz in more detail. As we find a pure bulk \(\chi^{(2)}\) effect, the phase-matching term \(G\) governs the trade-off between detection sensitivity and bandwidth. The effective detector bandwidth is, thus, limited by the first zero in \(G\) (**Fig. 3f**), giving a cut-off frequency \(\nu_{\text{cutoff}}=c/\left[\left(n_{\text{THz}}-n_{\text{s}}^{(g)}\right)d\right]=1/(\text{GVM}\cdot d)\). The group-velocity mismatch (GVM) in quartz is about 1.8 ps/mm (assuming \(n_{\text{THz}}(1\text{ THz})=2.09\) and group index \(n_{\text{s}}^{(g)}(800\text{ nm})=1.55\)), which is only slightly inferior to ZnTe (GVM = 1.1 ps/mm)[30]. A full comparison of \(r_{\text{eff}}\) and GVM between quartz and the widely used EOS crystals ZnTe and GaP is shown in **Supplementary Table S1**. Therefore, to sample the whole THz spectrum of typical high-field THz sources based on LiNbO\({}_{3}\) (~0.1-4 THz), the quartz thickness should not exceed 130 \(\upmu\)m. Sampling of higher THz frequencies poses limitations due to substantial dispersion of the linear THz refractive index and nonlinear susceptibility \(\chi^{(2)}\) due to the 3.9, 8, 12, and 13.5 THz TO phonons of quartz[55]. This fact is especially relevant for more broadband high-field sources such as large-area spintronic emitters[42, 3].
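For quick thickness selection, the cut-off estimate is a one-line computation; with GVM = 1.8 ps/mm, a 130 \(\upmu\)m crystal indeed yields \(\nu_{\text{cutoff}}\approx 4.3\) THz (a minimal sketch in Python):

```python
GVM = 1.8e-12/1e-3                     # group-velocity mismatch, s/m
for d in (35e-6, 50e-6, 130e-6, 150e-6):
    print(f'd = {d*1e6:5.0f} um -> cutoff ~ {1/(GVM*d)/1e12:4.1f} THz')
```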
The polarization sensitivity of EOS in quartz generally permits time-domain ellipsometry, allows for the direct measurement of complex and even non-equilibrium[24] tensorial material properties in anisotropic media[22, 23] and optical activity of chiral phonons[25, 26, 56], as well as THz circular-dichroism spectroscopy[27, 26], or decoding high-harmonic THz emission of complex quantum materials[15, 16, 17]. The ability to detect intense THz fields in amplitude and phase without distortions is well suited for any ultrafast spectroscopy based on strong THz-field excitation[13], e.g., for understanding nonlinear THz polarization responses[38] or driving phase transitions[8, 9, 10, 18], where an accurate characterization of the driving field is crucial. Moreover, the demonstrated precise helicity characterization of intense THz driving fields is urgently needed for the emerging field of chiral (or circular) phononics. In this field, lattice modes are driven on chiral or circular trajectories with phonon angular momentum[53] leading to magnetization switching[57], transient multiferroicity[21], large magnetic fields[20] or other yet unexplored spin-lattice-coupled phenomena. These first explorations in the uncharted territory of phonon-angular-momentum control highlight the challenges for THz helicity differential detection, i.e., extracting signals proportional to \(S(E_{\text{RCP}})-S(E_{\text{LCP}})\), which must be employed to isolate helicity-dependent effects. Using quartz as a reliable high-field THz helicity detector will help to clarify and support these novel types of measurements and will foster further studies of chiral or helicity-selective phenomena in the THz spectral region.
As the demonstrated 2D-EOS protocol only relies on a single HWP rotation, it enables rapid measurements and, therefore, keeps the phase error due to temporal drifts between adjacent EOS scans minimal. Accordingly, the scheme is also easy to implement in commercial time-domain
spectrometer systems, as it only relies on the addition of low-cost and widely available thin quartz wafers and standard HWPs in the VIS or NIR spectral range. As another benefit, quartz is well suited for measuring THz fields and their polarization states in systems where space constraints often prohibit the use of motorized rotation mounts for the detection crystal, in particular in cryostats at cryogenic temperatures. **Supplementary Fig. S2** shows quartz EOS at 80 K, demonstrating that the THz field can still be reliably sampled at low temperatures, although the response function is modified due to the enhanced phonon contribution to \(\chi^{(2)}\) (Ref. [14]). Conveniently, our work may also allow for all-optical synchronization of THz pump and optical probe pulses via THz slicing[58] or in-situ field and polarization characterization in already-installed z-cut quartz windows at free-electron-laser facilities, where even a non-collinear THz- and sampling-beam geometry is feasible (see **Supplementary Fig. S4**).
In conclusion, z-cut \(\alpha\)-quartz can reliably sample intense THz fields of the order of 1 MV/cm without over-rotation and with negligible higher-order nonlinearities. We measured and modeled the frequency-dependent electrooptic response function, consistent with a pure bulk \(\chi^{(2)}\) effect dominated by Fabry-Perot resonances, phonon modulations in the Faust-Henry formalism, phase-matching effects, and a low-frequency Debye-like contribution. We determined the electrooptic coefficient to be of the order of 0.1 pm/V and proved a perfect 3-fold symmetry of the electrooptic response. Based on this knowledge, we developed an easily implementable protocol to measure the full vectorial THz polarization state by simply toggling between 0\({}^{\circ}\) and 45\({}^{\circ}\) sampling pulse polarizations. With this approach, we establish quartz as a powerful detector for the full amplitude, phase and polarization state of highly intense THz radiation at a fraction of the cost of conventional detection crystals. This work will accordingly foster rapid and cost-efficient high-field THz spectroscopy[6, 13, 15], THz time-domain ellipsometry[23], THz circular-dichroism spectroscopy[26, 27] and will enable broadband THz helicity characterization of polarization-tailored pulses for driving angular-momentum phonons[25, 26, 56] or other helicity-dependent excitations[18, 19, 20, 21] in the future.
## Methods
### Generation and electrooptic sampling of intense THz pulses
Intense THz pulses (1.3 THz center frequency, 1.5 THz FWHM) with peak fields exceeding 1 MV/cm are generated via optical rectification in LiNbO\({}_{3}\) using the tilted-pulse-front technique[1]. For this, the LiNbO\({}_{3}\) crystal is pumped with laser pulses from an amplified Ti:sapphire laser system (central wavelength 800 nm, pulse duration 35 fs FWHM, pulse energy 5 mJ, repetition rate 1 kHz). The THz field strengths are altered by rotating THz polarizer P1 (Fig. 1a), while keeping P2 fixed at \(\psi=0^{\circ}\). In this way, the peak fields of the
transmitted THz pulses are proportional to the cosine squared of the polarizer P1 angle. Similarly, the THz polarization \(\psi\) can be set to an arbitrary angle by keeping P1 fixed at 45\({}^{\circ}\) and P2 at \(\psi\). The sampling pulses are provided by a synchronized Ti:sapphire oscillator (central wavelength 800 nm, repetition rate 80 MHz) and are collinearly aligned and temporally delayed with respect to the THz pulse. The sampling pulse polarization is set to specific angles by using a half-waveplate (HWP) before the EOS crystal.
The THz pulse induces a change in birefringence (electrooptic effect or Pockels effect) in the EOS crystal. This birefringence causes the sampling pulse to acquire a phase difference between its polarization components parallel and perpendicular to the THz pulse polarization. This phase difference is detected in a balanced detection scheme consisting of a quarter- and half-waveplate (\(\lambda/4\,,\,\lambda/2\)) followed by a Wollaston prism (WP) to spatially separate the perpendicular polarization components. The intensity of the two resulting beams is detected by two photodiodes (\(I_{1}\) and \(I_{2}\)), which leads to the EOS signal \(S=(I_{1}-I_{2})/(I_{1}+I_{2})\), that is equal to twice the THz-induced phase difference (see **Supplementary Note 2**).
### Further details of the electrooptic response function model
The electrooptic response is modeled using **Eqs. (1)** and **(2)** in the main text. There, the field transmission coefficient \(t_{\mathrm{F}}(\Omega)\) accounts for the transmitted THz field, includes multiple reflections of the field inside the crystal, and can be expressed as:
\[t_{\mathrm{F}}(\Omega)=\frac{t_{12}(\Omega)}{1-R(\Omega)\mathrm{e}^{\mathrm{i} \theta(\Omega)}}. \tag{3}\]
where \(R(\Omega)=r_{21}(\Omega)r_{23}(\Omega)\) and \(\theta(\Omega)=2\Omega n(\Omega)d/c\). Here, \(t_{ij}(\Omega)\) and \(r_{ij}(\Omega)\) are the Fresnel transmission and reflection coefficients at THz frequencies at the respective interfaces (1, 3: air; 2: quartz).
### EOS response for arbitrary quartz azimuthal angles and sampling pulse polarizations
To compute the full quartz EOS dependence for the crystalline azimuthal angle \(\phi\), sampling pulse polarization \(\theta\), and THz polarization angle \(\psi\), we consider the second-order nonlinear polarization \(\mathbf{P}^{(2)}\), which for z-cut \(\alpha\)-quartz (\(D_{3}\) point group[52]) can be written using contracted notation in the matrix form:
\[\begin{bmatrix}P_{x}^{(2)}\\ P_{y}^{(2)}\\ P_{z}^{(2)}\end{bmatrix}=4\epsilon_{0}\begin{bmatrix}d_{11}&-d_{11}&0&d_{14}&0&0\\ 0&0&0&0&-d_{14}&-d_{11}\\ 0&0&0&0&0&0\end{bmatrix}\begin{bmatrix}E_{x}^{\text{s}}E_{x}^{\text{THz}}\\ E_{y}^{\text{s}}E_{y}^{\text{THz}}\\ E_{z}^{\text{s}}E_{z}^{\text{THz}}\\ E_{y}^{\text{s}}E_{z}^{\text{THz}}+E_{z}^{\text{s}}E_{y}^{\text{THz}}\\ E_{x}^{\text{s}}E_{z}^{\text{THz}}+E_{z}^{\text{s}}E_{x}^{\text{THz}}\\ E_{x}^{\text{s}}E_{y}^{\text{THz}}+E_{y}^{\text{s}}E_{x}^{\text{THz}}\end{bmatrix}\] (4)
where \(d_{11}=0.3\) pm/V and \(d_{14}=0.008\) pm/V (Ref. [52]). For our experimental configuration, the probe and THz polarizations are in the x-y plane, and the z components of these fields are zero. **Equation (4)** thus implies that only the \(d_{11}\) component affects quartz EOS in our geometry. The balanced-detection signal is proportional to the difference of the intensities of the orthogonally polarized x- and y-components of the total electric field at the detector, separated by the Wollaston prism and projected on the two photodiodes. Thus, the signal can be calculated as:
\[S\propto I_{1}-I_{2}\approx(E_{x}^{\text{s}}+E_{x}^{(2)})^{2}-(E_{y}^{\text{s} }+E_{y}^{(2)})^{2}.\] (5 )
where \(\mathbf{E}^{(2)}\) is the electric field emitted by the nonlinear polarization \(\mathbf{P}^{(2)}\) as described by the inhomogeneous wave equation. The sampling-pulse polarization angle is defined by \(\theta=\text{atan2}(E_{x}^{\text{s}},E_{y}^{\text{s}})\) and the THz polarization angle by \(\psi=\text{atan2}\big{(}E_{x}^{\text{THz}},E_{y}^{\text{THz}}\big{)}\), where \(\text{atan2}\) corresponds to the four-quadrant arctan function.
A convenient way to numerically simulate the nonlinear polarization \(\mathbf{P}^{(2)}\) obtained with an azimuthal rotation of the sample in the x-y plane by an angle \(\phi\) is to apply two-dimensional rotation matrices \(R(\phi)\) to the \(\mathbf{E}_{\text{s}}\) and \(\mathbf{E}_{\text{THz}}\) fields while using an unchanged form of \(d_{ij}\) in **Eq. (4)** and then rotate the calculated nonlinear polarization \(\mathbf{P}^{(2)^{\prime}}\) by \(-\phi\) back into the original lab frame. After the rotation by \(R(\phi)\), the sampling and THz field components take the form \(\mathbf{E}_{\text{s}}^{\prime}(\phi)=R(\phi)\mathbf{E}_{\text{s}}\) and \(\mathbf{E}_{\text{THz}}^{\prime}(\phi)=R(\phi)\mathbf{E}_{\text{THz}}\), and allow \(\mathbf{P}^{(2)^{\prime}}(\phi)\) to be computed. Rotating back to the lab frame then yields \(\mathbf{P}^{(2)}(\phi)=R(-\phi)\mathbf{P}^{(2)^{\prime}}(\phi)\). The signal azimuthal angle dependence \(S(\phi)\) can then be calculated using **Eq. (5)** as before. Since \(\theta\) and \(\psi\) can be set arbitrarily, the full \(S(\phi,\theta,\psi)\) dependence of quartz can be constructed (see **Supplementary Fig. S3**).
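This rotate-mix-rotate-back recipe is easy to prototype. The sketch below is a simplified numerical illustration: it uses the \(d_{11}\) value quoted above, keeps only in-plane fields, and models the balanced detection as heterodyning the nonlinear field component orthogonal to the sampling polarization. All prefactors and propagation effects are omitted, so this is an assumption-laden toy model rather than the full response function.

```python
import numpy as np

EPS0 = 8.854e-12
d11 = 0.3e-12     # m/V, quartz, Ref. [52] (d14 drops out for in-plane fields)

def rot(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s], [s, c]])

def p2_crystal(Es, Et):
    """In-plane P^(2) of Eq. (4); for z = 0 fields only d11 contributes."""
    px = 4 * EPS0 * d11 * (Es[0] * Et[0] - Es[1] * Et[1])
    py = -4 * EPS0 * d11 * (Es[0] * Et[1] + Es[1] * Et[0])
    return np.array([px, py])

def eos_signal(phi, theta, psi):
    """Relative EOS signal S(phi, theta, psi), arbitrary units."""
    Es = np.array([np.sin(theta), np.cos(theta)])   # theta = atan2(Ex, Ey)
    Et = np.array([np.sin(psi), np.cos(psi)])
    # rotate into the crystal frame, mix, rotate P^(2) back to the lab frame
    p2 = rot(-phi) @ p2_crystal(rot(phi) @ Es, rot(phi) @ Et)
    e_perp = np.array([np.cos(theta), -np.sin(theta)])  # orthogonal to probe
    return p2 @ e_perp   # component heterodyned by the balanced detection

phi = np.linspace(0, 2 * np.pi, 361)
S = np.array([eos_signal(p, 0.0, 0.0) for p in phi])  # ~cos(3*phi) pattern
```

For \(\theta=\psi=0\) this reproduces the expected 3-fold azimuthal symmetry of the quartz EOS signal.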
### Polarization state representations of polychromatic THz fields
The polarization state of a THz field \(\mathbf{E}(\Omega)\) can be described using a polarization ellipse representation (see inset **Fig. 5a**), where the orientation \(\vartheta(\Omega)\) is given by:
\[\vartheta(\Omega)=\frac{1}{2}\arctan\left(\frac{2\left|E_{x}(\Omega)\right| \left|E_{y}(\Omega)\right|}{\left|E_{x}(\Omega)\right|^{2}-\left|E_{y}(\Omega )\right|^{2}}\cos(\Delta(\Omega))\right).\] (6 )
with \(\Delta(\Omega)=\arg(E_{y}(\Omega))-\arg(E_{x}(\Omega))\). The ellipticity \(\eta(\Omega)\) is given by
\[\eta(\Omega)=\frac{1}{2}\arcsin\left(\frac{2|E_{x}(\Omega)||E_{y}(\Omega)|}{|E_{x}(\Omega)|^{2}+|E_{y}(\Omega)|^{2}}\sin(\Delta(\Omega))\right).\] (7)
Another useful way to describe \(\mathbf{E}(\Omega)\), which is typically measured in a linear basis (\(\mathbf{\hat{x}}\), \(\mathbf{\hat{y}}\)), is the circular basis (\(\mathbf{\hat{R}}\), \(\mathbf{\hat{L}}\)). In this representation, the right- and left-hand circular polarized field components, \(E_{\text{RCP}}\left(\Omega\right)\) and \(E_{\text{LCP}}\left(\Omega\right)\) respectively, are given by the projection:
\[E_{\text{RCP}}(\Omega)=\frac{1}{\sqrt{2}}\left[E_{x}(\Omega)+iE_{y}(\Omega)\right],\] (8)
\[E_{\text{LCP}}(\Omega)=\frac{1}{\sqrt{2}}\left[E_{x}(\Omega)-iE_{y}(\Omega)\right].\] (9)
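These quantities map directly onto code. The sketch below computes \(\vartheta(\Omega)\), \(\eta(\Omega)\), and the circular components from measured complex spectra; note that it uses the four-quadrant arctan2 in place of the plain arctan of Eq. (6) to resolve the orientation quadrant.

```python
import numpy as np

def polarization_state(Ex, Ey):
    """Ellipse orientation and ellipticity (Eqs. 6-7) plus circular
    components (Eqs. 8-9) from complex spectra Ex(Omega), Ey(Omega)."""
    delta = np.angle(Ey) - np.angle(Ex)
    ax, ay = np.abs(Ex), np.abs(Ey)
    theta = 0.5 * np.arctan2(2 * ax * ay * np.cos(delta), ax**2 - ay**2)
    eta = 0.5 * np.arcsin(2 * ax * ay * np.sin(delta) / (ax**2 + ay**2))
    e_rcp = (Ex + 1j * Ey) / np.sqrt(2)
    e_lcp = (Ex - 1j * Ey) / np.sqrt(2)
    return theta, eta, e_rcp, e_lcp

# Example: Ey lagging Ex by 90 deg is a purely circular state, |eta| = pi/4
th, et, er, el = polarization_state(np.array([1.0 + 0j]), np.array([-1j]))
```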
## Acknowledgements
We thank A. Paarmann, Y. Behovits, A. Chekhov, and M. Wolf for fruitful discussions.
**Funding:** This project was mainly funded through S.F.M.'s Emmy Noether Independent Junior Research Group from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, no. 469405347).
**Supplementary information:**
Supplementary information accompanies this manuscript.
## References
* [1] Hirori H, Doi A, Blanchard F, Tanaka K. Single-cycle terahertz pulses with amplitudes exceeding 1 MV/cm generated by optical rectification in LiNbO3. _Appl Phys Lett_ **98**, (2011).
* [2] Sell A, Leitenstorfer A, Huber R. Phase-locked generation and field-resolved detection of widely tunable terahertz pulses with amplitudes exceeding 100 MV/cm. _Optics Letters_ **33**, 2767-2769 (2008).
* [3] Rouzegar R, _et al._ Broadband Spintronic Terahertz Source with Peak Electric Fields Exceeding 1.5 MV/cm. _Physical Review Applied_ **19**, (2023).
* [4] Green B, _et al._ High-Field High-Repetition-Rate Sources for the Coherent THz Control of Matter. _Sci Rep_ **6**, 22256 (2016).
* [5] Maehrlein S, Paarmann A, Wolf M, Kampfrath T. Terahertz Sum-Frequency Excitation of a Raman-Active Phonon. _Phys Rev Lett_ **119**, 127402 (2017).
* [6] Johnson CL, Knighton BE, Johnson JA. Distinguishing Nonlinear Terahertz Excitation Pathways with Two-Dimensional Spectroscopy. _Phys Rev Lett_ **122**, 073901 (2019).
* [7] Kampfrath T, _et al._ Coherent terahertz control of antiferromagnetic spin waves. _Nature Photonics_ **5**, 31-34 (2010).
* [8] Fausti D, _et al._ Light-Induced Superconductivity in a Stripe-Ordered Cuprate. _Science_ **331**, 189-191 (2011).
* [9] Li X, _et al._ Terahertz field-induced ferroelectricity in quantum paraelectric SrTiO3. _Science_ **364**, 1079-1082 (2019).
* [10] Disa AS, _et al._ Photo-induced high-temperature ferromagnetism in YTiO3. _Nature_ **617**, 73-78 (2023).
* [11] Maehrlein SF, _et al._ Dissecting spin-phonon equilibration in ferrimagnetic insulators by ultrafast lattice excitation. _Sci Adv_ **4**, (2018).
* [12] Bell G, Hilke M. Polarization Effects of Electro-optic Sampling and Over-rotation for High Field THz Detection. _Journal of Infrared, Millimeter, and Terahertz Waves_ **41**, 880-893 (2020).
* [13] Leitenstorfer A, _et al._ The 2023 terahertz science and technology roadmap. _Journal of Physics D: Applied Physics_**56**, (2023).
* [14] Davies CL, Patel JB, Xia CQ, Herz LM, Johnston MB. Temperature-Dependent Refractive Index of Quartz at Terahertz Frequencies. _Journal of Infrared, Millimeter, and Terahertz Waves_**39**, 1236-1248 (2018).
* [15] Tielrooij KJ, _et al._ Milliwatt terahertz harmonic generation from topological insulator metamaterials. _Light Sci Appl_**11**, 315 (2022).
* [16] Hafez HA, _et al._ Extremely efficient terahertz high-harmonic generation in graphene by hot Dirac fermions. _Nature_**561**, 507-511 (2018).
* [17] Chu H, _et al._ Phase-resolved Higgs response in superconducting cuprates. _Nat Commun_**11**, 1793 (2020).
* [18] Sie EJ, _et al._ An ultrafast symmetry switch in a Weyl semimetal. _Nature_**565**, 61-66 (2019).
* [19] Ueda H, _et al._ Chiral phonons in quartz probed by X-rays. _Nature_**618**, 946-950 (2023).
* [20] Luo J, _et al._ Large effective magnetic fields from chiral phonons in rare-earth halides. _arXiv:230603852_, (2023).
* [21] Basini M, _et al._ Terahertz electric-field driven dynamical multiferroicity in SrTiO3. _arXiv:221001690_, (2022).
* [22] Guo Q, _et al._ THz Time-Domain Spectroscopic Ellipsometry With Simultaneous Measurements of Orthogonal Polarizations. _IEEE Transactions on Terahertz Science and Technology_**9**, 422-429 (2019).
* [23] Chen X, Pickwell-MacPherson E. An introduction to terahertz time-domain spectroscopic ellipsometry. _APL Photonics_**7**, (2022).
* [24] Kamaraju N, _et al._ Subcycle control of terahertz waveform polarization using all-optically induced transient metamaterials. _Light: Science & Applications_**3**, e155-e155 (2014).
* [25] Choi WJ, Lee SH, Park BC, Kotov NA. Terahertz Circular Dichroism Spectroscopy of Molecular Assemblies and Nanostructures. _J Am Chem Soc_**144**, 22789-22804 (2022).
* [26] Baydin A, _et al._ Magnetic Control of Soft Chiral Phonons in PbTe. _Phys Rev Lett_**128**, 075901 (2022).
* [27] Cheng G, Choi WJ, Jang HJ, Kotov NA, Norris TB. Terahertz Time-Domain Polarimetry for Generalized Anisotropic and Chiral Materials. _Terahertz, Rf, Millimeter, and Submillimeter-Wave Technology and Applications Xii_**10917**, (2019).
* [28] Zhang ZY, _et al._ Terahertz circular dichroism sensing of living cancer cells based on microstructure sensor. _Anal Chim Acta_**1180**, (2021).
* [29] Leitenstorfer A, Hunsche S, Shah J, Nuss MC, Knox WH. Detectors and sources for ultrabroadband electro-optic sampling: Experiment and theory. _Appl Phys Lett_**74**, 1516-1518 (1999).
* [30] Wu Q, Zhang XC. Ultrafast electro-optic field sensors. _Appl Phys Lett_**68**, 1604-1606 (1996).
* [31] Wu Q, Zhang XC. 7 terahertz broadband GaP electro-optic sensor. _Appl Phys Lett_**70**, 1784-1786 (1997).
* [32] Kampfrath T, Notzold J, Wolf M. Sampling of broadband terahertz pulses with thick electro-optic crystals. _Appl Phys Lett_**90**, (2007).
* [33] Huber L, Maehrlein SF, Wang F, Liu Y, Zhu XY. The ultrafast Kerr effect in anisotropic and dispersive media. _J Chem Phys_**154**, 094202 (2021).
* [34] Zhang RX, Cui Y, Sun WF, Zhang Y. Polarization information for terahertz imaging. _Appl Optics_**47**, 6422-6427 (2008).
* [35] Nemoto N, Higuchi T, Kanda N, Konishi K, Kuwata-Gonokami M. Highly precise and accurate terahertz polarization measurements based on electro-optic sampling with polarization modulation of probe pulses. _Opt Express_**22**, 17915-17929 (2014).
* [36] van der Valk NCJ, van der Marel WAM, Planken PCM. Terahertz polarization imaging. _Optics Letters_**30**, 2802-2804 (2005).
* [37] Cornet M, Degert J, Abraham E, Freysz E. Terahertz Kerr effect in gallium phosphide crystal. _Journal of the Optical Society of America B_**31**, (2014).
* [38] Frenzel M, _et al._ Nonlinear terahertz control of the lead halide perovskite lattice. _Sci Adv_**9**, eadg3856 (2023).
* [39] Kaltenecker KJ, Kelleher EJR, Zhou B, Jepsen PU. Attenuation of THz Beams: A "How to" Tutorial. _Journal of Infrared, Millimeter, and Terahertz Waves_**40**, 878-904 (2019).
* [40] Zibod S, _et al._ Strong Nonlinear Response in Crystalline Quartz at THz Frequencies. _Advanced Optical Materials_, (2023).
* [41] Wei YX, Le JM, Huang L, Tian CS. Efficient generation of intense broadband terahertz pulses from quartz. _Appl Phys Lett_**122**, (2023).
* [42] Balos V, Wolf M, Kovalev S, Sajadi M. Optical rectification and electro-optic sampling in quartz. _Optics Express_**31**, (2023).
* [43] Rosner RD, Turner EH, Kaminow IP. Clamped Electrooptic Coefficients of KDP and Quartz. _Appl Optics_ **6**, 778 (1967).
* [44] Shan J, Dadap JI, Heinz TF. Circularly polarized light in the single-cycle limit: the nature of highly polychromatic radiation of defined polarization. _Optics Express_**17**, 7431-7439 (2009).
* [45] Ghosh G. Dispersion-equation coefficients for the refractive index and birefringence of calcite and quartz crystals. _Opt Commun_**163**, 95-102 (1999).
* [46] Faust WL, Henry CH. Mixing of Visible and Near-Resonance Infrared Light in GaP. _Physical Review Letters_**17**, 1265-1268 (1966).
* [47] Eden DD, Thiess GH. Measurement of the Direct Electro-Optic Effect in Quartz at Uhf. _Appl Optics_**2**, 868-869 (1963).
* [48] Thamer M, Garling T, Campen RK, Wolf M. Quantitative determination of the nonlinear bulk and surface response from alpha-quartz using phase sensitive SFG spectroscopy. _The Journal of Chemical Physics_**151**, (2019).
* [49] Terki F, Levelut C, Boissier M, Pelous J. Low-frequency dynamics and medium-range order in vitreous silica. _Physical Review B_**53**, 2411-2418 (1996).
* [50] Buchenau U, Nucker N, Dianoux AJ. Neutron Scattering Study of the Low-Frequency Vibrations in Vitreous Silica. _Physical Review Letters_**53**, 2316-2319 (1984).
* [51] Naftaly M, Gregory A. Terahertz and Microwave Optical Properties of Single-Crystal Quartz and Vitreous Silica and the Behavior of the Boson Peak. _Applied Sciences_**11**, (2021).
* [52] Boyd RW. Nonlinear Optics, 3rd Edition. _Nonlinear Optics, 3rd Edition_, 1-613 (2008).
* [53] Tauchert SR, _et al._ Polarized phonons carry angular momentum in ultrafast demagnetization. _Nature_**602**, 73-77 (2022).
* [54] Castro-Camus E, Johnston MB. Extraction of the anisotropic dielectric properties of materials from polarization-resolved terahertz time-domain spectra. _Journal of Optics A: Pure and Applied Optics_**11**, (2009).
* [55] Cummings KD, Tanner DB. Far-Infrared Ordinary-Ray Optical-Constants of Quartz. _J Opt Soc Am_**70**, 123-126 (1980).
* [56] Choi WJ, _et al._ Chiral phonons in microcrystals and nanofibrils of biomolecules. _Nature Photonics_**16**, 366-373 (2022).
* [57] Davies CS, Fennema FGN, Tsukamoto A, Razdolski I, Kimel AV, Kirilyuk A. Phononic Switching of Magnetization by the Ultrafast Barnett Effect. _arXiv:230511551_, (2023).
* [58] ... an all-optical synchronization for 4th generation light sources. _Opt Express_ **30**, 26955-26966 (2022).
**Supplementary Information**
**for**
**"Quartz as an Accurate High-Field Low-Cost THz Helicity Detector"**
Maximilian Frenzel\({}^{1}\), Joanna M. Urban\({}^{1}\), Leona Nest\({}^{1}\), Tobias Kampfrath\({}^{1,2}\), Michael S. Spencer\({}^{1}\), Sebastian F. Maehrlein\({}^{1,\dagger}\)

\({}^{1}\)_Fritz Haber Institute of the Max Planck Society, Department of Physical Chemistry, 14195 Berlin, Germany_

\({}^{2}\)_Freie Universität Berlin, Department of Physics, 14195 Berlin, Germany_

\({}^{\dagger}\)Corresponding author. Email: [email protected]
**Supplementary Note 1: Estimation of the THz peak field**
The THz pulse energy \(E_{\rm p}\) is calculated from the measured THz power \(P\) and laser repetition rate \(f_{\rm rep}=1\) kHz by \(E_{\rm p}=P/f_{\rm rep}\). The peak THz intensity \(I_{\rm peak}\) is determined from the THz pulse energy \(E_{\rm p}\), THz pulse duration (FWHM of THz intensity) \(\Delta t\), and THz focus diameter \(w\) (FWHM of THz intensity) using:
\[I_{\rm peak}=\frac{E_{\rm p}}{\Delta t}\cdot\frac{2\sqrt{\log 2}}{\sqrt{\pi}} \cdot\frac{2}{\pi\left(\frac{w}{2}\right)^{2}}.\] ( S1 )
This expression assumes a Gaussian shape for both the spatial focus and temporal pulse form. Here, \(w\) is estimated to be 350 \(\upmu\)m using a FLIR A35 THz camera, and THz power \(P\) was measured to be 1.65 mW with an Ophir Vega power meter with a 3A-P-THz sensor. The THz pulse duration is estimated from the FWHM of the THz intensity envelope \(I_{\rm env}(t)\) as determined using an EOS trace measured with quartz (see Fig. 3a).
The THz peak field strength \(E_{\text{peak}}\) can finally be estimated using \(E_{\text{peak}}=\sqrt{2I_{\text{peak}}/(\varepsilon_{0}c)}\). The estimated \(E_{\text{peak}}\) in Fig. 3a used for calculating \(r_{11}\) of z-cut quartz is therefore 1.04 MV/cm.
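As a sanity check, the two formulas can be chained numerically. The sketch below uses the measured values quoted in this note (P = 1.65 mW, f_rep = 1 kHz, w = 350 um); the pulse duration is a placeholder standing in for the envelope FWHM read off the EOS trace in Fig. 3a.

```python
import numpy as np

EPS0, C = 8.854e-12, 2.998e8

def thz_peak_field(P, f_rep, dt, w):
    """Peak THz field via Eq. (S1): average power P (W), repetition rate
    f_rep (Hz), intensity-FWHM duration dt (s), focus FWHM diameter w (m)."""
    Ep = P / f_rep                                      # pulse energy (J)
    I_peak = (Ep / dt) * (2 * np.sqrt(np.log(2)) / np.sqrt(np.pi)) \
             * 2 / (np.pi * (w / 2) ** 2)               # W/m^2
    return np.sqrt(2 * I_peak / (EPS0 * C))             # E_peak (V/m)

# dt = 2.2 ps is an assumed placeholder, not a value quoted in this note
E = thz_peak_field(P=1.65e-3, f_rep=1e3, dt=2.2e-12, w=350e-6)
print(f"E_peak = {E / 1e8:.2f} MV/cm")                  # ~1 MV/cm
```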
**Supplementary Note 2: Relation between THz-induced birefringence and electrooptic tensor elements of quartz**
Electrooptic sampling is often treated from the viewpoint of a pump-induced change of birefringence that is experienced by a probe (or sampling) pulse in the material. For this purpose, it is useful to consider the material's index ellipsoid to describe its optical properties, which can be expressed in the general form:
\[\left(\frac{1}{n^{2}}\right)_{1}x^{2}+\left(\frac{1}{n^{2}}\right)_{2}y^{2}+ \left(\frac{1}{n^{2}}\right)_{3}z^{2}+2\left(\frac{1}{n^{2}}\right)_{4}yz+2 \left(\frac{1}{n^{2}}\right)_{5}xz+2\left(\frac{1}{n^{2}}\right)_{6}xy=1\] ( S2 )
For z-cut \(\alpha\)-quartz, belonging to the \(D_{3}\) point group and 32 symmetry class[1, 2], and oriented along the principal axes, the induced change to the index ellipsoid by a THz electric field \(E_{\text{THz}}\) may then be written as:
\[\begin{bmatrix}\Delta(1/n^{2})_{1}\\ \Delta(1/n^{2})_{2}\\ \Delta(1/n^{2})_{3}\\ \Delta(1/n^{2})_{4}\\ \Delta(1/n^{2})_{5}\\ \Delta(1/n^{2})_{6}\end{bmatrix}=\begin{bmatrix}r_{11}&0&0\\ -r_{11}&0&0\\ 0&0&0\\ r_{41}&0&0\\ 0&-r_{41}&0\\ 0&-r_{11}&0\end{bmatrix}\begin{bmatrix}E_{\text{THz},1}\\ E_{\text{THz},2}\\ E_{\text{THz},3}\end{bmatrix},\] (S3)
where \(r_{ij}\) is the electrooptic tensor. For our experimental configuration, the polarizations of the sampling and THz fields lie in the x-y plane and the z components of these fields are zero. We now consider the case where the THz field is polarized along the y-axis: \(E_{\text{THz},2}=t_{12}E_{\text{THz}}\); \(E_{\text{THz},1}=E_{\text{THz},3}=0\). Here, \(E_{\text{THz},2}\) is already the field inside the sample and related to the incident THz field \(E_{\text{THz}}\) via the Fresnel transmission coefficient, \(t_{12}=2/(1+n_{\text{THz}})\), if we neglect multiple internal reflections and absorption.
Note that the nonlinear susceptibility tensor \(d_{ij}\) (methods section of main text) and \(r_{ij}\) are related via \(r_{ij}=-4d_{ji}/n^{4}\). Since \(d_{11}\gg d_{14}\), we can assume \(r_{11}\gg r_{41}\), and the resulting index ellipsoid becomes:
\[\frac{x^{2}}{n_{\text{o}}^{2}}+\frac{y^{2}}{n_{\text{o}}^{2}}+\frac{z^{2}}{n_{ \text{e}}^{2}}-2r_{11}t_{12}E_{\text{THz}}xy=1,\] ( S4 )
where \(n_{\text{o}}\) and \(n_{\text{e}}\) are the ordinary and extraordinary refractive indices of quartz respectively. We change to the coordinate system \((x^{\prime},y^{\prime},z)\), where \(\widehat{x}^{\prime}=(\widehat{x}+\widehat{y})/\sqrt{2}\) and \(\widehat{y}^{\prime}=(\widehat{y}-\widehat{x})/\sqrt{2}\). In this new coordinate system, the index ellipsoid becomes:
\[\frac{x^{\prime 2}}{n_{\rm o}^{2}}\left(1+n_{\rm o}^{2}r_{11}t_{12}E_{\rm THz}\right)+\frac{y^{\prime 2}}{n_{\rm o}^{2}}\left(1-n_{\rm o}^{2}r_{11}t_{12}E_{\rm THz}\right)+\frac{z^{2}}{n_{\rm e}^{2}}=1.\] (S5)
The change of refractive index \(\Delta n\) along the \(x^{\prime}\)- and \(y^{\prime}\)-axes induced by a THz pulse polarized along the y-axis is therefore:

\[\Delta n_{x^{\prime}}\approx-\frac{1}{2}n_{\rm o}^{3}r_{11}t_{12}E_{\rm THz},\] (S6)
\[\Delta n_{y^{\prime}}\approx\frac{1}{2}n_{\rm o}^{3}r_{11}t_{12}E_{\rm THz}.\] (S7)
A sampling pulse with frequency \(\omega_{\rm s}\) is polarized along the y-axis before it enters the EOS crystal with thickness \(d\). The polarization state of the sampling pulse can thus be expressed as \(\mathbf{E}_{\rm s}=E_{\rm s}\mathbf{\hat{y}}=\frac{E_{\rm s}}{\sqrt{2}}\left(\mathbf{\hat{x}}^{\prime}+\mathbf{\hat{y}}^{\prime}\right)\).
After passing through the EOS crystal and experiencing \(\Delta n_{x^{\prime}}\) and \(\Delta n_{y^{\prime}}\), the sampling pulse polarization is:
\[\mathbf{E}_{\rm s}=\frac{E_{\rm s}}{\sqrt{2}}\Big{(}\exp\left(i\Delta\phi\right)\mathbf{\hat{x}}^{\prime}+\exp\left(-i\Delta\phi\right)\mathbf{\hat{y}}^{\prime}\Big{)}=\frac{E_{\rm s}}{2}\left(\left(\exp\left(i\Delta\phi\right)-\exp\left(-i\Delta\phi\right)\right)\mathbf{\hat{x}}+\left(\exp\left(i\Delta\phi\right)+\exp\left(-i\Delta\phi\right)\right)\mathbf{\hat{y}}\right),\] (S8)
where \(\Delta\phi=\frac{1}{2}n_{0}^{3}r_{11}t_{12}E_{\rm THz}\omega_{\rm s}d/c\) under the assumption of zero phase-mismatch between sampling- and THz pulse and no absorption. The sampling pulse then passes through a \(\lambda/4\) plate, so that \(\mathbf{E}_{\rm s}=iE_{\rm s}\exp\left(-\frac{i\pi}{4}\right)\left(\sin\left( \Delta\phi\right)\mathbf{\hat{x}}\,+\,\cos\left(\Delta\phi\right)\mathbf{\hat{ y}}\right)\). The following \(\lambda/2\) plate and Wollaston prism spatially separate perpendicular polarization components in the \((x^{\prime},y^{\prime},z)\) basis, so that the measured photodiode intensities can be decomposed by \(I_{1}\propto\left|\mathbf{\hat{x}}^{{}^{\prime}}\cdot\mathbf{E}_{\rm s}\right| ^{2}\) and \(I_{2}\propto\left|\mathbf{\hat{y}}^{{}^{\prime}}\cdot\mathbf{E}_{\rm s}\right| ^{2}\).
The measured EOS signal in z-cut quartz is therefore:
\[S=\frac{I_{1}-I_{2}}{I_{1}+I_{2}}=\frac{E_{\rm s}^{2}}{2E_{\rm s}^{2}}\left((\sin(\Delta\phi)+\cos(\Delta\phi))^{2}-(-\sin(\Delta\phi)+\cos(\Delta\phi))^{2}\right)\approx 2\Delta\phi=\frac{n_{\rm o}^{3}r_{11}t_{12}E_{\rm THz}\omega_{\rm s}d}{c},\] (S9)
where we assumed a small \(\Delta\phi\ll 1\) and used the small angle approximation (\(\sin(\Delta\phi)\approx\Delta\phi\) and \(\cos(\Delta\phi)\approx 1\)). Note that the same expression for \(S\) can be derived using the nonlinear polarization \(\mathbf{P}^{(2)}\) description in the methods section of the main text.
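Equation (S9) is also what one inverts in practice to extract \(r_{11}\) from a measured EOS signal. The sketch below does this in the small-signal limit; the signal amplitude and crystal thickness are purely illustrative placeholders (not the measured values of this work), while the refractive indices follow Table S1.

```python
import numpy as np

C = 2.998e8

def r11_from_signal(S, E_thz, d, n_o=1.54, n_thz=2.09, wl_s=800e-9):
    """Invert Eq. (S9) in the small-signal limit (S = 2*delta_phi << 1):
    S (dimensionless), incident field E_thz (V/m), crystal thickness d (m)."""
    t12 = 2 / (1 + n_thz)              # Fresnel THz field transmission
    omega_s = 2 * np.pi * C / wl_s     # sampling-pulse angular frequency
    return S * C / (n_o**3 * t12 * E_thz * omega_s * d)

# Illustrative numbers only, not the measured data of this work:
r11 = r11_from_signal(S=1e-2, E_thz=1.04e8, d=50e-6)
print(f"r11 ~ {r11 * 1e12:.2f} pm/V")   # on the order of 0.1 pm/V
```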
**Supplementary Note 3: Comparison between quartz, ZnTe, and GaP EOS detectors**
Table S1 shows the key parameters that are relevant to assess the sensitivity and bandwidth of quartz, ZnTe, and GaP detectors. Note that the electrooptic coefficient \(r_{\rm eff}\) is not the best metric to describe the sensitivity of an EOS THz detector, since it does not include the effect of THz reflection, which means that less THz field is available for signal generation in the crystal. In addition, the measured EOS signal is also dependent on the third power of the refractive index \(n_{\rm s}^{3}\) experienced by the sampling pulse (see Eq. (S9)). A better metric for the THz detector sensitivity is therefore the figure of merit (FoM) defined as:
\[{\rm FoM}=t_{12}r_{\rm eff}n_{\rm s}^{3}=2r_{\rm eff}n_{\rm s}^{3}/(1+n_{\rm THz }).\] ( S10 )
| **Detector** | \(r_{\rm eff}\) **(pm/V)** | \(n_{\rm THz}\) | \(n_{\rm s}\) | **GVM (ps/mm)** | **FoM (pm/V)** |
| --- | --- | --- | --- | --- | --- |
| ZnTe (110) | \(r_{41}=4\) (Ref. 3) | 3.18 (Ref. 3) | 2.85 (Ref. 3) | 1.1 (Ref. 3) | \(\sim\)45 |
| GaP (110) | \(r_{41}=1.6\) (Ref. 4) | 3.70 (Ref. 4) | 3.35 (Ref. 4) | 1.2 | \(\sim\)26 |
| Quartz (001) | \(r_{11}=0.1\) (this work) | 2.09 (Ref. 5) | 1.54 (Ref. 6) | 1.8 | \(\sim\)0.2 |

**Table S1 | Overview of performance-related parameters for quartz, ZnTe, and GaP detectors**
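The FoM column of Table S1 follows directly from Eq. (S10) and the tabulated material constants, as the short check below reproduces.

```python
# Reproduce the FoM column of Table S1 from Eq. (S10)
detectors = {
    "ZnTe (110)":   dict(r_eff=4.0, n_thz=3.18, n_s=2.85),
    "GaP (110)":    dict(r_eff=1.6, n_thz=3.70, n_s=3.35),
    "Quartz (001)": dict(r_eff=0.1, n_thz=2.09, n_s=1.54),
}
for name, p in detectors.items():
    fom = 2 * p["r_eff"] * p["n_s"] ** 3 / (1 + p["n_thz"])
    print(f"{name:14s} FoM = {fom:5.1f} pm/V")   # ~44, ~26, ~0.2 pm/V
```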
**Supplementary Figures:**
**Fig. S3 | Calculated quartz EOS dependence on azimuthal angle \(\phi\), sampling pulse polarization \(\theta\) and linear THz polarization \(\psi\).****a** Calculation for all possible \(\phi\) and \(\theta\) angles for fixed THz electric field x- and y- components, \(\psi=90^{\circ}\) and \(\psi=0^{\circ}\), respectively. It is evident that the EOS signal obeys a 3-fold symmetry in \(\phi\), and a 2-fold symmetry in \(\theta\). **b** Calculated EOS signal for all possible \(\phi\) and \(\psi\) angles at the fixed sampling pulse polarization \(\theta=0^{\circ}\).
**Fig. S4 | Calculated quartz EOS azimuthal \(\phi\) dependence as a function of sampling pulse angle of incidence \(\alpha\) for the two geometries relevant for 2D-EOS.****a** Calculation for all possible \(\phi\) angles for fixed THz electric field y-component and sampling pulse polarization \(\theta=0^{\circ}\) as a function of sampling pulse angle of incidence \(\alpha\) between \(0^{\circ}\) and \(45^{\circ}\). This plot illustrates that the THz y-component can be extracted (blue square) even at large angles of incidence with proper adjustment of the quartz response function. **b** Corresponding calculation for fixed THz electric field x-component and sampling pulse polarization at \(\theta=45^{\circ}\), highlighting that the THz x-component may also still be extracted (red square) with the proper calibration of the EOS sensitivity. |
2308.16391 | Improving the Accuracy of Transaction-Based Ponzi Detection on Ethereum | The Ponzi scheme, an old-fashioned fraud, is now popular on the Ethereum
blockchain, causing considerable financial losses to many crypto investors. A
few Ponzi detection methods have been proposed in the literature, most of which
detect a Ponzi scheme based on its smart contract source code. This
contract-code-based approach, while achieving very high accuracy, is not robust
because a Ponzi developer can fool a detection model by obfuscating the opcode
or inventing a new profit distribution logic that cannot be detected. On the
contrary, a transaction-based approach could improve the robustness of
detection because transactions, unlike smart contracts, are harder to be
manipulated. However, the current transaction-based detection models achieve
fairly low accuracy. In this paper, we aim to improve the accuracy of the
transaction-based models by employing time-series features, which turn out to
be crucial in capturing the life-time behaviour of a Ponzi application but were
completely overlooked in previous works. We propose a new set of 85 features
(22 known account-based and 63 new time-series features), which allows
off-the-shelf machine learning algorithms to achieve up to 30% higher F1-scores
compared to existing works. | Phuong Duy Huynh, Son Hoang Dau, Xiaodong Li, Phuc Luong, Emanuele Viterbo | 2023-08-31T01:54:31Z | http://arxiv.org/abs/2308.16391v2 | Improving Robustness and Accuracy of Ponzi Scheme Detection on Ethereum Using Time-Dependent Features
###### Abstract
The rapid development of blockchain has led to more and more funding pouring into the cryptocurrency market, which has also attracted cybercriminals' interest in recent years. The Ponzi scheme, an old-fashioned fraud, is now popular on the blockchain, causing considerable financial losses to many crypto-investors. A few Ponzi detection methods have been proposed in the literature, most of which detect a Ponzi scheme based on its smart contract source code or opcode. The _contract-code-based_ approach, while achieving very high accuracy, is _not robust_: first, the source codes of a majority of contracts on Ethereum are not available, and second, a Ponzi developer can fool a contract-code-based detection model by obfuscating the opcode or inventing a new profit distribution logic that cannot be detected (since these models were trained on existing Ponzi logics only). A _transaction-based_ approach could improve the robustness of detection because transactions, unlike smart contracts, are harder to manipulate. However, the current transaction-based detection models achieve fairly _low accuracy_. We address this gap in the literature by developing new detection models that rely only on the transactions, hence guaranteeing the robustness, and moreover, achieve considerably higher Accuracy, Precision, Recall, and F1-score than existing transaction-based models. This is made possible thanks to the introduction of novel _time-dependent features_ that capture characteristic Ponzi behaviours, derived from our comprehensive data analyses on Ponzi and non-Ponzi data from the XBlock-ETH repository.
## 1 Introduction
Since the birth of Bitcoin in 2008 [47], the blockchain technology has grown exponentially and revolutionized the way currencies and digital assets are transferred, exchanged, and traded. Thanks to its inherent decentralization, anonymity, and immutability, a blockchain, regarded as a digital ledger, provides better tampering resistance, robustness, privacy protection, and cheaper turn-around costs compared to a traditional financial system [24, 70].
Apart from applications in digital finance, Turing-complete smart contracts introduced first by Ethereum [10] and then by other similar blockchain platforms allow developers to implement sophisticated logic on the chain, further expanding the applicability of the technology to many other sectors including supply chains [11, 41, 26], data sharing [38, 59], games [6, 52], and the internet of things [48, 25, 49].
In recent years, crypto-crowdfunding via initial coin offerings (ICOs) has become a major fundraising method used by many businesses [46], providing an attractive alternative to the traditional stock exchanges. By the end of December 2021, the global market capitalization of blockchains had reached a staggering amount of over $2.9 trillion with more than 20,000 different cryptocurrencies [19]. However, this phenomenal success of the blockchain technology in digital finance has also led to a rising number of cybercrimes. Smart-contract-supporting blockchains have now become a paradise for a plethora of devastating financial scams, most notably Ponzi schemes, Honeypots, Phishing, Pump and Dump, and Rug Pull [12].
A _Ponzi_ scheme [2] is a classic fraudulent investment scam, which first appeared over 100 years ago in Boston. These scams often worked under the hood of high-yield investment opportunities. In short, a Ponzi scheme promises high returns to investors by using the funds from newcomers to pay earlier investors. A Ponzi scheme will inevitably collapse when few or no new investors join, making most investors, except for the early ones and the scheme owner, losing their money. The most common victims of Ponzi schemes are novice investors who are attracted by the promise of a large profit but aren't aware of where the money comes from or how the business works [36, 37].
Despite their long history, Ponzi schemes never go out of fashion because fraudsters never stop deploying them across different platforms. Blockchain platforms are no exception and, in fact, create an even more favourable environment for Ponzi scams to thrive due to the lack of supervision from a
central authority and its inherent anonymity [61, 62]. According to Chainalysis's 2021 Crypto Crime Report [13], from 2017 to 2020, most blockchain frauds were Ponzi schemes, which accounted for nearly $7 billion worth of cryptocurrency in 2019, more than double that of all other scams in 2020.
The development of Ponzi schemes on Ethereum1 has attracted some attention from the research community. The very first work in this area was by Bartoletti _et al._[3], who analyzed the _source codes_ of available Ethereum smart contracts and proposed four criteria to identify a Ponzi scheme (their paper first appeared on arXiv in 2017). They classified Ponzi schemes on the Ethereum chain into four different types according to their money distribution logic (see Section 2.2). They also constructed the very first Ponzi dataset on Ethereum, consisting of 184 Ponzi contracts (active from 2015 to 2017) by manually inspecting their source codes. To automatically detect Ponzi schemes, a number of Ponzi detection models using various machine learning methods [16, 17, 29, 35, 63, 69, 39] and symbolic execution techniques [15] have been developed in the literature. All of the machine-learning-based approaches employed both transaction-based features (account features) and contract-code-based features (_opcode_ features) in their models to improve the detection accuracy. Most notably, SADPonzi [15], which used a semantic-aware approach, achieves 100% accuracy. Their proposed system can identify a Ponzi contract by comparing the extracted semantic information of its _bytecode_ and that of four known Ponzi scheme patterns.
Footnote 1: In the scope of this work we focus on Ponzi schemes on Ethereum only. For Bitcoin-based Ponzi schemes, please refer to [4, 60].
However, a _contract-code-based approach_, while capable of achieving very high accuracy, is _not robust_. _First_, the source codes (in Solidity) of a majority of contracts on Ethereum are not available (see, e.g., [72]). _Second_, a Ponzi developer can fool a contract-code-based detection model by obfuscating the opcode (see [15, Section 7.2.1]) or inventing a new profit distribution logic that cannot be detected (since these models or methods were trained or strictly rely on existing Ponzi logics) (see [15, Section 8]). We will discuss these points with more details in Section 2.3. A _transaction-based approach_ could improve the robustness of detection because transactions, unlike smart contracts, are harder to be manipulated. However, the current transaction-based detection models achieve fairly _low accuracy_[16, 17, 39].
In this work, we aim to develop more _robust_ and _accurate_ detection models that only rely on transaction data. To this end, we first collected all related transactions of 1395 applications that are included in the first ten million Ethereum blocks (July 2015 to May 2020) from the XBlock-ETH repository. We then analysed the data to capture the way Ponzi applications work. We observed that Ponzi and non-Ponzi applications have distinctive behaviours and characteristics and more importantly, that the _time_ factor, which has been overlooked in most studies, is crucial in identifying a Ponzi application. We introduced a list of novel _time-dependent features_ that capture the behaviours of an application throughout its lifetime. To evaluate the effectiveness of this new list of features, we ran the same classifiers and also some new ones on this list and on the existing list used by other transaction-based models [16, 39], treated as the baselines. The experiments showed that our proposed list of features achieved significantly higher F1-score values compared to the baselines. More specifically, the F1-score values of the baselines are improved by 8.3% or 26.4% if using our new features with the same classifiers. Finally, we demonstrated that our approach can also detect, with high accuracy, _new_ types of Ponzi schemes that were not present in the training dataset.
The rest of the paper is organized as follows. In Section 2, we introduce the background knowledge of Ethereum and Ponzi schemes, and discuss related works in the literature. In Section 3, we describe the data collection process, provide a comprehensive analysis of the collected data, and discuss our transaction-based feature aggregation and well-known classification models. In Section 4, we explain in detail the workflow, experimental configuration, evaluation metrics and experiment outcomes. Finally, in Section 5, we summarize our work and introduce a few directions for future work.
## 2 Background
### Ethereum in a Nutshell
Ethereum is the second most popular blockchain after Bitcoin in terms of market capitalization [18]. It is also the largest platform that provides a decentralized virtual environment (i.e., Ethereum Virtual Machine or EVM for short) to execute smart contracts [56]. In 2022, the Ethereum chain reached 15 million blocks with over 1.5 billion transactions [57].
_Smart contracts_ on Ethereum are executable programs that run automatically when their trigger conditions are met. Those contracts can be implemented using an object-oriented and high-level language called Solidity [27]. Contract _source codes_ are then compiled into _bytecodes_, which can be represented as low-level human-readable instructions - _opcodes_[67]. After that, the bytecodes are launched onto EVM. Once a contract is deployed, it cannot be modified by anyone. Moreover, any activities in the life cycle of a contract, e.g. deployment, execution, or even termination, must be triggered by a transaction. Therefore, any communication 'from' or 'to' a contract is recorded as a transaction and stored on the blockchain as immutable data. In other words, in Ethereum, a transaction is a key unit that involves all the activities of the contract.
In Ethereum, there are two types of transactions, _external_ and _internal_ transactions [16]. While external transactions are stored in the chain, internal transactions are only recorded from smart contracts execution. The type of a transaction
strongly depends on the type of account from which the transaction is sent. More specifically, external transactions are sent out from externally owned accounts (EOA), i.e., user accounts, while internal transactions come from smart contracts (SC) themselves. Each transaction contains a few common data fields, listed below [57]; a minimal code sketch of such a record follows the list.
* _Participant address_: An account address of a sender or receiver participating in this transaction.
* _Value_: Transferred Ether (ETH) amount. ETH is the representative token in Ethereum that is used for many purposes, such as trading on crypto exchanges, paying the transaction fee, and paying for decentralized services.
* _Timestamp_: The time at which a transaction is mined.
* _Block_: Number of the blocks in which the transaction is recorded.
* _Transaction fee_: Amount paid to the miner for processing the transaction.
* _Gas usage_: The fee required to conduct a transaction or execute a contract on Ethereum successfully. Normal ETH transfers involve 21,000 gas units, while contract executions require a higher amount.
* _Input data_: Additional data included for the transaction, such as a message, a contract bytecode, or the contract's calling function and input.
* _Status_: The delivery status of the transaction. The status can be marked "failed" for reasons such as insufficient transferred amount, insufficient gas, or even an invalid bytecode.
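A minimal sketch of how such a record could be represented in code is given below; the field names and types are illustrative and do not follow any particular client or dataset schema.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """Illustrative Ethereum transaction record with the fields listed above."""
    from_address: str   # sender participating in this transaction
    to_address: str     # receiver participating in this transaction
    value: float        # transferred ETH amount
    timestamp: int      # Unix time at which the transaction was mined
    block: int          # block number in which the transaction is recorded
    fee: float          # amount paid to the miner
    gas_used: int       # gas consumed by the transfer or contract execution
    input_data: str     # message, bytecode, or calling function and input
    status: bool        # True if the transaction executed successfully
```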
### Ponzi Schemes on Ethereum
The blockchain technology has the potential to revolutionize the way traditional businesses work [44] by reducing credit costs [32], improving users' privacy [70], and realizing machine trust [22]. However, this technology also creates a golden opportunity for cybercriminals, resulting in the migration of many financial scams to the blockchain platforms [12]. Among blockchain scams, Ponzi schemes [2] were the most popular from 2017 to 2020. In hindsight, this is not a surprise. Blockchain's inherent properties, i.e., automation, transparency, immutability, and anonymity create an ideal environment for this scam to grow [43]. While in traditional Ponzi schemes, scams can often be stopped, scammers get caught by the authority and compensations can be paid to the victims, in a blockchain environment, once a smart contract is up and running, it can't be stopped unless some preinstalled conditions are met, and scammers can stay entirely anonymous and withdraw money without revealing their identity. As a result, the scammers can often get away and the investments are permanently lost. Finally, the fact that the working logic of Ponzi schemes is immutable and publicly available on the chain for everyone to see can create a false sense of trust in the schemes among novice investors, making them fall prey to the scammers.
In layman's terms, Ponzi schemes are scams often camouflaged as high-return investment programs that use the funds from the later investors to pay the existing ones. With no real project behind and no intrinsic value, a Ponzi scheme will eventually collapse when there are not enough new investors joining and/or the payment commitment can no longer be fulfilled. A more official and authoritative definition of Ponzi schemes is given by the U.S. Securities and Exchange Commission [53]. At the heart of each Ponzi scheme is a money redistribution mechanism. Bartoletti _et al._[3] classified Ponzi schemes on Ethereum into four different categories based on their redistribution mechanisms as follows.
**Chain-shaped schemes** use a linear money distribution mechanism. These schemes often commit to paying investors a multiple, e.g., double, of their original investments. Each new investor joining the scheme is appended to a payment list in their order of arrival. Each investor in the list is paid in full with their promised amount whenever the accumulated fund (minus some commission fee) is sufficient. These schemes will collapse when the promised payouts become too large to fulfill and the waiting time of latecomers grows.
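As an illustration of this money distribution logic, the toy simulation below (with arbitrary multiplier and fee parameters, not taken from any real contract) pays investors in arrival order whenever the accumulated balance covers the next promised payout, showing how latecomers are left unpaid.

```python
from collections import deque

def simulate_chain_shaped(investments, multiplier=2.0, fee=0.1):
    """Toy chain-shaped scheme: each investor is promised `multiplier` times
    their deposit; payouts are made in arrival order whenever the
    accumulated balance suffices. Parameters are illustrative."""
    balance, queue, payouts = 0.0, deque(), []
    for amount in investments:
        balance += amount * (1 - fee)        # owner keeps a commission
        queue.append(amount * multiplier)    # promised payout, FIFO order
        while queue and balance >= queue[0]:
            owed = queue.popleft()
            balance -= owed
            payouts.append(owed)
    return payouts, len(queue)               # paid vs. still-waiting investors

paid, unpaid = simulate_chain_shaped([1.0] * 10 + [2.0] * 5)
print(f"{len(paid)} investors paid, {unpaid} left unpaid at collapse")
```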
**Tree-shaped schemes** use a tree structure to manage the money redistribution, in which an inviter is a parent node, and the invitees are their children. Once a new investor joins the scheme, his investment is split and distributed among the ancestors: the nearer an ancestor is, the more he will receive. In this type of Ponzi scheme, investors cannot guess how much they will gain because their profit depends on how many users they and their descendants can invite and also how much these users pay. Similar to other schemes, tree-shaped Ponzi collapses when there are no or too few users joining.
**Handover schemes**, like chain-shaped schemes, also use a linear payment list. However, instead of gathering newcomers' investments, these schemes require an entry toll, which increases every time a new user joins the scheme. At any given time, only one new user is invited by the last user in the list, and the new entry toll is paid entirely to the inviter to make an instant profit. Once the inviter is paid, he hands the privilege over to the newly joined user.
**Waterfall schemes** are similar to chain-shaped schemes in payment order but different in money distribution logic. Every new investment is distributed along the list of existing investors from the first to the last or until the fund is exhausted. This first-join-first-receive logic implies that the later joining investors are less likely to reap any profit.
### Related Works
Detecting scams on blockchain systems is crucial to making a secure trading environment for crypto-investors and providing a favourable development environment for potential decentralized applications (D-apps). Existing Ponzi detection models (on Ethereum) can be divided into two groups, depending on whether they rely on smart contract codes or on the transactions.
**Contract-Code-Based Approaches:** A contract's source code reflects the working logic used in an application. Therefore, Bartoletti _et al._[3] proposed four criteria to detect the contract of a Ponzi application. As a result, 184 different Ponzi schemes on Ethereum were detected manually by scanning their source codes. Finally, based on the detected Ponzi list, a comprehensive analysis was conducted to highlight the various characteristics of Ponzi schemes.
However, it turns out that the original source codes of 77.3% of the contracts on Ethereum are not available (see Zhou _et al._[72]). To tackle this drawback and to detect Ponzi schemes automatically, many researchers built Ponzi detection tools based on the frequency distribution of operation codes (opcodes), which are always available on the Ethereum chain. Chen _et al._ proposed an automatic detection tool built on opcode features using the machine learning models XGBoost [16] and Random Forest [17]. Their experimental results showed that the detection models using opcode features achieved greater performance than those using account features, which were aggregated from transactions. Finally, using both types of features provided the best performance in their work.
Further improvements were proposed to improve the detection accuracy. Fan _et al._[28] pointed out that an imbalanced dataset caused overfitting in previous models [16, 17]. Hence, to improve data quality and the detection accuracy, they proposed a data enhancement method that expanded the dataset and eliminated the imbalance. Additionally, Wang _et al._[63] adopted a deep learning technique to build a more accurate detection tool. Their study also used oversampling techniques (SMOTE and Tomek) to deal with an imbalanced dataset. There were also other studies that focused more on crafting better representative features rather than improving the model's performance or data quality. Jung _et al._[39] aggregated more sophisticated account features and combined them with opcode features to build a new detection model with high accuracy. Sun _et al._[54] introduced a behaviour forest algorithm that first builds a behaviour tree from the contract's opcodes to represent continuous behaviours of the smart contract and then measures the similarity between contracts to detect a Ponzi scheme at an early stage.
While the various aforementioned studies focus on improving detection accuracy, their methods' robustness has been overlooked. Indeed, as demonstrated by Chen _et al._[15], scammers can use code obfuscation techniques [8] to counter those detection models relying on opcode features (see [15, Section 7.2.1]). For example, a contract code can be manipulated or modified to change the opcode occurrence frequency. Chen _et al._[15] also proposed in their work a new detection tool called SADPonzi, which was built upon a semantic-aware approach and achieved the best performance with Precision and Recall reaching 100%. SADPonzi was proven to be more robust than the current opcode-based method when facing code obfuscation techniques. More specifically, it can detect a Ponzi contract by comparing the extracted semantic information of its bytecode and the predefined semantics of four known Ponzi schemes. However, the approach of SADPonzi requires a domain expert to analyze a Ponzi application's operational logic to build the corresponding semantic pattern, which can be costly to put into practice. On top of that, as was also mentioned by the authors, SADPonzi can only effectively detect known and well-defined Ponzi types with predefined semantics, and may fail to detect a new Ponzi variant (see [15, Section 8]).
**Transaction-Based Approaches:** Transactions are records of historical activities between an application and its participants. Not surprisingly, in several studies [3, 16, 17], transaction data was used to capture the differences in behaviours between Ponzi and non-Ponzi applications. Unlike the models using opcode features, the detection tools built on transaction data are more resilient to scammers' countermeasures because transaction information cannot be modified or deleted from the chain. Although scammers can add transaction records, they cannot manipulate transaction data as freely as they can with a smart contract's source code and opcodes for two reasons. _First_, any participant, not just the creator, can create transactions, which is beyond the contract creator's control. _Second_, the cost to create a transaction on the chain is not cheap (approximately $14.26 on average for each transaction in 2022 [58]). Thus, manipulating the data of a Ponzi application to avoid detection by flooding the system with extra transactions is infeasible or at least very costly to do.
Although there are a few works in the literature that studied models based on features extracted from the transaction data only (e.g., account features), they all achieved low accuracy [15, 16, 17]. This is because account features are often extracted from all transactions in a contract's lifetime and hence only provide general, time-independent information such as the final balance, the number of transactions, or the number of users. The scam behaviours, which are indicated by the change of information throughout a Ponzi's timeline, however, cannot be captured entirely by those features. To improve the detection accuracy, some studies [16, 17, 39, 63] have integrated these account features with opcode features. However, this hybrid approach also inherits the shortcomings of the contract-code-based approach. Thus, to maintain the robustness of the transaction-based approach, we should only use transaction data to build detection models and at the same time should identify more informative features to increase the detection accuracy. More specifically, we proposed time-dependent features alongside account features to improve the detection capability of the transaction-based models. This is the key idea of our work.
## 3 Data Collection, Analysis, and Feature Aggregation
### Data Collection
We first need a reliable dataset to build an effective feature list and accurate transaction-based detection models. Although previous studies introduced some benchmarks, most of them are contract-based datasets. We collected data (Ponzi/non-Ponzi contract addresses and their associated transactions) following the process described in Fig. 1. It is worth noting that each Ponzi application is often implemented by a single smart contract. Therefore, we can retrieve all transaction data of an application by collecting all transactions sent from and to the application's contract.
As depicted in Fig. 1, we first gathered contract addresses of known Ponzi and non-Ponzi applications. The Ponzi contract address list was created originally by [3], which contains 184 verified Ponzi contracts active from 2015 to 2017. However, 25 duplicated contracts and 26 other misclassified Ponzi contracts in this list were filtered out by [15]. Thus, we have only 133 valid Ponzi contracts left. Moreover, we also reused 1262 non-Ponzi contract addresses in the dataset of [15], which were collected from top-ranked DApps on the DApps ranking website [21]. Those top decentralized applications were ranked by the number of active users, daily transactions, and daily trading volume, which means that they have been used by thousands of users in the community and are very unlikely to be scams. Thus, as the first step, we gathered from the literature 1395 applications' contract addresses, 133 of which are Ponzi. Next, we downloaded transactions associated with these applications, analyzed them, and refined our dataset further to include only 79 Ponzi and 1182 non-Ponzi applications, as explained below.
We downloaded processed Ethereum on-chain data from the XBlock-ETH repository [71, 68] and extracted relevant data associated with the aforementioned 1395 contracts. Note that the authors of XBlock-ETH first gathered raw blockchain data from Ethereum, including blocks, traces, and receipts, and then, processed and categorized data into seven different datasets: _Block_, _Block Transaction_, _Internal Transaction_, _Contract Info_, _ERC20 Transaction_, _ERC721 Transaction_, and _Token Info_. We are only interested in the _Block Transaction_ dataset and the _Internal Transaction_ dataset. All relevant transactions in the first 10 million blocks (from July 2015 to May 2020) were retrieved, including transactions of Ponzi applications and non-Ponzi applications. Note that the contract addresses of interest (1395 in total) can be found in the FROM or TO address fields of the transactions.
As the final step in the refinement of our dataset, we filtered out _unsuccessful transactions_ which failed for various reasons such as insufficient gas or errors in the contract codes. Those transactions were removed because there were no activities occurring following those calls. Moreover, we also discarded _inadequate applications_ to enhance our dataset. Those applications were eliminated because their number of transactions was too low (one or zero transactions) or their lifetime was shorter than one day. These are outliers and their behaviours are not the same as the common behaviours of the whole group. Even if such an application is a Ponzi scheme, it is also a failed one. Therefore, removing those applications is important to build a clean dataset, especially for a transaction-based approach. As a result, our final dataset contains **1182** non-Ponzi applications and **79** Ponzi applications. The statistics by Ponzi types in our dataset are displayed in Table 1.
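A compact way to express this refinement step, assuming the transactions sit in a pandas DataFrame with illustrative column names, is sketched below.

```python
import pandas as pd

def refine(txs: pd.DataFrame, min_tx=2, min_days=1):
    """Sketch of the refinement above (illustrative column names): keep
    successful transactions, then drop applications with fewer than two
    transactions or a lifetime shorter than one day."""
    txs = txs[txs["status"] == 1]                       # successful only
    span = txs.groupby("contract")["timestamp"].agg(["min", "max", "count"])
    life_days = (span["max"] - span["min"]) / 86400.0   # seconds -> days
    keep = span.index[(span["count"] >= min_tx) & (life_days >= min_days)]
    return txs[txs["contract"].isin(keep)]
```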
| **Ponzi type** | **Number of applications** | **Percentage** |
| --- | --- | --- |
| Chain-shaped | 68 | 86% |
| Tree-shaped | 1 | 1.3% |
| Handover | 1 | 1.3% |
| Waterfall | 4 | 5% |
| Other | 5 | 6.4% |

Table 1: Ponzi types statistics for our refined dataset. Chain-shaped schemes constitute 86% of the Ponzi applications in our dataset, while other Ponzi types are few in number.
Figure 1: Data collection process. In particular, we used the list of Ponzi and non-Ponzi contracts from the literature [3, 15] as the ground truth and also downloaded from XBlock-ETH [71, 68] all transactions related to these contracts in the first 10 million Ethereum blocks.
### Data Analysis
In this section, we study how Ponzi applications work and demonstrate how _time-dependent_ characteristics
can help discriminate Ponzi applications. To this end, we analyze the historical transaction data of DynamicPyramid, a representative Ponzi contract2. This is a chain-shaped scheme, the most popular type, which constitutes 86% of all known Ponzi contracts.
Footnote 2: 0xa9e4c3b1da2462752aea980698c335e70e9ab26c (DynamicPyramid’s address)
The analysis of one Ponzi scheme and one non-Ponzi scheme is sufficient to emphasize the importance of the _time dimension_ in detecting Ponzi applications, while avoiding the risk of overfitting when the model development relies too heavily on the known types of the scams and may fail to detect unknown types in the future.
In general, different types of applications have different transaction behaviours, and understandably, Ponzi applications have unique behaviours that are different from non-Ponzi ones. We note that all Ponzi applications in our dataset have already been analyzed in [3]. However, they did not perform the same analysis on _non-Ponzi_ applications to show the differences between them. In our analysis, we compare the representative applications of the two groups to demonstrate their differences regarding temporal behaviours.
**Transaction volumes.** We start our analysis by comparing the _transaction volumes_ of a Ponzi application (DynamicPyramid) and a non-Ponzi application. The transaction volume of an application measures the daily number of associated internal and external transactions. As observed in Fig. 2, the Ponzi application had a shorter lifespan with a peak transaction volume concentrating in the first month followed by almost no activities. In comparison, the non-Ponzi application had more regular activities throughout its long lifespan. It is because a Ponzi application often introduces itself as a potential project with a high investment return promise. However, there is no actual project behind it. Thus, participants of Ponzi applications often work actively at the beginning since the application pays them regularly. However, when the number of participants increases, it is impossible for the application to pay a large proportion of members. This results in members leaving the application and the scheme collapses.
**Investment versus payment activities.** Pushing our analysis one step further, we break down transactions into two different types, namely, _investments_ and _payments_.
An investment refers to a transaction sending ETH from an investor to an application, whereas a payment refers to a transaction from an application that pays ETH to an investor.
As demonstrated in Fig. 3, payments (orange dots) and investments (blue dots) of the Ponzi application concentrated only in the first month. Moreover, each orange dot was preceded by some blue dots of smaller ETH amounts. This is because the examined Ponzi application, a chain-shaped scheme, must gather sufficient new investment funds before making a payment to a single participant. After this intensely active period, the number of payments decreased and finally disappeared, despite a few new investments going into the application. This happened because the application's balance was no longer enough to make any new payment. On the contrary, the activities spread out over the life of the non-Ponzi application and the payment amount is often equal to or less than the investment amount.
Figure 3: Investment and payment activities of a Ponzi (DynamicPyramid) and a non-Ponzi applications. Several lower investments (blue dots) were followed by a higher payment (orange dot) in the Ponzi application, which demonstrates the funds accumulation before a payment to an investor can be made.
Figure 2: Daily transaction volumes of a Ponzi (DynamicPyramid) and a non-Ponzi applications. The Ponzi application had a shorter lifespan with a peak transaction volume concentrating in the first month followed by almost no activities. By contrast, the non-Ponzi application had more regular activities throughout its long lifespan.
**Application balance.** The _balance_ of an application is the amount of ETH in the application at the time. How the balance varies as time goes by can indicate the type of application. As demonstrated in Fig. 4, the balance of the Ponzi application (Dynamic Pyramid) often rose gradually (investments), and after a while, dropped dramatically and created a "cliff" (payment). This is because the balance was gradually accumulated from the investments until reaching the amount that the application has committed to pay to a particular investor when he joined the application. Once the desired balance was reached, the promised profit was immediately paid to the corresponding investor.
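The balance trace in Fig. 4 is simple to reconstruct from raw transactions: signed flows are accumulated over time, so a Ponzi "cliff" appears as a large negative step. A minimal pandas sketch with illustrative column names follows.

```python
import pandas as pd

def balance_series(txs: pd.DataFrame, contract: str) -> pd.Series:
    """Running ETH balance of an application: +value for investments
    (to == contract), -value for payments (from == contract)."""
    t = txs[(txs["to"] == contract) | (txs["from"] == contract)].copy()
    t["flow"] = t["value"].where(t["to"] == contract, -t["value"])
    return t.sort_values("timestamp").set_index("timestamp")["flow"].cumsum()
```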
**Investment and payment frequencies/amounts**. Apart from the contract-centric data mentioned above, user-centric data is also essential in distinguishing Ponzi behaviours. Fig. 5 shows the frequencies of investments (blue heat map) and payments (red heat map) of Ponzi and non-Ponzi applications' participants. More specifically, the frequency represents the number of times each investor (corresponding to a square in the map) invested or got paid. Fig. 6 demonstrates the investment amounts (blue heat map) and payment amounts (red heat map) for all investors. The two sub-figures of the Ponzi application (DynamicPyramid) in both figures indicate that the earlier an investor joined the application, the higher the chance he would receive a payment. More specifically, the Ponzi application paid most early investors and none of the late investors, including those who invested heavily but joined late. On the other hand, the two sub-figures of the non-Ponzi application show that the payment frequencies and amounts do not depend on the order in which the users joined the application. In fact, a large number of users who came in late still received payments regularly, regardless of the number of times they invested.
### Transaction-Based Features
In the literature, the only type of transaction-based features used so far, often referred to as _account features_, is based on general statistics of the transactions, e.g., the total/average investment amount, the final balance of the contract, or the maximum number of payments to an investor (see Appendix A for the complete list). Using account features alone led to low detection accuracy, as reported in various works in the literature. This isn't a surprise, given that the temporal dimension has been completely neglected in the existing works. The discussion in Section 3.2 clearly demonstrates that the temporal dimension is essential in studying the behaviours of Ponzi and non-Ponzi schemes, and shouldn't be ignored. Using a combination of account and time-dependent features proposed in our work is a natural way to improve the accuracy of the detection models. We also show later in Section 4.3 that even using the time-dependent features alone can already improve the detection accuracy compared to account-feature-based approaches. Both types of features are discussed in detail below.
Figure 4: Application balances. We examined changes in the balance of the Ponzi application (DynamicPyramid) during the first four months when users were most active since the application was launched. The same period is also observed on a non-Ponzi application to make a fair comparison. As observed, the chart of the Ponzi contract has a few “cliffs” while that of the non-Ponzi contract has none.
Figure 5: Investment and payment _frequencies_. Investors, which are represented by the squares, are ordered in the top-down, left-right manner. Colour intensities are used to represent frequencies: the darker the colour, the higher the number of times the investor invested or got paid.
Figure 6: Investment and payment _amounts_. Investors, which are represented by the squares, are ordered in the top-down, left-right manner. Colour intensities are used to represent the amounts of ETH: the darker the colour, the more ETH the investor invested or got paid.
**Account features:** This type of feature has been widely used in previous studies [16, 17, 39, 63]. These features capture general information about the contract of interest. More specifically, general statistical metrics such as average, count, sum, standard deviation, and Gini coefficient4 [66] can be extracted from the set of all relevant transactions to aggregate account features. Although insufficient to capture all behaviours of the Ponzi scheme, account features are still useful in expressing the scam's working logic. For example, the Gini coefficient of the number of payments can demonstrate an inequality in money distribution, and the final balance of the application indicates whether the investment funds have all been distributed to investors. Therefore, in our investigation, we still include the account features introduced in the literature [16, 17, 39, 63], which are listed in Appendix A.
Footnote 4: The measure of wealth inequality in a social group. A Gini coefficient of 0 indicates perfect equality, while a Gini coefficient of 1 indicates perfect inequality.
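As a rough illustration, the following Python sketch shows how a few account features, including the Gini coefficient, could be aggregated from one contract's transaction amounts; the feature names and the helper `account_features` are hypothetical stand-ins, not the exact set listed in Appendix A.

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative 1-D array (0 = equality, 1 = inequality)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)                 # ranks of the sorted values
    return (2.0 * np.sum(index * v)) / (n * v.sum()) - (n + 1.0) / n

def account_features(investments, payments):
    """A few illustrative account features for one contract (names hypothetical)."""
    inv = np.asarray(investments, dtype=float)
    pay = np.asarray(payments, dtype=float)
    return {
        "total_investment": inv.sum(),
        "avg_investment": inv.mean() if inv.size else 0.0,
        "num_payments": pay.size,
        "gini_payment": gini(pay),
        "final_balance": inv.sum() - pay.sum(),
    }
```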
**Time-dependent features:** As discussed earlier, time-dependent features play an important role in identifying Ponzi applications. Unlike account features, they capture the behaviours and activities throughout the application's lifetime. To aggregate time-dependent features, we _first_ partitioned our transactions into several _time intervals_ (days) and built 43 _time-series_ that measure various aspects of the transactions, e.g., the total number of incoming transactions or the total amount of money the contract received from user addresses during each interval. These time-series form three dimensions, namely, the contract address, the interval, and the data value for that contract in that interval (e.g., account balance). In order to use the time-series data alongside the account features, which are only two-dimensional (contract address and data value), in a single detection model, we _then_ used a dimensionality-reduction technique to compress the 3-dimensional time-series data into 2-dimensional data, producing the final time-dependent features. We discuss both tasks in detail below (see Fig. 7 for the general feature aggregation process).
To create the time-series, we followed the steps below (a minimal sketch of this construction is given after the list).
1. For a fixed time duration \(T\), one day in our case, we split our transaction data into \(N\) time intervals of length \(T\) each where \[N=\left\lceil\frac{\text{life\_time}}{T}\right\rceil.\]
2. Based on the timestamp field, we assigned each transaction to its corresponding interval.
3. We created 43 time-series (see Appendix B for the complete list), which were aggregated using the eight basic data fields of a transaction listed in Section 2.1 together with data derived from them. These data are comprehensive enough to represent any activities that occur during the application's lifetime. Thus, for each application, the time-series can be represented as a 2-dimensional matrix of size \(N\times 43\). Lastly, if the dataset has \(M\) applications, the time-series data extracted from the whole dataset can be represented as a 3-dimensional array with size \(M\times N\times 43\).
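A minimal Python sketch of this binning step is given below, assuming one contract's transactions are held in a pandas DataFrame with hypothetical `timestamp`, `to_contract`, and `value` fields; only two of the 43 series are filled in for brevity.

```python
import numpy as np
import pandas as pd

def build_time_series(txs: pd.DataFrame, n_series: int = 43) -> np.ndarray:
    """Bin one contract's transactions into daily intervals (an N x 43 matrix)."""
    t0 = txs["timestamp"].min()
    day = ((txs["timestamp"] - t0).dt.total_seconds() // 86400).astype(int)
    N = int(day.max()) + 1                      # N = ceil(life_time / T), T = 1 day
    ts = np.zeros((N, n_series))
    inc = txs["to_contract"].to_numpy()         # True for incoming transactions
    # Series 0: number of incoming transactions per interval.
    np.add.at(ts[:, 0], day.to_numpy()[inc], 1)
    # Series 1: total ETH received from user addresses per interval.
    np.add.at(ts[:, 1], day.to_numpy()[inc], txs["value"].to_numpy()[inc])
    return ts                                   # one slice of the M x N x 43 array
```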
Finally, to generate 2-dimensional time-dependent features to be used together with the account features in the same model, we employed a dimensionality-reduction technique to compress the time-series data. To classify multivariate time-series variables, Blazquez-Garcia _et al._[9] introduced a number of techniques, including dimensionality-reduction and dissimilarity-based techniques. The former aims to reduce the dimensionality of the input multivariate time-series into a set of uncorrelated variables [34] while the latter directly analyzes the pairwise dissimilarity between the time-series [5, 42, 50]. In our research, we used a dimensionality-reduction technique to reduce the time dimension of the time-series.
Figure 7: Transaction-based features aggregation. Account and time-dependent features are both created from transaction data of each application in a dataset.
In particular, we employed a finite set of 12 statistical measures (see Appendix C) proposed in previous studies (see, e.g., [23, 30, 65, 34]) to capture the global information of the time-series. By applying these 12 measures to the 43 time-series, we compressed the 3-dimensional time-series data down to a 2-dimensional \(M\times 516\) matrix (note that \(516=43\times 12\)).
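The compression step could look like the following Python sketch; the 12 measures shown here are illustrative stand-ins (mean, spread, shape, and change statistics), not necessarily the exact set of Appendix C.

```python
import numpy as np
from scipy import stats

# Twelve illustrative global measures per series (stand-ins for Appendix C).
MEASURES = [
    np.mean, np.std, np.min, np.max, np.median, np.sum,
    stats.skew, stats.kurtosis,
    lambda s: np.ptp(s),                        # range of the series
    lambda s: np.percentile(s, 25),
    lambda s: np.percentile(s, 75),
    lambda s: np.mean(np.abs(np.diff(s))) if s.size > 1 else 0.0,  # mean abs change
]

def compress(series_3d: np.ndarray) -> np.ndarray:
    """Reduce an M x N x 43 time-series array to an M x 516 feature matrix."""
    M, _, S = series_3d.shape
    out = np.empty((M, S * len(MEASURES)))
    for m in range(M):
        for s in range(S):
            col = series_3d[m, :, s]
            for k, f in enumerate(MEASURES):
                out[m, s * len(MEASURES) + k] = f(col)
    return out                                  # 43 series x 12 measures = 516
```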
The Python codes used to aggregate and process features from the downloaded transactions are available online at [1].
### Detection Models
In this research, to measure the effectiveness of our proposed time-dependent features, we reused the classification methods employed in previous studies [16, 39], namely, Random Forest (RF) [55] and XGBoost (XGB) [14]. In addition, other well-known classification methods such as \(K\)-nearest neighbour (KNN) [20], Support vector machine (SVM) [33], and LightGBM (LGBM) [40] were also included in our experiment in order to find the most suitable classification model for the problem. The details of each classification model are listed below (a sketch instantiating these classifiers with commonly used libraries follows the list).
* **Random Forest (RF)** is a computationally efficient classification algorithm that works effectively in several domains [51], including fraud detection [7]. A key idea of this algorithm is to use the Bootstrap resampling technique to generate different training decision trees by repeatedly drawing samples from the original dataset. A better result is then achieved by aggregating the predictions from all trees in the forest.
* **XGBoost (XGB)** is a gradient-boosting-based algorithm that creates gradient-boosted decision trees in sequential form and then groups these trees to form a strong model. Unlike RF, which aggregates the results from all trees to get the final result, the result of XGB is the prediction of the last model, which addresses the data misclassified by previous models.
* \(K\)**-nearest neighbour (KNN)** is a non-parametric classifier that uses proximity to estimate the likelihood that a data point will become a member of one group.
* **Support vector machine (SVM)** is an algorithm that performs classification by finding a hyperplane that maximizes the margin between two categories in a multi-dimensional feature space. We include this classifier in our experiment because this method has been widely applied to binary classification and fraud detection problems.
* **LightGBM (LGBM)** is also a gradient-boosting-based algorithm, similar to XGB. However, LGBM grows a tree vertically (leaf-wise) while XGB grows trees horizontally (level-wise). With the leaf-wise algorithm, LGBM often has better accuracy and faster training time than other gradient-boosting-based algorithms.
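For reference, the five classifiers can be instantiated in a few lines of Python with scikit-learn, XGBoost, and LightGBM; the hyperparameter values below are placeholders rather than the settings used in our experiments.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

# One shared dictionary keeps the hyperparameters identical across feature
# sets, as required for a fair comparison; the values are placeholders.
classifiers = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "XGB": XGBClassifier(n_estimators=100, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", random_state=0),
    "LGBM": LGBMClassifier(n_estimators=100, random_state=0),
}
```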
## 4 Experiments
### Model Structure and Experiment Setting
We describe in Fig. 8 the overall transaction-based detection workflow. After the Data collection, Data analysis, and Feature aggregation steps were done (see Section 3), two groups of features, namely, account features and time-dependent features, were produced. These two groups were used both separately and together in different detection models. The dataset, consisting of 79 Ponzi and 1182 non-Ponzi applications and their feature groups, was split into a training set (80%) and a test set (20%).
In our data collection, the ratio of Ponzi to non-Ponzi applications is 79:1182, which means Ponzi applications only make up 6% of all applications. Therefore, we applied data sampling techniques to balance our dataset. We adopted the well-known oversampling method Borderline-SMOTE [31] to generate new Ponzi instances that have more than half of their K nearest neighbours being non-Ponzi applications. This helps to enhance the representation of Ponzi applications that are more likely to be misclassified, as they are located near the border of the two classes. However, generating too many Ponzi samples may cause overfitting in our model. Therefore, instead of generating Ponzi applications up to the same number as non-Ponzi applications, we only generated Ponzi samples up to a certain extent and then applied the random undersampling method to reduce the bias towards the non-Ponzi class samples.
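A sketch of this two-stage resampling with the imbalanced-learn library is shown below, where `X_train` and `y_train` are assumed to be the 80% training split; the sampling ratios (0.3 and 0.6) are illustrative assumptions, not the values used in our experiments.

```python
from imblearn.over_sampling import BorderlineSMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

# Oversample the Ponzi class only part of the way, then undersample the
# non-Ponzi class, instead of fully equalising the classes with SMOTE alone.
resampler = Pipeline([
    ("over", BorderlineSMOTE(sampling_strategy=0.3, random_state=0)),
    ("under", RandomUnderSampler(sampling_strategy=0.6, random_state=0)),
])
X_bal, y_bal = resampler.fit_resample(X_train, y_train)
```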
After the dataset was balanced, the \(K\)-fold cross-validation method was used to train the selected classifier on the training set. More specifically, the training set was divided into \(K\) subsets of the same size, and a detection model was run \(K\) times. At each iteration, one of the subsets was sequentially picked as validation data for validating our model's performance, while the remaining subsets were all used as training data for building the model. In our experiment, we set \(K=5\), which is lower than common practice in the literature, because our dataset is small. Finally, the trained model was used to classify the applications in the unseen test dataset. To make our test results more reliable, we repeated the experiment 500 times, and the final result was obtained by taking the average. It is worth mentioning that the same hyperparameters were used for the same models to make a fair comparison.
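The evaluation loop can be sketched as follows, reusing the hypothetical names from the previous sketches (`model` is one of the `classifiers` entries, and `X_bal`, `y_bal`, `X_test`, `y_test` are assumed to be defined).

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold, cross_val_score

test_f1 = []
for run in range(500):                              # repeat to average out noise
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=run)
    val_f1 = cross_val_score(model, X_bal, y_bal, cv=cv, scoring="f1").mean()
    model.fit(X_bal, y_bal)                         # refit on the full training set
    test_f1.append(f1_score(y_test, model.predict(X_test)))
print("average test F1:", np.mean(test_f1))
```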
The Python codes of our models are available online at [1].
### Evaluation Metrics
We use standard metrics to evaluate the accuracy of the detection models in our experiments. Hereinafter, the numbers of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) (see Table 2) are used for calculating the Accuracy, Precision, Recall, and F1-score (a direct computation of these metrics is sketched after the list).
* **Accuracy**: the fraction of correct predictions. \[\texttt{Accuracy}=\frac{\texttt{TP + TN}}{\texttt{TP + FP + TN + FN}}\]
* **Precision**: the fraction of the actual scams out of all the predicted scams by the method. \[\texttt{Precision}=\frac{\texttt{TP}}{\texttt{TP + FP}}\]
* **Recall**: the fraction of detected scams among all actual scams. \[\texttt{Recall}=\frac{\texttt{TP}}{\texttt{TP + FN}}\]
* **F1-score**: the harmonic mean of Precision and Recall. \[\texttt{F1-score}=\frac{2(\texttt{Precision}\cdot\texttt{Recall})}{ \texttt{Precision + Recall}}\]
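These four metrics follow directly from the confusion-matrix counts; a direct Python implementation (with zero-division guards added as an assumption) reads:

```python
def scores(tp, fp, tn, fn):
    """Accuracy, Precision, Recall, and F1-score from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1
```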
### Experimental Results
We conducted _three_ experiments to demonstrate the advantages of using our proposed time-dependent features.
**Experiment 1 (Performance comparison).** In this experiment, we aimed to compare the performance of our proposed models using the new features with the existing approaches. As already mentioned, most of the previous studies used either opcode features or both opcode and account features to build their detection models. Only a few works attempted a purely transaction-based approach (without using opcode features) [16, 39]. To have a fair comparison, we aimed to reproduce those experiments on our dataset and use them as the baselines. However, their codes and feature data haven't been released to the community. Thus, we reimplemented those models based on the details provided in their papers, including the lists of features and classification algorithms.
We would like to emphasize that the F1-score we obtained for the model used in [16] is actually higher than what was reported in their paper (54% versus 44%). However, we couldn't reproduce the very high scores reported in [39]. We are not completely sure of the reason for this discrepancy. However, we note that the authors of [15] also encountered the same issue: they reimplemented the approach in [39] and achieved a similar F1-score to ours (66% versus 68.8%). This could be due to the fact that in both [15] and our paper, we started from the same dataset of 1395 Ponzi and non-Ponzi schemes, while in [39], the authors used a different set that includes 3203 non-Ponzi addresses. Unfortunately, it was not clear in their paper how these addresses were collected and hence, we couldn't recreate their dataset. We also noticed that although the bytecode size (size_info) was created from the smart contract bytecode, it was listed among the top eight important transaction-based features in [39, Table 2]. This makes their detection model depend on contract code as well, and therefore susceptible to opcode obfuscation techniques [8, 15]. In our experiment to reproduce their transaction-based model, we ignored this irrelevant (contract-code-based) feature size_info.
A comparison of the various scores of our models and the baselines is provided in Table 3. We have the following observations.
1. _First_, according to the experimental results, the detection models using time-dependent features and account features improved the F1-score of the models by Jung _et al._[39] and Chen _et al._[16] by 8.3% and 26.4%, respectively.
\begin{table}
\begin{tabular}{|l|c|c|} \hline & **Predicted Ponzi** & **Predicted non-Ponzi** \\ \hline
**Ponzi** & TP & FN \\ & hit & miss \\ \hline
**non-Ponzi** & FP & TN \\ & false alarm & correct rejection \\ \hline \end{tabular}
\end{table}
Table 2: Confusion matrix
Figure 8: Ponzi detection workflow
2. _Second_, even with the time-dependent features alone, our detection model already achieved a higher F1-score than the baselines. Moreover, ours also achieved higher Recall values. By definition, a higher Recall value means our approach can detect more Ponzi cases than the baseline approaches.
3. _Third_, among the attempted classification algorithms, we noticed that gradient-boosting classifiers were more efficient in Ponzi detection than the others. XGBoost (XGB) and LightGBM (LGBM) achieved better F1-score values than the other classifiers, even when we only used time-dependent features. According to Table 3, LightGBM, using both time-dependent and account features, provided the best outcome not only in the F1-score but also in Accuracy and Precision.
**Experiment 2 (Contribution of time-dependent features).** Next, we conducted another experiment to investigate how much the newly proposed time-dependent features contributed to our top model's performance. To this end, we first retrieved the list of feature importances from the LGBM model in the previous experiment. The _importance_ of a feature in the LGBM model is defined as the number of times this feature is used to split the data across all decision trees. In LGBM, an effective feature selection technique, namely Exclusive Feature Bundling (EFB), has been adopted to reduce the number of features without affecting the model's performance. We found from the outcome of the LGBM detection model that only 200/541 features (516 time-dependent features and 25 account features) had been used at least once to build a tree. More specifically, these 200 important features consist of 175 time-dependent features and all 25 account features. We sorted these 200 features in descending order of importance. After that, we reran the detection experiment with an LGBM model using only the \(5,10,15,\ldots,200\) most important features among the 200. The experimental results shown in Fig. 9 demonstrate how the F1-score values of the prediction improved as more time-dependent features were used in the model. We can also observe in the bottom sub-figure that while the top five most important features are all account features, from the top 10 onward, time-dependent features start to appear. For example, the top 30 contains 16 account features and 14 time-dependent features.
As can be seen from Fig. 9, the F1-score increases sharply as we increase the number of features, especially at the beginning. According to the top sub-figure of Fig. 9, the value of the F1-score exceeds 0.8 when at least 30 of the 200 most important features are used, reaching a peak of 0.826 with the top 90 features. Then, the F1-score values fluctuate around 0.820 as the number of features further increases. As shown in the bottom sub-figure of Fig. 9, the percentage of time-dependent features in the important feature list increases in the same direction as the F1-score and exceeds 50% when the F1-score is over 0.8. This shows that our proposed time-dependent features have significantly contributed to the performance of LGBM, the best-performing classification model in Experiment 1.
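The top-\(k\) retraining loop of this experiment can be sketched as follows, where `model` is assumed to be the fitted LGBMClassifier from Experiment 1 and `X_bal`, `y_bal`, `X_test`, `y_test` are NumPy arrays from the earlier sketches; LightGBM's `feature_importances_` defaults to split counts, matching the importance definition above.

```python
import numpy as np
from sklearn.metrics import f1_score

importance = model.feature_importances_         # split counts per feature
order = np.argsort(importance)[::-1]
used = order[importance[order] > 0]             # the ~200 features used at least once

for k in range(5, len(used) + 1, 5):
    top = used[:k]
    model.fit(X_bal[:, top], y_bal)
    print(k, round(f1_score(y_test, model.predict(X_test[:, top])), 3))
```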
**Experiment 3 (Detecting a new type of Ponzi).** To check whether our classification model using the proposed feature list can detect a new Ponzi type, we conducted the third experiment using the LGBM model as follows. The key idea is to train our model on some types of Ponzi schemes and test it on another type of Ponzi schemes to see if it can still accurately detect these schemes.
To achieve this, we first removed all applications of each known type of Ponzi scheme mentioned in Section 2.2 from our training set. The removed applications were then used in a test set for testing the trained model's ability to detect new Ponzi types. However, we only removed each of the three Ponzi types (waterfall schemes, tree-shaped schemes, and handover schemes) or all three, and not the chain-shaped schemes, which account for 86% of all the Ponzi schemes in the dataset. The reason is that if we removed all chain-shaped schemes, our model wouldn't have enough Ponzi samples to learn the scam's behaviours. Furthermore, various test sets with different scam rates were introduced to test how our model performs in different situations, e.g., with a full-scam test set (100% scams), a balanced test set (50% scams), and a few-scam test set (6% scams, similar to our entire dataset's scam rate). Due to the lack of Ponzi applications, we could only decrease the scam rate by increasing the number of non-Ponzi applications in the test sets. The same experimental settings as before were used in these experiments.
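A sketch of this leave-one-type-out split is given below; the per-application `ponzi_type` labels and the helper name are hypothetical.

```python
import numpy as np

def leave_type_out(X, y, ponzi_type, held_out="waterfall", scam_rate=0.5, seed=0):
    """Reserve one Ponzi type entirely for testing (labels are hypothetical).

    `ponzi_type` holds 'waterfall', 'tree', 'handover', 'chain', or 'none'
    (non-Ponzi) per application; non-Ponzi applications are added to the
    test set to reach the requested scam rate.
    """
    rng = np.random.default_rng(seed)
    hold = ponzi_type == held_out
    n_pad = int(hold.sum() * (1 - scam_rate) / scam_rate)
    pad = rng.choice(np.where(ponzi_type == "none")[0], size=n_pad, replace=False)
    test_idx = np.concatenate([np.where(hold)[0], pad])
    train_mask = ~hold
    train_mask[pad] = False                     # keep the test set unseen in training
    return X[train_mask], y[train_mask], X[test_idx], y[test_idx]
```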
Figure 9: LGBM’s performance when using the \(5,10,15,\ldots,200\) most important features (top sub-figure) and the percentages of time-dependent features among these top features (bottom sub-figure)
The results, demonstrated in Table 4, indicate that the detection model can detect over 89% of actual new Ponzi applications in a given test set (greater than 89% Recall value in most cases). Moreover, the Precision value is approximately 80% even in few-scam test sets, and the model also achieved an F1-score of at least 90% in all cases. Although the dataset we have is not ideal in the sense that there are very few Ponzi applications of types other than the chain-shaped, which may affect the reliability of our third experiment, the outcome still gives strong evidence that a completely new type of Ponzi scheme can still be detected by our model.
## 5 Conclusions and future work
Although blockchain technology can provide great benefits to various industries, it also opens new opportunities for exploitation by scammers. Ponzi schemes, often advertised as high-yield investment projects, have stolen a large amount of money from investors globally.
In this study, we proposed a robust method for detecting Ponzi schemes in Ethereum using only the transaction data, which is hard for scammers to manipulate. We proposed a list of effective features that reflect the nature of the scam, extracted from a careful analysis of Ponzi and non-Ponzi schemes, in order to improve the detection accuracy of the transaction-based approach.
More specifically, our analysis showed that some characteristics of a Ponzi application depend on time and should be captured by time-series, representing the application's behaviours and activities throughout its lifetime.
As such, we introduced a list of novel time-dependent features, extracted from these time-series, which help to significantly improve various performance metrics compared to the existing transaction-based approaches.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Test candidate** & **Scam rate** & **\#Ponzi** & **\#non-Ponzi** & **Accuracy** & **Precision** & **Recall** & **F1-score** \\ \hline \multirow{3}{*}{Waterfall} & 100\% & 4 & 0 & 0.91 & 1.0 & 0.91 & 0.94 \\ \cline{2-7} & 50\% & 4 & 4 & 0.94 & 0.98 & 0.89 & 0.93 \\ \cline{2-7} & 6\% & 4 & 62 & 0.97 & 0.79 & 0.89 & 0.83 \\ \hline \multirow{3}{*}{Tree-shaped} & 100\% & 1 & 0 & 1.0 & 1.0 & 1.0 & 1.0 \\ \cline{2-7} & 50\% & 1 & 1 & 0.99 & 0.99 & 1.0 & 0.99 \\ \cline{2-7} & 6\% & 1 & 15 & 0.98 & 0.87 & 1.0 & 0.91 \\ \hline \multirow{3}{*}{Handover} & 100\% & 1 & 0 & 0.97 & 0.97 & 0.97 & 0.97 \\ \cline{2-7} & 50\% & 1 & 1 & 0.97 & 0.94 & 0.95 & 0.94 \\ \cline{2-7} & 6\% & 1 & 15 & 0.98 & 0.80 & 0.94 & 0.85 \\ \hline \multirow{3}{*}{All of above} & 100\% & 6 & 0 & 0.92 & 1.0 & 0.92 & 0.95 \\ \cline{2-7} & 50\% & 6 & 6 & 0.94 & 0.98 & 0.91 & 0.93 \\ \cline{1-1} \cline{2-7} & 6\% & 6 & 94 & 0.98 & 0.80 & 0.91 & 0.84 \\ \hline \end{tabular}
\end{table}
Table 4: The outcomes of Experiment 3 (Detecting a new type of Ponzi). All applications of each Ponzi type (waterfall, tree-shaped, handover, or all three) were removed from our training set. These applications were then used only for testing. We also experimented with test sets of different Ponzi rates (100%, 50%, and 6%). New Ponzi types were successfully detected with high accuracy, demonstrating the capability of our proposed feature list and detection models.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|} \hline
**Approaches** & **Features** & **Classifier** & **Accuracy** & **Precision** & **Recall** & **F1-score** \\ \hline Jung _et al._[39] & Account & Random Forest & 0.965 & 0.811 & 0.612 & 0.688 \\ \hline Chen _et al._[16] & Account & XGBoost & 0.948 & 0.620 & 0.487 & 0.537 \\ \hline \multirow{3}{*}{Our approach} & Time-dependent & Random Forest & 0.963 & 0.702 & 0.765 & 0.725 \\ \cline{2-7} & Time-dependent + Account & Random Forest & 0.970 & 0.751 & 0.806 & 0.771 \\ \cline{2-7} & Time-dependent & XGBoost & 0.964 & 0.701 & 0.780 & 0.731 \\ \cline{2-7} & Time-dependent + Account & XGBoost & 0.974 & 0.784 & 0.831 & 0.801 \\ \cline{2-7} & Time-dependent & KNN & 0.914 & 0.420 & 0.879 & 0.563 \\ \cline{2-7} & Time-dependent + Account & KNN & 0.915 & 0.422 & 0.886 & 0.567 \\ \cline{2-7} & Time-dependent & SVM & 0.879 & 0.342 & 0.936 & 0.496 \\ \cline{2-7} & Time-dependent + Account & SVM & 0.884 & 0.352 & **0.938** & 0.507 \\ \cline{2-7} & Time-dependent & LightGBM & 0.967 & 0.721 & 0.781 & 0.743 \\ \cline{2-7} & Time-dependent + Account & LightGBM & **0.977** & **0.811** & 0.845 & **0.822** \\ \hline \end{tabular}
\end{table}
Table 3: Overall evaluation results. Models adopting our proposed feature list have higher F1-score values than the baselines while using similar classifiers. On top of that, all classifiers also have higher Recall values when using time-dependent features. The LightGBM (LGBM) model, which used both time-dependent and account features, achieved the best performance among the tested classifiers. The numbers in **bold font** are the highest values in the corresponding columns.
Our plan for future work is discussed below. _First_, although we have considerably increased the detection accuracy of transaction-based detection tools, there is still room for future improvement. Specifically, one open problem is to find more effective statistical measures to represent the global nature of time-series (other than the 12 measures we obtained from the existing works). _Second_, it is desirable to collect more data to extend the ground-truth dataset of Ponzi applications, which will help to train the detection models more effectively. Moreover, with more data, we can test our approaches on other popular machine learning algorithms that work effectively on big data, such as Artificial Neural Networks (ANN) [64] or Recurrent Neural Networks (RNN) [45]. _Finally_, as time goes by, blockchain scams are becoming more sophisticated. For example, instead of simply using smart contracts to perform a fraud automatically, scammers may use multiple smart contracts or smart contracts from a third party as an additional service, making the picture much more complicated. In such cases, the application's transactions might not be enough to perform fraud detection. Detecting such sophisticated scams remains a big challenge.
|
2306.17537 | 3D induction log modelling with integral equation method and domain
decomposition preconditioning | The deployment of electromagnetic (EM) induction tools while drilling is one
of the standard routines for assisting the geosteering decision-making process.
The conductivity distribution obtained through the inversion of the EM
induction log can provide important information about the geological structure
around the borehole. To image the 3D geological structure in the subsurface, 3D
inversion of the EM induction log is required. Because the inversion process is
mainly dependent on forward modelling, the use of fast and accurate forward
modelling is essential. In this paper, we present an improved version of the
integral equation (IE) based modelling technique for general anisotropic media
with domain decomposition preconditioning. The discretised IE after domain
decomposition equals a fixed-point equation that is solved iteratively with
either the block Gauss-Seidel or Jacobi preconditioning. Within each iteration,
the inverse of the block matrix is computed using a Krylov subspace method
instead of a direct solver. An additional reduction in computational time is
obtained by using an adaptive relative residual stopping criterion in the
iterative solver. Numerical experiments show a maximum reduction in
computational time of 35 per cent compared to solving the full-domain IE with a
conventional GMRES solver. Additionally, the reduction of memory requirement
for covering a large area of the induction tool sensitivity enables
acceleration with limited GPU memory. Hence, we conclude that the domain
decomposition method is improving the efficiency of the IE method by reducing
the computation time and memory requirement. | Durra Handri Saputera, Morten Jakobsen, Koen W. A. van Dongen, Nazanin Jahani, Kjersti Solberg Eikrem, Sergey Alyaev | 2023-06-30T10:48:58Z | http://arxiv.org/abs/2306.17537v1 | # 3D induction log modelling with integral equation method and domain decomposition preconditioning
###### Abstract
The deployment of electromagnetic (EM) induction tools while drilling is one of the standard routines for assisting the geosteering decision-making process. The conductivity distribution obtained through the inversion of the EM induction log can provide important information about the geological structure around the borehole. To image the 3D geological structure in the subsurface, 3D inversion of the EM induction log is required. Because the inversion process is mainly dependent on forward modelling, the use of fast and accurate forward modelling is essential. In this paper, we present an improved version of the integral equation (IE) based modelling technique for general anisotropic media with domain decomposition preconditioning. The discretised IE after domain decomposition equals a fixed-point equation that is solved iteratively with either the block Gauss-Seidel or Jacobi preconditioning. Within each iteration, the inverse of the block matrix is computed using a Krylov subspace method instead of a direct solver. An additional reduction in computational time is obtained by using an adaptive relative residual stopping criterion in the iterative solver. Numerical experiments show a maximum reduction in computational time of 35 per cent compared to solving the full-domain IE with a conventional GMRES solver. Additionally, the reduction of memory requirement for covering a large area of the induction tool sensitivity enables acceleration with limited GPU memory.
Hence, we conclude that the domain decomposition method is improving the efficiency of the IE method by reducing the computation time and memory requirement.
Keywords:Electromagnetic theory - Numerical modelling - Numerical Solutions - Integral equation method - Domain decomposition.
## 1 Introduction
State-of-the-art tools for electromagnetic (EM) induction logging-while-drilling (LWD) enable real-time mapping of formation boundaries tens of meters away from the borehole (Sinha et al., 2022). These tools typically consist of multiple antenna configurations that have different sensitivities to the electrical resistivity distribution in the medium around the borehole. The distribution of the electrical properties is quantified through an inversion process and provides structural information and characteristics of the surrounding medium. The studies in real-time geosteering inversion usually employ 1D or 2D approximations (Bakr et al., 2017; Noh et al., 2022; Pardo and Torres-Verdin, 2015; Puzyrev, 2019). However, for imaging complex geological structures, it is important to capture the 3D variability of the resistivity change around the borehole through 3D inversion methods (Puzyrev et al., 2019; Sinha et al., 2022). The work of Wilson et al. (2019) shows that it is possible to perform 3D inversion in real time; however, it is challenging due to the large computational cost required for the 3D forward modelling, especially when quantification of the uncertainties in the inversion is required. Therefore, the study of a fast 3D forward solver that accurately models induction logs remains essential for the development and testing of new imaging methods.
The integral equation (IE) method is one of the most widely applied numerical methods for the 3D modelling of EM data (Avdeev, 2005; Wang et al., 2021) alongside the finite difference (Newman and Alumbaugh, 2002; Hou et al., 2006) and finite element methods (Puzyrev et al., 2013; Ren et al., 2014). One of the main advantages of using the IE method is that it has the accuracy of a semi-analytical solution (Wang et al., 2021). Without introducing many specific approximations, the EM fields around the borehole are obtained by solving the linear system arising from the discretization of the integral equations. As the linear system is dense, the computational memory and time required can be large compared to other numerical methods (Yoon et al., 2016;
Zaslavsky et al. 2011). To overcome this challenge, the linear system can be efficiently solved using an iterative solver based on the Krylov subspace method in combination with the utilization of the FFTs to accelerate the convolution integral operations in the linear system (Fang et al. 2006). A faster convergence rate can be achieved by implementing the contraction IE formulation (Hursan & Zhdanov 2002) which works especially well in the presence of a high contrast or a high degree of anisotropy. Additionally, the application of GPUs further decreases computation times because GPUs enable the acceleration of mathematical operations that can be straightforwardly parallelized (Dyatlov et al. 2015; Saputera et al. 2022).
In the work of Zhdanov et al. (2006), the formulation of the IE method is extended by decomposing the region of interest into several sub-domains. The field in the entire domain is obtained by sequentially solving the linear system in each sub-domain and updating the interaction between the sub-domains iteratively until convergence. With this formulation, it becomes feasible to conduct large-scale modelling of surface EM data in heterogeneous media as the computational operation can be reduced to one sub-domain at a time. It is possible to obtain an additional reduction in computational costs by only considering sub-domains that contain an anomaly with respect to the background medium. This leads to a smaller number of discretization blocks required for the 3D modelling while still enabling FFT implementation (Endo et al. 2009) and an improved iterative solver convergence rate (Van Dongen et al. 2007). Typically, a horizontally layered model is chosen as the background medium as the theory of Green's functions for layered one-dimensional (1D) models is very well developed (Zhdanov et al. 2006). Hence, the IE method can be very efficient when the resistivity model only deviates in some areas from the 1D model. However, in our application, the subsurface structure can vary in all directions. The sub-domains containing an anomaly can be everywhere around the EM tools and it may not be possible to achieve a reduction in the number of discretizations by the domain decomposition. Additionally, the sub-domains from the decomposition can be adjacent to each other such that the interactions between neighbouring sub-domains are not negligible.
The domain decomposition method can lead to an efficient way of solving the linear system of the IE method (Jakobsen & Tveit 2018; Wang et al. 2017). In the work of Jakobsen & Tveit
(2018), the domain decomposition method is used to efficiently compute the T-matrix for the inversion of controlled source EM data. It is also shown that the domain decomposition method opens up the possibility to compute the T-matrix in parallel.
In this paper, we demonstrate that the formulation of IE with domain decomposition (IE-DD) can be interpreted as a preconditioned linear system, offering a computational advantage. We illustrate that the IE-DD method can be represented as a fixed-point equation, which is iteratively solved using block Gauss-Seidel or Jacobi preconditioners (Saad, 2003). In particular, we will use a Krylov subspace method to invert the block matrices that are present in the formulation. Instead of expressing the decomposition formulation in terms of the contrast source in each sub-domain as described in Zhdanov et al. (2006) and Endo et al. (2009), we present the domain decomposition formulation in terms of the electric field in each sub-domain and a different perspective on the derivation of the IE-DD formulation. Additionally, we propose the use of an inexact iterative solver when solving the IE linear system for each sub-domain where the target tolerance is adapted based on the full domain residual.
The outline of this paper is as follows. In section 2 (Theory), we give an overview of the theory and implementation of the conventional IE method and the IE-DD. In section 3 (Numerical results and discussion), we present three numerical examples to show the performance of the IE-DD method and discuss the computational aspects of our implementation. First, we show an example with isolated sub-domains to verify that the domain decomposition formulation produces the same numerical results as the conventional full-domain formulation. In the second example, we show a numerical experiment with three different IE-DD schemes and compare the performance of these schemes with each other and with the full-domain IE as a reference. In the last example, we simulate a logging scenario across a large complex 3D model. Furthermore, we showcase the ability of the domain decomposition method to reduce the memory requirement for dealing with a large number of grid blocks in the last example. This feature lets us cover a larger portion of the induction tool's sensitivity region while keeping a fine grid size, which would not be straightforward on our currently available computer without the domain decomposition method. In section 4, we provide a compact evaluation of the IE-DD implementation in this study and also some possible improvements for future research.
some possible improvements for future research. This paper contains appendices with more in-depth details of the IE-DD derivation and implementation. We also include the comparison of our conventional IE code and existing code as a benchmark of our work in the appendix.
## 2 Theory
### The Integral Equation Method for 3D Induction Logs Modelling
Maxwell's equations for heterogeneous media (Wannamaker & Zhdanov 2002) are the basic theory for modelling the induction tools' response within the frequency domain:
\[\nabla\times\mathbf{E}\left(\mathbf{r}\right)=i\omega\mu\mathbf{H}\left(\mathbf{r} \right)+\mathbf{J}^{H}\left(\mathbf{r}\right), \tag{1}\] \[\nabla\times\mathbf{H}\left(\mathbf{r}\right)=\widehat{\mathbf{\sigma}} \left(\mathbf{r}\right)\mathbf{E}\left(\mathbf{r}\right), \tag{2}\]
where \(\mathbf{E}\left(\mathbf{r}\right)\) and \(\mathbf{H}\left(\mathbf{r}\right)\) are the total electric and magnetic fields, respectively, at location \(\mathbf{r}\), \(\mathbf{J}^{H}(\mathbf{r})\) denotes the magnetic source term, \(\omega\) is the angular frequency, \(\mu\) is the magnetic permeability, \(\widehat{\mathbf{\sigma}}\left(\mathbf{r}\right)=\mathbf{\sigma}\left(\mathbf{r}\right)-i \omega\mathbf{\varepsilon}\left(\mathbf{r}\right)\) is the complex electric conductivity, \(\mathbf{\varepsilon}\) is the dielectric permittivity, and \(i\) = \(\sqrt{-1}\). We assume that the magnetic permeability is constant and it is set equal to the magnetic permeability of the vacuum \(\mu_{0}\). Additionally, the imaginary part of the complex conductivity can be ignored in the diffusion regime, which is a typical assumption for the operating conditions of induction tools.
The total electric and magnetic fields can be formulated using the following integral equations (Fang et al. 2006)
\[\mathbf{E}\left(\mathbf{r}\right)=\mathbf{E}^{(0)}\left(\mathbf{r}\right)+\int_{ \Omega}\mathbf{G}^{E}\ \left(\mathbf{r},\mathbf{r^{\prime}}\right)\Delta\mathbf{\sigma}\left(\mathbf{r^{\prime}} \right)\mathbf{E}\left(\mathbf{r^{\prime}}\right)\ dV(\mathbf{r^{\prime}}), \tag{3}\] \[\mathbf{H}\left(\mathbf{r}\right)=\mathbf{H}^{(0)}\left(\mathbf{r}\right)+\int_{ \Omega}\mathbf{G}^{H}\ \left(\mathbf{r},\mathbf{r^{\prime}}\right)\Delta\mathbf{\sigma}\left(\mathbf{r^{\prime}} \right)\mathbf{E}\left(\mathbf{r^{\prime}}\right)\ dV(\mathbf{r^{\prime}}), \tag{4}\]
where the \(\Omega\) indicates the domain of integration where anomalies in the conductivity relative to the background conductivity are present. The integral terms in equations (3) and (4) represent the scattered electric and magnetic fields, respectively, due to the presence of these anomalies. The (0) superscripts indicate the fields defined for a homogeneous isotropic background medium with conductivity \(\sigma_{0}\) which are referred to as the background fields. We choose a homogeneous isotropic
background medium for simplicity and efficiency when calculating Green's tensor (Fang et al., 2006), and we assume that the tool is not always surrounded by a horizontally layered medium. The tensor \(\Delta\boldsymbol{\sigma}\left(\boldsymbol{r}\right)=\boldsymbol{\sigma}\left( \boldsymbol{r}\right)-\sigma_{0}\boldsymbol{I}\), denotes the conductivity contrast between the actual anisotropic medium and the background medium, and with \(\boldsymbol{I}\) the identity tensor. The electric Green's tensor \(\boldsymbol{G}^{E}\left(\boldsymbol{r},\boldsymbol{r^{\prime}}\right)\) and its relation to the magnetic Green's tensor \(\boldsymbol{G}^{H}\boldsymbol{\left(r},\boldsymbol{r^{\prime}}\right)\) for a homogenous isotropic medium are (Fang et al., 2006)
\[\boldsymbol{G}^{E}\left(\boldsymbol{r},\boldsymbol{r^{\prime}}\right)=\left(i\omega\mu_{0}\;\boldsymbol{I}+\frac{\nabla\nabla}{\sigma_{0}}\right)g\left(\boldsymbol{r},\boldsymbol{r^{\prime}}\right), \tag{5}\] \[\boldsymbol{G}^{H}\left(\boldsymbol{r},\boldsymbol{r^{\prime}}\right)=\left(i\omega\mu_{0}\right)^{-1}\nabla\times\boldsymbol{G}^{E}, \tag{6}\] \[g\left(\boldsymbol{r},\boldsymbol{r^{\prime}}\right)=\frac{e^{ik_{0}\left|\boldsymbol{r}-\boldsymbol{r^{\prime}}\right|}}{4\pi\left|\boldsymbol{r}-\boldsymbol{r^{\prime}}\right|}, \tag{7}\]
where \(g\left(\boldsymbol{r},\boldsymbol{r^{\prime}}\right)\) is the scalar Green's function and \(k_{0}=\sqrt{i\omega\mu_{0}\sigma_{0}}\). To calculate the total magnetic fields, the total electric fields need to be obtained first by solving equation (3). Afterward, the calculation of the total magnetic fields is a straightforward addition of the background magnetic fields and the integral term as shown in equation (4). Therefore, the main computational challenge of the IE method is to solve the integral equation (3), which is classified as the Fredholm integral equation of the second kind (Fang et al., 2006).
### Numerical Implementation of the Integral Equation Method
A numerical solution of the volume integral in equation (3) can be obtained using the method of moments (Gibson, 2021). The subsurface model around the induction tool is discretized into a set of grid blocks with centroids \(\boldsymbol{r}^{j}\) and volume of \(\Delta v\), where \(j\) indicates the \(j\)-th grid block. The discretization of equation (3) leads to a linear system of equations that can be expressed in operator form as
\[\left(\boldsymbol{I}-\boldsymbol{\mathcal{G}}\Delta\boldsymbol{\sigma}\right) \boldsymbol{E}=\boldsymbol{E}^{(0)}, \tag{8}\]
where \(\boldsymbol{\mathcal{G}}\) is the operator that represents the discrete convolution integral of the electric Green's tensor \(\boldsymbol{G}^{E}\left(\boldsymbol{r},\boldsymbol{r^{\prime}}\right)\) with the contrast source \(\Delta\boldsymbol{\sigma}\boldsymbol{E}\) in equation (3). The Green's function in equation (5) can be discretized by separating the non-singular part of the Green's function and dealing
with the singularity by integrating the Green's function of a single grid block over a spherical domain with an equivalent volume (Gao et al., 2005; Jakobsen & Tveit, 2018). The linear system in equation (8) can be efficiently solved using a Krylov subspace method because it does not require the matrix of the linear system to be formed explicitly. The desired accuracy of the iterative method is quantified by the relative residual \(e\) which is calculated as
\[e=\frac{\left\|\boldsymbol{E}^{(0)}-\left(\boldsymbol{I}-\boldsymbol{\mathcal{ G}}\Delta\boldsymbol{\sigma}\right)\boldsymbol{E}\right\|}{\left\|\boldsymbol{E}^{(0)} \right\|}, \tag{9}\]
where \(\left\|\cdot\right\|\) is the L\({}_{2}\)-norm. In this study, we use the generalized minimum residual or GMRES (Saad & Schultz, 1986) as the linear system solver.
Green's tensor operator exhibits a convolution structure in each of the tensor components. This property enables the use of FFT to convolve a Green's tensor component \(G_{pq}^{E}\) and a component of the contrast source \(\left(\Delta\boldsymbol{\sigma}\boldsymbol{E}\right)_{q}\) efficiently (Fang et al., 2006). The \(p\) and \(q\) indices indicate the component of Green's tensor and the contrast source vector with \(p\) and \(q\) = x,y,z. At each step of the iterative solver, the convolution integral can be efficiently calculated by
\[\mathcal{G}_{pq}\left(\Delta\boldsymbol{\sigma}\boldsymbol{E}\right)_{q}= \mathcal{F}^{-1}\left(\mathcal{F}\left[G_{pq}^{E}\right]\odot\mathcal{F}\left[ \left(\Delta\boldsymbol{\sigma}\boldsymbol{E}\right)_{q}\right]\right), \tag{10}\]
where \(\mathcal{F}\) is the FFT operator and \(\odot\) denotes elementwise multiplication. This operation reduces the convolution computation complexity from O(\(N^{2}\)) to O(\(N\)log\({}_{2}N\)) with \(N\) the number of grid blocks. It should be noted that the size of the discretized contrast source \(\Delta\boldsymbol{\sigma}\boldsymbol{E}\) needs to be padded by zeros such that the padded \(\Delta\boldsymbol{\sigma}\boldsymbol{E}\) has twice the original number of points in all directions to avoid the periodicity in the FFT convolution result. The FFT of Green's tensor can be pre-calculated before calling the iterative solvers to save computational time during the iterative process.
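As a language-agnostic illustration (our implementation is in MATLAB), the padded FFT convolution of equation (10) can be sketched in Python as follows; the kernel layout and the crop window are simplifying assumptions of the sketch.

```python
import numpy as np

def apply_green(G_hat, src):
    """FFT-accelerated action of one Green's tensor component, cf. equation (10).

    G_hat : precomputed FFT of the zero-padded discretised kernel G_pq,
            stored on a grid of twice the model size in every direction
    src   : (nx, ny, nz) contrast-source component (Delta_sigma E)_q
    """
    nx, ny, nz = src.shape
    src_hat = np.fft.fftn(src, s=G_hat.shape)   # zero-pad to the kernel size
    out = np.fft.ifftn(G_hat * src_hat)         # circular convolution on the 2x grid
    return out[:nx, :ny, :nz]                   # keep the linear-convolution part
```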
### Domain Decomposition
The domain decomposition method attempts to solve the problem on the entire domain from the solutions on the different sub-domains (Saad, 2003). In our case, the spatial domain \(\Omega\) is decomposed into \(M\) non-overlapping rectangular sub-domains \(\Omega_{j}\), hence
\[\Omega=\bigcup_{j=1}^{M}\Omega_{j}, \tag{11}\]
see Fig. 1. Adapting the domain decomposition formulation described in (Endo et al., 2009), the convolution integral term or the scattered electric field term in equation (3) can be expressed as a sum of scattered electric fields from each of the sub-domains. Subsequently, equation (3) can be written as
\[\mathbf{E}\left(\mathbf{r}\right)=\mathbf{E}^{(0)}\left(\mathbf{r}\right)+\sum_{j=1}^{M}\int_{ \Omega_{j}}\mathbf{G}^{E}\ \left(\mathbf{r},\mathbf{r^{\prime}}\right)\Delta\mathbf{\sigma}\left(\mathbf{r^{\prime}} \right)\mathbf{E}\left(\mathbf{r^{\prime}}\right)dV\left(\mathbf{r^{\prime}}\right), \tag{12}\]
where \(\Omega_{j}\) indicates the sub-domains with the conductivity anomaly. From equation (12), we obtain the following set of integral equations evaluated in each sub-domain:
\[\mathbf{E}^{(i)}=\mathbf{E}^{(i,0)}+\sum_{j=1}^{M}\mathbf{\mathcal{G}}^{(ij)}\Delta\mathbf{ \sigma}^{(j)}\mathbf{E}^{(j)},\ \ i=1,2,\ \ldots\,M. \tag{13}\]
The terms \(\mathbf{E}^{(i,0)}\), \(\mathbf{E}^{(i)}\), and \(\Delta\mathbf{\sigma}^{(i)}\) are the background electric field, total electric field, and the conductivity contrast defined at the sub-domain \(\Omega_{i}\), respectively. The terms \(\mathcal{G}^{(ij)}\Delta\mathbf{\sigma}^{(j)}\mathbf{E}^{(j)}\) in equation (13) are the discrete representations of the convolution integral in equation (12) which denote the scattered electric fields in the sub-domain \(\Omega_{i}\) due to the contrast source in the sub-domain \(\Omega_{j}\). It can be seen in equation (13) that the region without a conductivity anomaly does not contribute to the sum and hence can be omitted from the discretization when calculating the electric field. By collecting the scattered field terms into the left-hand side of the equations, the linear system of equations in (13) can be expressed with a block-matrix representation, viz.
\[\mathbf{A}\widetilde{\mathbf{E}}=\widetilde{\mathbf{E}}^{(0)}, \tag{14}\]
where \(\mathbf{A}\) is the block matrix of the rearranged linear system according to the domain decomposition
\[\mathbf{A}=\begin{bmatrix}\mathbf{I}\mathbf{-}\mathbf{\mathcal{G}}^{(11)}\Delta\mathbf{\sigma}^{ (1)}&-\mathbf{\mathcal{G}}^{(12)}\Delta\mathbf{\sigma}^{(2)}&\ldots&-\mathbf{\mathcal{G}} ^{(1M)}\Delta\mathbf{\sigma}^{(M)}\\ -\mathbf{\mathcal{G}}^{(21)}\Delta\mathbf{\sigma}^{(1)}&\mathbf{I}\mathbf{-}\mathbf{\mathcal{G}} ^{(22)}\Delta\mathbf{\sigma}^{(2)}&\ldots&-\mathbf{\mathcal{G}}^{(2M)}\Delta\mathbf{ \sigma}^{(M)}\\ \vdots&\vdots&\ddots&\vdots\\ -\mathbf{\mathcal{G}}^{(M1)}\Delta\mathbf{\sigma}^{(1)}&-\mathbf{\mathcal{G}}^{(M2)}\Delta \mathbf{\sigma}^{(2)}&\ldots&\mathbf{I}\mathbf{-}\mathbf{\mathcal{G}}^{(MM)}\Delta\mathbf{ \sigma}^{(M)}\end{bmatrix}, \tag{15}\]
with \(\widetilde{\mathbf{E}}\) and \(\widetilde{\mathbf{E}}^{(0)}\) are the block vectors containing the total and background electric fields in different sub-domains, respectively. These terms are defined as
\[\widetilde{\mathbf{E}}=\left[\begin{array}{c}\mathbf{E}^{(1)}\\ \mathbf{E}^{(2)}\\ \vdots\\ \mathbf{E}^{(M)}\end{array}\right],\quad\widetilde{\mathbf{E}}^{(0)}=\left[\begin{array} []{c}\mathbf{E}^{(1,0)}\\ \mathbf{E}^{(2,0)}\\ \vdots\\ \mathbf{E}^{(M,0)}\end{array}\right]. \tag{16}\]
Each block in the matrix \(\mathbf{A}\) indicates interaction terms between the sub-domains. The diagonal terms \(\left(\mathbf{I}-\mathbf{\mathcal{G}}^{(ii)}\Delta\mathbf{\sigma}^{(i)}\right)\) in equation (15) can be interpreted as the intra-domain interaction within a sub-domain while the off-diagonal terms \(-\mathbf{\mathcal{G}}^{(ij)}\Delta\mathbf{\sigma}^{(j)}\) represent the inter-domain interaction terms. Since the sub-domains are rectangular, the convolution integrals with Green's tensor in the intra- and inter-domain interaction terms can still be calculated using the FFT.
To solve the rearranged linear system of equation with domain decomposition in equation (14), the matrix \(\mathbf{A}\) is preconditioned by splitting the matrix into a strictly-lower-triangular \((\mathbf{L})\), strictly-upper-triangular \((\mathbf{U})\), and diagonal (\(\mathbf{D}\)) part (Barrett et al. 1994; Saad 2003):
\[\mathbf{A}=\left(\mathbf{L}+\mathbf{U}+\mathbf{D}\right), \tag{17}\]
where the matrices \(\mathbf{L},\mathbf{U}\), and \(\mathbf{D}\) are defined by
\[\mathbf{L}=\begin{bmatrix}\mathbf{0}&\mathbf{0}&\ldots&\mathbf{0}\\ -\mathbf{\mathcal{G}}^{(21)}\Delta\mathbf{\sigma}^{(1)}&\mathbf{0}&\ldots&\mathbf{0}\\ \vdots&\vdots&\ddots&\vdots\\ -\mathbf{\mathcal{G}}^{(M1)}\Delta\mathbf{\sigma}^{(1)}&-\mathbf{\mathcal{G}}^{(M2)} \Delta\mathbf{\sigma}^{(2)}&\ldots&\mathbf{0}\end{bmatrix},\] \[\mathbf{U}=\begin{bmatrix}\mathbf{0}&-\mathbf{\mathcal{G}}^{(12)}\Delta\mathbf{ \sigma}^{(2)}&\ldots&-\mathbf{\mathcal{G}}^{(1M)}\Delta\mathbf{\sigma}^{(M)}\\ \mathbf{0}&\mathbf{0}&\ldots&-\mathbf{\mathcal{G}}^{(2M)}\Delta\mathbf{\sigma}^{(M)}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\ldots&\mathbf{0}\end{bmatrix}, \tag{18}\]
and
\[\mathbf{D}=\begin{bmatrix}\mathbf{I}\mathbf{-\mathcal{G}}^{(11)}\Delta\mathbf{\sigma}^{(1)}& \mathbf{0}&\ldots&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\mathbf{-\mathcal{G}}^{(22)}\Delta\mathbf{\sigma}^{(2)}&\ldots&\mathbf{0}\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{0}&\mathbf{0}&\ldots&\mathbf{I}\mathbf{-\mathcal{G}}^{(MM)}\Delta\mathbf{\sigma}^{(M)} \end{bmatrix},\]
respectively. By substituting the matrix splitting (17) into equation (14) and performing some simple algebra, we obtain
\[\widetilde{\mathbf{E}}=\left(\mathbf{D}+\mathbf{L}\right)^{-1}\left[\widetilde{\mathbf{E}}^{( 0)}\mathbf{-U}\widetilde{\mathbf{E}}\right], \tag{19}\]
which can be solved by choosing an initial guess of \(\widetilde{\mathbf{E}}\) and iteratively calculating the following
\[\widetilde{\mathbf{E}}^{k+1}=\left(\mathbf{D}+\mathbf{L}\right)^{-1}\left[\widetilde{\mathbf{E }}^{(0)}\mathbf{-U}\widetilde{\mathbf{E}}^{k}\right], \tag{20}\]
with \(k\) the iteration number. The iteration described in equation (20) corresponds to the block Gauss-Seidel iterative method (Barrett et al., 1994; Saad, 2003). The matrix \(\left(\mathbf{D}+\mathbf{L}\right)\) has a lower triangular form where the inverse can be obtained using forward substitution (Venkateshan & Swaminathan, 2014). The forward substitution process to compute equation (20) is outlined in Appendix A. With the forward substitution, the total electric field update in each sub-domain according to equation
(20) can be expressed in the simple form as:
\[\mathbf{E}^{(i),k+1}=\left(\mathbf{I}-\mathbf{\mathcal{G}}^{(ii)}\Delta\mathbf{\sigma}^{(i)}\right)^{-1}\Bigg[\mathbf{E}^{(i,0)}+\sum_{j=1}^{i-1}\mathbf{\mathcal{G}}^{(ij)}\Delta\mathbf{\sigma}^{(j)}\mathbf{E}^{(j),k+1}+\sum_{j=i+1}^{M}\mathbf{\mathcal{G}}^{(ij)}\Delta\mathbf{\sigma}^{(j)}\mathbf{E}^{(j),k}\Bigg], \tag{21}\]
where \(i\) = 1, 2,..., \(M\) indexes the inner iterations, in which the IE is solved for one sub-domain at a time, and \(k\) counts the total domain sweeps, in which the electric field is updated for the entire domain. The inverse operation of the block intra-domain term in equation (21) is not calculated using a direct solver, but instead by using a Krylov subspace method to solve the following linear system of equations within each sub-domain:
\[\left(\mathbf{I}-\mathbf{\mathcal{G}}^{(ii)}\Delta\mathbf{\sigma}^{(i)}\right)\mathbf{E}^{(i ),k+1}=\mathbf{E}^{(i,0)}+\sum_{j=1}^{i-1}\mathbf{\mathcal{G}}^{(ij)}\Delta\mathbf{\sigma }^{(j)}\mathbf{E}^{(j),k+1}+\sum_{j=i+1}^{M}\mathbf{\mathcal{G}}^{(ij)}\Delta\mathbf{ \sigma}^{(j)}\mathbf{E}^{(j),k}. \tag{22}\]
The domain sweep is carried out until the relative residual on the whole domain reaches a desired threshold. The resulting operation of solving equation (22) iteratively is equivalent to the formulation described in Zhdanov et al. (2006) and Endo et al. (2009). However, in our derivation, we can see the link between the original formulation to a block-preconditioned iterative method, which is the block Gauss-Seidel iterative method in this case. The convergence of the Gauss-Seidel iterative method depends on the diagonal dominance of the linear system matrix (Saad, 2003). In this case, if the sum of the inter-domain terms' norm is small compared to the norm of the intra-domain terms in equation (22), then this scheme is guaranteed to converge. Since the magnitude of Green's tensor elements depends on the distance between sub-domains, the interaction terms are small when the sub-domains are isolated from each other. When a sub-domain has small contrasts, the interaction is one-sided from the sub-domain with high contrast. These properties should be considered when designing the domain decomposition settings.
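A minimal Python sketch of one block Gauss-Seidel sweep, i.e., of equations (21)-(22), is given below (our implementation is in MATLAB); the operator callables and the use of SciPy's GMRES with its current keyword names (SciPy \(\geq\) 1.12) are assumptions of the sketch.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def gauss_seidel_sweep(E, E0, intra_ops, coupling, tol):
    """One block Gauss-Seidel sweep over M sub-domains, cf. equations (21)-(22).

    E, E0        : lists of flattened (complex) field vectors per sub-domain
    intra_ops[i] : callable applying (I - G^(ii) Dsigma^(i)) to a vector
    coupling     : callable (i, j, E_j) -> G^(ij) Dsigma^(j) E^(j)
    """
    M = len(E)
    for i in range(M):
        # Right-hand side of eq. (22): background field plus inter-domain terms,
        # using the already-updated fields for j < i (Gauss-Seidel ordering).
        rhs = E0[i] + sum(coupling(i, j, E[j]) for j in range(M) if j != i)
        n = E[i].size
        A = LinearOperator((n, n), matvec=intra_ops[i], dtype=complex)
        E[i], _ = gmres(A, rhs, x0=E[i], rtol=tol, restart=10)
    return E
```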
Instead of the Gauss-Seidel iterative method, one can also choose the Jacobi iterative method by taking only the diagonal part of the matrix \(\mathbf{A}\) as the preconditioner of the fixed-point equation instead of its lower triangular part. The fixed-point equation that corresponds to the Jacobi iterative method can be written as
\[\widetilde{\mathbf{E}}^{k+1}=\mathbf{D}^{-1}\left[\widetilde{\mathbf{E}}^{(0)}- \left(\mathbf{L}+\mathbf{U}\right)\widetilde{\mathbf{E}}^{k}\right], \tag{23}\]
which leads to the following linear system of equations to be solved in each sub-domain:
\[\left(\mathbf{I}-\mathbf{\mathcal{G}}^{(ii)}\Delta\mathbf{\sigma}^{(i)}\right)\mathbf{E}^{(i),k+ 1}=\mathbf{E}^{(i,0)}+\sum_{j=1,j\neq i}^{M}\mathbf{\mathcal{G}}^{(ij)}\Delta\mathbf{ \sigma}^{(j)}\mathbf{E}^{(j),k}. \tag{24}\]
Since the right-hand side of equation (24) only depends on the solutions at the \(k\)-th iteration, the Jacobi iterative method is more straightforward to be implemented in parallel computing environments (Barrett et al. 1994). In this case, the linear system of equations at each sub-domain can be solved with the Krylov solver in parallel and the interaction terms are updated after the Krylov solver computations are done in all sub-domains. The main drawback is that the Gauss-Seidel method generally has better convergence properties than the Jacobi method (Barrett et al. 1994).
To further improve the computation speed, we propose to use a Krylov solver with an adaptive target residual when solving the IE linear system of a sub-domain. The main idea is that the relative residual of the Krylov solver in a sub-domain only needs to be an order of magnitude smaller than the full-domain relative residual to achieve convergence of the Gauss-Seidel or Jacobi iteration. Inaccurate approximate solutions from the Krylov solver are acceptable at the beginning of the iteration, and the relative residual target of the Krylov solver is lowered as the full-domain relative residual decreases during the Gauss-Seidel or Jacobi iteration. Additionally, the initial guess for the Krylov solver in the current outer iteration is taken from the result of the previous outer iteration. The detailed implementation of this strategy is shown in Algorithm 1.
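A minimal sketch of this adaptive strategy, in the spirit of Algorithm 1, is shown below. The callables `solve_subdomain` (a warm-started GMRES solve of equation (22) for one sub-domain at a given tolerance) and `full_domain_residual` are hypothetical stand-ins for the corresponding steps of the actual algorithm.

```python
# Illustrative adaptive-tolerance outer loop; the helper callables are hypothetical.
def adaptive_outer_loop(solve_subdomain, full_domain_residual, E, tol=1e-6):
    full_res = 1.0
    while full_res > tol:
        # Ask the inner Krylov solver for only one order of magnitude more
        # accuracy than the outer iteration has achieved so far.
        inner_tol = max(full_res / 10.0, tol / 10.0)
        for i in range(len(E)):
            # The previous outer iteration's solution E[i] serves as initial guess.
            E[i] = solve_subdomain(i, E, inner_tol)
        full_res = full_domain_residual(E)
    return E
```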
To further accelerate the computation, it is possible to use the contraction integral equation form, which improves the Krylov solver convergence rate (Endo et al. 2009; Zhdanov et al. 2006). However, in this study, we use the conventional integral equation formulation in the Krylov solver to reduce the complexity of evaluating the performance of the IE with domain decomposition.
## 3 Numerical results & discussion
In this section, we present three numerical cases to demonstrate the effectiveness of the domain decomposition preconditioning of the IE method. The first case is a model with two anomalous sub-domains separated by an isotropic medium with conductivity equal to the background conductivity. In the second case, we present a model where an anomalous isotropic conductivity is surrounded by an anisotropic medium. Lastly, we simulate a logging scenario across a faulted sand formation surrounded by anisotropic shale layers. We use the conventional IE formulation as described in section 2 in the GMRES solver for both the full-domain IE and the IE with domain decomposition (IE-DD) method. All numerical experiments presented in this paper are performed on a laptop with an AMD Ryzen 7 4800H processor and an NVIDIA GeForce RTX 3060 Laptop GPU, using MATLAB with GPU support enabled. We have compared our full-domain IE code with an existing 1D semi-analytical solution (Shahriari et al., 2018) and a 3D finite-volume method (Hou et al., 2006). This comparison is shown in Appendix B, and our results show good agreement, with less than one per cent average absolute difference.
### Isolated Sub-domains Example
We consider two isolated anomalous sub-domains embedded in an isotropic background medium, as shown in Fig. 2. The background conductivity \(\sigma_{0}\) is equal to 0.1 S m\({}^{-1}\) and the conductivity in the anomalous sub-domains is equal to 0.01 S m\({}^{-1}\). A transmitter with a frequency of 24 kHz is located at the origin (x = 0, y = 0, z = 0 m) and is oriented in the x-direction. The whole domain is discretized into 128 \(\times\) 128 \(\times\) 128 grid blocks with a uniform grid size of 0.25 \(\times\) 0.25 \(\times\) 0.25 m\({}^{3}\). The two anomalous sub-domains are set to have an equal size of 30 \(\times\) 30 \(\times\) 7.5 m\({}^{3}\). The distance between the closest edges of the two sub-domains is 10 m, which is approximately equal to the skin depth of the background medium at the transmitter frequency.
To solve the conventional full-domain IE and the inverse operation in the IE-DD formulation with the Gauss-Seidel iterative method, we use the restarted GMRES method with a restart length of 10. We set the relative residual \(e\) = 10\({}^{-6}\) as the stopping criterion for both the conventional full-domain IE and the IE-DD Gauss-Seidel iteration. For this case, we did not implement the adaptive relative residual scheme and set \(e\) = 10\({}^{-6}\) as the relative residual target for the GMRES solver stopping criterion in the IE-DD, in order to analyze the convergence behavior of the method. Because the medium without contrast does not contribute to the scattered field, the relative residuals are only computed within the anomalous sub-domains in both cases.
The sub-domains without conductivity anomalies are excluded from the discretization in the IE-DD iterations. As a result, only half of the total number of grid blocks of the full-domain IE is discretized in the IE-DD iterations. Fig. 3 shows the convergence plot and the total number of GMRES iterations taken to reach the target residual. The number of GMRES iterations required decreases in each Gauss-Seidel iteration as the relative residual decreases. This indicates that the changes in the electric fields due to the interaction terms become smaller as the initial guess for the GMRES solver is updated in each Gauss-Seidel iteration. Overall, it took only four Gauss-Seidel iterations for the method to converge below the threshold level, with a total of 278 GMRES iterations, and the total computation time is 23 s. The full-domain IE took 86 GMRES iterations to converge to the same error level, but its computation time is 53 s. Even though the IE-DD requires more GMRES iterations, its total computation time is roughly half that of the full-domain IE, because each GMRES iteration in the IE-DD operates on a smaller domain, with a number of grid blocks equal to a quarter of the full-domain grid blocks in each sub-domain.
The magnetic field comparison between both methods is shown in Fig. 4. Qualitatively, no differences are observed: the two methods agree to within a 0.01 per cent average normalized magnitude difference. Therefore, the IE-DD method gives the same result as the full-domain IE at the same relative residual level.
### Simple Anisotropic Medium Example
Fig. 5 shows an xz-plane view of a faulted resistive isotropic medium with a conductivity of 0.01 S m\({}^{-1}\) surrounded by an anisotropic medium with vertical transverse isotropy. The conductivity tensor of the anisotropic medium consists of the conductivities in the horizontal and vertical directions, with values of \(\sigma_{h}\) = 0.2 S m\({}^{-1}\) and \(\sigma_{v}\) = 0.1 S m\({}^{-1}\), respectively. The conductivity of the media does not vary in the y-direction. For the background medium, we choose a homogeneous isotropic medium with a conductivity of \(\sigma_{0}\) = 0.01 S m\({}^{-1}\). A transmitter with a frequency of 24 kHz is located at the origin and is oriented in the x-direction. We set 10\({}^{-6}\) as the relative residual target for both methods. The whole domain is discretized into 120 \(\times\) 120 \(\times\) 120 grid blocks with a grid size of 0.25 \(\times\) 0.25 \(\times\) 0.25 m\({}^{3}\).
The full domain is decomposed into three rectangular sub-domains of equal size as illustrated in Fig. 5. Each sub-domain is discretized into 120 \(\times\) 120 \(\times\) 40 grid blocks with a grid size of 0.25 \(\times\) 0.25 \(\times\) 0.25 m\({}^{3}\). With this decomposition, the faulted resistive layer is located only in sub-domain 1 while the other two sub-domains contain only the anisotropic medium.
We present three different IE-DD schemes to calculate the electric field of the model. The first (IE-DD-GS-F) is the IE-DD with the Gauss-Seidel iterative method and a fixed GMRES relative residual stopping criterion of 10\({}^{-6}\) in every Gauss-Seidel iteration. The second (IE-DD-GS-A) is the IE-DD with the Gauss-Seidel iterative method and an adaptive GMRES relative residual stopping criterion. The third (IE-DD-Jacobi-A) is the IE-DD with the Jacobi iterative method and the same adaptive GMRES relative residual as the second scheme. In the adaptive scheme, the relative residual stopping criterion is set one order of magnitude below the relative residual calculated on the whole domain, i.e., the full-domain relative residual divided by ten.
Fig. 6 compares the total number of GMRES iterations in each Gauss-Seidel iteration for both IE-DD-GS schemes. The total number of GMRES iterations generally increases over the Gauss-Seidel iterations in the adaptive relative residual scheme, while it decreases in the fixed relative residual scheme. In both cases, sub-domain 1, which contains the faulted resistive layer, took the greatest number of GMRES iterations. Since the number of GMRES iterations grows with the conductivity contrast, this indicates that sub-domain 1 has the largest conductivity contrast of the three sub-domains.
Table 1 summarizes the computational cost comparison between the full-domain IE and the IE-DD with three different schemes. Based on the computation time, the IE-DD-GS-A is the fastest of the three schemes. The IE-DD-GS-A scheme is faster than the IE-DD-GS-F because it requires fewer GMRES iterations. Therefore, specifying an adaptive relative residual for the Krylov solver in the IE-DD improves the computation time of the original IE-DD formulation, at the cost of going through more Gauss-Seidel iterations. The full-domain relative residual plot for each outer iteration, shown in Fig. 7, indicates that the IE-DD-GS-A has a better convergence rate than the IE-DD-Jacobi-A. Besides taking fewer outer iterations in total, the IE-DD-GS-A also requires fewer GMRES iterations than the IE-DD-Jacobi-A. However, the computation time of the IE-DD-Jacobi-A can potentially be improved by using parallel computation to solve the linear systems in the sub-domains independently.
### Logging Simulation Across a Complex Formation
We simulated induction logs across the faulted anisotropic formation with an 85\({}^{\circ}\) drilling angle as illustrated in Fig. 8a. This formation consists of anisotropic shale layers surrounding isotropic sand layers. The shale layers are indicated by the blue colours and the sand layers are indicated by the yellow colours in Fig. 8. The model has 2.5D main structural features with the addition of a simple 3D Gaussian perturbation only in the sand layers to imitate a fluid distribution in a reservoir. This perturbation is defined by
\[\boldsymbol{\sigma}_{sand}=\boldsymbol{\sigma}_{sand}^{u}+\alpha \boldsymbol{\sigma}_{sand}^{u}\exp{\left(-\frac{|\boldsymbol{r}_{sand}- \boldsymbol{r}_{c}|}{\gamma}\right)}, \tag{25}\]
where the subscripts \(sand\) denote the values located in the sand layers and the superscripts \(u\) indicate the defined unperturbed value; \(\boldsymbol{r}_{c}\) is the location of the maximum perturbation; \(\alpha\) and \(\gamma\) are the factors that control the magnitude and range of the perturbation, respectively. In this example, we set the peak perturbation location \(\boldsymbol{r}_{c}\) at x = 500 m, y = 0 m, and z = 40 m; and define \(\alpha\) = 4 and \(\gamma\) = 50 m.
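As a concrete illustration, equation (25) can be written in a few lines of NumPy; the coordinate arrays, the unperturbed conductivity `sigma_u`, and the boolean sand-layer mask are assumed to be given, and all names here are ours.

```python
# Illustrative implementation of the sand-layer perturbation of equation (25).
import numpy as np

def perturb_sand(sigma_u, mask, X, Y, Z,
                 r_c=(500.0, 0.0, 40.0), alpha=4.0, gamma=50.0):
    """sigma = sigma_u * (1 + alpha * exp(-|r - r_c| / gamma)) in the sand layers."""
    dist = np.sqrt((X - r_c[0])**2 + (Y - r_c[1])**2 + (Z - r_c[2])**2)
    sigma = sigma_u.copy()
    sigma[mask] *= 1.0 + alpha * np.exp(-dist[mask] / gamma)
    return sigma
```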
We use a moving 3D forward modelling window to simulate a moving transmitter scenario. The z-direction in the forward modelling window is directed to the drilling direction so it is consistent with the component direction of the induction tools (Pardo & Torres-Verdin, 2015). Hence the coordinate system in the window is rotated from the cartesian coordinate according to the drilling direction as illustrated in Fig. 8a. Consequently, the conductivity tensor elements are transformed following the domain rotation (Gao, 2006), see Appendix C for detail. In each of the forward modelling windows, we set a constant background conductivity \(\sigma_{0}\) = 0.1 S m\({}^{-1}\).
Following the typical tool configurations described in Antonsen et al. (2022), we set a z-oriented transmitter with a frequency of 24 kHz and three receivers with spacings of 7, 15, and 30 m, as illustrated in Fig. 9, for the logging simulations. A forward modelling window with a size of 32 \(\times\) 32 \(\times\) 32 m\({}^{3}\) may not be enough to capture the full sensitivity of all the receivers, especially the one with the largest spacing. Hence, we tested two windows of different sizes, 32 \(\times\) 32 \(\times\) 64 m\({}^{3}\) and 64 \(\times\) 64 \(\times\) 64 m\({}^{3}\), to examine the sensitivities of the receivers with respect to the forward modelling domain size. We refer to the smaller window as window 1 and the larger one as window 2. In both windows, we keep a grid size of 0.25 \(\times\) 0.25 \(\times\) 0.25 m\({}^{3}\), resulting in totals of 128 \(\times\) 128 \(\times\) 256 and 256 \(\times\) 256 \(\times\) 256 grid blocks for windows 1 and 2, respectively. Since our current GPU memory can only handle a maximum of 128\({}^{3}\) grid blocks in the GMRES solver computation, we decompose windows 1 and 2 into two and eight sub-domains in the drilling direction, respectively, as illustrated in Fig. 10. In this way, the memory required to calculate the electric field with both windows is reduced to the requirement of a calculation using 128\({}^{3}\) grid blocks plus the memory for storing the electric fields. This allows us to fully take advantage of the acceleration offered by the GPU implementation.
The logging position starts at x = 0 m, y = 0 m, z = 0 m and ends at x = 900 m, y = 0 m, z = 78.74 m. At each logging position, we use the IE-DD-GS-A scheme and set a full-domain tolerance of 10\({}^{-3}\) as the stopping criterion. Fig. 11 shows the magnitude of the z-component of the magnetic field \(|H_{zz}|\) measured at the receivers for each transmitter position. Qualitatively, the differences between the results of the two window settings increase with the receiver spacing. This result reflects the different sensitivities of the transmitter-receiver configurations, and we can observe that the sensitivity range is proportional to the receiver spacing.
The computation time required to calculate the magnetic field at one logging position using the window 1 and window 2 settings is approximately 1.5 and 15 minutes on average, respectively. Updating the interaction terms is the most expensive part of the computation, taking up around 80 per cent of the time in every iteration, because this operation acts on the entire domain with its large number of grid blocks. At every position, it took fewer than seven Gauss-Seidel iterations to reach the desired tolerance.
## 4 Conclusion
The linear system of equations arising in the IE method for 3D EM modelling can be naturally decomposed into a set of linear systems of equations that correspond to the IE in different parts of the modelling domain. The IE-DD formulation reduces the memory requirement for computing a large-scale problem, as it provides the connection between the sub-domains while still maintaining the viability of using FFTs to calculate the convolution integral operation. By expressing these linear systems of equations in a block matrix representation where each block represents the interactions between the domains, we have made a link between the derivations in Zhdanov et al. (2006) and Endo et al. (2009) and a preconditioned fixed-point iteration based on the domain decomposition method. Depending on the choice of the preconditioner, the fixed-point iteration corresponds to the block Gauss-Seidel or Jacobi iterative method. In every Gauss-Seidel or Jacobi iteration, the inverse of the block intra-domain interaction term is calculated using a Krylov subspace method instead of a direct solver.
Our numerical experiments show that a reduction in computation time can be achieved even though the total number of GMRES solver iterations in the IE-DD schemes is larger than in the full-domain IE. This speed-up arises because the GMRES solves in the decomposed domains are cheaper to compute, and it is shown that only five to eight IE-DD outer iterations are needed to reach the desired tolerance. Additionally, specifying an adaptive relative residual stopping criterion improves the computation time of the IE-DD by reducing the total number of GMRES iterations required to reach the desired error tolerance. The Gauss-Seidel preconditioning with adaptive relative residual has the fastest computation time among the schemes tested in this study; it reduces the computation time of the conventional IE by approximately 35 per cent. The scheme with Jacobi preconditioning takes a longer computation time than the one with Gauss-Seidel. However, the form of the Jacobi iterative method is more suitable for parallel computation, as the operation in each sub-domain can be computed independently, which is a subject for future implementation.
In this study, we have only implemented the IE-DD with simple iterative updates corresponding to the Gauss-Seidel and Jacobi iterative methods. The Gauss-Seidel and Jacobi iterative methods are in general not very competitive in terms of convergence compared to Krylov subspace methods (Barrett et al., 1994). Therefore, a further potential improvement of the IE-DD presented in this study would be to use a Krylov subspace method for the outer iteration update instead of the Gauss-Seidel or Jacobi update. Another interesting application of domain decomposition in the IE method would be to incorporate a direct method that can be computed in parallel into the domain decomposition preconditioner, for example using the T-matrix method (Jakobsen & Tveit, 2018; Sommer & Jakobsen, 2018).
## 5 Acknowledgements
This work is part of the Center for Research-based Innovation DigiWells: Digital Well Center for Value Creation, Competitiveness and Minimum Environmental Footprint (NFR SFI project no. 309589, [https://DigiWells.no](https://DigiWells.no)). The center is a cooperation of NORCE Norwegian Research Centre AS, the University of Stavanger, the Norwegian University of Science and Technology (NTNU), and the University of Bergen. It is funded by Aker BP, ConocoPhillips, Equinor, Lundin Energy, TotalEnergies, Var Energi, Wintershall Dea, and the Research Council of Norway.
## 6 Data availability
Currently, the data and code relating to this work are not freely available. We are considering publishing the codes with an open-source license in the future.
|
2307.16500 | Deciding Linear Height and Linear Size-to-Height Increase for Macro Tree
Transducers | We present a novel normal form for (total deterministic) macro tree
transducers (mtts), called depth proper normal form. If an mtt is in this
normal form, then it is guaranteed that each parameter of each state of the mtt
appears at arbitrary depth in the output trees of that state. Intuitively, if
some parameter only appears at certain bounded depths in the output trees of a
state, then this parameter can be removed by in-lining the corresponding output
paths at each call site of that state. We use regular look-ahead in order to
determine which of the paths should be in-lined. As a consequence of changing
the look-ahead, a parameter that was previously appearing at unbounded depths,
may be appearing at bounded depths for some new look-ahead; for this reason,
our construction has be iterated in order to obtain an mtt in depth-normal
form. Using the normal form, we can decide whether the translation of an mtt
has linear height increase or has linear size-to-height increase. | Paul Gallot, Sebastian Maneth, Keisuke Nakano, Charles Peyrat | 2023-07-31T08:51:17Z | http://arxiv.org/abs/2307.16500v4 | # Deciding Linear Height and Linear Size-to-Height Increase for Macro Tree Transducers
###### Abstract
Tree Transducers are fundamental devices within automata theory and formal language theory. They generalize the finite state transductions from strings to (finite, ranked) trees and were invented in the 1970s in the context of compiler theory and mathematical linguistics. Probably the most basic such transducers are the top-down tree transducer [18, 19] and the bottom-up tree transducer [20], see also [4]. These transducers traverse their input tree once, but may process subtrees in several copies. It is well known that these transducers have _linear height increase_ ("LHI"), see e.g. [15].
In this paper we deal with a more powerful type of tree transducer: the macro tree transducer [11] ("mtt"). Mtts can be seen as particular functional programs that carry out primitive recursion via tree pattern matching. Alternatively, mtts can be seen as context-free tree grammars (introduced in [18] as "context-free dendrogrammars"; see also [9, 10, 13] and Section 15 of [16]), the nonterminals of which are controlled by a top-down tree storage in the spirit of [5].
It is well known that if we restrict the translations of mtts to _linear size increase_, then we obtain exactly the MSO definable tree translations [8]. In that paper it is also proven that it is decidable for a given mtt whether or not its translation is of linear size increase (in fact, this can even be decided for compositions of mtts, and if so, then the translation is effectively MSO definable [6]). It is an open problem whether it is decidable, for a given mtt, if its translation can be realized by a top-down tree transducer (in the presence of "origin", this problem was shown to be decidable [12]). As mentioned above, LHI is a necessary condition for realizability by a top-down tree transducer. This raises the question: can we decide for a given mtt whether or not its translation is of LHI? Here we answer this question in the affirmative.
It is also an open problem whether it is decidable, for a given mtt, if its translation can be realized by an attributed tree transducer [14, 17] ("att"). It is well known that atts have _linear size-to-height increase_ ("LSHI"), see, e.g., Lemma 5.40 of [15]. This raises the question: can we decide for a given mtt whether or not its translation is of LSHI? Here we answer this question in the affirmative. Note that it was conjectured already in [8] that the methods of that paper can be applied in order to decide whether or not the translation of an mtt is of LSHI.
Last but not least, let us consider _linear size-to-distinct-number-of-output-trees increase_ ("LSOI"). It is well known that atts have LSOI, see, e.g., Lemma 5.43 of [15]. In fact, it was conjectured in the year 2000 by Joost Engelfriet that the translation of an mtt can be realized by an att (with look-around, see [2]) if and only if the translation is of LSOI. This raises the question: can we decide for a given mtt whether or not its translation is of LSOI?
Here we show that deciding LSOI for mtts is at least as difficult as deciding equivalence of atts. The latter is a long-standing and difficult open problem.
Let us now discuss our results in more detail. How is the _linear size increase_ ("LSI") property decided for a given mtt [8]? The given mtt is first transformed into a certain normal form (called "proper"); intuitively, the normal form guarantees that (1.) each state (except possibly the initial state) produces infinitely many output trees (this is called "input-proper"), and that (2.) each parameter of a state is instantiated with infinitely many distinct argument trees (this is called "parameter-proper"). Note that input-properness is a generalization of the proper form of [1]. Once in proper normal form, it suffices to check if the transducer is "finite copying". This means that (1.) each node of each input tree is processed only a bounded number of times and that (2.) each parameter of every state is copied only a bounded number of times. Both of these properties can easily be reduced to the finiteness problem of ranges of compositions of (nondeterministic) mtts [3]. It is also proved that if a proper mtt is _not_ finite copying, then its translation is _not_ of LSI.
To decide both the LHI and LSHI properties, we introduce a new normal form called "depth-proper". An mtt is depth-proper if each parameter of every state appears at infinitely many different depths (for different input trees). The proof of this normal form is similar to the one of input-properness, but is more complicated. Technically speaking, all our mtts are equipped with look-ahead. The original proof of the proper form of [1] was wrong, because the authors had not realized that, due to the change of look-ahead, states that were input-proper in the original transducer may become non-input-proper in the newly constructed transducer. To solve this issue, it was shown in [8] that the construction has to be iterated, and that this iteration terminates after at most \(|Q|\)-many iterations and yields an input-proper mtt (where \(Q\) is the set of states of the given original mtt). In the case of input-properness, only the look-ahead changes in each iteration of the construction. Our construction of a depth-proper mtt also needs to be iterated due to the change of look-ahead; however, we also add new states in each iteration. This complicates the termination proof, and we are not able to present a simple bound such as \(|Q|\) for our iteration.
Given a depth-proper mtt, we can decide the LSHI and LHI properties as follows. First, we consider input trees which contain exactly one special marked input leaf (it will be marked by a state \(p\) of the look-ahead automaton, to act as a place-holder for any input tree for which the look-ahead automaton arrives in state \(p\)). For such input trees, the mtt produces output trees which still contain (nested) state calls to the special input leaf. We say that the mtt has the "finite nesting" property if there is a bound on the number of nested state calls that appear on any path of such output trees. We can decide whether or not an mtt is finite nesting similarly as before: we change the mtt to nondeterministically output any path of nested states of any such output tree. The original mtt is finite nesting if and only if the range of this transducer is finite (the latter is decidable, as mentioned above). We can also show that if the mtt is _not_ finite nesting, then the given translation is _not_ LSHI. In a similar way we can deal with LHI: here we consider that _each_ leaf of an input tree is marked by a look-ahead state and then proceed exactly as for LSHI.
For the last result, consider two (total) attributed tree transducers. We know that their translations are of linear size-to-number-of-distinct-output-subtrees increase ("LSOI"). We now convert these transducers into equivalent mtts (following, e.g., the construction in the proof of Lemma 5.11 of [7]). We obtain two total mtts that are of LSOI. Consider first that these transducers \(M_{1},M_{2}\) are not equivalent, i.e., there exists some input tree \(s\) such that the output \(M_{1}(s)\) of \(M_{1}\) is not equal to the output \(M_{2}(s)\) of \(M_{2}\). We construct a new transducer \(M\) which takes as input trees of the form \(a^{n}(s^{\prime})\). It outputs a full binary tree of height \(n\), at the leaves of which are all possible cons-like lists of the trees \(M_{1}(s^{\prime})\) and \(M_{2}(s^{\prime})\). This implies that the translation of \(M\) is not of LSOI (in particular, taking \(s^{\prime}=s\)). On the other hand, if \(M_{1}\) and \(M_{2}\) are equivalent, then all those cons-like trees are equal and hence \(M\) is of LSOI.
## 2 Preliminaries
The set \(\{0,1,\ldots\}\) of natural numbers is denoted by \(\mathbb{N}\). For \(k\in\mathbb{N}\) we denote by \([k]\) the set \(\{1,\ldots,k\}\); thus \([0]=\emptyset\). A ranked alphabet (set) consists of an alphabet (set) \(\Sigma\) together with a mapping \(\operatorname{rank}_{\Sigma}:\Sigma\to\mathbb{N}\) that assigns to each symbol \(\sigma\in\Sigma\) a natural number called its "rank". We will write \(\sigma^{(k)}\in\Sigma\) to denote that \(\sigma\in\Sigma\) and \(\operatorname{rank}_{\Sigma}(\sigma)=k\). By \(\Sigma^{(k)}\) we denote the symbols of \(\Sigma\) that have rank \(k\).
The set \(T_{\Sigma}\) of (finite, ranked, ordered) trees over \(\Sigma\) is the smallest set \(S\) such that if \(\sigma\in\Sigma^{(k)}\), \(k\geqslant 0\), and \(s_{1},\ldots,s_{k}\in S\), then also \(\sigma(s_{1},\ldots,s_{k})\in S\). We will write \(\sigma\) instead of \(\sigma()\). For a tree \(s=\sigma(s_{1},\ldots,s_{k})\) with \(\sigma\in\Sigma^{(k)}\), \(k\geqslant 0\), and \(s_{1},\ldots,s_{k}\in T_{\Sigma}\), we define the set \(V(s)\subseteq\mathbb{N}^{*}\) of nodes of \(s\) as \(\{\varepsilon\}\cup\{iu\mid i\in[k],u\in V(s_{i})\}\). Thus, \(\varepsilon\) denotes the root node of \(s\), and for a node \(u\), \(ui\) denotes the \(i\)-th child of \(u\). For \(u\in V(s)\) we denote by \(s[u]\) the label of \(u\) in \(s\) and by \(s/u\) the subtree rooted at \(u\). Formally, let \(s=\sigma(s_{1},\ldots,s_{k})\) and define \(s[\varepsilon]=\sigma\), \(s[iu]=s_{i}[u]\), \(s/\varepsilon=s\), and \(s/iu=s_{i}/u\) for \(\sigma\in\Sigma^{(k)}\), \(k\geqslant 0\), \(s_{1},\ldots,s_{k}\in T_{\Sigma}\), \(i\geqslant 1\) and \(u\in\mathbb{N}\) such that \(iu\in V(s)\).
We fix two special sets of symbols: the set \(X=\{x_{1},x_{2},\ldots\}\) of variables and the set \(Y=\{y_{1},y_{2},\ldots\}\) of parameters. For \(k\geqslant 1\) let \(X_{k}=\{x_{1},\ldots,x_{k}\}\) and \(Y_{k}=\{y_{1},\ldots,y_{k}\}\). Let \(A\) be a set that is disjoint from \(\Sigma\). Then the set \(T_{\Sigma}(A)\) of trees over \(\Sigma\) indexed by \(A\) is defined as \(T_{\Sigma^{\prime}}\) where \(\Sigma^{\prime}=\Sigma\cup A\) and \(\operatorname{rank}_{\Sigma^{\prime}}(a)=0\) for \(a\in A\) and \(\operatorname{rank}_{\Sigma^{\prime}}(\sigma)=\operatorname{rank}_{\Sigma}\) for \(\sigma\in\Sigma\).
For a ranked alphabet \(\Sigma\) and a set \(A\) the ranked set \(\langle\Sigma,A\rangle\) consists of all symbols \(\langle\sigma,a\rangle\) with \(\sigma\in\Sigma\) and \(a\in A\); the rank of \(\langle\sigma,a\rangle\) is defined as \(\operatorname{rank}_{\Sigma}(\sigma)\).
### Tree Substitution
Let \(\Sigma\) be a ranked alphabet and let \(s,t\in T_{\Sigma}\). For \(u\in V(s)\) we define the tree \(s[u\gets t]\) that is obtained from \(s\) by replacing the subtree rooted at node \(u\) by the tree \(t\). Let \(\sigma_{1},\ldots,\sigma_{n}\in\Sigma^{(0)}\), \(n\geqslant 1\) be pairwise distinct symbols and let \(t_{1},\ldots,t_{n}\in T_{\Sigma}\). Then \(t[\sigma_{i}\gets t_{i}\mid i\in[n]]\) is the tree obtained from \(t\) by replacing each occurrence of \(\sigma_{i}\) by the tree \(t_{i}\). We have defined trees as particular strings, and this is just ordinary string substitution (because we only replace symbols of rank zero). We refer to this as "first-order tree substitution".
In "second-order tree substitution" it is possible to replace internal nodes \(u\), i.e., subtrees of \(s\) by new trees. These new trees use parameters to indicate where the "dangling" subtrees \(s/ui\) of the node \(u\) are to be placed. Let \(\sigma_{1}\in\Sigma^{(k_{1})},\ldots,\sigma_{n}\in\Sigma^{(k_{n})}\) be pairwise distinct symbols with \(n\geqslant 1\) and \(k_{1},\ldots,k_{n}\in\mathbb{N}\) and let \(t_{i}\in T_{\Sigma}[T_{k_{i}}]\) for \(i\in[n]\). Let \(s\in T_{\Sigma}\). Then \(s[\![\sigma_{i}\gets t_{i}\mid i\in[n]]\!]\) denotes the tree that is inductively defined as (abbreviating \([\![\sigma_{i}\gets t_{i}\mid i\in[n]]\!]\) by \([\![\ldots]\!]\)) follows: for \(s=\sigma(s_{1},\ldots,s_{k})\), if \(\sigma\notin\{\sigma_{1},\ldots,\sigma_{n}\}\) then \(s[\![\ldots]\!]=\sigma(s_{1}[\![\ldots]\!],\ldots,s_{k}[\![\ldots]\!])\) and if \(\sigma=\sigma_{j}\) for some \(\nu\in[n]\) then \(s[\![\ldots]\!]=t_{\nu}[\![y_{j}\gets s_{j}[\![\ldots]\!]\mid j\in[k]]\).
### Macro Tree Transducers
A _tree automaton_ \(A\) is given by a tuple \((P,\Sigma,h)\) where \(P\) is a finite set of states, \(\Sigma\) is a ranked alphabet, and \(h\) is a collection of mappings \(h_{\sigma}:P^{k}\to P\) where \(\sigma\in\Sigma^{(k)}\) and \(k\geqslant 0\). The extension of \(h\) to a mapping \(\hat{h}:T_{\Sigma}\to P\) is defined recursively by \(\hat{h}(\sigma(s_{1},\ldots,s_{k}))=h_{\sigma}(\hat{h}(s_{1}),\ldots,\hat{h}(s_{k}))\) for every \(\sigma\in\Sigma^{(k)}\), \(k\geqslant 0\), and \(s_{1},\ldots,s_{k}\in T_{\Sigma}\). For every \(p\in P\) we define the set \(L_{p}\) of trees in \(T_{\Sigma}\) as \(\{s\in T_{\Sigma}\mid\hat{h}(s)=p\}\).
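For concreteness, such a tree automaton can be sketched in a few lines of Python, with trees as nested tuples and the family of mappings \(h\) as a dictionary; the parity example at the end is our own illustration.

```python
# Illustrative (total, deterministic, bottom-up) tree automaton (P, Sigma, h).
def run_automaton(h, s):
    """Compute h_hat(s) by structural recursion, following the definition."""
    symbol, children = s[0], s[1:]
    return h[symbol, tuple(run_automaton(h, c) for c in children)]

# Example: parity of the number of a-leaves, over sigma^(2), a^(0), b^(0).
h = {('a', ()): 'odd', ('b', ()): 'even',
     ('sigma', ('odd', 'odd')): 'even', ('sigma', ('odd', 'even')): 'odd',
     ('sigma', ('even', 'odd')): 'odd', ('sigma', ('even', 'even')): 'even'}
print(run_automaton(h, ('sigma', ('a',), ('sigma', ('a',), ('b',)))))  # -> 'even'
```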
A (total, deterministic) _macro tree transducer with look-ahead_ ("mtt\({}^{\mathrm{R}}\)") \(M\) is given by a tuple \((Q,P,\Sigma,\Delta,q_{0},R,h)\), where
* \(Q\) is a ranked alphabet of _states_,
* \(\Sigma\) and \(\Delta\) are ranked alphabets of _input_ and _output symbols_,
* \((P,\Sigma,h)\) is a tree automaton (called the _look-ahead automaton_ of \(M\)),
* \(q_{0}\in Q^{(0)}\) is the _initial state_,
* and \(R\) is the _set of rules_, where for each \(q\in Q^{(m)}\), \(m\geqslant 0\), \(\sigma\in\Sigma^{(k)},k\geqslant 0\), and \(p_{1},\ldots,p_{k}\in P\) there is exactly one rule of the form \[\langle q,\sigma(x_{1}:p_{1},\ldots,x_{k}:p_{k})\rangle(y_{1},\ldots,y_{m})\to t\] with \(t\in T_{\Delta\cup\langle Q,X_{k}\rangle}(Y_{m})\). The right-hand side \(t\) of such a rule is denoted by \(\operatorname{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)\).

The semantics of an mtt\({}^{\mathrm{R}}\) \(M\) (as above) is defined as follows. We define the derivation relation \(\Rightarrow_{M}\) as follows: for two trees \(\xi_{1},\xi_{2}\in T_{\Delta\cup\langle Q,T_{\Sigma}\rangle}(Y)\), \(\xi_{1}\Rightarrow_{M}\xi_{2}\) if there exists a node \(u\) in \(\xi_{1}\) with \(\xi_{1}/u=\langle q,s\rangle(t_{1},\ldots,t_{m})\), \(q\in Q\), \(s=\sigma(s_{1},\ldots,s_{k})\in T_{\Sigma}\), and \(t_{1},\ldots,t_{m}\in T_{\Delta\cup\langle Q,T_{\Sigma}\rangle}\), and \(\xi_{2}=\xi_{1}[u\leftarrow\zeta]\) where \(\zeta\) equals \[\operatorname{rhs}_{M}(q,\sigma,\langle\hat{h}(s_{1}),\ldots,\hat{h}(s_{k})\rangle)[\![\langle q^{\prime},x_{i}\rangle\leftarrow\langle q^{\prime},s_{i}\rangle\mid q^{\prime}\in Q,i\in[k]]\!][y_{j}\gets t_{j}\mid j\in[m]].\] Since \(\Rightarrow_{M}\) is confluent and terminating, there is for every \(\xi_{1}\) a unique tree \(\xi_{2}\) in normal form such that \(\xi_{1}\Rightarrow_{M}^{*}\xi_{2}\). For every \(q\in Q^{(m)}\), \(m\geqslant 0\) and \(s\in T_{\Sigma}\) we define the \(q\)_-translation of \(s\)_, denoted by \(M_{q}(s)\), as the unique tree \(t\) in \(T_{\Delta}(Y_{m})\) such that \(\langle q,s\rangle(y_{1},\ldots,y_{m})\Rightarrow_{M}^{*}t\). We denote the translation realized by \(M\) also by \(M\), i.e., \(M=M_{q_{0}}\) and for every \(s\in T_{\Sigma}\), \(M(s)=M_{q_{0}}(s)\) is the unique tree \(t\in T_{\Delta}\) such that \(\langle q_{0},s\rangle\Rightarrow_{M}^{*}t\).
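These semantics can equivalently be computed by a straightforward recursive evaluator, sketched below in Python under our own encoding: trees are nested tuples, `rules` maps a state, an input symbol, and the tuple of look-ahead states of the children to a right-hand side, `('call', q, i, args)` encodes \(\langle q,x_{i}\rangle(\ldots)\), and `('y', j)` encodes \(y_{j}\). The doubling transducer at the end is an illustrative example, not taken from this paper.

```python
# Illustrative evaluator for total deterministic mtts with look-ahead.
def run_la(h, s):
    """Look-ahead state h_hat(s) of an input tree s."""
    return h[s[0], tuple(run_la(h, c) for c in s[1:])]

def M_q(q, s, params, rules, h):
    """The q-translation M_q(s), with parameter values `params`."""
    children = s[1:]
    la = tuple(run_la(h, c) for c in children)
    return eval_rhs(rules[q, s[0], la], children, params, rules, h)

def eval_rhs(t, children, params, rules, h):
    if t[0] == 'y':                      # parameter y_j (1-indexed)
        return params[t[1] - 1]
    if t[0] == 'call':                   # state call <q', x_i>(t_1, ..., t_m)
        _, q2, i, args = t
        vals = [eval_rhs(a, children, params, rules, h) for a in args]
        return M_q(q2, children[i - 1], vals, rules, h)
    return (t[0],) + tuple(eval_rhs(c, children, params, rules, h) for c in t[1:])

# Example: translate a^n(e) into b^(2n)(e), with a single look-ahead state 'p'.
h = {('a', ('p',)): 'p', ('e', ()): 'p'}
rules = {('q0', 'a', ('p',)): ('b', ('b', ('call', 'q0', 1, []))),
         ('q0', 'e', ()): ('e',)}
print(M_q('q0', ('a', ('a', ('e',))), [], rules, h))
# -> ('b', ('b', ('b', ('b', ('e',)))))
```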
## 3 Depth proper normal form
Because the LSHI and LHI properties of MTTs pertain to the growth of the height of the output, these properties are linked to the nesting of state calls in the output. For example, a MTT that is finite-nesting is trivially LSHI; the converse, however, is not true, because some state calls do not always increase the height of the output tree. The goal of this normal form is to remove those states that have a bound on the amount of height they contribute to the output, i.e. the depth at which they use their parameters. This normal form will allow us to characterize LSHI and LHI by finite-nesting constraints.
To obtain this _Depth-proper normal form_ we first show in section 3.1 how to remove those state calls which have a bound on the depth at which a parameter occurs in their output (we call them _improper state calls_). This process involves adding new look-ahead states which predict the paths (of bounded depth) at which these parameters will appear in the output. This allows us to compute the branches of the output leading to the parameter early; we then add helper states which compute the other branches of the output.
In order to remove _improper state calls_, we have altered the transducer in a way that may introduce new _improper state calls_ (similarly to the normal form in [8]). In subsection 3.2 we show that by iterating the process of removing improper state calls we eventually obtain a MTT without any improper state calls, i.e. a MTT in _depth-proper normal form_.
### Removing improper state calls
First, we need to recognise that a given state call may never appear in any derivation if the rules of the MTT do not allow it. We use the general notion of "reachability" defined in [7] to formalise which improper state calls actually appear in practice, and recall the following definition:
Let \(M=(Q,P,\Sigma,\Delta,q_{0},R,h)\) be a \(\mathrm{MTT}^{\mathrm{R}}\). The extension of \(M\), denoted by \(\hat{M}\), is the \(\mathrm{MTT}^{\mathrm{R}}\) \((Q,P,\hat{\Sigma},\hat{\Delta},q_{0},\hat{R},\hat{h})\), where \(\hat{\Sigma}=\Sigma\cup\{p^{(0)}\mid p\in P\}\), \(\hat{\Delta}=\Delta\cup\langle\!\langle Q,P\rangle\!\rangle\), \(\hat{R}=R\cup\{\langle q,p\rangle(y_{1},\ldots,y_{m})\rightarrow\langle\!\langle q,p\rangle\!\rangle(y_{1},\ldots,y_{m})\mid\langle q,p\rangle\in\langle Q,P\rangle^{(m)}\}\), \(\hat{h}_{p}()=p\) for \(p\in P\), and \(\hat{h}_{\sigma}(p_{1},\ldots,p_{k})=h_{\sigma}(p_{1},\ldots,p_{k})\) for \(\sigma\in\Sigma^{(k)},k\geq 0\), and \(p_{1},\ldots,p_{k}\in P\).
\(\hat{M}\) extends the behavior of \(M\) to process symbols of the look-ahead. These symbols \(p\in P\) take on the same role as subtrees \(t\in L_{p}\), and intuitively, if \(\langle\!\langle q,p\rangle\!\rangle\) appears in \(\hat{M}(s)\) with \(s\in T_{\Sigma}\), then replacing \(p\) with \(t_{p}\in L_{p}\) in \(s\) yields a tree \(s^{\prime}\) such that \(\langle q,t_{p}\rangle\) appears in the derivation of \(M(s^{\prime})\). We say that a state call \(\langle\!\langle q,p\rangle\!\rangle\) is _reachable_ if and only if there exists a tree \(s\in T_{\Sigma}(P)\) such that \(\langle\!\langle q,p\rangle\!\rangle\) appears in \(\hat{M}(s)\).
For any given state where a parameter appears only at bounded depth in the output, there are only finitely many output tree branches leading to this parameter. With the help of our look-ahead, we can guess from the input tree which of these branches will be produced, and forgo the state call by immediately producing them.
The _least common output form with parameters_ \([s]_{Y^{\prime}}\in T_{\Delta\cup\langle Q,X_{k}\rangle\cup\{\$^{(0)}\}}(Y)\) expresses this idea of output tree branches. For each \(s\in T_{\Delta\cup\langle Q,X_{k}\rangle}(Y)\), it is obtained from \(s\) by replacing all occurrences of subtrees that do not contain any parameter \(y\in Y^{\prime}\) with \(\$\). Let us write \(\mathsf{ps}(s)\subseteq Y\) for the set of parameters occurring in \(s\in T_{\Delta\cup\langle Q,X\rangle}(Y)\). Then we can define \([s]_{Y^{\prime}}\) inductively as follows:
\[[s]_{Y^{\prime}}=\begin{cases}y_{i}&\text{if }s=y_{i}\in Y^{\prime}\\ \delta([s_{1}]_{Y^{\prime}},\ldots,[s_{n}]_{Y^{\prime}})&\text{if }\mathsf{ps}(s)\cap Y^{\prime}\neq\emptyset\text{ and }s=\delta(s_{1},\ldots,s_{n})\\ \$&\text{if }\mathsf{ps}(s)\cap Y^{\prime}=\emptyset\end{cases}\]
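Over the nested-tuple tree encoding used in our earlier sketches, \([s]_{Y^{\prime}}\) can be computed as follows; this is our own illustration, with `('y', j)` encoding \(y_{j}\) and `('$',)` encoding \(\$\).

```python
# Illustrative computation of the least common output form [s]_{Y'}.
def lcof(s, Yp):
    if s[0] == 'y':                        # parameter leaf y_j
        return s if s[1] in Yp else ('$',)
    kids = tuple(lcof(c, Yp) for c in s[1:])
    if all(k == ('$',) for k in kids):     # no parameter of Y' occurs below s
        return ('$',)
    return (s[0],) + kids

t = ('d', ('g', ('a',), ('y', 1)), ('b',))
print(lcof(t, {1}))   # -> ('d', ('g', ('$',), ('y', 1)), ('$',))
```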
For convenience, we define \(\mathrm{pOut}_{M}((q,Y^{\prime}),p)\) to be the set \(\big\{[M_{q}(s)]_{Y^{\prime}}\mid s\in L_{p}\big\}\). For simplicity, we also write \(\mathrm{pOut}_{M}((q,y),p)\) instead of \(\mathrm{pOut}_{M}((q,\{y\}),p)\) and \([s]_{y}\) instead of \([s]_{\{y\}}\). If a state call \(\langle r,x_{i}\rangle\) appears in a right-hand side \(\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)\) and \(y\in Y\) is such that \(\mathrm{pOut}((r,y),p_{i})\) is finite, we call the state call \(\langle r,x_{i}\rangle\) "improper" and the parameter \(y\) one of its "depth-bounded" parameters.
If parameters could appear at arbitrary depth during the derivation of the output, then by the non-deletion property, they would also be able to appear at arbitrary depth in the output. For this reason, improperness of state calls propagates through rule right-hand sides to the other state calls making use of their depth-bounded parameters. The following lemma formalises this fact.
Let \(M=(Q,P,\Sigma,\Delta,q_{0},R,h)\) be a nondeleting mtt with look-ahead. Let \(q\in Q\), \(\sigma\in\Sigma^{(k)}\), \(k\geq 1\), \(y\in Y\), and \(p,p_{1},\ldots,p_{k}\in P\) such that \(p=h_{\sigma}(p_{1},\ldots,p_{k})\) and \(L_{p_{j}}\neq\emptyset\) for every \(j\in[k]\). If \(\langle r,x_{i}\rangle\in\langle Q,X_{k}\rangle\) occurs in \([\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)]_{y}\) at position \(u\), with \(r\in Q^{(m)}\), and \(\mathrm{pOut}((q,y),p)\) is finite, then for all \(l\in[m]\), \([\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)]_{y}[ul]\neq\$\) implies that \(\mathrm{pOut}((r,y_{l}),p_{i})\) is finite.
For \(j\in[k]-\{i\}\) fix trees \(s_{j}\in T_{\Sigma}\) with \(\hat{h}(s_{j})=p_{j}\). Let \(\xi=\zeta[\![\ldots]\!]\) with \(\zeta=\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)\) and \([\![\ldots]\!]=[\![\langle q^{\prime},x_{j}\rangle\gets M_{q^{\prime}}(s_{j})\mid q^{\prime}\in Q,j\in[k]-\{i\}]\!]\). By the definition of \(\mathrm{pOut}_{M}((q,y),p)\), Lemma 3.5 from [8], and the associativity of second-order tree substitutions, the set \(O=\{[M_{q}(\sigma(s_{1},\ldots,s_{k}))]_{y}\mid s_{i}\in L_{p_{i}}\}=\{[\xi[\![s_{i}]\!]]_{y}\mid s_{i}\in L_{p_{i}}\}\), where \([\![s_{i}]\!]\) denotes the substitution \([\![\langle q^{\prime},x_{i}\rangle\gets M_{q^{\prime}}(s_{i})\mid q^{\prime}\in Q]\!]\), is a subset of \(\mathrm{pOut}_{M}((q,y),p)\) and hence finite. Since \(M\) is nondeleting, both \([\![\ldots]\!]\) and \([\![s_{i}]\!]\) are nondeleting by Lemma 3.11 from [8]. Hence,
by Lemma 2.1 from [8], \(\xi\) has a subtree \(\langle r,x_{i}\rangle(\xi_{1},\ldots,\xi_{m})\), where \(m=\mathrm{rank}_{Q}(r)\). Again by the same Lemma 2.1, \(\xi[\![s_{i}]\!]\) has the subtree \(\langle r,x_{i}\rangle(\xi_{1},\ldots,\xi_{m})[\![s_{i}]\!]=M_{r}(s_{i})[y_{j}\leftarrow\xi_{j}[\![s_{i}]\!]\mid j\in[m]]\). Letting \(Y^{\prime}\) be the set \(\{y_{l}\in Y\mid[\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)]_{y}[ul]\neq\$\}\), we thus have, for every \(t\in\mathrm{pOut}_{M}((r,Y^{\prime}),p_{i})\) (i.e., \(t=[M_{r}(s_{i})]_{Y^{\prime}}\) for some \(s_{i}\in L_{p_{i}}\)), that the tree \(t[y_{j}\leftarrow\xi_{j}[\![s_{i}]\!]\mid j\in[m]]\) is a subtree of \([\xi[\![s_{i}]\!]]_{y}\), i.e., it is a subtree of a tree in the finite set \(O\). This implies finiteness of \(\mathrm{pOut}_{M}((r,Y^{\prime}),p_{i})\), which is equivalent to finiteness of \(\mathrm{pOut}_{M}((r,y_{l}),p_{i})\) for all \(y_{l}\in Y^{\prime}\). \(\blacktriangle\)
Recall the definition of \(\mathrm{pOut}((q,y),p)\) given above, which is the set \(\{[M_{q}(s)]_{y}\mid s\in L_{p}\}\). We can now define what it means for a given \(\mathrm{MTT}^{\mathrm{R}}\) to be depth-proper.
\(\blacktriangleright\)**Definition 3.3**.: Let \(M=(Q,P,\Sigma,\Delta,R,h)\) be a \(\mathrm{MTT}^{\mathrm{R}}\). \(M\) is depth-proper if and only if, for every reachable state call \(\langle\!\langle q,p\rangle\!\rangle\), the parameters of \(q\) can appear at arbitrary depth in the outputs \(M_{q}(L_{p})\). Formally: \(\forall q\in Q,\forall p\in P\), if \(\langle\!\langle q,p\rangle\!\rangle\) is reachable then \(\forall y\in Y\), \(\mathrm{pOut}((q,y),p)\) is infinite.
For each \(p\in P\), we define \(F_{p}=\{(q,y)\in Q\times Y\mid\mathrm{pOut}_{M}((q,y),p)\text{ is finite}\}\), and \(F_{p}^{1}=\{q\in Q\mid\exists y\in Y,(q,y)\in F_{p}\}\). For convenience, for all \(q\in Q\), we write \(F_{p}(q)\) for the set \(\{y\in Y\mid(q,y)\in F_{p}\}\).
We introduce the notation \(\mathrm{pOut}^{f}((q,Y),p)\), which is the same as \(\mathrm{pOut}\) except that every \(\$\) sign is replaced by a set of (indices of) parameters of \(q\). Formally, it is defined as \(\{s\in T_{\Sigma\cup\mathcal{P}([\mathrm{rank}_{M}(q)])^{(0)}}\mid s[u\leftarrow\$\mid s/u\in\mathcal{P}([\mathrm{rank}_{M}(q)])]\in\mathrm{pOut}((q,Y),p)\}\). Let \(\Phi_{p}\) be the set of mappings \(\varphi\) from \(F_{p}^{1}\) to \(T_{\Delta\cup\mathcal{P}(\mathbb{N})^{(0)}}(Y)\) such that \(\varphi(r)\in\mathrm{pOut}_{M}^{f}((r,F_{p}(r)),p)\). \(\Phi_{p}\) is finite for every \(p\in P\) because the sets \(\mathrm{pOut}_{M}((r,F_{p}(r)),p)\) are by definition finite for \(r\in F_{p}^{1}\), and the cardinality of \(\mathrm{pOut}^{f}\) is within a finite factor of that of \(\mathrm{pOut}\).
We will now construct an mtt \(\pi(M)\) equivalent to \(M\) in which no improper state call occurs in a right-hand side. As stated previously, the look-ahead is enriched with new information, so that we can guess the branches leading to depth-bounded parameters and skip improper state calls. But we still need to reconstruct the rest of the output tree afterwards. This is achieved by introducing helper states \(\hat{Q}\). Every call to a helper state produces a missing branch and uses only some of the non-depth-bounded parameters. To keep the non-deleting property, a helper state must not be given the parameters it does not use. Thus, the look-ahead is augmented with information about which non-depth-bounded parameter is needed in which branch.
\(\blacktriangleright\)**Definition 3.4**.: Let \(M=(Q,P,\Sigma,\Delta,R,h)\). Then the \(\mathrm{mtt}\)\(\pi(M)=(Q^{\prime},P^{\prime},\Sigma,\Delta,R^{\prime},h^{\prime})\) is given as follows. Let \(Q^{\prime}\) be \(Q\cup\hat{Q}\) where
\(\hat{Q}=\left\{(p,r,t/v,u)^{(n)}\mid p\in P,r\in F_{p}^{1},t\in\mathrm{pOut}((r,F_{p}(r)),p),t/vu=\$,n\in[\mathrm{rank}_{M}(r)]\right\}\). Let \(P^{\prime}\) be \(\{(p,\varphi)\mid\varphi\in\Phi_{p}\}\). For every \(q\in Q\), \(\sigma\in\Sigma\), and \((p_{1},\varphi_{1}),\ldots,(p_{k},\varphi_{k})\in P^{\prime}\), let the rule
\(\langle q,\sigma(x_{1}:(p_{1},\varphi_{1}),\ldots,x_{k}:(p_{k},\varphi_{k}))\rangle(y_{1},\ldots,y_{m})\to\zeta_{q}\Theta\)
be in \(R^{\prime}\), where \(\zeta_{q}=\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)\) and \(\Theta\) is the second-order tree substitution given as \([\![\langle r,x_{i}\rangle\gets t[u\leftarrow\langle(p_{i},r,t,u),x_{i}\rangle(y_{l},\,l\in t/u)\mid t/u\in\mathcal{P}(\mathbb{N})]\mid r\in F_{p_{i}}^{1},\,t=\varphi_{i}(r),\,i\in[k]]\!]\). For every \((p,r,t,u)\in\hat{Q}\) and \(\sigma\in\Sigma\), let the rule
\(\langle(p,r,t,u),\sigma(x_{1}:(p_{1},\varphi_{1}),\ldots,x_{k}:(p_{k},\varphi_{k}))\rangle(y_{1},\ldots,y_{l})\to\phi(\zeta_{r},t,u)\)
be in \(R^{\prime}\), where \(\zeta_{r}=\mathsf{rhs}_{M}(r,\sigma,\langle p_{1},\ldots,p_{k}\rangle)\) and the function \(\phi(\zeta,t,u)\) is defined by
\(\phi(\zeta,\$,\varepsilon)=\zeta\)
\(\phi(\langle q,x_{j}\rangle,t,u)=\langle(p_{j},q,t,u),x_{j}\rangle(y_{1}, \ldots,y_{l})\) where \(t\neq\$\)
\(\phi(\delta(\zeta_{1},\ldots,\zeta_{n}),\delta(t_{1},\ldots,t_{n}),iu)=\phi( \zeta_{i},t_{i},u)\).
Note that \(\phi(\zeta,t,u)\) may be partial. No rule is constructed if its right-hand side is not defined. In addition, let \(h^{\prime}_{\sigma}((p_{1},\varphi_{1}),\ldots,(p_{k},\varphi_{k}))=(p,\varphi)\), where \(p=h_{\sigma}(p_{1},\ldots,p_{k})\) and \(\varphi=\{q\mapsto[\zeta_{q}\Theta^{\prime}]\mid\ldots\}\).
we get \(M_{r}(s)=\zeta[\![\langle q,x_{i}\rangle\gets M_{q}(s_{i})]\!]\). We conclude by repeated applications of the definition of second-order substitutions: \(\delta(t_{1},\ldots,t_{n})[\![\delta^{\prime}\gets t^{\prime}]\!]=\delta(t_{1}[\![\delta^{\prime}\gets t^{\prime}]\!],\ldots,t_{n}[\![\delta^{\prime}\gets t^{\prime}]\!])\) if \(\delta\neq\delta^{\prime}\) along the path to node \(u\), proving \(M_{r}(s)[u]=\zeta[\![\langle q,x_{i}\rangle\gets M_{q}(s_{i})]\!][u]=\zeta[u]\), and thus \(t[u]=\zeta[u]\), proving the contrapositive. We can now show that \(\phi\) is well-defined: by our observation, any call of \(\phi\) such that \(\zeta[u]\neq t[u]\) is either such that one of the two other rules applies (if \(v=u\)), or impossible (if \(v<u\), then the recursive calls would have stopped sooner, at \(v\)).
\(\star\)
**Lemma 3.5**: Let \(M=(Q,P,\Sigma,\Delta,R,h)\) be a nondeleting mtt with look-ahead. For \(q\in Q\) and \(s\in T_{\Sigma}\), we have \(\pi(M)_{q}(s)=M_{q}(s)\).
Proof. We prove the statement by induction on \(s\in T_{\Sigma}\), using the following claim (Claim 1): let \(t=[\![M_{r}(s)]\!]\) and \(r\in F_{p}\) with \(p=h(s)\); then \(t[\![u\leftarrow\pi(M)_{(p,r,t,u)}(s)\mid t/u=\$]\!]=M_{r}(s)\). More generally, \((t/v)[\![u\leftarrow\pi(M)_{(p,r,t/v,u)}(s)\mid t/vu=\$]\!]=M_{r}(s)/v\) for every path \(v\) of \(t\).
Let \(s=\sigma(s_{1},\ldots,s_{k})\) with \(\sigma\in\Sigma^{(k)}\) and \(s_{1},\ldots,s_{k}\in T_{\Sigma}\), and \(h^{\prime}(s_{i})=(p_{i},\varphi_{i})\) for every \(i\in[\![k]\!]\). From the induction hypothesis, we have \(\pi(M)_{q}(s_{i})=M_{q}(s_{i})\) for every \(q\in Q\) and \(i\in[\![k]\!]\). In addition, \(p_{i}=h(s_{i})\) and \(\varphi_{i}(r)=[\![M_{r}(s_{i})]\!]\) holds for \(r\in F_{p_{i}}\) and \(i\in[\![k]\!]\) because of Claim 1. By the definition of \(\pi(M)\), we have
\[\begin{split}\pi(M)_{q}(s)&=\mathsf{rhs}_{\pi(M)}(q,\sigma,\langle(p_{1},\varphi_{1}),\ldots,(p_{k},\varphi_{k})\rangle)[\![\langle r,x_{i}\rangle\leftarrow\pi(M)_{r}(s_{i})\mid r\in Q^{\prime},i\in[k]]\!]\\ &=\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)[\![\langle r,x_{i}\rangle\gets t[u\leftarrow\langle(p_{i},r,t,u),x_{i}\rangle\mid t/u=\$]\mid r\in F_{p_{i}}^{1},t=\varphi_{i}(r),i\in[k]]\!]\\ &\qquad\qquad[\![\langle q^{\prime},x_{i}\rangle\leftarrow\pi(M)_{q^{\prime}}(s_{i})\mid q^{\prime}\in Q^{\prime},i\in[k]]\!]\\ &=\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)[\![\langle r,x_{i}\rangle\gets t[u\leftarrow\pi(M)_{(p_{i},r,t,u)}(s_{i})\mid t/u=\$]\mid r\in F_{p_{i}}^{1},t=[\![M_{r}(s_{i})]\!],i\in[k]]\!]\\ &\qquad\qquad[\![\langle q^{\prime},x_{i}\rangle\gets M_{q^{\prime}}(s_{i})\mid q^{\prime}\in Q\setminus F_{p_{i}}^{1},i\in[k]]\!]\\ &=\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)[\![\langle r,x_{i}\rangle\gets M_{r}(s_{i})\mid r\in F_{p_{i}}^{1},i\in[k]]\!][\![\langle q^{\prime},x_{i}\rangle\gets M_{q^{\prime}}(s_{i})\mid q^{\prime}\in Q\setminus F_{p_{i}}^{1},i\in[k]]\!]\\ &=\mathsf{rhs}_{M}(q,\sigma,\langle p_{1},\ldots,p_{k}\rangle)[\![\langle q^{\prime},x_{i}\rangle\gets M_{q^{\prime}}(s_{i})\mid q^{\prime}\in Q,i\in[k]]\!]\\ &=M_{q}(s),\end{split}\] where the second equality uses the definition of \(\Theta\), the third uses the induction hypothesis, and the fourth uses Claim 1.
\(\star\)
### Iterated removal of improper states terminates
Having proved that our transformation preserves the semantics of MTTs, we still need to show that it yields a depth-proper MTT as output. This isn't actually true, and we will need to iterate our transformation to actually obtain a depth-proper MTT. To understand why, remember that our transformation splits each initial look-ahead state into multiple look-ahead states that recognise a partition of the same language (a look-ahead state \(p\) is split into the states \((p,\phi)\)). A state that was depth-proper in \(M\) with respect to look-ahead state \(p\) may then have only finitely many least common output forms with parameters with respect to some look-ahead
state \((p,\phi)\) in \(\pi(M)\). Another problem is that helper states from \(\hat{Q}\) may not be depth-proper when they are created.
What we will show instead is that iterating \(\pi\) yields a fixed point of \(\pi\) after only finitely many steps, and that any fixed point of \(\pi\) is indeed depth-proper. The proof of termination is done by showing, with a downwards induction, that for any strictly positive arity, helper states of this arity eventually stop being introduced by the iteration of \(\pi\). When we eventually introduce only parameter-less helper states, which are by definition already depth-proper, an adaptation of the argument from [8] can be used.
Let then \(M=M_{0}\) be a MTT, and define \((M_{n})_{n\in\mathbb{N}}\) by the relation \(M_{n+1}=\pi(M_{n})\). Also let \(M_{n}=(Q_{n},P_{n},\Sigma,\Delta,R_{n},h_{n})\) and \(d\geq 2\). Assuming that \(\forall e\geq d,\forall n\in\mathbb{N},Q_{n}^{(e)}\subseteq Q_{0}^{(e)}\), we will show that \(\exists n_{0}\in\mathbb{N},\forall e\geq d-1,\forall n\geq n_{0},Q_{n}^{(e)}\subseteq Q_{n_{0}}^{(e)}\). First, observe that \(\forall n\in\mathbb{N},\forall p\in P_{n},\forall(p,\phi)\in P_{n+1},F_{p}\cap Q_{0}\subseteq F_{(p,\phi)}\cap Q_{0}\subseteq Q_{0}\). This is because \(\forall r\in Q_{0},\mathrm{pOut}(r,(p,\phi))\subseteq\mathrm{pOut}(r,p)\). Since \(Q_{0}\) is a finite set, every infinite sequence \((F_{p_{n}})\) with \(p_{n}\in P_{n}\) and \(p_{n+1}\) of the form \((p_{n},\phi)\) must eventually become stationary. Let \(n_{0}^{\prime}\in\mathbb{N}\) be such that \(\forall n\geq n_{0}^{\prime},\forall p\in P_{n},\forall(p,\phi)\in P_{n+1},F_{p}\cap Q_{0}=F_{(p,\phi)}\cap Q_{0}\). By the definition of \(\Theta\), there is no rule \(\langle q,\sigma(x_{1},\ldots,x_{k})\rangle\rightarrow\zeta\langle p_{1},\ldots,p_{k}\rangle\) in \(R_{n_{0}^{\prime}+1}\) such that \(\exists q\in Q_{n_{0}^{\prime}+1},\exists i,\exists y,\langle q,x_{i}\rangle\in\zeta\) and \((q,y)\in F_{p_{i}}\cap Q_{n_{0}^{\prime}}\). Furthermore, by the definition of \(\pi\), any call to a state of \(Q_{n_{0}^{\prime}}\) appearing in the right-hand side of a rule of \(R_{n}\) for \(n>n_{0}^{\prime}+1\) must appear in the right-hand side of a rule of \(R_{n_{0}^{\prime}+1}\). We now know, by the fact that \(F_{p}\cap Q_{0}=F_{(p,\phi)}\cap Q_{0}\) for \(n\geq n_{0}^{\prime}\), that there is no call to a state \(q\in Q_{0}\subseteq Q_{n_{0}^{\prime}}\) that is rewritten by \(\Theta\) in a rule from \(R_{n},n\geq n_{0}^{\prime}+1\); and because \(Q_{0}\) contains by hypothesis all states of arity \(\geq d\) and helper states come from states of strictly higher arity, no helper states of arity \(\geq d-1\) are introduced in \(Q_{n}\) for \(n\geq n_{0}:=n_{0}^{\prime}+1\), i.e. \(\forall e\geq d-1,\forall n\geq n_{0},Q_{n}^{(e)}\subseteq Q_{n_{0}}^{(e)}\).
To conclude, first observe for an arbitrary MTT \(M\) that if \(d\) is the maximal arity of any state of \(Q\), then it is already a given that \(\forall e\geq d,\forall n\in\mathbb{N},Q_{n}^{(e)}\subseteq Q_{0}^{(e)}\). Applying the previous result in an induction yields \(d\) values \(n_{1},\ldots,n_{d}\) such that \(\forall e\geq 1,\forall n\geq\sum_{i}n_{i},Q_{n}^{(e)}\subseteq Q_{\sum_{i}n_{i}}^{(e)}\). Define \(\sigma=\sum_{i}n_{i}\). We can partially apply our previous proof to \(M_{\sigma}\), yielding the existence of an \(n_{0}\in\mathbb{N}\) such that \(R_{\sigma+n_{0}}\) contains no rule \(\langle q,\sigma(x_{1},\ldots,x_{k})\rangle\rightarrow\zeta\langle p_{1},\ldots,p_{k}\rangle\) such that \(\exists q\in Q_{\sigma},\exists i,\exists y,\langle q,x_{i}\rangle\in\zeta\) and \((q,y)\in F_{p_{i}}\). Because, by construction of \(\sigma\), the new states in \(Q_{\sigma+n_{0}}\backslash Q_{\sigma}\subseteq Q_{\sigma+n_{0}}^{(0)}\) are parameter-less and hence depth-proper by definition, we finally conclude that \(M_{\sigma+n_{0}}\) is depth-proper.
## 4 Deciding LSHI
The Linear input Size to output Height Increase (LSHI) property is defined similarly to the Linear Size Increase (LSI) property from [8]:
A MTT \(M\) is of _Linear input Size to output Height Increase_ (LSHI) _if there exists a bound \(b\in\mathbb{N}\) such that, for all input tree \(t\) of size \(n\), the height of \(M(t)\) is less than \(b*n\)._
In this section we first characterize the LSHI property of MTTs under Depth-proper normal form as those that are finite-nesting, then we use this characterization to decide whether a MTT is LSHI. The finite-nesting property is defined as follows:
A MTT \(M\) is _finite-nesting_ if there is a bound \(b\in\mathbb{N}\) such that, for all input tree-context \(t[X]\) with variable \(X\), in the provisional output of \(M(t[X])\), states of \(M\) applied to \(X\) appear nested (i.e. along a same output path) at most \(b\) times.
A MTT is _infinite-nesting_ if it is not _finite-nesting_.
### LSHI characterization
This subsection is dedicated to proving the following theorem:
Any MTT in Depth-proper normal form is LSHI (Linear input Size to output Height Increase) if and only if it is finite-nesting.
We first prove that any finite-nesting MTT \(M\) in Depth-proper normal form is LSHI. The idea here is that, for each path in an output tree, each input node can only contribute a bounded number of nodes to that output path. Given an input node \(u\) and an output path \(v\), the finite-nesting property tells us that the number of occurrences of states applied to \(u\) along path \(v\) in the output is bounded by some integer \(n\). So the number of nodes along path \(v\) whose origin is \(u\) is bounded by \(n\) times the maximum size of the right-hand side of rules of \(M\). Formally:
Given a MTT \(M\) in Depth-proper normal form, if \(M\) is finite-nesting then \(M\) is LSHI.
Proof. We note \(n\in\mathbb{N}\) the nesting bound of \(M\), i.e. \(n\) is such that for all input tree-context \(C[X]\) where variable \(X\) occurs once in \(C\), along each path in the provisional output \(M(C(X))\) there are at most \(n\) occurrences of a state of \(M\) applied to \(X\).
Now we look at the provisional outputs \(M(C[X_{1},\ldots,X_{k}])\) where each variable \(X_{i}\) occurs once in \(C\), and how the height of such a provisional output increases when we substitute \(X_{1}\) with a tree-context \(\sigma(Y_{1},\ldots,Y_{m})\) where \(\sigma\) is an output tree symbol of arity \(m\).
By definition of \(n\), the height of \(M(C[X_{1},\ldots,X_{k}])\) increases, through the substitution \([X_{1}\leftarrow\sigma(Y_{1},\ldots,Y_{m})]\), by at most \(n.c\), where \(c\) is the maximum height of a right-hand-side of rule of \(M\).
For all input tree \(t\), the output \(M(t)\) can be computed from the provisional output \(M(X)\) by successive substitutions of the form \([X\leftarrow\sigma(Y_{1},\ldots,Y_{m})]\) where \(\sigma\) is an output tree symbol of arity \(m\). The number of substitutions needed is the size \(|t|\) of \(t\). The height of \(M(X)\) is constant. The height of the provisional output increases by at most \(n.c\) after each substitution, so the height of \(M(t)\) is at most linear in the size of input tree \(t\). Therefore \(M\) is LSHI.
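For concreteness, telescoping the \(|t|\) substitutions makes the linear bound explicit (a routine step we spell out, with \(H(M(X))\) the constant height of the initial provisional output):

\[H(M(t))\;\leq\;H(M(X))+n.c\,|t|\;\leq\;\left(H(M(X))+n.c\right)|t|\,\]

so \(b=H(M(X))+n.c\) witnesses the LSHI property.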
Proving the converse implication requires more work. The general idea is that any infinite-nesting MTT must have a kind of loop we call _nesting generator loop_. Such a loop can be pumped to contradict the LSHI property.
### Nesting generator loops
Nesting generator loops are defined so as to be the most general type of loop which can induce infinite-nesting: a generator state \(q_{0}\) calls itself and a generated state \(q\) by nesting them, and the generated state \(q\) calls itself on the same loop.
A _nesting generator loop_ in MTT \(M\) is given by a tuple \((C[X],p,q_{0},q)\) where:
* \(C[X]\) is an input tree-context with variable \(X\)
* \(p\) is a look-ahead state such that, noting \(L_{p}\) the set of input trees with look-ahead \(p\), there exists \(t\in L_{p}\) with \(C(t)\in L_{p}\)
* \(q_{0}\) and \(q\) are states of \(M\) such that there exists an input tree-context \(C_{0}[X]\) so that \(q_{0}(X)\) appears in the provisional output of \(M(C_{0}[X])\) when \(X\) has look-ahead \(p\)
* _We have the following nesting properties in the provisional outputs of_ \(q\) _and_ \(q_{0}\) _on input_ \(C(X_{p})\) _where_ \(X_{p}\) _is an input tree variable with look-ahead_ \(p\)_:_
* _in the provisional output_ \(M_{q}(C(X_{p}))\)_, there exists a parameter_ \(y_{i}\) _of_ \(q\) _such that_ \(y_{i}\) _appears in the_ \(i\)_-th argument of_ \(q(X_{p})\)__
* _in the provisional output_ \(M_{q_{0}}(C(X_{p}))\)_, either_ \(q_{0}(X_{p})\) _appears in the_ \(i\)_-th argument of_ \(q(X_{p})\)_, or there exists a parameter_ \(y_{j}\) _of_ \(q_{0}\) _such that a single occurrence of_ \(y_{j}\) _appears in the_ \(j\)_-th argument of_ \(q_{0}(X_{p})\) _and in the_ \(i\)_-th argument of_ \(q(X_{p})\)_._
The look-ahead state \(p\) is important because without it we cannot pump the loop. Note that states \(q_{0}\) and \(q\) and parameters \(y_{i}\) and \(y_{j}\) are not necessarily distinct. When \(q_{0}=q\) and \(y_{i}=y_{j}\), the loop can produce a nesting of states exponential in the number of times we pump the loop. In the general case however, pumping this loop \(n\) times produces a number of nested state calls \(q(X)\) linear in \(n\).
We start by proving that a _nesting generator loop_ in a MTT \(M\) in Depth-proper normal form necessarily breaks the LSHI property (i.e. \(M\) cannot be LSHI).
\(\rhd\)Claim 6. Given a MTT \(M\) in Depth-proper normal form, if \(M\) has a _nesting generator loop_ then \(M\) is not LSHI.
Proof. By definition of the nesting generator loop, we have tree-contexts \(C_{0}\) and \(C\) such that, in the provisional output of \(M(C_{0}(C(X_{p})))\), \(q_{0}(X_{p})\) and \(q(X_{p})\) appear nested.
Moreover, by pumping the loop, we increase the nesting of \(q(X_{p})\), i.e. \(q(X_{p})\) appears \(n\)-times along a single path in the provisional output of \(M(C_{0}(C^{n}(X_{p})))\) for all \(n\in\mathbb{N}\). This notably implies that \(M\) is not finite-nesting.
So, for all \(t\in L_{p}\), the height of \(M(C_{0}(C^{n}(t)))\) is at least \(n\) times the maximum depth of parameter \(y_{i}\) in \(q(t)\). The size of the input \(C_{0}(C^{n}(t))\) is:
\[S(C_{0}(C^{n}(t)))=S(C_{0})+n.S(C)+S(t)\]
The height of the output is:
\[H(M(C_{0}(C^{n}(t))))\geqslant n.\text{Depth}(y_{i},q(t))\]
If \(M\) was LSHI there would be a bound \(b\in\mathbb{N}\) such that, for all \(n\in\mathbb{N}\):
\[b.(S(C_{0})+n.S(C)+S(t))\geqslant H(M(C_{0}(C^{n}(t))))\geqslant n.\text{Depth }(y_{i},q(t))\]
This means that \(b\geqslant\text{Depth}(y_{i},q(t))/S(C)\) for all \(t\in L_{p}\) (divide both sides by \(n\) and let \(n\) tend to infinity). Since \(M\) is in Depth-proper normal form, there is no bound on the depth of \(y_{i}\) in \(q(t)\) for \(t\in L_{p}\). Then such a \(b\) cannot exist, therefore \(M\) is not LSHI.
The last step in proving theorem 3 is to show that if a MTT \(M\) in Depth-proper normal form is _infinite-nesting_ then it has a _nesting generator loop_. For this we look at how states of \(M\) call each other, more specifically we define a notion of state call trees, and we express the _infinite-nesting hypothesis_ as a condition on those state call trees. We will then prove that this condition implies the existence of a _nesting generator loop_.
### State call trees
For each input tree \(t\), we can look at the state calls on the nodes of \(t\) as a larger unranked tree whose root is the call of the initial state on the root of \(t\), and each node corresponds to the call of a state on a node of \(t\).
To facilitate the definition of state call trees, we define from \(M\) a new \(\mathrm{MTT}^{\mathrm{R}}\) noted \(M^{\#}\) where each state \(q\) adds a node labeled \(\#_{q}\) at the root of its output, i.e. we replace each right-hand side \(t\) by \(\#_{q}(t)\) where \(\#_{q}\) is a new output symbol of arity \(1\) and \(q\) is the state on the left side of the rule. Here we consider each occurrence of \(\#_{q}\) as one "state call", we discuss this choice after the definition:
For all input tree \(t\), we note \(\mathrm{Origin}(t,M^{\#}(t))\) the _origin semantics_ of the computation of \(M^{\#}\) on tree \(t\). We define it as the set of pairs of paths \((u,v)\in P(t)\times P(M^{\#}(t))\) such that the node at path \(v\) in \(M^{\#}(t)\) was produced by the node at path \(u\) in \(t\).
We call _state call tree_ of \(M\) on \(t\) the unranked tree \(\mathfrak{C}\!\mathfrak{C}(t)\) defined by:
the set of nodes is \(N=\{(u,v)\in\mathrm{Origin}(t,M^{\#}(t))\mid\exists q\in Q,M^{\#}(t)|_{v}=\#_ {q}\}\)
the root is \((\varepsilon,\varepsilon)\)
for all node \((u,v)\in N\backslash\{(\varepsilon,\varepsilon)\}\), the parent node of \((u,v)\) is \((u^{\prime},v^{\prime})\) where:
\(u^{\prime}\) is the longest strict prefix of \(u\) (i.e. \(\exists i\in\mathbb{N},u^{\prime}.i=u\))
\(v^{\prime}\) is the longest strict prefix of \(v\) such that \((u^{\prime},v^{\prime})\in N\)
Intuitively, each node corresponds to a state call, and the parent of a state call \(q(t_{i})\) is the state call \(q^{\prime}(a(t_{1},\dots,t_{n}))\) which induces it (so \(q(X_{i})\) is in the right-hand side of a rule of \(M^{\#}\) whose left side is \(q^{\prime}(a(X_{1},\dots,X_{n}),\dots)\)).
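Purely as an illustration (not part of the formal development), the parent relation of this definition can be computed directly from the origin pairs; paths are encoded as tuples of child indices, and all names below are ours:

```python
# Minimal sketch: recover the state call tree from the origin pairs
# (u, v) of the #_q markers, following the longest-strict-prefix rule
# of the definition above. Paths are tuples of child indices.

def is_strict_prefix(p, q):
    """True if path p is a strict prefix of path q."""
    return len(p) < len(q) and q[:len(p)] == p

def state_call_tree(nodes):
    """nodes: set of origin pairs (u, v). Returns a dict mapping each
    non-root node to its parent node."""
    parent = {}
    for (u, v) in nodes:
        if (u, v) == ((), ()):
            continue  # the root (epsilon, epsilon) has no parent
        u_parent = u[:-1]  # longest strict prefix of u
        # longest strict prefix v' of v such that (u_parent, v') is a node
        candidates = [vp for (up, vp) in nodes
                      if up == u_parent and is_strict_prefix(vp, v)]
        parent[(u, v)] = (u_parent, max(candidates, key=len))
    return parent

# Toy example: the root call, one call on child 0 of the input root
# produced at output path (1,), and a deeper call below both.
calls = {((), ()), ((0,), (1,)), ((0, 0), (1, 0))}
print(state_call_tree(calls))
```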
Considering each occurrence of \(\#_{q}\) in the output as a state call can be counter-intuitive. This is because when a state call \(q(t)\) appears in an argument of another state call \(q^{\prime}(t)\) and the state \(q^{\prime}\) makes several copies of this argument, the corresponding \(\#_{q}\) will occur several times in the output. With our definition we consider each occurrence of \(\#_{q}\) as a different _state call_.
Since we are only looking at nesting properties in this paper, we will only look at state calls occurring along a same path in the output tree. Because state calls along a same output path have different arguments, they cannot be copies of each other, and so this choice does not change anything for the proofs in this paper.
### The infinite-nesting hypothesis for state call trees
The infinite-nesting property of \(M\) can be expressed as a property of its state call trees. Given an input tree \(t\), a path \(u\) in \(t\) and a path \(v\) in \(M^{\#}(t)\), the nesting along path \(v\) of states on the input node at path \(u\) is the number of nodes in the state call tree \(\mathfrak{C}\!\mathfrak{C}(t)\) of the form \((u,v^{\prime})\) where \(v^{\prime}\) is a prefix of \(v\). This allows us to characterize the infinite-nesting property in terms of state call trees. This will later allow us to find a _nesting generator loop_.
The nesting number on an input tree \(t\) on input path \(u\) along output path \(v\) can be expressed as the width of a trimmed version of the state call tree on \(t\) (by _width_ here we mean the number of nodes of a same depth):
We call state call tree trimmed on input path \(u\) along output path \(v\), and we note \(\mathfrak{C}\!\mathfrak{C}(t,u,v)\), the tree obtained from \(\mathfrak{C}\!\mathfrak{C}(t)\) by keeping nodes in \(\mathrm{Prefix}(u)\times\mathrm{Prefix}(v)\) and removing the others.
We get the lemma:
The transducer \(M\) is infinite-nesting if and only if, for all integer \(n\in\mathbb{N}\), there exists a trimmed state call tree \(\mathfrak{C}\!\mathfrak{C}(t,u,v)\) of width \(\geq n\).
Proof.: By definition of infinite-nesting and trimmed state calls \(\mathfrak{C}\!\mathfrak{C}(t,u,v)\).
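Continuing the illustrative sketch above, the width used in the lemma is simply the largest number of nodes sharing a common depth:

```python
from collections import Counter

# Width of a (trimmed) state call tree: the maximum number of nodes
# of a same depth, computed from the parent map of the earlier sketch.

def node_depth(node, parent):
    d = 0
    while node in parent:
        node = parent[node]
        d += 1
    return d

def width(nodes, parent):
    counts = Counter(node_depth(n, parent) for n in nodes)
    return max(counts.values())
```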
### Finding loops in state call trees
We now give a theorem implying that if trimmed state call trees are of unbounded width, then there exists a nesting generator loop. However, we keep this theorem more general because it will also be used in the characterization of the LHI property.
Given an infinite set \(S\) of rooted unranked trees whose nodes have labels in a finite set \(Q\), under the following two hypotheses:
* Bounded arity: there is a bound \(m\) on the arity of nodes in trees of \(S\),
* Unbounded width: for all \(n\in\mathbb{N}\) there exist a tree \(t_{n}\in S\) and a depth \(d\in\mathbb{N}\) such that there are at least \(n\) nodes of depth \(d\) in \(t_{n}\),
There must exist a tree \(t\in S\) with the following pattern:
More formally, there must exist 5 nodes \(N_{q_{0}},N_{q},N_{q_{0},0},N_{q,0},N_{q,1}\) of \(t\) such that:
* we have the following depth equations, noting \(d(N)\in\mathbb{N}\) the depth of node \(N\): \[d(N_{q_{0}})=d(N_{q})\] \[d(N_{q_{0},0})=d(N_{q,0})=d(N_{q,1})\]
* we have the following label equations, noting \(L(N)\in Q\) the label of node \(N\): \[L(N_{q_{0}})=L(N_{q_{0},0})=q_{0}\in Q\] \[L(N_{q})=L(N_{q,0})=L(N_{q,1})=q\in Q\]
* there are the following paths in \(t\), noting \(N_{1}\to N_{2}\) if there is a path from \(N_{1}\) to \(N_{2}\): \[N_{q_{0}}\to N_{q_{0},0}\] \[N_{q_{0}}\to N_{q,0}\] \[N_{q}\to N_{q,1}\]
The notation choice of \(q_{0}\) and \(q\) obviously hints that the labeling is linked to the states corresponding to the state calls. But we will need to add more information in this labeling, both for the characterization of LSHI and LHI. As a first corollary of this theorem, we get that infinite-nesting implies the existence of a nesting generator loop:
\(\rhd\) Claim 11. Any infinite-nesting MTT \(M\) has a _nesting generator loop_.
Proof. For each trimmed state call tree \(\mathfrak{C}\mathfrak{C}(t,u,v)\), to each node \((u^{\prime},v^{\prime})\) we give the label \(L((u^{\prime},v^{\prime}))=(q,p,i)\) where:

* \(q\in Q\) is the state of \(M\) such that \(M^{\#}(t)|_{v^{\prime}}=\#_{q}\)
* \(p\in P\) is the look-ahead state of \(t|_{u^{\prime}}\)
* \(i\) is the index of the argument of \(q\) whose root node occurs along path \(v\) in the output \(M^{\#}(t)\) if it exists, otherwise \(i=0\). In other words, if there is a state call \((u^{\prime},v^{\prime\prime})\) with \(v^{\prime}<v^{\prime\prime}\leq v\) then this state call appears in \(M^{\#}(t_{\uparrow u^{\prime}})\) below the state call \((u^{\prime},v^{\prime})\) in its \(i\)-th argument; and \(i=0\) otherwise.

The added look-ahead and argument information is necessary to imply the existence of a nesting generator loop. We can now apply theorem 10 to the set of labeled \(\mathfrak{C}\mathfrak{C}(t,u,v)\) because:
* the trees have _unbounded width_ according to Lemma 9,
* the trees have _bounded arity_ because the arity is bounded by the maximum state nesting in the rules of \(M\),
* the nodes of trees are labeled in the _finite set_ \(Q\times P\times[0,n]\) where \(n\) is the maximum number of parameters of states of \(M\).

The theorem gives us a tree \(\mathfrak{C}\mathfrak{C}(t,u,v)\) and in it five nodes \((u_{1},v_{1}),(u_{1},v_{2}),(u_{2},v_{3}),(u_{2},v_{4}),(u_{2},v_{5})\) labeled respectively \((q_{0},p,j),(q,p,i),(q_{0},p,j),(q,p,i),(q,p,i)\) (the equalities between the \(u_{1}\), \(u_{2}\) and \(p\) come from the depth equalities). By definition of the labels, if there exist two nodes \((u^{\prime},v^{\prime})\) and \((u^{\prime},v^{\prime\prime})\) with \(v^{\prime}<v^{\prime\prime}\) then the label of \((u^{\prime},v^{\prime})\) is \((q^{\prime},p^{\prime},i^{\prime})\) with \(i^{\prime}\neq 0\). This applies to \((u_{2},v_{4})\) and \((u_{2},v_{5})\): since \(v_{4}\) and \(v_{5}\) are both prefixes of \(v\), either \(v_{4}<v_{5}\) or \(v_{5}<v_{4}\); in either case \(i\neq 0\). Then we have the _nesting generator loop_ \((C[X],p,q_{0},q)\) with \(C[X]=(t_{\uparrow u_{2}}[X])|_{u_{1}}\) and:
* \(t|_{u_{2}}\in L_{p}\) and \(C(t|_{u_{2}})=t|_{u_{1}}\in L_{p}\)
* there exists \(t_{\uparrow u_{1}}[X]\) such that for all variable \(X_{p}\) of look-ahead \(p\): \(q_{0}(X_{p})\) appears in \(M^{\#}(t_{\uparrow u_{1}}(X_{p}))\),
* because \((u_{1},v_{2})\) is an ancestor node of \((u_{2},v_{5})\) in \(\mathfrak{C}\mathfrak{C}(t,u,v)\) and \((u_{1},v_{2})\) and \((u_{2},v_{5})\) both have label \((q,p,i)\) with \(i\neq 0\), parameter \(y_{i}\) must appear in \(M^{\#}_{q}(C(X_{p}))\) in the \(i\)-th argument of \(q(X_{p})\),
* because \((u_{1},v_{1})\) is an ancestor node of \((u_{2},v_{3})\) and \((u_{2},v_{4})\) in \(\mathfrak{C}\mathfrak{C}(t,u,v)\), and because \((u_{1},v_{1})\), \((u_{2},v_{3})\) and \((u_{2},v_{4})\) have labels \((q_{0},p,j)\), \((q_{0},p,j)\) and \((q,p,i)\) respectively, we have two cases depending on whether \(j=0\) or not:
* if \(j\neq 0\) then, in \(M^{\#}_{q_{0}}(C(X_{p}))\), a single occurrence of parameter \(y_{j}\) appears in the \(j\)-th argument of \(q_{0}(X_{p})\) and in the \(i\)-th argument of \(q(X_{p})\),
* if \(j=0\) then, in \(M^{\#}_{q_{0}}(C(X_{p}))\), \(q_{0}(X_{p})\) appears in the \(i\)-th argument of \(q(X_{p})\).
Theorem 3 is therefore a consequence of claims 11, 6 and 4: any MTT in Depth-proper normal form is LSHI (Linear input Size to output Height Increase) if and only if it is finite-nesting.
### Deciding LSHI
We can now use Theorem 3 to give an algorithm to decide the LSHI property of MTT\({}^{R}\). Given a MTT\({}^{R}\)\(M\) we compute its Depth-proper normal form, then decide if it is finite-nesting:
**Corollary 12**.: _The LSHI property is decidable for MTT\({}^{R}\)._
Proof.: Given \(M\) a MTT\({}^{R}\), we can decide if \(M\) is LSHI by checking if the Depth-proper normal form \(M^{\prime}\) of \(M\) is finite-nesting.
The finite-nesting problem can be reduced to the finiteness of ranges of Macro tree transducers, which is known to be decidable.
## 5 Deciding LHI
The structure of this section is the same as section 4. Replacing the LSHI with the LHI property entails slight changes in the type of nesting we use to characterize LHI, the type of loops we look for, and more minute details in the proofs. But the main ideas are very similar.
The Linear Height Increase (LHI) property is defined similarly to the Linear Size Increase (LSI) property from [8] and Linear input Size to output Height Increase (LSHI) property:
A MTT \(M\) is _Linear Height Increase_ (LHI) if there exists a bound \(b\in\mathbb{N}\) such that, for all input tree \(t\) of height \(h\), the height of \(M(t)\) is less than \(b*h\).
In this section we first characterize the LHI property of MTTs under Depth-proper normal form as those that are both finite-nesting and _finite-multi-leaf-nesting_ (or finite-ML-nesting), then we will use this characterization to decide whether a MTT is LHI. The finite-ML-nesting property is defined as follows:
A MTT \(M\) is _finite ML-nesting_ if there is a bound \(b\in\mathbb{N}\) such that, for all input tree-context \(t[X_{1},\ldots,X_{n}]\) with variables \(X_{1},\ldots,X_{n}\), in the provisional output of \(M(t[X_{1},\ldots,X_{n}])\), states of \(M\) applied to variables \(X_{1},\ldots,X_{n}\) appear nested (i.e. along a same output path) at most \(b\) times.
A MTT is _infinite ML-nesting_ if it is not _finite ML-nesting_.
### LHI characterization
This subsection is dedicated to proving the following theorem:
Any MTT in Depth-proper normal form is LHI (Linear Height Increase) if and only if it is finite-ML-nesting.
We first prove that any finite-ML-nesting MTT \(M\) in Depth-proper normal form is LHI. The idea is that an anti-chain of nodes in the input tree (i.e. set of nodes that are not ancestors of each other) can only contribute a bounded number of nodes to each output path. Given an anti-chain \(A\) of input nodes and an output path \(v\), the finite-ML-nesting property tells us that the number of occurrences of states applied to nodes in \(A\) along path \(v\) in the output is bounded by some integer \(n\). So the number of nodes along path \(v\) with origin in \(A\) is bounded by \(n\) times the maximum size of the right-hand side of rules of \(M\). Formally:
Given a MTT \(M\) in Depth-proper normal form, if \(M\) is finite-ML-nesting then \(M\) is LHI.
Proof.: We note \(b\in\mathbb{N}\) the ML-nesting bound of \(M\).
We look at the provisional outputs \(M(C[X_{1},\ldots,X_{k}])\) where each variable \(X_{i}\) occurs once in \(C\), and how the height of such a provisional output increases when we apply a substitution \(\mathfrak{S}\) of height 1, i.e. a substitution where each \(X_{i}\) is substituted with a tree of the form \(\sigma(Y_{1},\ldots,Y_{m})\) where \(\sigma\) is an output tree symbol of arity \(m\). The height of a provisional
output is defined inductively, similarly to the height of trees, with \(H(q(X,t_{1},\ldots,t_{n}))=1+\max(0,H(t_{1}),\ldots,H(t_{n}))\).
By definition of \(b\), the height of \(M(C[X_{1},\ldots,X_{k}])\) increases, through the substitution \(\mathfrak{S}\) by at most \(b.c\), where \(c\) is the maximum height of a right-hand-side of rule of \(M\).
For all input tree \(t\), the output \(M(t)\) can be computed from the provisional output \(M(X)\) by successive substitutions of height \(1\). The number of substitutions needed is the height \(H(t)\) of \(t\). The height of \(M(X)\) is constant. The height of the provisional output increases by at most \(b.c\) after each substitution, so the height of \(M(t)\) is at most linear in the height of input tree \(t\). Therefore \(M\) is LHI.
Proving the converse implication requires more work. The general idea is that any infinite-ML-nesting MTT must have either a _nesting generator loop_ (definition 5) or a new kind of loop we call a _ML-nesting generator loop_. Either type of loop can be pumped to contradict the LHI property.
### ML-nesting generator loops
ML-nesting generator loops are defined so as to be the most general type of loop which can induce infinite-ML-nesting: a generator state \(q_{0}\) calls itself and a generated state \(q\) by nesting them. The difference with nesting generator loops is that the calls to \(q_{0}\) and \(q\) should not be on the same subtree.
**Definition 17**.: _A ML-nesting generator loop in MTT \(M\) is given by a tuple \((C[X_{1},X_{2}],p_{0},p,q_{0},q)\) where:_
* \(C[X_{1},X_{2}]\) _is an input tree-context with variables_ \(X_{1}\) _and_ \(X_{2}\)_,_
* \(p_{0}\) _and_ \(p\) _are look-ahead states such that, noting_ \(L_{p_{0}}\) _and_ \(L_{p}\) _the sets of input trees with look-ahead_ \(p_{0}\) _and_ \(p\) _respectively, there exist_ \(t_{0}\in L_{p_{0}}\) _and_ \(t_{1}\in L_{p}\) _such that_ \(C(t_{0},t_{1})\in L_{p_{0}}\)__
* \(q_{0}\) _and_ \(q\) _are states of_ \(M\) _such that there exists an input tree-context_ \(C_{0}[X]\) _so that_ \(q_{0}(X)\) _appears in the provisional output of_ \(M(C_{0}[X])\) _when_ \(X\) _has look-ahead_ \(p_{0}\)__
* _In the provisional output_ \(M_{q_{0}}(C(X_{p_{0}},X_{p}))\) _where_ \(X_{p_{0}}\) _and_ \(X_{p}\) _are input tree variables with look-ahead_ \(p_{0}\) _and_ \(p\) _respectively, we have either:_
* \(q_{0}(X_{p_{0}})\) _appears in an argument of_ \(q(X_{p})\)__
* _a single occurrence of a parameter_ \(y_{i}\) _of_ \(q_{0}\) _appears in the_ \(i\)_-th argument of_ \(q_{0}(X_{p_{0}})\) _and in an argument of_ \(q(X_{p})\)_._
Note that a nesting generator loop is not an ML-nesting generator loop. For this reason we will prove that the existence of either of these two kinds of loops in a Depth-proper transducer contradicts the LHI property.
\(\rhd\)**Claim 18**.: Given a MTT \(M\) in Depth-proper normal form, if \(M\) has either a _nesting generator loop_ or a _ML-nesting generator loop_ then \(M\) is not LHI.
Proof.: If there exists a _nesting generator loop_ then, according to claim 6, \(M\) is not LSHI and therefore not LHI.
We now assume that there exists a _ML-nesting generator loop_. By definition of such loops, we have tree-contexts \(C_{0}\) and \(C\) such that, in the provisional output of \(M(C_{0}(C(X_{p_{0}},X_{p})))\), \(q_{0}(X_{p_{0}})\) and \(q(X_{p})\) appear nested. We note \(C^{n}(X_{p_{0}},X_{p})\) the input tree
\(C(\ldots C(C(X_{p_{0}},X_{p}),X_{p})\ldots,X_{p})\) where \(C\) is pumped \(n\) times, and we note \(y_{i}\) the argument of \(q\) on which \(q(X_{p})\) is nesting in the loop.
Moreover, by pumping the loop, we increase the nesting of \(q(X_{p})\), i.e. \(q(X_{p})\) appears \(n\)-times along a single path in the provisional output of \(M(C_{0}(C^{n}(X_{p_{0}},X_{p})))\) where \(C\) is pumped \(n\) times. This notably implies that \(M\) is not finite-ML-nesting.
So, for all \(t_{0}\in L_{p_{0}},t\in L_{p}\), the height of \(M(C_{0}(C^{n}(t_{0},t)))\) is at least \(n\) times the maximum depth of the parameter \(y_{i}\) in \(q(t)\). The height of the input \(C_{0}(C^{n}(t_{0},t))\) is:
\[H(C_{0}(C^{n}(t_{0},t)))\leq d_{C_{0}}+n.d_{C}+\max(H(t_{0}),H(t))\]
where \(d_{C_{0}}\) is the depth of \(X\) in \(C_{0}[X]\) and \(d_{C}\) is the maximum depth of \(X_{p_{0}}\) or \(X_{p}\) in \(C(X_{p_{0}},X_{p})\). The height of the output is:
\[H(M(C_{0}(C^{n}(t_{0},t))))\geq n.\text{Depth}(y_{i},q(t))\]
If \(M\) was LHI there would be a bound \(b\in\mathbb{N}\) such that, for all \(n\in\mathbb{N}\):
\[b.(d_{C_{0}}+n.d_{C}+\max(H(t_{0}),H(t)))\geq n.\text{Depth}(y_{i},q(t))\]
This means that \(b\geq\text{Depth}(y_{i},q(t))/d_{C}\) for all \(t\in L_{p}\). Since \(M\) is in Depth-proper normal form, there is no bound on the depth of \(y_{i}\) in \(q(t)\) for \(t\in L_{p}\). Then such a \(b\) cannot exist, therefore \(M\) is not LHI.
The last step in proving theorem 15 is to show that if a MTT \(M\) in Depth-proper normal form is _infinite-ML-nesting_ then it has either a _nesting generator loop_ or a _ML-nesting generator loop_. Similarly to the LSHI case, we look at the structure of state calls.
### The infinite-ML-nesting property for state call trees
Similarly to the infinite-nesting property, the infinite-ML-nesting property can be expressed as a property of state call trees. Recall that, given an input tree \(t\), a path \(u\) in \(t\) and a path \(v\) in \(M^{\#}(t)\), the nesting along path \(v\) of states on the input node at path \(u\) is the number of nodes in the state call tree \(\mathfrak{C}\mathfrak{C}(t)\) of the form \((u,v^{\prime})\) where \(v^{\prime}\) is a prefix of \(v\). This characterization will later allow us to find a _nesting generator loop_ or a _ML-nesting generator loop_.
The nesting number on an input tree \(t\) on input path \(u\) along output path \(v\) can be expressed as the width of a trimmed version of the state call tree on \(t\) (by _width_ here we mean the number of nodes of a same depth):

We call state call tree trimmed on input path \(u\) along output path \(v\), and we note \(\mathfrak{C}\mathfrak{C}(t,u,v)\), the tree obtained from \(\mathfrak{C}\mathfrak{C}(t)\) by keeping nodes in \(\text{Prefix}(u)\times\text{Prefix}(v)\) and removing the others.

We get the lemma:

The transducer \(M\) is infinite-nesting if and only if, for all integer \(n\in\mathbb{N}\), there exists a trimmed state call tree \(\mathfrak{C}\mathfrak{C}(t,u,v)\) of width \(\geq n\).

Proof.: By definition of infinite-nesting and trimmed state calls \(\mathfrak{C}\mathfrak{C}(t,u,v)\).
By using the previous lemma and theorem 10, we can now prove the following claim:
Any infinite-ML-nesting MTT \(M\) has either a _nesting generator loop_ or a _ML-nesting generator loop_.
Proof.: If \(M\) is infinite-nesting then, by Claim 11, it has a _nesting generator loop_. From now on we assume that \(M\) is not infinite-nesting, i.e. there is a bound on the nesting of state calls from a same input node.
To detect ML-nesting of state calls, we trim state call trees along a path of the output tree (to look at ML-nesting state calls), but forcing state calls to fork with respect to the input tree (so that two different state calls \((u,v)\) and \((u^{\prime},v^{\prime})\) have \(u\neq u^{\prime}\)). Formally, for all input tree \(t\), output path \(v\in M^{\#}(t)\) and partial function \(f:P(t)\rightarrow\operatorname{Prefix}(v)\) such that:
\[\forall u\in\operatorname{dom}(f),\ \ (u,f(u))\in\mathfrak{C}\mathfrak{C}(t)\ \ \ \text{and}\]
\[\forall u^{\prime}\in\operatorname{Prefix}(u),\ \ (u^{\prime},f(u^{\prime})) \text{ is an ancestor of }(u,f(u))\text{ in }\mathfrak{C}\mathfrak{C}(t)\]
we note \(\mathfrak{C}\mathfrak{C}(t,v,f)\) the tree obtained from \(\mathfrak{C}\mathfrak{C}(t)\) by keeping the nodes \((u,f(u))\) for \(u\in\operatorname{dom}(f)\) and removing the others (by definition of \(\mathfrak{C}\mathfrak{C}(t)\), this is still a tree rooted in \((\varepsilon,\varepsilon)\)).
For each such \(\mathfrak{C}\mathfrak{C}(t,v,f)\), to each node \((u^{\prime},v^{\prime})\) we give the label \(L((u^{\prime},v^{\prime}))=(q,p,i)\) where:
* \(q\in Q\) is the state of \(M\) such that \(M^{\#}(t)|_{v^{\prime}}=\#_{q}\)
* \(p\in P\) is the look-ahead state of \(t|_{u^{\prime}}\)
* \(i\) is the index of the argument of \(q\) whose root node occurs along path \(v\) in the output \(M^{\#}(t)\) if it exists, otherwise \(i=0\). In other words, if there is a state call \((u^{\prime},v^{\prime\prime})\) with \(v^{\prime}<v^{\prime\prime}\leq v\) then this state call appears in \(M^{\#}(t_{\uparrow u^{\prime}})\) below the state call \((u^{\prime},v^{\prime})\) in its \(i\)-th argument; and \(i=0\) otherwise.
In the label, we add the argument index in order to get the proper form of the ML-nesting generator loop.
We can now apply theorem 10 to the set of labeled \(\mathfrak{C}\mathfrak{C}(t,v,f)\) because:
* the trees have _unbounded width_ because \(M\) is infinite ML-nesting and the width of \(\mathfrak{C}\mathfrak{C}(t,v,f)\) is the maximum ML-nesting of state calls along output path \(v\) and there is a bound on the nesting of state calls for the same input node,
* the trees have _bounded arity_ because the arity is bounded by the maximum state nesting in the rules of \(M\),
* the nodes of trees are labeled in the _finite set_\(Q\times P\times[0,n]\) where \(n\) is the maximum number of parameters of states of \(M\).
The theorem gives us a tree \(\mathfrak{C}\mathfrak{C}(t,v,f)\) and in it five nodes \((u_{0},v_{0}),(u_{1},v_{1}),(u_{2},v_{2}),(u_{3},v_{3})\) and \((u_{4},v_{4})\) labeled respectively \((q_{0},p_{0},j),(q,p,i),(q_{0},p_{0},j),(q,p,i),(q,p,i)\).
By definition of the labels, if there exists two nodes \((u^{\prime},v^{\prime})\) and \((u^{\prime\prime},v^{\prime\prime})\) at the same depth in \(\mathfrak{C}\mathfrak{C}(t,v,f)\) with \(v^{\prime}<v^{\prime\prime}\) then the label of \((u^{\prime},v^{\prime})\) is \((q^{\prime},p^{\prime},i^{\prime})\) with \(i^{\prime}\neq 0\). This applies to \((u_{3},v_{3})\) and \((u_{4},v_{4})\): since \(v_{3}\) and \(v_{4}\) are both prefixes of \(v\), either \(v_{3}<v_{4}\) or \(v_{4}<v_{3}\), in either case we have \(i\neq 0\).
Then we have the _ML-nesting generator loop_\((C_{0}[X_{1},X_{2}],C[X],p_{0},p,q_{0},q)\) with
\(C_{0}[X_{1},X_{2}]=(t_{\uparrow u_{2},u_{3}}[X_{1},X_{2}])|_{u_{0}}\) and \(C[X]=(t_{\uparrow u_{4}}[X])|_{u_{1}}\) and:
* there exists \(t|_{u_{2}}\in L_{p_{0}}\) and \(t|_{u_{3}},t|_{u_{4}}\in L_{p}\) such that \(C_{0}(t|_{u_{2}},t|_{u_{3}})=t|_{u_{0}}\in L_{p_{0}}\) and \(C(t|_{u_{4}})=t|_{u_{1}}\in L_{p}\)
* there exists \(t_{\uparrow u_{0}}[X]\) such that for all variable \(X\) of look-ahead \(p_{0}\): \(q_{0}(X)\) appears in \(M(t_{\uparrow u_{0}}(X))\),
* because \((u_{1},v_{1})\) is an ancestor node of \((u_{4},v_{4})\) in \(\mathfrak{C}\mathfrak{C}(t,v,f)\), both with label \((q,p,i)\) with \(i\neq 0\), parameter \(y_{i}\) must appear in \(M_{q}(C(X_{p}))\) in the \(i\)-th argument of \(q(X_{p})\) when \(X_{p}\) is a variable of look-ahead \(p\),
* because \((u_{0},v_{0})\) is an ancestor node of \((u_{2},v_{2})\) and \((u_{3},v_{3})\) in \(\mathfrak{C}\mathfrak{C}(t,v,f)\) with labels \((q_{0},p_{0},j)\), \((q_{0},p_{0},j)\) and \((q,p,i)\) respectively, we have two cases depending on whether \(j=0\) or not:
* if \(j\neq 0\) then, in \(M_{q_{0}}(C_{0}(X_{p_{0}},X_{p}))\), a single occurrence of parameter \(y_{j}\) appears in the \(j\)-th argument of \(q_{0}(X_{p_{0}})\) and in the \(i\)-th argument of \(q(X_{p})\),
* if \(j=0\) then, in \(M_{q_{0}}(C_{0}(X_{p_{0}},X_{p}))\), \(q_{0}(X_{p_{0}})\) appears in the \(i\)-th argument of \(q(X_{p})\).
Theorem 15 is therefore a consequence of claims 21, 18 and 16: any MTT in Depth-proper normal form is LHI (Linear Height Increase) if and only if it is finite-ML-nesting.
### Deciding LHI
We can now use Theorem 15 to give an algorithm to decide the LHI property of \(\mathrm{MTT}^{R}\). Given a \(\mathrm{MTT}^{R}\) \(M\) we compute its Depth-proper normal form, then decide if it is finite-ML-nesting:
The LHI property is decidable for \(\mathrm{MTT}^{R}\).
Proof.: Given \(M\) a \(\mathrm{MTT}^{R}\), we can decide if \(M\) is LHI by checking if the Depth-proper normal form \(M^{\prime}\) of \(M\) is finite-ML-nesting.
Similarly to the finite-nesting property, the finite-ML-nesting property can be reduced to the finiteness of ranges of Macro tree transducers, which is known to be decidable.
## 6 Complexity of Deciding LSOI
In this section we consider _linear size-to-number-of-distinct-output-subtrees increase_ ("LSOI"). Let us first define LSOI more formally. Let \(\tau\) be a total function from \(T_{\Sigma}\) to \(T_{\Delta}\) for some ranked alphabets \(\Sigma,\Delta\). For a tree \(t\) let us define its set \(\mathrm{sub}(t)\) of subtrees as \(\{t/u\mid u\in V(t)\}\). Then \(\tau\) is of _LSOI_ if there exists a number \(c\) such that for every input tree \(s\in T_{\Sigma}\) it holds that \(|\mathrm{sub}(\tau(s))|\leq c\cdot|s|\).
Without explaining details about attributed tree transducers, the intuition is: with each node of the input tree a fixed number of attributes is associated, and attributes define output trees in terms of other attributes. This implies that there can be at most (number of attributes) \(\times\) (size of the input tree) many distinct output subtrees. Therefore the translation of every attributed tree transducer is of LSOI. Since every attributed transducer can be transformed into an equivalent macro tree transducer (even without look-ahead, see, e.g., the proof of Lemma 6.1 of [15]) it suffices to consider mtts that are of LSOI. Here, we do not need look-ahead. In our terminology, an "mtt" is an mttr \((Q,P,\Sigma,\Delta,q_{0},R,h)\) for which \((P,\Sigma,h)\) is the trivial automaton with \(P=\{p\}\) and \(\hat{h}(s)=p\) for every \(s\in T_{\Sigma}\).
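To make \(\mathrm{sub}(t)\) concrete, here is a small illustrative computation (our own sketch, with trees encoded as nested tuples); the LSOI question then asks whether \(|\mathrm{sub}(\tau(s))|/|s|\) stays bounded over all inputs \(s\):

```python
# Count the distinct subtrees of a tree given as nested tuples
# (label, child, ..., child); structural equality of tuples gives
# subtree equality for free.

def distinct_subtrees(t, seen=None):
    if seen is None:
        seen = set()
    seen.add(t)
    for child in t[1:]:
        distinct_subtrees(child, seen)
    return seen

e = ('e',)
t = ('f', ('a', e), ('a', e))
# sub(t) = { f(a(e),a(e)), a(e), e }, so 3 distinct subtrees
print(len(distinct_subtrees(t)))  # -> 3
```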
The decision problem whether a given mtt is of LSOI is as hard as the equivalence problem of mtts that are of LSOI.
Proof.: Let \(M_{1}=(\Sigma,\Delta,Q_{1},q_{1},R_{1})\) and \(M_{2}=(\Sigma,\Delta,Q_{2},q_{2},R_{2})\) be LSOI MTTs whose equivalence \(M_{1}=M_{2}\) is to be decided, i.e., whether \(M_{1}(t)=M_{2}(t)\) for every \(t\in T_{\Sigma}\). Without loss of generality, we assume that \(Q_{1}\) and \(Q_{2}\) are disjoint. We construct an MTT \(M\) which is LSOI if and only if \(M_{1}=M_{2}\).
Let \(M=(\Sigma^{\prime},\Delta^{\prime},Q,q_{0},R)\) be an MTT where
* \(\Sigma^{\prime}=\Sigma\cup\{a^{(1)}\}\) with \(a\notin\Sigma\)
* \(\Delta^{\prime}=\Delta\cup\{f^{(2)},e^{(0)}\}\) with \(f,e\notin\Delta\)
* \(Q=Q_{1}\cup Q_{2}\cup\{q_{0}^{(0)},q^{(3)}\}\) with \(q_{0},q\notin Q_{1}\cup Q_{2}\)
* \(R\) contains all rules of \(R_{1}\) and \(R_{2}\), together with new rules for the states \(q_{0}\) and \(q\).
As mentioned above, every attributed tree transducer can be (effectively) realized by an mtt that is of LSOI. Therefore, we obtain from Theorem 6.1 the following corollary.
The decision problem whether a given mtt is of LSOI is as hard as the equivalence problem of attributed tree transducers.
## 7 Conclusions
We have proven that for a given macro tree transducer (with look-ahead) it is decidable whether or not it has linear height increase (LHI) and whether or not it has linear size-to-height increase (LSHI). Both decision procedures rely on a novel normal form that is called the "depth-proper" normal form. Roughly speaking, the normal form requires that each parameter of every state of the transducer appears at arbitrary depths in output trees generated by that state (and for a given look-ahead state). Our construction of the normal form removes parameters that only appear at a bounded number of depths. Once in depth-proper normal form, we can reduce the check for LHI and LSHI to the finiteness of ranges of mtts.
Both LHI and LSHI are natural properties and have several useful applications. For instance, if an mtt is not of LHI, then we know that it cannot be realized by a top-down or a bottom-up tree transducer. And, if an mtt is not of LSHI, then we know that it cannot be realized by any attributed tree transducer.
The most prevailing open problem is to solve the conjecture (see Introduction) that the translation of an mtt can be realized by an attributed tree transducer if and only if the translation is of LSOI (linear size-to-number-of-distinct-output-subtrees). In this paper we merely show that _deciding_ if an mtt is of LSOI is as hard as deciding equivalence of attributed transducers (a long-standing difficult open problem). We believe that the depth-proper normal form will be helpful in solving the conjecture. Intuitively, loops need to be considered which produce an arbitrary number of copies of states with parameters. In these loops we can exclude certain state nestings, due to the normal form.
Another interesting open problem is the question whether or not the mtt hierarchy (generated by sequential compositions of mtts) collapses for LSHI or for LHI. By this we mean (in the case of LSHI) whether or not there exists some number \(n\) such that \(\cup\{\text{MTT}^{k}\mid k\geq 1\}\cap\text{LSHI}\subseteq\text{MTT}^{n}\). Note that for linear size increase, the hierarchy collapses [6] to level one, i.e., \(n=1\). Another question is whether \((\text{MTT}\cap\text{LSHI})^{k+1}\supsetneq(\text{MTT}\cap\text{LSHI})^{k}\). To see that this indeed holds, consider compositions of the translation that takes a binary tree \(s\) as input and outputs a full binary tree of height \(|s|\). The corresponding question for LHI is open. We do have a candidate translation that would show (for level two) both that a two-fold composition of mtts of LHI can do strictly more than a single mtt, and that two-fold compositions that are of LHI are strictly more than a single mtt: input trees are monadic trees of the form \(a^{n}(e)\) and output trees are full binary trees of height \(n\), at the leaves of which are monadic trees that represent the Dewey paths of those "leaves" of the binary tree. E.g. \(a(a(e))\) is translated to
\[f(f(1(1(e)),1(2(e))),f(2(1(e)),2(2(e)))).\]
Note that if we output the _reverse_ of such Dewey paths, then the translation indeed can be realized by a single mtt (see, e.g., Fig. 6.2 in [15]). Thus, the second transducer of the composition reverses the reverse Dewey paths. |
2309.14863 | Automated analysis of oscillations in coronal bright points | Coronal bright points (BPs) are numerous, bright, small-scale dynamical
features found in the solar corona. Bright points have been observed to exhibit
intensity oscillations across a wide range of periodicities and are likely an
important signature of plasma heating and/or transport mechanisms. We present a
novel and efficient wavelet-based method that automatically detects and tracks
the intensity evolution of BPs using images from the Atmospheric Imaging
Assembly (AIA) on board the Solar Dynamics Observatory (SDO) in the 193\r{A}
bandpass. Through the study of a large, statistically significant set of BPs,
we attempt to place constraints on the underlying physical mechanisms. We used
a continuous wavelet transform (CWT) in 2D to detect the BPs within images.
One-dimensional CWTs were used to analyse the individual BP time series to
detect significant periodicities. We find significant periodicity at 4, 8-10,
17, 28, and 65 minutes. Bright point lifetimes are shown to follow a power law
with exponent $-1.13\pm0.07$. The relationship between the BP lifetime and
maximum diameter similarly follows a power law with exponent $0.129\pm0.011$.
Our wavelet-based method successfully detects and extracts BPs and analyses
their intensity oscillations. Future work will expand upon these methods, using
larger datasets and simultaneous multi-instrument observations. | Brad Ramsey, Erwin Verwichte, Huw Morgan | 2023-09-26T11:39:04Z | http://arxiv.org/abs/2309.14863v1 | # Automated analysis of oscillations in coronal bright points
###### Abstract
Context:Coronal bright points (BPs) are numerous, bright, small-scale dynamical features found in the solar corona. Bright points have been observed to exhibit intensity oscillations across a wide range of periodicities and are likely an important signature of plasma heating and/or transport mechanisms.
Aims:We present a novel and efficient wavelet-based method that automatically detects and tracks the intensity evolution of BPs using images from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) in the 193Å bandpass. Through the study of a large, statistically significant set of BPs, we attempt to place constraints on the underlying physical mechanisms.
Methods:We used a continuous wavelet transform (CWT) in 2D to detect the BPs within images. One-dimensional CWTs were used to analyse the individual BP time series to detect significant periodicities.
Results:We find significant periodicity at 4, 8-10, 17, 28, and 65 minutes. Bright point lifetimes are shown to follow a power law with exponent \(-1.13\pm 0.07\). The relationship between the BP lifetime and maximum diameter similarly follows a power law with exponent \(0.129\pm 0.011\).
Conclusions:Our wavelet-based method successfully detects and extracts BPs and analyses their intensity oscillations. Future work will expand upon these methods, using larger datasets and simultaneous multi-instrument observations.
## 1 Introduction
Coronal bright points (BPs) are ubiquitous forms of activity seen as areas of point-like emission in extreme-ultraviolet (EUV) to X-ray wavelengths, in both quiet-Sun and coronal hole regions of the solar corona (Madjarska, 2019). They have been the subject of intense interest since their discovery in the 1970s (Vaiana et al., 1973; Sheeley & Golub, 1979). A BP is a collection of small coronal loops (a mini active region) that forms an area of diffuse emission 10-60 arcsecs in size with a bright core of about 5 to 10 arcsecs (Golub et al., 1977; Hirzberger et al., 2008). Bright points are associated with small magnetic bipolar regions with typical photospheric magnetic fluxes of \(10^{19}\)-\(10^{20}\) Mx (Golub et al., 1976).
On average, there are 400-800 BPs per day on the solar disk (Sattarov et al., 2002; McIntosh & Gurman, 2005; Alipour & Safari, 2015). The BP frequency outside the active region belt does not show much variation with the solar cycle, though the number decreases with temperature since BP production occurs at temperatures well below the temperatures that soft X-ray detectors are sensitive to (Hara & Nakakubo-Morimoto, 2003; McIntosh & Gurman, 2005). The lifetimes of BPs seen in X-ray are found to exhibit a statistical distribution, with a mean value of around 8 hours (Golub et al., 1974). Alipour & Safari (2015) show that BP size approximately follows a power-law distribution with exponent 0.14 with respect to the lifetime. Bright coronal features with smaller scales, around 4Mm, and lifetimes of less than 1 hour have been observed. These features are more commonly known as brightenings, or transient coronal brightenings (Chen et al., 2021; Berghmans et al., 2021), and are not considered BPs in this work.
Bright points have been observed to host intensity oscillations in both X-ray and EUV with a broad range of periodicities, from a few minutes to hours (Sheeley & Golub, 1979; Christensen-Dalsgaard & Frandsen, 1983; Strong et al., 1992; Ugarte-Urra et al., 2004; Kariyappa & Varghese, 2008; Tian et al., 2008; Kumar et al., 2011; Chandrashekhar et al., 2013). Early observations by Sheeley & Golub (1979) showed morphological evolution on a timescale of about 6 minutes. Long-period oscillations from 8 to 64 minutes were observed by Tian et al. (2008). Longer-period oscillations were found by Kariyappa & Varghese (2008) in X-ray BPs as seen by Hinode using power spectra analysis, with periods between 9 and 133 minutes. Wavelet analysis of time series by Ugarte-Urra et al. (2004) showed BP oscillation periods as short as 236 seconds, with dominant periodicities at 8 and 13 minutes. It is unknown if these oscillations are a result of propagating magneto-acoustic waves or recurrent magnetic reconnection.
In addition to intensity oscillations, decayless kink oscillations have also been observed in BPs. These physical oscillations have been seen with periods between 1 and 8 minutes, with an average of 5 minutes (Gao et al. 2022).
Intensity oscillations have also been observed in other coronal structures. Longer-period oscillations of 8-27 hours have been detected in coronal filaments (Foullon et al., 2004). Auchere et al. (2014) detected long-period intensity oscillations of 3-16 hours. These oscillations have been seen across active regions, visually associated with coronal loops, and in the quiet Sun.
The physical cause of these long-period oscillations, which can last for several days, is still uncertain. Numerical thermal non-equilibrium models by Muller et al. (2005), used to explain coronal rain, can produce periods within the range observed by Auchere et al. (2014). Thermal non-equilibrium has also been proposed as a mechanism for coronal loop formation (Mok et al. 2008). Froment et al. (2015) show that intensity oscillations in loops are linked to loop heating, with evaporation and condensation cycles, and simulations support this (Froment et al. 2017).
Various physical mechanisms have been proposed to explain intensity oscillations, such as 'leaky p-modes' propagating along inclined magnetic flux tubes (De Pontieu et al. 2005), standing waves within chromospheric cavities (Leibacher & Stein 1981), recurrent small-scale reconnection events (Parker 1988; Chandrashekhar et al. 2013; Chandrashekhar & Sarkar 2015), and cyclic loop heating (Habbal & Withbroe 1981; Verwichte & Koutova 2017). The study of acoustic waves provides physical insight into the mechanisms of energy deposition and transport in the solar atmosphere (e.g. De Moortel & Hood 2003; Wang et al. 2003). The simpler geometry of BPs compared to active regions makes it easier to disentangle the observed signatures.
Many methods have been developed to automate the detection and characterisation of a statistically meaningful set of BPs. Brajsa et al. (2001) used the regions of interest segmentation package in Interactive Data Language (IDL). Hara & Nakakubo-Morimoto (2003) identified BPs in X-ray images through thresholding based on the estimated noise level and other criteria, such as the size and shape of the candidate regions. McIntosh & Gurman (2005) developed a filtering and thresholding technique. Humphries et al. (2021) developed a method for detecting and characterising small-scale brightenings in EUV imagery using spatio-temporal bandpass filtering and adaptive thresholding. Alipour & Safari (2015) used an automated machine learning method, developed by Alipour et al. (2012), to study the statistical properties of BPs. This method uses a training dataset of coronal BPs that is collated by eye. Coronal BPs between 2 Mm and 20 Mm -- co-located in Atmospheric Imaging Assembly (AIA) 171Å, 193Å, and 211Å bands (Lemen et al. 2012) and detected by the Helioseismic and Magnetic Imager (HMI; Schou et al. 2012) on board the Solar Dynamics Observatory (SDO; Pesnell et al. 2012) -- were used to inform the machine learning algorithm. Alipour & Safari (2015) implemented their method using data at a reduced 45-second cadence.
We wished to expand upon these investigations by applying automated analysis techniques to determine the statistical properties of oscillations in spatially resolved BPs. We focused on persistent BPs that have minimum lifetimes of 1 hour (i.e. not the small, transient brightenings studied by Chen et al. 2021 and Berghmans et al. 2021). This allowed us to investigate oscillations across four orders of magnitude in temporal scale. We analysed imaging data from SDO/AIA, which has been collecting data almost continuously since 2010, at the full time cadence of 12s for the EUV channels of AIA. This provides an unprecedented quantity of data, which allows for the systematic analysis of BP oscillations across a large range of periods and across the solar cycle.
The paper is structured as follows. Section 2 lays out the methodology of the automated detection and tracking of BPs, with a focus on the following areas: the acquisition of the data; detection using 2D continuous wavelet transforms (CWTs); BP tracking; BP morphology; and BP time series analysis. The main results and their implications are discussed in Sect. 3. Section 4 concludes the work.
## 2 Automated analysis procedure
### Data acquisition
We focused on analysing imaging data from AIA in the 193Å channel, which covers the Fe XII and Fe XXIV emission lines and is sensitive to temperatures around 1.5 and 20 MK. We chose this channel as a compromise between (i) detecting many small-scale features, such as loop foot points, in lower temperature bandpasses and only the hottest BPs in the hotter bandpasses and (ii) maintaining a good signal-to-noise ratio. For this study, imaging data from three days, 01 January 2020 to 03 January 2020 inclusive, were used at the full time cadence of 12s. Level 1 data were used; that is, images have the CCD read-out noise (the noise of the on-chip amplifier) removed and are rotated so as to align with solar north. The images have a pixel size of 0.6 arcsec.
To find a suitable compromise between image detail and processing time during the detection phase, images were reduced in resolution by a factor of 4, from \(4096\times 4096\) to \(1024\times 1024\), using linear interpolation. The first image in a time sequence was used as the initial reference image, and the pointings of all subsequent images were aligned to it. This reduces offsets and helps keep BPs centred within sub-images during tracking. Consistent image headers were maintained. The near and off-limb areas of the image were excluded from the region of interest (ROI) by use of a circular image mask centred on the solar disk and with a radius equivalent to 0.7 \(R_{\odot}\).
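As a purely illustrative sketch of this preparation step (function and variable names are ours; block-averaging stands in for the linear interpolation that was actually used):

```python
import numpy as np

def downsample(img, factor=4):
    """Reduce a square image by block-averaging, e.g. 4096 -> 1024."""
    n = img.shape[0] // factor
    return img.reshape(n, factor, n, factor).mean(axis=(1, 3))

def disk_mask(shape, centre, radius_px, fraction=0.7):
    """Boolean mask of pixels within fraction * R_sun of disk centre."""
    y, x = np.indices(shape)
    r2 = (x - centre[0])**2 + (y - centre[1])**2
    return r2 <= (fraction * radius_px)**2
```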
### Detection using 2D continuous wavelets
For the automated detection of BPs in the images, we applied a 2D CWT to the imaging data (Antoine et al. 2002; Hochedez et al. 2002; Delouille et al. 2005; Mallat 2008; White et al. 2012). The CWT of a 2D image, \(I(\vec{r})\), is defined as

\[\mathrm{CWT}(I)(\vec{b},a,\theta)=\frac{1}{a^{n}}\iint\limits_{-\infty}^{+\infty}I(\vec{r})\ \psi\left(\frac{1}{a}R_{-\theta}(\vec{r}-\vec{b})\right)\mathrm{d}^{2}\vec{r}\, \tag{1}\]

where \(\psi(\vec{r})\) is called the mother wavelet. In the transform it is translated by a 2D displacement vector, \(\vec{b}\), scaled by the finite scale parameter \(a\), and rotated through the rotation matrix \(R_{\theta}\) at an angle \(\theta\) (Wang & Lu 2010). The transform can be defined depending on the chosen norm, which is expressed through the index \(n\). For the typical L\({}_{1}\) norm, \(n\)=2, which we use throughout this paper. In an L\({}_{1}\) norm, magnitudes of vectors are calculated as the sum of the absolute values of their complex components. The CWT corresponds to a scale-space convolution of an image with a mother wavelet.
The \(\psi(\vec{r})\) is a wavelet if it is localised both in space and in its reciprocal space, that is, \(\iint\psi(\vec{r})\,\mathrm{d}^{2}\vec{r}=0\), and if it fulfils the admissibility condition in reciprocal space, \(\iint|\hat{\psi}(\vec{k})|^{2}\,|\vec{k}|^{-2}\,\mathrm{d}^{2}\vec{k}<\infty\). We adopt the isotropic Mexican hat mother wavelet,

\[\phi(\vec{r})=\left(2-|\vec{r}|^{2}\right)\exp\left(-\frac{|\vec{r}|^{2}}{2}\right)\, \tag{2}\]

for which the rotation angle \(\theta\) is redundant. Consider the transform of a 2D Gaussian intensity profile, \(I(\vec{r})=A\exp\left(-|\vec{r}-\vec{c}|^{2}/2\sigma_{0}^{2}\right)\),
where \(A\) is the amplitude and \(\sigma_{0}\) is the width, centred on position \(\vec{c}\):
\[\mathrm{CWT}(I)(\vec{b},a) = \frac{1}{a^{2}}\iint\limits_{-\infty}^{+\infty}A\exp\left(-\frac{|\vec{r}-\vec{c}|^{2}}{2\sigma_{0}^{2}}\right)\,\phi\left(\frac{\vec{r}-\vec{b}}{a}\right)\,\mathrm{d}^{2}\vec{r} = \frac{2\pi A\left(\frac{a}{\sigma_{0}}\right)^{2}}{\left(1+\left(\frac{a}{\sigma_{0}}\right)^{2}\right)^{2}}\,\phi\left(\frac{\vec{b}-\vec{c}}{\sqrt{a^{2}+\sigma_{0}^{2}}}\right)\. \tag{3}\]
The result is a Mexican hat wavelet located at \(\vec{c}\) with a width and amplitude that depend explicitly on scale \(a\) (illustrated in Fig. 1). The transform is maximal at \(\vec{b}=\vec{c}\). At that position, the transform has a clear profile as a function of \(a\) with a maximum value of \(\pi A\) at \(a=\sigma_{0}\). For scales smaller or larger than \(\sigma_{0}\), the transform drops away sharply. This is illustrated in Fig. 2. Therefore, this wavelet can be used to differentiate point-like features across scales. Furthermore, the Mexican hat CWT of a constant or linear trend in intensity is zero due to its characteristic as a Laplacian, that is, \(\mathrm{CWT}(Axy+Bx+Cy+D)=0\). With the removal of background signals, the CWT enhances image contrast (White et al. 2012). All these characteristics make this wavelet particularly well suited for the detection of point-like intensity features in solar images.
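The scale profile shown in Fig. 2 is easy to verify numerically. The following stand-alone sketch (ours, not the analysis code) evaluates the transform of Eq. (3) at \(\vec{b}=\vec{c}\) by direct quadrature; the response peaks at \(\pi A\) for \(a=\sigma_{0}\):

```python
import numpy as np

def mexican_hat(x, y):
    r2 = x**2 + y**2
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def cwt_at_centre(a, amp=1.0, sigma0=3.0, half=40.0, n=801):
    """L1-norm Mexican hat CWT of a 2D Gaussian, evaluated at b = c."""
    s = np.linspace(-half, half, n)
    x, y = np.meshgrid(s, s)
    d = s[1] - s[0]
    gauss = amp * np.exp(-(x**2 + y**2) / (2.0 * sigma0**2))
    return (gauss * mexican_hat(x / a, y / a)).sum() * d * d / a**2

for a in (1.0, 3.0, 6.0):
    print(a, cwt_at_centre(a))  # peaks at ~pi*A for a = sigma0
```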
The CWT was applied in two stages, illustrated in Fig. 3. First, we applied a simple circular mask to the image with a diameter that is 1% shorter than that of the Sun. This obscures the bright limb detail. Then we performed a 2D CWT on that image with a scale of approximately \(a_{\mathrm{AR}}\)=90 arcsec. This scale is much larger than that of the typical BP, and of the order of active region sizes. Therefore, regions of high CWT intensity at that scale correspond to active regions. As it is easy for an automated algorithm to confuse BPs and the foot points of active region loops, we excluded active regions from the ROI by subtracting the detected regions from the image mask. An active region was designated where the intensity of the real part of the CWT is greater than the 97th percentile of the intensity. A mask of this defined area was then applied.
Secondly, a 2D CWT was performed on the image with the active regions and limb removed. Another simple circular mask with a diameter of 70% of the Sun-disk radius was applied. This prevents detections close to the limb that may be affected by the line of sight reduction and geometric projection. This means that BPs close to the 0.7R\({}_{\odot}\) boundary appear approximately 30% smaller than at disk centre. The 2D CWT was applied at the scale of \(a_{\mathrm{BP}}\)=7 arcsec, the typical scale of BPs (Golub et al. 1977; Hirzberger et al. 2008). Candidate BPs were initially detected within the second circular mask as regions at the 99th percentile of the real part of the CWT.
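In outline, one detection stage might look as follows (a hedged sketch with hypothetical helper names; the actual implementation may differ): convolve the masked image with a Mexican hat kernel at the chosen scale, threshold the result at a high percentile (97th at \(a_{\mathrm{AR}}\), 99th at \(a_{\mathrm{BP}}\)), and label the connected regions.

```python
import numpy as np
from scipy import ndimage

def mexican_hat_kernel(a, half_width):
    """Sampled 2D Mexican hat at scale a (in pixels), L1 normalisation."""
    s = np.arange(-half_width, half_width + 1)
    x, y = np.meshgrid(s, s)
    r2 = (x**2 + y**2) / a**2
    return (2.0 - r2) * np.exp(-r2 / 2.0) / a**2

def detect(image, mask, a, percentile):
    """Threshold the CWT at `percentile` inside `mask`; label regions."""
    kern = mexican_hat_kernel(a, half_width=int(4 * a))
    cwt = ndimage.convolve(image * mask, kern, mode='constant')
    thresh = np.percentile(cwt[mask], percentile)
    labels, n_regions = ndimage.label((cwt > thresh) & mask)
    return labels, n_regions
```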
### BP tracking
For each candidate BP ROI, properties such as location, total intensity, semi-minor and semi-major radii, elliptical shape, and orientation were extracted. First, we used this to further eliminate false detections. Regions with an eccentricity greater than 0.6 were removed from the ROI, thereby eliminating likely elongated loop structures and other structures that do not fit the general morphological shape of a BP, although this does not prevent a candidate BP from changing shape over its lifetime. Furthermore, regions with total pixel areas of less than 30 arcsec\({}^{2}\) were also removed as they could be short-lived, small-scale solar transients or cosmic rays. The above selection steps reduced the number of detections by about 50%. We call the remainder of the detections simply BPs.
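As an illustration, this shape filter could be written with scikit-image region properties (a sketch, not the paper's code; the conversion of 30 arcsec\({}^{2}\) to a pixel count is left symbolic):

```python
from skimage import measure

def filter_candidates(labels, min_area_px, max_eccentricity=0.6):
    """Keep labelled regions that look like BPs: nearly circular and
    larger than the minimum area (30 arcsec^2 in the text)."""
    kept = []
    for region in measure.regionprops(labels):
        if (region.eccentricity <= max_eccentricity
                and region.area >= min_area_px):
            kept.append(region)
    return kept
```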
In order to ascertain the time range over which a BP is visible, a detection procedure was repeated at 1-hour time intervals. At each hour, the found BPs were compared with those found an hour earlier. This was to distinguish between newly formed and pre-existing BPs and was achieved in the following way. A binary image of the current image's detections was subtracted from a binary image of the previous image's detections. This created a third image with the following characteristics. Newly formed BPs are designated with a value of -1, pre-existing BPs a value of 0, and BPs that disappear a value of 1. We used these values to extract the newly formed BPs, and used their average position coordinates to generate a sub-image centred on the BP, with a size in pixels approximately eight times larger than the size of the detected BP at its birth hour. This was
Figure 1: Mexican hat wavelet applied to a 2D Gaussian with increasing scales. The left panel shows a 2D Gaussian shape with width \(\sigma_{0}=3\). The remaining panels show, from left to right, the application of a Mexican hat CWT with scales of \(a\)=1, 3, and 6, respectively. The solid curve shows the 1D profile along a direction intersecting with the position of the Gaussian.
Figure 2: Profile of the CWT (\(n\)=2) at the location of a 2D Gaussian of amplitude \(A\)=1 and width \(\sigma_{0}\)=3, as a function of scale, \(a\).
a compromise intended to avoid the computational expense of an unnecessarily large sub-image but include the largest BP size found in the literature (Golub et al., 1977; Madjarska, 2019), approximately 60 arcsec. This sub-image sizing allows the BP to grow over its lifetime and remain within the sub-image. The corresponding heliographic coordinates were then used to track the BP position with time by rotating them according to the local synodic solar differential rotation rate.
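A minimal sketch of this hourly bookkeeping is given below, assuming `prev` and `curr` are boolean detection masks one hour apart; the -1/0/+1 convention follows the text, and the helper names are hypothetical.

```python
import numpy as np
from scipy import ndimage

def classify_detections(prev, curr):
    diff = prev.astype(int) - curr.astype(int)
    new_bps = diff == -1      # present now, absent an hour ago
    gone_bps = diff == 1      # present an hour ago, absent now
    return new_bps, gone_bps

def subimage_centres(new_bps):
    """Average pixel coordinates of each newly formed BP region."""
    labels, n = ndimage.label(new_bps)
    return ndimage.center_of_mass(new_bps, labels, range(1, n + 1))
```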
We then used a detection procedure and a similarity test to determine the last image in which a BP exists, as follows. The 2D CWT was applied to the first sub-image at a scale of 7 arcsec. As with the initial detection, a threshold mask was applied to the CWT image, initially at the 95th percentile. If more than one area was detected within the CWT image, the process was repeated in 0.01-percentile increments until only one area remained. This area should be the brightest point of the BP sub-image, but it may not lie at the centre of the sub-image; the sub-image was therefore re-centred on this point and the CWT re-applied. This process was then repeated on the sub-image 1 hour later, using the coordinates of the re-centred BP.
Two tests were performed to determine whether a BP is present in a sub-image. First, we determined the difference between the CWT value at the centre of the real CWT image and the minimum CWT value in the image. If this difference is small (less than 100), the centre value is close to the minimum, and a BP is unlikely to be at the centre of that image. Next, the centre value of the new BP sub-image and the standard deviation of the average of the previous and current BP sub-images were found. If the quotient of these two numbers was greater than 5 and the first criterion was met, a BP is said to exist. If either condition was not met, the BP has disappeared, and we took the time of the last known image containing the BP to be the BP's death hour. This death hour was further refined during the tracking process.
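The two presence tests can be summarised as in the sketch below, where `w` is the real part of the CWT of the current sub-image, `prev_img` and `curr_img` are the consecutive 193 Å sub-images, and the thresholds 100 and 5 are the values quoted above; a minimal sketch with a hypothetical function name, not the production test.

```python
import numpy as np

def bp_present(w, prev_img, curr_img, contrast_min=100.0, snr_min=5.0):
    cy, cx = w.shape[0] // 2, w.shape[1] // 2
    # Test 1: the CWT at the centre must stand out above the CWT minimum.
    contrast = w[cy, cx] - w.min()
    # Test 2: centre intensity against the noise of the averaged sub-images.
    noise = np.std(0.5 * (prev_img + curr_img))
    snr = curr_img[cy, cx] / noise
    return (contrast >= contrast_min) and (snr > snr_min)
```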
The birth hours were determined during the detection and comparison with previous images. The birth hours of the BPs detected in the first image of the full dataset are not known, as they lie outside the bounds of the dataset, and these BPs were eliminated from the statistics. With birth and death hours established, we created for each BP a 3D data cube from the 193Å dataset, at the full spatial and temporal resolution (0.6 arcsec per pixel and 12 s), with a restricted field of view of between 72×72 and 115×115 arcsec, in the heliographic rotating coordinates centred on the BP.
Figure 4: Sub-image of BP #149 and the corresponding CWT image. Panel (a): AIA 193Å example BP #149 on 01 January 2020 at 01:59:59 UT. Panel (b): Application of a CWT at scale \(a_{\rm BP}\). White crosses highlight the maxima in the CWT that exceed the threshold value after the application of a 2D weighted Gaussian to the CWT image. The red cross highlights the maximum closest to the centre of the image.
Figure 5: Illustration of the process of extracting a BP region in four steps. Panel (a): AIA 193Å example BP #149 on 01 January 2020 at 01:59:59 UT. Panel (b): Binary mask showing the region that exceeds the median of the whole weighted Gaussian data cube by \(6\sigma\). Panel (c): Binary mask showing the previous region expanded to include pixels down to \(3\sigma\). Panel (d): Binary mask with holes filled using morphological filtering. This is the final BP area. The temporal evolution of the BP in panel (a) is available as an online movie.
Figure 3: Full disk AIA and CWT images illustrating the detection of BPs. Panel (a): Full disk AIA 193Å on 01 January 2020 at 01:00:05 UT. Panel (b): 2D CWT at the active region scale, \(a_{\rm AR}\). The active region is detected by applying a threshold at the 97th percentile of the CWT value; this threshold area is the area within the red contour. Panel (c): 2D CWT at the BP scale, \(a_{\rm BP}\). The active region is masked by removing the area denoted by the red contour, as in panel (b). The larger black circle is applied at 0.7\(R_{\odot}\), obscuring limb, off-limb, and edge effects. In this case, the active region mask is outside the limb mask. Panel (d): Full disk AIA 193Å on 01 January 2020 at 01:00:05 UT. Candidate BPs are shown as white crosses. Masked areas are denoted by red and black contours.
### BP morphology
For each BP and at each time step, we extracted the relevant morphological characteristics. To achieve this, we needed to identify more precisely which portion of the sub-image belongs to the BP. We again applied the 2D CWT to the image with a scale of \(a\)=7 arcsec, and weighted the CWT signal by multiplying it by a 2D Gaussian of unit amplitude centred on the field of view. We then identified the BP maximum as the maximum in the image that lies within the 95\({}^{\rm th}\) percentile of the total maximum and is closest to the centre of the field of view (see Fig. 4). The 2D Gaussian weight was then centred on the location of the found BP maximum. The BP itself was identified as the region overlapping this maximum that exceeds the sub-image median by \(6\sigma\); the region is represented by a binary mask and was grown to encompass all neighbouring pixels down to \(3\sigma\). Any remaining holes in the found region were closed using morphological filtering. This BP extraction method is illustrated in Fig. 5. From the binary mask and the original 193Å sub-image, statistical properties such as average and maximum intensity, size, and shape could then be extracted for the BP. Repeating this procedure for all time steps yielded time series of BP properties.
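A minimal sketch of this extraction step follows, assuming `sub` is a single BP sub-image and `a_bp` the BP scale in pixels; the re-centring on the maximum closest to the field-of-view centre is omitted for brevity, and the Gaussian width is an illustrative assumption.

```python
import numpy as np
from scipy import ndimage
from scipy.signal import fftconvolve

def cwt2d(img, scale):
    s = int(4 * scale)
    y, x = np.mgrid[-s:s + 1, -s:s + 1]
    r2 = (x**2 + y**2) / scale**2
    kernel = (2.0 - r2) * np.exp(-r2 / 2.0)    # 2D Mexican hat
    return fftconvolve(img, kernel, mode="same")

def extract_bp_mask(sub, a_bp=12.0):
    h, w = sub.shape
    y, x = np.mgrid[:h, :w]
    # Weight the CWT by a unit-amplitude Gaussian centred on the FOV.
    g = np.exp(-((x - w / 2)**2 + (y - h / 2)**2) / (2 * (w / 4)**2))
    wt = cwt2d(sub, a_bp) * g
    med, sig = np.median(wt), np.std(wt)
    core = wt > med + 6 * sig                  # 6-sigma seed region
    loose = wt > med + 3 * sig                 # grown down to 3 sigma
    lbl, _ = ndimage.label(loose)
    keep = np.unique(lbl[core])
    mask = np.isin(lbl, keep[keep > 0])        # 3-sigma parts touching a core
    return ndimage.binary_fill_holes(mask)     # close remaining holes
```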
### BP time series analysis
The method of extracting a BP from the background, as defined in Sect. 2.4, may not successfully detect and extract the BP across the whole time series, so some further processing steps were applied before extracting morphological statistics. First, we searched each time series for instances where a BP was not extracted for eight or more consecutive images; this is approximately 96 seconds, about half the period of the frequently observed 3-minute oscillations. Since gaps of this length remove potentially important data, the time series was cut off at the beginning of the first such gap. The remaining gaps, shorter than eight images, were then filled in using linear interpolation. Second, if the interpolated points amounted to more than 15% of the number of data points, the whole time series was dismissed; this removes the statistical unreliability that comes with interpolating too many gaps in a dataset. Additionally, any BP time series with non-physical values, such as negative intensity or a zero-valued area, was dismissed. Some time series show discontinuities in the form of intensity jumps, which would dominate the 1D CWT power (Torrence & Compo 1998; Verwichte et al. 2004; Auchere et al. 2016); before such time series were analysed, a point filter was therefore applied to smooth sporadic jumps. The point filter identified outliers by comparison with the local standard deviation and replaced each outlier with a value closer to the local mean. Lastly, each time series was checked manually to ensure that anomalous time series were excluded from the analysis. Each time series of average BP intensity was then analysed using a 1D CWT and the custom noise model of Auchere et al. (2016), of the form \(\sigma(\nu)=A\nu^{s}+B\kappa_{\rho}(\nu)+C\), where the first term represents the power-law dependence of the background stochastic fluctuations, the second term is a kappa function related to pulses in the time series, and the final term corresponds to high-frequency white noise.
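A sketch of the gap rules is given below, where `y` is a 1D intensity time series with NaNs marking images in which the BP was not extracted; the 8-image and 15% thresholds are those stated above, and `clean_series` is a hypothetical helper name.

```python
import numpy as np

def clean_series(y, max_gap=8, max_fill_frac=0.15):
    y = np.asarray(y, dtype=float)
    bad = np.isnan(y)
    run = 0
    for i, b in enumerate(bad):
        run = run + 1 if b else 0
        if run == max_gap:                    # first long gap: truncate before it
            y, bad = y[: i - max_gap + 1], bad[: i - max_gap + 1]
            break
    if bad.sum() > max_fill_frac * bad.size:
        return None                           # too much to interpolate: dismiss
    idx = np.arange(y.size)
    y = y.copy()
    y[bad] = np.interp(idx[bad], idx[~bad], y[~bad])
    return y
```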
## 3 Results and discussion
This detection method found 3308 BPs over the three-day period: 1191 on 01 January 2020, 1141 on 02 January 2020, and 976 on 03 January 2020. The number of BPs used in the analysis was eventually reduced to 656.
Figure 8: Lifetime versus maximum BP diameter. A total of 656 BPs are used. A power law of the form \(D_{\rm max}=\alpha\tau^{\beta}\), where \(\alpha=7.72\pm 0.87\) and \(\beta=0.129\pm 0.011\), is shown in red, and the grey background shows the \(3\sigma\) confidence level. The root-mean-square error for the fit is 9.7. The average maximum BP diameter for each lifetime bin is shown as a white circle, with the standard deviation as the error.
Figure 6: Histogram showing the distribution of lifetimes. A power-law fitting with exponent \(-1.13\pm 0.07\) is shown in red. The grey background shows the \(2\sigma\) confidence level.
Figure 7: Histogram showing the distribution of the average BP diameter. We show a Gaussian fit with \(\mu=24.06\pm 0.19\), \(\sigma=4.93\pm 0.19\), and \(\mathrm{FWHM}=11.60\pm 0.45\).
### General BP characteristics
Figure 6 shows the distribution of BP lifetimes, with a mean lifetime of 6.8 hours and a range from 1 hour to 22 hours. We fitted a power law of the form \(y=\alpha\tau^{\beta}\) with an exponent equal to \(-1.13\pm 0.07\). The lifetimes follow an almost power-law distribution, with a good fit up to approximately 700 minutes, after which the function begins to diverge. Previous work by McIntosh & Gurman (2005) found power-law behaviour with an exponential cutoff at longer lifetimes. The exponents for the power law of McIntosh & Gurman (2005) vary with temperature and with time over the solar cycle, but fall between -1.24 and -2.00 for the 195Å channel of the Extreme ultraviolet Imaging Telescope (EIT) on board the Solar and Heliospheric Observatory (SOHO). Alipour & Safari (2015) find an exponent of -1.6. These previous exponents were found at varying stages of the solar cycle, and, considering our very small dataset, we cannot comment on the significance of our exponent in relation to the solar cycle. Our exponent differs from the literature values, perhaps because of our power-law-only fit and the much shorter maximum lifetime of our sample: McIntosh & Gurman (2005), for example, show EUV BPs with lifetimes in excess of 100 hours, whereas the maximum lifetime we find is 22 hours. This is most likely the source of the discrepancy between the power-law fits. Additionally, short-lived detections are not considered BPs within our data, as brightenings with lifetimes of less than an hour and scales of around 4 Mm could be considered coronal brightenings (Chen et al., 2021; Berghmans et al., 2021).
The average BP diameter was obtained by taking twice the mean semi-major radius of each BP across its lifetime. Similarly, the maximum BP diameter is defined as twice the maximum semi-major radius that a BP achieves across its lifetime. The mean diameter is \(24.06\pm 0.19\) arcsec, with a normal distribution of width \(\sigma=4.93\pm 0.19\) and a range from 10 arcsec to 39 arcsec, as shown in Fig. 7. Comparing the mean diameter with a BP's lifetime, we find no clear relationship. There is, however, a relationship between the maximum diameter and lifetime, as shown in Fig. 8: the distribution shows a general increase in diameter with lifetime. We binned the lifetimes in one-hour increments and calculated the mean and standard deviation of the maximum diameters for each bin. These are illustrated in Fig. 8 and show an increase from the shortest to the longest lifetimes. At the very longest lifetimes we see a decrease in the average maximum diameter, but there are only a few BPs with such long lifetimes, so the statistics are not reliable. To further characterise this relationship, we fitted the maximum diameters to a power law of the lifetimes; note that this fit was done with all the individual BP data points, not the bin averages. Our power-law fit has the form \(D_{\rm max}=\alpha\tau^{\beta}\), where
\[D_{\rm max}\ =\ (7.72\pm 0.87)\,\tau^{0.129\pm 0.011}. \tag{4}\]
Here, \(D_{\rm max}\) and \(\tau\) are the maximum BP diameter in megametres and the lifetime in seconds, respectively. The parameter errors were obtained as the standard deviations of the non-linear least-squares fit to the data, which has an RMS error of 9.7. This power law confirms the clear relationship between the maximum diameter and lifetime. That there is no clear relationship between the mean diameter and lifetime is surprising, particularly given the relationship shown in Fig. 11 of Alipour & Safari (2015); their study, however, looks at considerably smaller lifetime and spatial scales than ours. The lack of a relationship between lifetime and mean diameter, together with the clear relationship between lifetime and maximum diameter, is interesting and requires further study. Our detection method does have a bias, in that we discard BPs below a minimum diameter (\(\sim\)6.2 arcsec, corresponding to the 30 arcsec\({}^{2}\) minimum area) and a minimum lifetime (1 hour). Furthermore, at long lifetimes we have far fewer BPs, and a larger study would help with the statistics there.
Figure 9: 1D CWT applied to BP #149. The top panel shows the time series of the average BP intensity normalised by the standard deviation of the time series. The bottom-left panel shows the wavelet power, with the COI in red and the global confidence shown within the white contours. The bottom-right panel shows the normalised Fourier spectrum in grey, the global wavelet spectrum in black, the global significance level in red, and the local significance level in orange. The noise model components are shown as follows: the power law in dashed orange, the kappa function in dashed blue, and the white noise in dashed green.
Figure 10: From top to bottom: 1D CWT plots for BPs #195, #685, #735, #840, and #1378. The left panels show the average BP intensity time series, with the wavelet power below. The right panels show the Fourier spectrum in grey and the global wavelet power spectrum in black. The solid red and orange lines show the global and local wavelet significance levels, respectively. The noise model components are shown as follows: the power law in dashed orange, the kappa function in dashed blue, and the white noise in dashed green.
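For reference, a minimal sketch of the non-linear least-squares fit behind Eq. (4) follows; `lifetime_s` and `dmax_mm` are placeholders standing in for the per-BP lifetimes (s) and maximum diameters (Mm) from the BP catalogue, with synthetic values generated here only so the snippet runs.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(tau, alpha, beta):
    return alpha * tau**beta

# Placeholder data; in the pipeline these come from the BP catalogue.
rng = np.random.default_rng(0)
lifetime_s = rng.uniform(3.6e3, 8.0e4, 656)
dmax_mm = power_law(lifetime_s, 7.7, 0.13) * rng.normal(1.0, 0.3, 656)

popt, pcov = curve_fit(power_law, lifetime_s, dmax_mm, p0=(1.0, 0.1))
perr = np.sqrt(np.diag(pcov))       # one-sigma parameter uncertainties
rmse = np.sqrt(np.mean((dmax_mm - power_law(lifetime_s, *popt))**2))
```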
### Example BPs
Using BPs #149, #195, #685, #735, #840, and #1378 as examples, we applied 1D CWTs to the average BP intensity; the result for BP #149 is shown in Fig. 9. The bottom-left panel shows several periodicities with significant power appearing regularly across the lifetime of the BP, namely at periods between 1 and 10 minutes. The right panel shows a highly structured Fourier spectrum together with the wavelet power. Regions of wavelet power above the significance levels are more concentrated at the beginning of the BP's lifetime, especially at shorter periods.
The global and local significance levels are shown in Fig. 9. The global significance level represents the wavelet power that lies above a global confidence level when compared with the noise model. The local significance level represents the probability that power in a single bin is significant when compared to the noise model. Auchere et al. (2016) provide a detailed explanation of the noise model and the accompanying significance levels.
Figure 11: Illustration of the process of bandpass-filtering the BP region. The leftmost panel shows the AIA 193Å image at the beginning of the time range of interest. The remaining three panels show images at three times separated by 2 minutes. At each pixel, the time series has been bandpass-filtered around the period of 4 minutes with a Hann filter with a typical width equal to the mode frequency. The temporal evolution of the bandpass-filtered images is available as an online movie.
Figure 12: Illustration of the process of bandpass-filtering the BP region. The leftmost panel shows the AIA 193Å image at the beginning of the time range of interest. The remaining three panels show images at three times separated by 4 minutes. At each pixel, the time series has been bandpass-filtered around the period of 8 minutes with a Hann filter with a typical width equal to the mode frequency. The temporal evolution of the bandpass-filtered images is available as an online movie.
Figure 13: Illustration of the process of bandpass-filtering the BP region. The leftmost panel shows the AIA 193Å image at the beginning of the time range of interest. The remaining three panels show images at three times separated by 8 minutes. At each pixel, the time series has been bandpass-filtered around the period of 16 minutes with a Hann filter with a typical width equal to the mode frequency. The temporal evolution of the bandpass-filtered images is available as an online movie.
At shorter periods, between 1 and 10 minutes, the wavelet power lies above both the local and global significance levels. This is in contrast to longer periods, of approximately 30 and 70 minutes, where the wavelet power lies above the local significance level only.
Further examples of these wavelet power spectra are shown in Fig. 10, for BPs #195, #685, #735, #840, and #1378, which span lifetimes from 2 to over 16 hours. Some periodicities are common: BPs #685 and #735 show fairly distinct peaks, above both the local and global significance levels, in both the Fourier and wavelet power spectra at approximately 4 minutes, whereas BP #840 shows only a suggestion of a 4-minute periodicity above the local significance level. At longer periods, BPs #195, #685, and #735 show a common periodicity at approximately 30-40 minutes.
The periods of interest seen in Fig. 9 are shown for BP #149 in Figs. 11, 12, and 13. We chose to look at periods of 4, 8, and 16 minutes as they appear in the main wavelet plot as areas above the significance level. We applied a temporal bandpass filter around each period of interest, using a Hann filter with a typical width equal to the mode frequency. This reveals changes over the oscillation cycle, with the spatial structure of the three oscillation periods showing clear differences. For the 4-minute oscillation in Fig. 11, the BP shows hints of anti-phase behaviour between its two sides, which could be an \(m=1\) mode. Such modes have also been observed in sunspots (Jess et al. 2017) and chromospheric vortices (Murabito et al. 2020). While the structure of a BP is significantly different from that of a sunspot or chromospheric vortex, this behaviour could suggest leaky p modes on the eastern and western sides, which would constitute an apparent \(m=1\) mode structure. The 8- and 16-minute oscillations show a phase coherence that maps out the coronal loops in the BP. The acoustic cut-off frequencies in the quiet Sun vary in the range 4-6 mHz (3-4 minutes; Felipe & Sangeetha 2020). Intensity oscillations with similar or shorter periods are interpreted to be acoustic in nature. Those with substantially longer periods, 10 minutes and more, are unlikely to be acoustic waves propagating up from below and may instead be evanescent acoustic tails or be associated with thermal limit cycles (e.g. Foullon et al. 2004; Auchere et al. 2014; Froment et al. 2017; Verwichte & Kohutova 2017).
### General periodicity
More generally, many periodicities are detected across all of the analysed BPs. To better visualise their significance, we considered only the significant normalised wavelet power, that is, the power within the cone of influence (COI; see Fig. 9), that lasts for at least three complete periods. These powers were then summed over all BPs and normalised. Periodicities of less than 1 hour can occur in any BP, whereas longer periodicities cannot be detected in the shortest-lived BPs; we therefore applied a weighting to the total normalised power. The result can be seen in Fig. 14, where we show the normalised and weighted total significant power, separately for BPs with lifetimes greater than 1 hour and for those with lifetimes of 1 hour. Combining these BPs results in a discontinuity at 10 minutes, as this is the longest periodicity that the wavelet can detect in a 1-hour BP; periods greater than 10 minutes fall outside the COI for BPs with lifetimes of 1 hour.
Both curves show a peak close to 4 minutes (on average at 4.11 minutes). For BPs with lifetimes of 1 hour, there is another peak between 8 and 9 minutes, whereas for the remaining BPs this peak lies at 10 minutes. There is a further noticeable peak at approximately 17 minutes. At longer periods there are slight humps at approximately 28, 40, and 49 minutes, and hints of additional periodicity at approximately 65 to 75 minutes.
Ugarte-Urra et al. (2004) showed dominant periods of between 8 and 13 minutes in BP oscillations. Our 10-minute peak falls within this range; its physical nature is still uncertain, but it could be evanescent acoustic modes moving through the transition region, modes resonant in loops, or thermal limit cycles (Habbal & Withbroe 1981; Verwichte & Kohutova 2017). The peaks at 17, 28, and 65 minutes seen here have been reported by Tian et al. (2008) (16, 28, and 64 minutes). We do not see the clear peak at 32 minutes noted by Tian et al. (2008); in our case this potential peak could be too broad to be discernible in Fig. 14. Zhang et al. (2012) found a periodicity of about 1 hour, which they suggested was due to quasi-periodic recurrent flashes. The peak at 17 minutes falls within the range of 15-25 minutes seen by Chandrashekhar & Sarkar (2015) in their simulated loop and nanoflare model, in addition to the values observed by Chandrashekhar et al. (2013). The most dominant periods in Fig. 14 are those at 4 and 10 minutes. The 4-minute period is close to the 5-minute photospheric p mode, which can contribute to oscillations in the corona under certain circumstances, namely as 'leaky' p modes (De Pontieu et al. 2005). Srivastava & Dwivedi (2010) found a periodicity of 241 \(\pm\) 60 s (4.0 \(\pm\) 1.0 minutes); our 4-minute period falls within that range. Future work will focus on the multi-bandpass (multi-thermal) and spatial mode structure aspects of the BP oscillations. A comparison of the various analysis methodologies employed by different studies may also be undertaken.
Figure 14: Total significant power, normalised and weighted by BP lifetime, for oscillations lasting at least three periods, for all BPs with lifetimes greater than 1 hour (black). Blue shows BPs with lifetimes of 1 hour; their power has been scaled down by a factor of 7.
## 4 Conclusion
The aim of this work was to analyse a large set of coronal BPs using CWTs in 2D and 1D. We present a novel method for the detection and tracking of BPs using 2D CWTs. We analysed the morphology of these BPs and investigated intensity oscillations using 1D CWTs.
We find that BPs have a lifetime distribution that follows a power law, with exponent \(-1.13\pm 0.07\). We find that the relationship between a BP's lifetime and maximum diameter roughly follows a power law with exponent \(0.129\pm 0.011\). These statistical results compare well with previous studies of BPs (Alipour & Safari 2015; McIntosh & Gurman 2005). The analysis of intensity oscillations within BPs shows a broad range of significant periodicity between 1 and 100 minutes, with notable peaks at 4, 10, and 17 minutes and the suggestion of peaks at longer periods, namely 28 and 65 minutes. Further in-depth analysis is required to study the spatial mode structure of these oscillations and place constraints on their physical nature.
There is a clear relationship between a BP's area and intensity. However, the effects of limb projection are not considered here, and future work will endeavour to address this. We hope that the automated methods described here will allow for much larger statistical studies of BP intensity oscillations and their morphological characteristics in the future. Further work will also endeavour to extend the automated methods to other SDO/AIA passbands and to other instruments.
###### Acknowledgements.
The wavelet transform has been performed using the Python wavelet module by Erwin Verwichte (University of Warwick) and was supported by a UK STFC grant ST/L006324/1. We acknowledge STFC studentship ST/V506527/1, and STFC grant ST/S000518/1 to Aberystwyth University.
|
2306.00063 | Topological model for q-deformed rational number and categorification | Let $\mathbf{D}_{3}$ be a bigraded 3-decorated disk with an arc system
$\mathbf{A}$. We associate a bigraded simple closed arc
$\widehat{\eta}_{\frac{r}{s}}$ on $\mathbf{D}_{3}$ to any rational number
$\frac{r}{s}\in\overline{\mathbb{Q}}=\mathbb{Q}\cup\{\infty\}$. We show that
the right (resp. left) $q$-deformed rational numbers associated to
$\frac{r}{s}$, in the sense of Morier-Genoud-Ovsienko (resp.
Bapat-Becker-Licata) can be naturally calculated by the
$\mathfrak{q}$-intersection between $\widehat{\eta}_{\frac{r}{s}}$ and
$\mathbf{A}$ (resp. dual arc system $\mathbf{A}^*$). The Jones polynomials of
rational knots can also be given by such intersections. Moreover, the
categorification of $\widehat{\eta}_{\frac{r}{s}}$ is given by the spherical
object $X_{\frac{r}{s}}$ in the Calabi-Yau-$\mathbb{X}$ category of Ginzburg
dga of type $A_2$. Reducing to the CY-2 case, we recover the result of
Bapat-Becker-Licata with a slight improvement. | Li Fan, Yu Qiu | 2023-05-31T18:00:03Z | http://arxiv.org/abs/2306.00063v1 | # Topological model for \(\mathfrak{q}\)-deformed rational number and categorification
###### Abstract.
Let \(\mathbf{D}_{3}\) be a bigraded \(3\)-decorated disk with an arc system \(\mathbf{A}\). We associate a bigraded simple closed arc \(\widehat{\eta}_{\frac{r}{s}}\) on \(\mathbf{D}_{3}\) to any rational number \(\frac{r}{s}\in\overline{\mathbb{Q}}=\mathbb{Q}\cup\{\infty\}\). We show that the right (resp. left) \(q\)-deformed rational numbers associated to \(\frac{r}{s}\), in the sense of [13] (resp. [BBL]), can be naturally calculated by the \(\mathfrak{q}\)-intersection between \(\widehat{\eta}_{\frac{r}{s}}\) and \(\mathbf{A}\) (resp. the dual arc system \(\mathbf{A}^{\star}\)). The Jones polynomials of rational knots can also be given by such intersections. Moreover, the categorification of \(\widehat{\eta}_{\frac{r}{s}}\) is given by the spherical object \(X_{\frac{r}{s}}\) in the Calabi-Yau-\(\mathbb{X}\) category of the Ginzburg dga of type \(A_{2}\). Reducing to the CY-2 case, we recover the result of [BBL] with a slight improvement.
## 1. Introduction
The notion of (right) \(q\)-deformed rational numbers \([\frac{r}{s}]^{\sharp}\) was originally introduced by Morier-Genoud and Ovsienko in [14] via continued fractions. They also extended \(q\)-deformations to irrational numbers in [14] via a convergence property. Such \(q\)-deformations enjoy many good combinatorial properties and are related to a wide variety of areas, such as the Farey triangulation, \(F\)-polynomials of cluster algebras and the Jones polynomial of rational (two-bridge) knots [14]. Motivated by the study of compactifications of spaces of stability conditions, Bapat, Becker and Licata [BBL] introduced a twin notion, the left \(q\)-deformation \([\frac{r}{s}]^{\flat}\), which shares all the good properties of \([\frac{r}{s}]^{\sharp}\). They showed that the two \(q\)-deformations can both be described via the action of \(\operatorname{PSL}_{2,\mathfrak{q}}(\mathbb{Z})\) by fractional linear transformations. Moreover, the Farey graph plays an important role in the definition of \(q\)-deformations, with edges assigned weights according to certain iterative rules [14].
On the other hand, the homotopy classes of simple closed curves on a torus with at most one boundary component can be parameterized by \(\overline{\mathbb{Q}}=\mathbb{Q}\cup\{\infty\}\). We aim to give a topological realization of \(q\)-deformations and their categorification. The topological model we use is the decorated surface \(\mathbf{S}_{\triangle}\) with bigrading introduced by Khovanov and Seidel in [10]. The bigrading of arcs provides bi-indices for their intersections, which we call \(\mathfrak{q}\)-intersections. We consider the \(A_{2}\) case, where \(\mathbf{S}_{\triangle}=\mathbf{D}_{3}\) is a disk with three decorations and the set of simple closed arcs on \(\mathbf{D}_{3}\) can be parameterized by \(\overline{\mathbb{Q}}\). We show that the right/left \(q\)-deformed rationals can be naturally calculated by the \(\mathfrak{q}\)-intersections between corresponding arcs (Theorem 3.18 and Theorem 3.21). The topological realization directly implies many combinatorial properties of \(q\)-deformations, including positivity and specialization (Corollary 3.23). Surprisingly, the bi-index always collapses to a single index, which is not obvious from the construction/definition of the \(\mathfrak{q}\)-intersection.
For the categorification, we consider the Calabi-Yau-\(\mathbb{X}\) category \(\mathcal{D}_{\mathbb{X}}(\mathbf{S}_{\triangle})\) associated to \(\mathbf{S}_{\triangle}\) (cf. [11, 12]), which is the perfect valued derived category of the bigraded Ginzburg algebra constructed from \(\mathbf{S}_{\triangle}\). The \(\mathbb{X}\)-spherical objects in \(\mathcal{D}_{\mathbb{X}}(\mathbf{S}_{\triangle})\) correspond to the bigraded simple closed arcs in \(\mathbf{S}_{\triangle}\), and the \(\mathfrak{q}\)-dimensions of their Hom-spaces equal the \(\mathfrak{q}\)-intersections between the corresponding arcs [11, 12]. One can specialize \(\mathbb{X}=N\), and \(\mathcal{D}_{\mathbb{X}}(\mathbf{S}_{\triangle})\) becomes a Calabi-Yau-\(N\) category, for any integer \(N\geq 2\). When \(N=3\), \(\mathcal{D}_{3}(\mathbf{S}_{\triangle})\) provides an additive categorification of cluster algebras of surface type (cf. e.g. [13]). When \(\mathbf{S}_{\triangle}=\mathbf{D}_{3}\) and \(N=2\), we recover the result of [1] (with a slight improvement).
The paper is organized as follows. In section 2, we recall several equivalent definitions of the left and right \(\mathfrak{q}\)-deformed rationals from [10, 11], via continued fractions, the action of \(\operatorname{PSL}_{2,\mathfrak{q}}(\mathbb{Z})\), and the \(\mathfrak{q}\)-weighted Farey graph, respectively. In section 3, we recall the graded decorated surface in the sense of [11, 12] and prove the main results. In section 4, we give the categorification, and in section 5 we discuss reduction and the relation with Jones polynomials.
**Acknowledgment.** LF is grateful to YQ for leading her into this research area and providing her with much help and supervision. This work is inspired by the work of Morier-Genoud, Ovsienko [10] and Bapat, Becker, Licata [BBL]. YQ is supported by National Key R&D Program of China (No.2020YFA0713000) and National Natural Science Foundation of China (No.12031007 and No.12271279).
## 2. \(\mathfrak{q}\)-deformed rationals and Farey graph
In the paper, we fix the following conventions.
**Conventions.**:
* Let \(\mathfrak{q}\) be a formal parameter.
* A rational number always belongs to \(\overline{\mathbb{Q}}:=\mathbb{Q}\cup\{\infty\}\). We also denote \(\overline{\mathbb{Q}^{\geq 0}}:=\mathbb{Q}^{\geq 0}\cup\{\infty\}\). We usually state the results for \(\overline{\mathbb{Q}}\) but prove the non-negative case since the negative case holds by symmetry.
* We denote a rational number by \(\frac{r}{s}\), including the exceptional cases when \(0=\frac{0}{1}\) and \(\infty=\frac{1}{0}\). We assume that \(\frac{r}{s}\) is irreducible.
### Right and left \(\mathfrak{q}\)-deformed rationals
We first recall the definitions of the right and left \(\mathfrak{q}\)-deformations of rational numbers via finite continued fractions and formulate their basic properties. A positive rational \(\frac{r}{s}\) can be expressed as a continued fraction
\[\frac{r}{s}\quad=\quad a_{1}+\frac{1}{a_{2}+\frac{1}{\ddots+ \frac{1}{a_{2m}}}}\colon=[a_{1},\dots,a_{2m}], \tag{2.1}\]
for \(a_{1}\in\mathbb{N}\) and \(a_{2},\cdots,a_{2m}\in\mathbb{N}\setminus\{0\}\), which is known as the (_regular_) continued fraction (expression). For the exceptional cases, we denote \(0=[-1,1]\) and \(\infty=[\;]\), the empty continued fraction.
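For concreteness, here is a short Python sketch computing the even-length expansion (2.1) of a rational \(\frac{r}{s}\); the final parity adjustment uses the identity \([a_{1},\ldots,a_{k}]=[a_{1},\ldots,a_{k}-1,1]\), and it also reproduces the convention \(0=[-1,1]\). The function name is an assumption for illustration.

```python
from fractions import Fraction

def even_cf(r, s):
    """Even-length continued fraction of r/s, e.g. 5/2 -> [2, 2], 5/3 -> [1, 1, 1, 1]."""
    a, x = [], Fraction(r, s)
    while True:
        n = x.numerator // x.denominator
        a.append(n)
        if x == n:
            break
        x = 1 / (x - n)
    if len(a) % 2 == 1:          # force even length: [.., ak] = [.., ak - 1, 1]
        a[-1] -= 1
        a.append(1)
    return a
```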
For a non-negative integer \(a\), the _right \(\mathfrak{q}\)-deformation_ is defined as
\[[a]^{\sharp}_{\mathfrak{q}}:=\frac{1-\mathfrak{q}^{a}}{1-\mathfrak{q}}=1+ \mathfrak{q}+\mathfrak{q}^{2}+\cdots+\mathfrak{q}^{a-1},\]
and the corresponding _left \(\mathfrak{q}\)-deformation_ is defined as
\[[a]^{\flat}_{\mathfrak{q}}:=\frac{1-\mathfrak{q}^{a-1}+\mathfrak{q}^{a}- \mathfrak{q}^{a+1}}{1-\mathfrak{q}}=1+\mathfrak{q}+\cdots+\mathfrak{q}^{a-2} +\mathfrak{q}^{a}.\]
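These two \(\mathfrak{q}\)-integers are easy to experiment with symbolically; a minimal sympy sketch (hypothetical function names), with, for example, \([3]^{\sharp}_{\mathfrak{q}}=1+\mathfrak{q}+\mathfrak{q}^{2}\) and \([3]^{\flat}_{\mathfrak{q}}=1+\mathfrak{q}+\mathfrak{q}^{3}\):

```python
import sympy as sp

q = sp.symbols('q')

def qint_sharp(a):
    """Right q-integer: 1 + q + ... + q^(a-1)."""
    return sum(q**k for k in range(a))

def qint_flat(a):
    """Left q-integer: 1 + q + ... + q^(a-2) + q^a."""
    return qint_sharp(a) - q**(a - 1) + q**a
```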
**Definition 2.1** ([1, 2]).: Let \(\frac{r}{s}\in\mathbb{Q}^{+}\) be a rational number with continued fraction expansion \([a_{1},\ldots,a_{2m}]\).
\(1^{\circ}\). We define its _right \(\mathfrak{q}\)-deformation_ by the following formula:
\[\left[\frac{r}{s}\right]^{\sharp}_{\mathfrak{q}}:=[a_{1}]^{\sharp}_{\mathfrak{q}}+\frac{\mathfrak{q}^{a_{1}}}{[a_{2}]^{\sharp}_{\mathfrak{q}^{-1}}+\frac{\mathfrak{q}^{-a_{2}}}{[a_{3}]^{\sharp}_{\mathfrak{q}}+\frac{\mathfrak{q}^{a_{3}}}{\ddots+\frac{\mathfrak{q}^{a_{2m-1}}}{[a_{2m}]^{\sharp}_{\mathfrak{q}^{-1}}}}}}\quad. \tag{2.2}\]
\(2^{\circ}\). We define its _left \(\mathfrak{q}\)-deformation_ by the same expansion, except that the innermost term uses the left \(\mathfrak{q}\)-integer \([a_{2m}]^{\flat}_{\mathfrak{q}^{-1}}\):
\[\left[\frac{r}{s}\right]^{\flat}_{\mathfrak{q}}:=[a_{1}]^{\sharp}_{\mathfrak{q}}+\frac{\mathfrak{q}^{a_{1}}}{[a_{2}]^{\sharp}_{\mathfrak{q}^{-1}}+\frac{\mathfrak{q}^{-a_{2}}}{[a_{3}]^{\sharp}_{\mathfrak{q}}+\frac{\mathfrak{q}^{a_{3}}}{\ddots+\frac{\mathfrak{q}^{a_{2m-1}}}{[a_{2m}]^{\flat}_{\mathfrak{q}^{-1}}}}}}\quad. \tag{2.3}\]
We normalize them as
\[\left[\frac{r}{s}\right]^{\sharp}_{\mathfrak{q}}=\frac{\mathbf{R}^{\sharp}_{ \mathfrak{q}}(r/s)}{\mathbf{S}^{\sharp}_{\mathfrak{q}}(r/s)},\quad\left[\frac {r}{s}\right]^{\flat}_{\mathfrak{q}}=\frac{\mathbf{R}^{\flat}_{\mathfrak{q}}( r/s)}{\mathbf{S}^{\flat}_{\mathfrak{q}}(r/s)},\]
so that the denominators are polynomials in \(\mathfrak{q}\) with non-zero constant term. For \(0\) and \(\infty\), we set
\[\mathbf{R}^{\sharp}_{\mathfrak{q}}(0)=0,\quad\mathbf{S}^{\sharp}_ {\mathfrak{q}}(0)=1,\quad\mathbf{R}^{\flat}_{\mathfrak{q}}(0)=\mathfrak{q}-1,\quad\mathbf{S}^{\flat}_{\mathfrak{q}}(0)=\mathfrak{q};\] \[\mathbf{R}^{\sharp}_{\mathfrak{q}}(\infty)=1,\quad\mathbf{S}^{ \sharp}_{\mathfrak{q}}(\infty)=0,\quad\mathbf{R}^{\flat}_{\mathfrak{q}}( \infty)=1,\quad\mathbf{S}^{\flat}_{\mathfrak{q}}(\infty)=1-\mathfrak{q}.\]
Next, we consider the group
\[\mathrm{PSL}_{2}(\mathbb{Z})=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}|a,b,c,d\in\mathbb{Z},ad-bc=1\right\},\]
which is generated by
\[t_{1}:=\begin{pmatrix}1&1\\ 0&1\end{pmatrix},\quad t_{2}:=\begin{pmatrix}1&0\\ -1&1\end{pmatrix}.\]
It acts on rational numbers \(\overline{\mathbb{Q}}\) by linear fractional transformation as
\[\begin{pmatrix}a&b\\ c&d\end{pmatrix}\cdot\left(\frac{r}{s}\right)=\frac{ar+bs}{cr+ds}, \tag{2.4}\]
where \(\frac{r}{s}\in\overline{\mathbb{Q}}\) and \(\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{PSL}_{2}(\mathbb{Z})\). For a rational number \(\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}\) with continued fraction expansion as (2.1), it is well-known that (cf. [1, Proposition 2.2])
\[\frac{r}{s} =t_{1}^{a_{1}}t_{2}^{-a_{2}}t_{1}^{a_{3}}t_{2}^{-a_{4}}\cdots t_{1}^{a _{2m-1}}t_{2}^{-a_{2m}}(\frac{1}{0}). \tag{2.5}\]
**Proposition/Definition 2.2** ([1, 15]).: _Consider the \(\mathfrak{q}\)-deformation \(\mathrm{PSL}_{2,\mathfrak{q}}(\mathbb{Z})\) of \(\mathrm{PSL}_{2}(\mathbb{Z})\), which is generated by_
\[t_{1,\mathfrak{q}}=\begin{pmatrix}\mathfrak{q}&1\\ 0&1\end{pmatrix},\quad t_{2,\mathfrak{q}}=\begin{pmatrix}1&0\\ -\mathfrak{q}&\mathfrak{q}\end{pmatrix}.\]
_For a rational number \(\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}\) with expression (2.1), we have_
\[\begin{cases}\left[\frac{r}{s}\right]_{\mathfrak{q}}^{\sharp}&=t_{1, \mathfrak{q}}^{a_{1}}t_{2,\mathfrak{q}}^{-a_{2}}t_{1,\mathfrak{q}}^{a_{3}}t_{2,\mathfrak{q}}^{-a_{4}}\cdots t_{1,\mathfrak{q}}^{a_{2m-1}}t_{2,\mathfrak{q}}^ {-a_{2m}}\left(\frac{1}{0}\right),\\ \left[\frac{r}{s}\right]_{\mathfrak{q}}^{\flat}&=t_{1, \mathfrak{q}}^{a_{1}}t_{2,\mathfrak{q}}^{a_{2}}t_{1,\mathfrak{q}}^{a_{3}}t_{2,\mathfrak{q}}^{-a_{4}}\cdots t_{1,\mathfrak{q}}^{a_{2m-1}}t_{2,\mathfrak{q}}^ {-a_{2m}}\left(\frac{1}{1-\mathfrak{q}}\right).\end{cases} \tag{2.6}\]
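Formula (2.6) can be checked symbolically. The sketch below (sympy, hypothetical helper name) computes both deformations of \(\frac{5}{2}=[2,2]\) by acting on homogeneous coordinates, with \((1,0)\) giving the right deformation and \((1,1-\mathfrak{q})\) the left one; it recovers \([\frac{5}{2}]^{\sharp}_{\mathfrak{q}}=\frac{1+2\mathfrak{q}+\mathfrak{q}^{2}+\mathfrak{q}^{3}}{1+\mathfrak{q}}\) and \([\frac{5}{2}]^{\flat}_{\mathfrak{q}}=\frac{1+\mathfrak{q}+\mathfrak{q}^{2}+\mathfrak{q}^{3}+\mathfrak{q}^{4}}{1+\mathfrak{q}^{2}}\).

```python
import sympy as sp

q = sp.symbols('q')
t1 = sp.Matrix([[q, 1], [0, 1]])
t2 = sp.Matrix([[1, 0], [-q, q]])

def act(M, vec):
    """Fractional linear action of M on homogeneous coordinates vec."""
    v = M * sp.Matrix(vec)
    return sp.cancel(v[0] / v[1])

M = t1**2 * t2**(-2)                  # 5/2 = [2, 2]
print(act(M, (1, 0)))                 # (q^3 + q^2 + 2*q + 1)/(q + 1)
print(act(M, (1, 1 - q)))             # (q^4 + q^3 + q^2 + q + 1)/(q^2 + 1)
```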
### \(\mathfrak{q}\)-deformations via Farey graph
The classical _Farey graph_ FG is an infinite graph with vertices set
\[\mathrm{FG}_{0}=\overline{\mathbb{Q}}.\]
There is an edge between \(\frac{p}{q}\) and \(\frac{u}{v}\) if and only if \(pv-uq=\pm 1\) (see Figure 1). If \(\frac{p}{q}\) and \(\frac{u}{v}\) are connected by an edge, we define their _Farey sum_ by
\[\frac{p}{q}\oplus\frac{u}{v}:=\frac{p+u}{q+v}.\]
Moreover, \(\mathrm{FG}_{0}\) is parametrized by the homotopy classes of simple closed curves on a torus with at most one boundary component, and the edges correspond to pairs of such curves with intersection number one. \(\operatorname{PSL}_{2}(\mathbb{Z})\) acts on FG by (2.4), taking edges to edges. In particular, if \(T\in\mathrm{PSL}_{2}(\mathbb{Z})\) takes the form
\[T_{\frac{r}{s}}=\begin{pmatrix}1+rs&-r^{2}\\ s^{2}&1-rs\end{pmatrix},\]
then it is a rotation which fixes \(\frac{r}{s}\).
**Lemma/Definition 2.3** ([1, Section 2.2]).: _Let \(\frac{r}{s}\in\mathbb{Q}^{+}\) be any rational number with continued fraction expansion as (2.1). Then it can be uniquely written as Farey sum of two rationals \(\frac{p}{q},\frac{u}{v}\in\overline{\mathbb{Q}^{\geq 0}}\), i.e._
\[\frac{r}{s}=\frac{p}{q}\oplus\frac{u}{v},\]
_with \(uq-pv=1\) and \(\frac{p}{q}<\frac{r}{s}<\frac{u}{v}\). In fact,_
\[\frac{p}{q}=\begin{cases}[a_{1},a_{2},\ldots,a_{2m-2}+1],&\text{ if }a_{2m-1}=1\text{ and }m>1;\\ [a_{1},a_{2},\ldots,a_{2m-1}-1,1],&\text{ otherwise},\end{cases} \tag{2.7}\]
_and_
\[\frac{u}{v}=\begin{cases}[a_{1},a_{2},\ldots,a_{2m-1},a_{2m}-1],&\text{ if }a_{2m}\geq 2;\\ [a_{1},a_{2},\ldots,a_{2m-2}],&\text{ if }a_{2m}=1.\end{cases} \tag{2.8}\]
_Moreover, there is an associated integer defined as_
\[l=l(\frac{r}{s})=\begin{cases}0,&\text{ if }a_{2m}\geq 2;\\ a_{2m-1},&\text{ if }a_{2m}=1.\end{cases} \tag{2.9}\]
_In particular, for the decomposition \(n+1=\frac{n}{1}\oplus\frac{1}{0}\) we have \(l(n+1)=n-1\)._
On the other hand, \(l\) can also be defined for an edge in FG connecting \(\frac{p}{q}\) and \(\frac{u}{v}\), provided \(\frac{p}{q}<\frac{u}{v}\). More precisely, \(l(\frac{p}{q},\frac{u}{v}):=l(\frac{p}{q}\oplus\frac{u}{v})\).
As in [1], we assign a weight to each edge of the Farey graph, that goes along with the right or left \(\mathfrak{q}\)-deformations associated to vertices. Then the right and left \(\mathfrak{q}\)-deformations can also be defined via \(\mathfrak{q}\)-Farey sum.
**Proposition/Definition 2.4** ([1]).: _Let \(\frac{r}{s}\in\mathbb{Q}^{+}\) be a rational with the decomposition \(\frac{r}{s}=\frac{p}{q}\oplus\frac{u}{v}\) and \(l=l(\frac{p}{q},\frac{u}{v})\) as above. For right \(\mathfrak{q}\)-deformation, we have_
\[\mathbf{R}_{\mathfrak{q}}^{\sharp}(\frac{r}{s}):=\mathbf{R}_{\mathfrak{q}}^{ \sharp}(\frac{p}{q})+\mathfrak{q}^{l+1}\,\mathbf{R}_{\mathfrak{q}}^{\sharp}( \frac{u}{v}),\quad\mathbf{S}_{\mathfrak{q}}^{\sharp}(\frac{r}{s}):=\mathbf{S }_{\mathfrak{q}}^{\sharp}(\frac{p}{q})+\mathfrak{q}^{l+1}\mathbf{S}_{ \mathfrak{q}}^{\sharp}(\frac{u}{v}). \tag{2.10}\]
Figure 1. The Farey graph
_For the left \(\mathfrak{q}\)-deformation, if we set \(\overline{\mathbf{R}}^{\flat}_{\mathfrak{q}}(x)=\mathbf{R}^{\flat}_{\mathfrak{q}}(x)\) and \(\overline{\mathbf{S}}^{\flat}_{\mathfrak{q}}(x)=\mathbf{S}^{\flat}_{\mathfrak{q}}(x)\) for \(x\in\{0,\infty\}\), and define_
\[\overline{\mathbf{R}}^{\flat}_{\mathfrak{q}}(\frac{r}{s}):=\overline{\mathbf{ R}}^{\flat}_{\mathfrak{q}}(\frac{p}{q})+(\mathfrak{q}^{-1})^{l+1}\overline{ \mathbf{R}}^{\flat}_{\mathfrak{q}}(\frac{u}{v}),\quad\overline{\mathbf{S}}^{ \flat}_{\mathfrak{q}}(\frac{r}{s}):=\overline{\mathbf{S}}^{\flat}_{\mathfrak{ q}}(\frac{p}{q})+(\mathfrak{q}^{-1})^{l+1}\overline{\mathbf{S}}^{\flat}_{ \mathfrak{q}}(\frac{u}{v}), \tag{2.11}\]
_then we have_
\[\left[\frac{r}{s}\right]^{\flat}_{\mathfrak{q}}=\frac{\overline{\mathbf{R}}^ {\flat}_{\mathfrak{q}}(r/s)}{\overline{\mathbf{S}}^{\flat}_{\mathfrak{q}}(r/s )}.\]
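The \(\mathfrak{q}\)-Farey rule (2.10) can likewise be verified on examples. The sketch below (sympy, hypothetical helper name) checks \(\frac{5}{3}=\frac{3}{2}\oplus\frac{2}{1}\), with \(l=1\) from (2.9) since \(\frac{5}{3}=[1,1,1,1]\), against the normalised numerators and denominators produced by the matrix formula (2.6).

```python
import sympy as sp

q = sp.symbols('q')
t1 = sp.Matrix([[q, 1], [0, 1]])
t2 = sp.Matrix([[1, 0], [-q, q]])

def RS_sharp(word):
    """Normalised (R, S) for a word of t1^a / t2^-a factors applied to (1, 0)."""
    M = sp.eye(2)
    for t, a in word:
        M = M * (t1**a if t == 1 else t2**(-a))
    v = M * sp.Matrix([1, 0])
    r, s = sp.fraction(sp.cancel(v[0] / v[1]))
    return sp.expand(r), sp.expand(s)

R53, S53 = RS_sharp([(1, 1), (2, 1), (1, 1), (2, 1)])   # 5/3 = [1, 1, 1, 1]
R32, S32 = RS_sharp([(1, 1), (2, 2)])                    # 3/2 = [1, 2]
R2, S2 = RS_sharp([(1, 1), (2, 1)])                      # 2   = [1, 1]
assert R53 == sp.expand(R32 + q**2 * R2)                 # (2.10) with l = 1
assert S53 == sp.expand(S32 + q**2 * S2)
```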
We assign a weight to each edge and a right or left \(\mathfrak{q}\)-deformation to each vertex of the Farey graph; these are drawn in Figure 2 and Figure 3, respectively. Here the integer \(l=l(\frac{p}{q},\frac{u}{v})\) is as above.
**Remark 2.5**.: For \(\frac{r}{s}\in\mathbb{Q}^{+}\) with continued fraction expression \([a_{1},\dots,a_{2m}]\), we have
\[-\frac{r}{s}=[-a_{1},\dots,-a_{2m}], \tag{2.12}\]
and
\[-\frac{r}{s}=t_{1}^{-a_{1}}t_{2}^{a_{2}}t_{1}^{-a_{3}}t_{2}^{a_{4}}\cdots t_{1} ^{-a_{2m-1}}t_{2}^{a_{2m}}(\frac{1}{0}). \tag{2.13}\]
Figure 3. The left \(\mathfrak{q}\)-deformation via \(\mathfrak{q}\)-Farey sum
Figure 2. The right \(\mathfrak{q}\)-deformation via \(\mathfrak{q}\)-Farey sum
The right and left \(\mathfrak{q}\)-deformations for negative rational numbers are defined as:
\[\begin{cases}\left[-\frac{r}{s}\right]_{\mathfrak{q}}^{\sharp}&=t_{1, \mathfrak{q}}^{-a_{1}}t_{2,\mathfrak{q}}^{a_{2}}t_{1,\mathfrak{q}}^{-a_{3}}t_{2,\mathfrak{q}}^{a_{4}}\cdots t_{1,\mathfrak{q}}^{-a_{2m-1}}t_{2,\mathfrak{q}}^{ a_{2m}}\left(\frac{1}{0}\right),\\ \left[-\frac{r}{s}\right]_{\mathfrak{q}}^{\flat}&=t_{1,\mathfrak{q}}^{-a_{1}} t_{2,\mathfrak{q}}^{a_{2}}t_{1,\mathfrak{q}}^{-a_{3}}t_{2,\mathfrak{q}}^{a_{4}} \cdots t_{1,\mathfrak{q}}^{-a_{2m-1}}t_{2,\mathfrak{q}}^{a_{2m}}\left(\frac{1} {1-\mathfrak{q}}\right).\end{cases} \tag{2.14}\]
In fact, we can obtain \(\mathfrak{q}\)-deformations of negative rationals from positive ones by the following formula:
\[\left[-\frac{r}{s}\right]_{\mathfrak{q}}^{*}:=-\mathfrak{q}^{-1}\left[\frac{ r}{s}\right]_{\mathfrak{q}^{-1}}^{*}, \tag{2.15}\]
where \(*\in\{\sharp,\flat\}\).
## 3. The topological model
In this section, we introduce decorated (marked) surfaces as the topological model which we will use. We first summarize the setting and results in [11, 12] and then show that the \(\mathfrak{q}\)-intersections of certain arcs describe the left/right \(\mathfrak{q}\)-deformations for rational numbers.
### Decorated surfaces
Let \(\mathbf{S}\) be an oriented surface with non-empty boundary \(\partial\mathbf{S}\) and we denote its interior by \(\mathbf{S}^{\circ}=\mathbf{S}\setminus(\partial\mathbf{S})\). We decorate \(\mathbf{S}\) with a finite set \(\triangle\) of points (_decorations_) in \(\mathbf{S}^{\circ}\), denoted by \(\mathbf{S}_{\triangle}\).
Let \(\mathbf{S}_{\triangle}^{\circ}=\mathbf{S}\setminus(\partial\mathbf{S}\cup \triangle)\). An _arc_\(c\) in \(\mathbf{S}_{\triangle}\) is a curve \(c:[0,1]\to\mathbf{S}\) such that \(c(t)\in\mathbf{S}_{\triangle}^{\circ}\) for any \(t\in(0,1)\). The _inverse_\(\overline{c}\) of an arc \(c\) is defined as \(\overline{c}(t)=c(1-t)\) for any \(t\in[0,1]\).
**Definition 3.1**.: A _closed arc_\(c\) is an arc whose endpoints \(c(0)\) and \(c(1)\) are in \(\triangle\). It is _simple_ if moreover it satisfies \(c(0)\neq c(1)\), without self-intersections in \(\mathbf{S}_{\triangle}^{\circ}\). We denote by \(\mathrm{CA}(\mathbf{S}_{\triangle})\) the set of simple closed arcs.
In this paper, we always consider arcs up to taking inverse and homotopy relative to endpoints and exclude the arcs which are isotopic to a point in \(\mathbf{S}_{\triangle}\). Two arcs are in _minimal position_ if their intersection is minimal in the homotopy class. For three simple closed arcs, they form a _contractible triangle_ if they bound a disk which is contractible. For \(\sigma,\tau\in\mathrm{CA}(\mathbf{S}_{\triangle})\), we use the notations
\[\mathrm{Int}_{\Pi}(\sigma,\tau):=\left|\sigma\cap\tau\cap\Pi\right|,\]
where \(\Pi=\triangle\) or \(\mathbf{S}_{\triangle}^{\circ}\), a subset of \(\mathbf{S}_{\triangle}\), and
\[\mathrm{Int}_{\mathbf{S}_{\triangle}}(\sigma,\tau):=\frac{1}{2}\cdot\mathrm{ Int}_{\triangle}(\sigma,\tau)+\mathrm{Int}_{\mathbf{S}_{\triangle}^{\circ}}( \sigma,\tau). \tag{3.1}\]
The _mapping class group_\(\mathrm{MCG}(\mathbf{S}_{\triangle})\) of a decorated surface \(\mathbf{S}_{\triangle}\) consists of the isotopy classes of the homeomorphisms of \(\mathbf{S}\) which fix \(\partial\mathbf{S}\) pointwise and fix \(\triangle\) setwise. For any \(\alpha\in\mathrm{CA}(\mathbf{S}_{\triangle})\), the associated _braid twist_\(B_{\alpha}\in\mathrm{MCG}(\mathbf{S}_{\triangle})\) is defined in Figure 4. We have the formula
\[B_{\Psi(\alpha)}=\Psi\circ B_{\alpha}\circ\Psi^{-1} \tag{3.2}\]
for any \(\alpha\in\mathrm{CA}(\mathbf{S}_{\triangle})\) and \(\Psi\in\mathrm{MCG}(\mathbf{S}_{\triangle})\). We define \(\mathrm{BT}(\mathbf{S}_{\triangle})\) to be the subgroup of \(\mathrm{MCG}(\mathbf{S}_{\triangle})\) generated by \(B_{\alpha}\) for \(\alpha\in\mathrm{CA}(\mathbf{S}_{\triangle})\). The braid twist can be illustrated by smoothing out intersections, cf. Construction 3.2 and Figure 5.
**Construction 3.2**.: For any \(\sigma,\tau\in\operatorname{CA}(\mathbf{S}_{\triangle})\) with \(\sigma(0)=\tau(0)=z\in\triangle\), the extension \(\sigma\wedge\tau\) of \(\sigma\) by \(\tau\) (with respect to the common starting point) is defined in Figure 5; it is the operation of smoothing out the intersection, moving from \(\sigma\) to \(\tau\) clockwise.
Notice that if \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\sigma,\tau)=\frac{1}{2}\), i.e. they only intersect at one decoration, then
\[\sigma\wedge\tau=B_{\tau}(\sigma)=B_{\sigma}^{-1}(\tau). \tag{3.3}\]
**Lemma 3.3**.: _Assume that \(|\triangle|\geq 3\). For any \(\eta\in\operatorname{CA}(\mathbf{S}_{\triangle})\), we have_
\[\operatorname{CA}(\mathbf{S}_{\triangle})=\operatorname{BT}(\mathbf{S}_{ \triangle})\cdot\{\eta\}. \tag{3.4}\]
Proof.: Let \(\xi\) be a simple closed arc in \(\operatorname{CA}(\mathbf{S}_{\triangle})\). We notice that the intersection \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\xi)\in\frac{1}{2}\cdot\mathbb{Z}_{\geq 0}\). When \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\xi)>0\), we use induction on it. For the starting case when \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\xi)=\frac{1}{2}\), we take \(\alpha=\xi\wedge\eta\in\operatorname{CA}(\mathbf{S}_{\triangle})\), and then \(\xi=B_{\alpha}(\eta)\) by (3.3).
Figure 4. The braid twist
Figure 5. The extension as smoothing out
Now suppose the assertion holds when \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\xi)<k\) and consider the case when \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\xi)=k\in\frac{1}{2}\cdot\mathbb{Z}^{+}\). There exists some decoration \(z\) which is not an endpoint of \(\xi\) (if the endpoints of \(\eta\) and \(\xi\) do not coincide, we take \(z\) to be an endpoint of \(\eta\)), and we connect \(z\) to \(\xi\) by an arc \(l\) intersecting \(\xi\) at a point \(p\) (see Figure 6). We cut \(\xi\) at \(p\) and connect the two resulting parts with \(l\) respectively. Then we obtain \(\xi_{1},\xi_{2}\in\operatorname{CA}(\mathbf{S}_{\triangle})\) such that \(\xi=\xi_{1}\wedge\xi_{2}\) and
\[\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\xi)=\operatorname{Int}_{ \mathbf{S}_{\triangle}}(\eta,\xi_{1})+\operatorname{Int}_{\mathbf{S}_{ \triangle}}(\eta,\xi_{2}). \tag{3.5}\]
By assumption, there exists \(b\in\operatorname{BT}(\mathbf{S}_{\triangle})\) such that \(\xi_{1}=b(\eta)\). Thus \(\xi=B_{\xi_{2}}(\xi_{1})=(B_{\xi_{2}}\cdot b)(\eta)\).
Finally, we consider the case when \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\xi)=0\). We can choose a simple closed arc \(\alpha\) such that \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\eta,\alpha)=\operatorname{Int}_{ \mathbf{S}_{\triangle}}(\xi,\alpha)=\frac{1}{2}\). By the starting case, we have \(\alpha\in\operatorname{BT}(\mathbf{S}_{\triangle})\cdot\eta\) and \(\xi\in\operatorname{BT}(\mathbf{S}_{\triangle})\cdot\alpha\subset \operatorname{BT}(\mathbf{S}_{\triangle})\cdot\eta\).
**The branched double cover**. Let \(\Sigma_{\Delta}\) be a branched double cover of \(\mathbf{S}_{\triangle}\) branching at decorations. When there exists extra structure (e.g. a quadratic differential), \(\Sigma_{\Delta}\) can be constructed as the spectral cover (cf. [KS]). We denote the covering map by \(\pi:\Sigma_{\Delta}\to\mathbf{S}_{\triangle}\).
We consider the special case when \(\mathbf{S}_{\triangle}=\mathbf{D}_{3}\) is a disk and \(\triangle=\{z_{\infty},z_{*},z_{0}\}\). We fix two initial simple closed arcs \(\eta_{0}\) and \(\eta_{\infty}\) such that \(\eta_{0}\cap\eta_{\infty}=\{z_{*}\}\), see Figure 7. Notice that there is an anti-clockwise angle from \(\eta_{\infty}\) to \(\eta_{0}\). The branched double cover \(\Sigma_{\Delta}\) is a torus with one boundary \(\partial_{\Sigma}\).
Figure 6. Decomposing \(\xi\)
For simplicity, we draw \(\partial_{\Sigma}\) as a puncture in the figures. We take the \(\mathbb{Z}^{2}\)-covering \(\widetilde{\Sigma_{\Delta}}\) of \(\Sigma_{\Delta}\), where the white area is a fundamental domain (see Figure 8), and we denote the covering map by \(\widetilde{\pi}:\widetilde{\Sigma_{\Delta}}\to\Sigma_{\Delta}\). Forgetting the punctures and decorations, \(\widetilde{\Sigma_{\Delta}}\) is the universal cover of the torus. Hence we embed \(\widetilde{\Sigma_{\Delta}}\) into \(\mathbb{R}^{2}\) such that all decorations and punctures are integer points, so that its fundamental domain is a unit square. Each line in \(\widetilde{\Sigma_{\Delta}}\) with rational slope \(\frac{r}{s}\in\overline{\mathbb{Q}}\) (not passing through the punctures) becomes a simple closed curve \(C_{\frac{r}{s}}\) in \(\Sigma_{\Delta}\) under the map \(\widetilde{\pi}\).
**Lemma 3.4** ([10, §10]).: _The set \(\operatorname{CA}(\mathbf{D}_{3})\) of simple closed arcs in \(\mathbf{D}_{3}\) can be parameterized by rational numbers, i.e. there is a bijection_
\[\operatorname{CA}(\mathbf{D}_{3})\xrightarrow{\ \cong\ }\overline{\mathbb{Q}}, \tag{3.6}\]
_sending \(\eta_{\frac{r}{s}}\) to \(\frac{r}{s}\), which is equivariant with respect to the action of \(\operatorname{Br}_{3}/\operatorname{Z}(\operatorname{Br}_{3})\) on \(\operatorname{CA}(\mathbf{D}_{3})\) and the action of \(\operatorname{PSL}_{2}(\mathbb{Z})\cong\operatorname{Br}_{3}/\operatorname{Z}(\operatorname{Br}_{3})\) on \(\overline{\mathbb{Q}}\)._
Proof.: We lift simple closed arcs in \(\operatorname{CA}(\mathbf{D}_{3})\) to simple closed curves in \(\Sigma_{\Delta}\), which can be parametrized by rational numbers in \(\overline{\mathbb{Q}}\). That is, for any \(\frac{r}{s}\in\overline{\mathbb{Q}}\), there exists an \(\eta_{\frac{r}{s}}\in\operatorname{CA}(\mathbf{D}_{3})\) which corresponds to a simple closed curve \(C_{\frac{r}{s}}\) in \(\Sigma_{\Delta}\). Notice that the homology group \(H_{1}(\Sigma_{\Delta})=\mathbb{Z}[C_{\infty}]\oplus\mathbb{Z}[C_{0}]\), so the homology class \([C_{\frac{r}{s}}]\) corresponds to \((r,s)\) in \(H_{1}(\Sigma_{\Delta})\cong\mathbb{Z}^{2}\). The braid twist group \(\operatorname{BT}(\mathbf{D}_{3})\cong\operatorname{Br}_{3}\) lifts to the Dehn twist group \(\operatorname{DT}(\Sigma_{\Delta})\subset\operatorname{MCG}(\Sigma_{\Delta})\), which is generated by the Dehn twists along \(C_{0}\) and \(C_{\infty}\). By the identification \(\operatorname{DT}(\Sigma_{\Delta})/\operatorname{Z}(\operatorname{DT}(\Sigma_{\Delta}))\cong\operatorname{PSL}_{2}(\mathbb{Z})\), the lemma holds.
**Remark 3.5**.: Here is a consequence of the lemma above. For a rational number \(\frac{r}{s}\in\mathbb{Q}^{+}\) with expression (2.5), the corresponding arc in \(\operatorname{CA}(\mathbf{D}_{3})\) is
\[\eta_{\frac{r}{s}}=B_{\eta_{\infty}}^{a_{1}}B_{\eta_{0}}^{-a_{2}}B_{\eta_{ \infty}}^{a_{3}}B_{\eta_{0}}^{-a_{4}}\cdots B_{\eta_{\infty}}^{a_{2m-1}}B_{ \eta_{0}}^{-a_{2m}}(\eta_{\infty}). \tag{3.7}\]
Moreover,
\[\eta_{-\frac{r}{s}}=B_{\eta_{\infty}}^{-a_{1}}B_{\eta_{0}}^{a_{2}}B_{\eta_{ \infty}}^{-a_{3}}B_{\eta_{0}}^{a_{4}}\cdots B_{\eta_{\infty}}^{-a_{2m-1}}B_{ \eta_{0}}^{a_{2m}}(\eta_{\infty}). \tag{3.8}\]
### Bigraded arcs and \(\mathfrak{q}\)-intersections
Let \(\mathbf{S}_{\triangle}\) be a decorated surface. In this section, we define the bigraded arcs and their \(\mathfrak{q}\)-intersections. Let \(\mathbb{P}T\mathbf{S}_{\triangle}=\mathbb{P}T(\mathbf{S}\setminus\triangle)\) be the real projectivization of the tangent bundle of \(\mathbf{S}\setminus\triangle\). We want to introduce a particular covering of \(\mathbb{P}T\mathbf{S}_{\triangle}\) with covering group \(\mathbb{Z}\oplus\mathbb{Z}\mathbb{X}\cong\mathbb{Z}^{2}\). A _grading_\(\Lambda:\mathbf{S}_{\triangle}\to\mathbb{P}T\mathbf{S}_{\triangle}\) on \(\mathbf{S}_{\triangle}\) is determined by a class in \(\mathrm{H}_{1}(\mathbb{P}T\mathbf{S}_{\triangle},\mathbb{Z}\oplus\mathbb{Z} \mathbb{X})\), with value \(1\) on each anti-clockwise loop \(\{p\}\times\mathbb{R}\mathbb{P}^{1}\) on \(\mathbb{P}T_{p}\mathbf{S}_{\triangle}\) for \(p\notin\triangle\) and value \(-2+\mathbb{X}\) on each anti-clockwise loop \(l_{z}\times\{x\}\) on \(\mathbf{S}_{\triangle}\) around any \(z\in\triangle,x\in\mathbb{R}\mathbb{P}^{1}\). For any simple loop \(\alpha\) on \(\mathbf{S}_{\triangle}\), we denote \(\Lambda_{1}(\alpha)\) the \(\mathbb{Z}\) part of \(\Lambda(\alpha)\) and denote \(\Lambda_{2}(\alpha)\) the \(\mathbb{Z}\mathbb{X}\) part of \(\Lambda(\alpha)\). In fact, the first grading is a line field \(\lambda\) of \(\mathbf{S}_{\triangle}\) which is determined by a class in \(\mathrm{H}_{1}(\mathbb{P}T\mathbf{S}_{\triangle},\mathbb{Z})\). Define \(\widehat{\mathbb{P}T\mathbf{S}_{\triangle}}\) to be the \(\mathbb{Z}\oplus\mathbb{Z}\mathbb{X}\) covering of \(\mathbb{P}T\mathbf{S}_{\triangle}\) classified by the grading \(\Lambda\), and denote the \(\mathbb{Z}\oplus\mathbb{Z}\mathbb{X}\) action on \(\widehat{\mathbb{P}T\mathbf{S}_{\triangle}}\) by \(\chi\).
**Definition 3.6**.: A _graded decorated surface_\(\mathbf{S}_{\triangle}^{\Lambda}\) consists of a decorated surface \(\mathbf{S}_{\triangle}\) and a grading \(\Lambda\) on \(\mathbf{S}_{\triangle}\).
Let \(\mathbf{S}_{\triangle}^{\Lambda}\) be a graded decorated surface and \(c:[0,1]\to\mathbf{S}\) be an arc in \(\mathbf{S}\). There is a canonical section \(s_{c}:c\setminus\triangle\to\mathbb{P}T\mathbf{S}_{\triangle}\) given by \(s_{c}(z)=T_{z}c\). A _bigrading_ on \(c\) is given by a lift \(\widehat{c}\) of \(s_{c}\) to \(\widehat{\mathbb{P}T\mathbf{S}_{\triangle}}\). The pair \((c,\widehat{c})\) is called a _bigraded arc_, and we usually denote it by \(\widehat{c}\). Note that the lifts of \(c\) form a \(\mathbb{Z}^{2}\)-orbit under the action defined by \(\chi\): the shift grading satisfies \(\widehat{c}[m](t)=\chi(m,0)\widehat{c}(t)\) and the \(\mathbb{X}\)-grading satisfies \(\widehat{c}\{m\}(t)=\chi(0,m)\widehat{c}(t)\) for any \(m\in\mathbb{Z}\). For any \(\eta\in\mathrm{CA}(\mathbf{S}_{\triangle})\), we call any of its lifts \(\widehat{\eta}\) in \(\widehat{\mathbb{P}T\mathbf{S}_{\triangle}}\) a _bigraded simple closed arc_. Denote by \(\widehat{\mathrm{CA}}(\mathbf{S}_{\triangle})\) the set of bigraded simple closed arcs.
For any bigraded arcs \(\widehat{\sigma}\) and \(\widehat{\tau}\) which are in minimal position with respect to each other, let \(p=\sigma(t_{1})=\tau(t_{2})\in\mathbf{S}^{\circ}\) be a point where \(\sigma\) and \(\tau\) intersect transversally. Fix a small circle \(a\subset\mathbf{S}\setminus\triangle\) around \(p\). Let \(\alpha:[0,1]\to a\) be an embedded arc which moves anti-clockwise around \(p\), such that \(\alpha\) intersects \(\sigma\) and \(\tau\) at \(\alpha(0)\) and \(\alpha(1)\), respectively (cf. Figure 9). If \(p\in\triangle\), then \(\alpha\) is unique up to a change of parametrization (see the left part of Figure 9); otherwise there are two possibilities, which are distinguished by their endpoints (see the right part of Figure 9). Take a smooth path \(\rho:[0,1]\to\mathbb{P}T\mathbf{S}_{\triangle}\) with \(\rho(t)\in\mathbb{P}T_{\alpha(t)}\mathbf{S}_{\triangle}\) for all \(t\), going from \(\rho(0)=T_{\alpha(0)}\sigma\) to \(\rho(1)=T_{\alpha(1)}\tau\), such that \(\rho(t)\neq T_{\alpha(t)}\alpha\) for all \(t\). Lift \(\rho\) to a path \(\widehat{\rho}:[0,1]\to\widehat{\mathbb{P}T\mathbf{S}_{\triangle}}\) with \(\widehat{\rho}(0)=\widehat{\sigma}(\alpha(0))\). Then there exist some integers \(\varrho,\varsigma\in\mathbb{Z}\) such that \(\widehat{\tau}(\alpha(1))=\chi(\varrho+\varsigma\mathbb{X})\widehat{\rho}(1)\).
Figure 9. Intersection at \(p\) when \(p\) is a decoration or not
**Definition 3.7** ([Ks]).: For any bigraded arcs \(\widehat{\sigma}\) and \(\widehat{\tau}\) in \(\mathbf{S}^{\Lambda}_{\triangle}\), we call an intersection \(p\) of \(\widehat{\sigma}\) and \(\widehat{\tau}\) with _bi-index_
\[\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\tau})= \operatorname{ind}_{p}(\widehat{\sigma},\widehat{\tau})+\operatorname{ind}_{p} ^{\mathbb{X}}(\widehat{\sigma},\widehat{\tau})\mathbb{X}, \tag{3.9}\]
where \(\operatorname{ind}_{p}(\widehat{\sigma},\widehat{\tau}):=\varrho\) and \(\operatorname{ind}_{p}^{\mathbb{X}}(\widehat{\sigma},\widehat{\tau}):=\varsigma\) are defined as above.
We have the following equations among bi-indices, which will be used later.
**Lemma 3.8** ([13, Lemma 2.6]).: _Let \(\widehat{\sigma},\widehat{\tau}\) be bigraded arcs in \(\mathbf{S}^{\Lambda}_{\triangle}\) with an intersection \(p\in\mathbf{S}^{\circ}\). If \(p\notin\triangle\), we have_
\[\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\tau})+ \operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\sigma})=1. \tag{3.10}\]
_If \(p\in\triangle\), we have_
\[\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\tau})+ \operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\sigma})= \mathbb{X}. \tag{3.11}\]
**Lemma 3.9** ([13, Lemma 2.7]).: _Let \(\widehat{\sigma},\widehat{\tau},\widehat{\alpha}\) be bigraded arcs in \(\mathbf{S}^{\Lambda}_{\triangle}\). If they are positioned as in the left picture of Figure 10, we have_
\[\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\alpha})=\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\tau})+\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\alpha}). \tag{3.12}\]
_If they are positioned as in the left picture of Figure 9, we have_
\[\begin{split}\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{ \sigma},\widehat{\tau})&=\operatorname{ind}_{\alpha(0)}^{ \mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\alpha})-\operatorname{ind}_{ \alpha(1)}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\alpha})\\ &=\operatorname{ind}_{\alpha(1)}^{\mathbb{Z}^{2}}(\widehat{ \alpha},\widehat{\tau})-\operatorname{ind}_{\alpha(0)}^{\mathbb{Z}^{2}}( \widehat{\alpha},\widehat{\sigma}).\end{split} \tag{3.13}\]
**Lemma 3.10**.: _Let \(\widehat{\sigma},\widehat{\tau},\widehat{\alpha}\) be bigraded arcs on \(\mathbf{S}^{\Lambda}_{\triangle}\) which share the same decoration \(z\) and sit in anti-clockwise order as in the right picture of Figure 10. We have_
\[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\alpha})= \operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\tau})+ \operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\alpha}). \tag{3.14}\]
Proof.: Fix a small circle \(a\subset\mathbf{S}\setminus\triangle\) around \(z\). Let \(\widehat{\theta}:[0,1]\to a\) be an embedded bigraded arc winding anti-clockwise around \(z\), such that the underlying arc \(\theta\) intersects \(\sigma,\tau\) and \(\alpha\) at
Figure 10. Bigraded arcs intersecting at the same point (or decoration) in anti-clockwise order
\(\theta(0),\theta(\frac{1}{2})\) and \(\theta(1)\) respectively (see the right one of Figure 10). The arc \(\widehat{\theta}\) is unique up to a change of parametrization. By Lemma 3.9, we have
\[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{ \alpha}) =\operatorname{ind}_{\theta(0)}^{\mathbb{Z}^{2}}(\widehat{\sigma}, \widehat{\theta})-\operatorname{ind}_{\theta(1)}^{\mathbb{Z}^{2}}(\widehat{ \alpha},\widehat{\theta})\] \[=[\operatorname{ind}_{\theta(0)}^{\mathbb{Z}^{2}}(\widehat{ \sigma},\widehat{\theta})-\operatorname{ind}_{\theta(\frac{1}{2})}^{\mathbb{Z }^{2}}(\widehat{\tau},\widehat{\theta})]+[\operatorname{ind}_{\theta(\frac{1}{ 2})}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\theta})-\operatorname{ind}_{ \theta(1)}^{\mathbb{Z}^{2}}(\widehat{\alpha},\widehat{\theta})]\] \[=\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\sigma}, \widehat{\tau})+\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\tau}, \widehat{\alpha}).\]
**Definition 3.11**.: For \(\widehat{\tau},\widehat{\eta}\in\widehat{\operatorname{CA}}(\mathbf{S}_{\triangle})\) satisfying \(\operatorname{Int}_{\mathbf{S}_{\triangle}}(\tau,\eta)=\frac{1}{2}\), let \(z\in\triangle\) be their common endpoint and set \(a=\operatorname{ind}_{z}^{\mathbb{X}}(\widehat{\tau},\widehat{\eta})\). Denote by \(\widehat{\tau}\wedge\widehat{\eta}\) the bigraded arc in \(\widehat{\operatorname{CA}}(\mathbf{S}_{\triangle})\) whose underlying arc is obtained by smoothing out \(\widehat{\tau}\cup\widehat{\eta}[(a-1)\mathbb{X}]\) at \(z\) and whose grading is inherited from \(\widehat{\tau}\). That is, we have \(\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\tau}\wedge\widehat{\eta})=0\), cf. Figure 11.
**Proposition 3.12**.: _For \(\widehat{\tau},\widehat{\eta}\) and \(\widehat{\tau}\wedge\widehat{\eta}\) in \(\widehat{\operatorname{CA}}(\mathbf{S}_{\triangle})\) as in Definition 3.11, we have_
\[\operatorname{ind}_{v_{2}}^{\mathbb{Z}^{2}}(\widehat{\tau},\widehat{\tau} \wedge\widehat{\eta})+\operatorname{ind}_{v_{1}}^{\mathbb{Z}^{2}}(\widehat{ \tau}\wedge\widehat{\eta},\widehat{\eta})+\operatorname{ind}_{v_{3}}^{ \mathbb{Z}^{2}}(\widehat{\eta},\widehat{\tau})=1.\]
Proof.: We calculate the two gradings separately. The first grading is a line field, determined by a class in \(\operatorname{H}_{1}(\mathbb{P}T\mathbf{S}_{\triangle},\mathbb{Z})\). We can simultaneously identify the projectivized tangent spaces at all points of the contractible triangle formed by the three arcs (away from the decorations). Hence the first gradings sum to \(1\) (one full anti-clockwise rotation). For the second grading, we use the log surface in the sense of [12, SS2.4]. By Definition 3.11, the segments of \(\widehat{\tau}\wedge\widehat{\eta}\) and \(\widehat{\eta}[(a-1)\mathbb{X}]\) near \(v_{1}\) are in the same sheet of \(\log(\mathbf{S}_{\triangle})\) and the anti-clockwise angle does not cross the cut (cf. [12, Figure 5]). Thus we have
\[\operatorname{ind}_{v_{2}}^{\mathbb{X}}(\widehat{\tau},\widehat{\tau}\wedge\widehat{\eta})+\operatorname{ind}_{v_{1}}^{\mathbb{X}}(\widehat{\tau}\wedge\widehat{\eta},\widehat{\eta})+\operatorname{ind}_{v_{3}}^{\mathbb{X}}(\widehat{\eta},\widehat{\tau})=0+(a-1)+(1-\operatorname{ind}_{v_{3}}^{\mathbb{X}}(\widehat{\tau},\widehat{\eta}))=(a-1)+(1-a)=0\]
as required, where \(\operatorname{ind}_{?}^{\mathbb{X}}\) denotes the second grading.
**Notations 3.13**.: For \((\varrho,\varsigma)\in\mathbb{Z}^{2}\), we denote by \(\cap^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma},\widehat{\tau})\) the set of intersections between \(\widehat{\sigma}\) and \(\widehat{\tau}\) with bi-index \(\varrho+\varsigma\mathbb{X}\). We will use the notations
\[\operatorname{Int}_{\Pi}^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma}, \widehat{\tau})\colon\;=\left|\cap^{\varrho+\varsigma\mathbb{X}}(\widehat{ \sigma},\widehat{\tau})\cap\Pi\right|,\]
\[\operatorname{Int}_{\mathbf{S}_{\triangle}}^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma},\widehat{\tau})\colon\;=\frac{1}{2}\cdot\operatorname{Int}_{\triangle}^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma},\widehat{\tau})+\operatorname{Int}_{\mathbf{S}_{\triangle}^{\circ}}^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma},\widehat{\tau}),\]
Figure 11. The sum of bi-indices of intersections between bigraded arcs via smoothing out
for the intersection numbers of bi-index \((\varrho+\varsigma\mathbb{X})\) in any subset \(\Pi\) of \(\mathbf{S}_{\triangle}\), and the weighted count (decorations counted with weight \(\frac{1}{2}\)) over all of \(\mathbf{S}^{\circ}\), respectively. The _total intersection_
\[\operatorname{Int}_{?}(\widehat{\sigma},\widehat{\tau})=\sum_{\varrho,\varsigma \in\mathbb{Z}}\operatorname{Int}_{?}^{\varrho+\varsigma\mathbb{X}}(\widehat{ \sigma},\widehat{\tau})\]
is the sum over all bi-indices, where \(?=\triangle\) or \(\mathbf{S}^{\circ}_{\triangle}\).
**Definition 3.14** ([11, 12]).: Let \(q_{1}\) and \(q_{2}\) be two formal parameters. The \(\mathbb{Z}^{2}\)_-graded \(\mathfrak{q}\)-intersection_ of \(\widehat{\sigma},\widehat{\tau}\in\widehat{\operatorname{CA}}(\mathbf{S}_{ \triangle})\) is defined to be
\[\operatorname{Int}^{\mathfrak{q}}(\widehat{\sigma},\widehat{\tau})=\sum_{ \varrho,\varsigma\in\mathbb{Z}}q_{1}^{\varrho}q_{2}^{\varsigma}\cdot \operatorname{Int}_{\triangle}^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma},\widehat{\tau})+(1+q_{1}^{-1}q_{2})\sum_{\varrho,\varsigma\in\mathbb{Z}}q_{ 1}^{\varrho}q_{2}^{\varsigma}\cdot\operatorname{Int}_{\mathbf{S}^{\circ}_{ \triangle}}^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma},\widehat{\tau}). \tag{3.15}\]
Note that we have \(\operatorname{Int}^{\mathfrak{q}}(-,-)\mid_{q_{1}=q_{2}=1}=2\operatorname{Int} _{\mathbf{S}_{\triangle}}(-,-)=\operatorname{Int}_{\Sigma_{\triangle}}(-,-)\), where \(\Sigma_{\triangle}\) is the branched double cover of \(\mathbf{S}_{\triangle}\).
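To make the bookkeeping in (3.15) concrete, here is a minimal sketch that assembles the \(\mathfrak{q}\)-intersection from lists of bi-indices, storing a Laurent polynomial in \(q_{1},q_{2}\) as a map from exponent pairs \((\varrho,\varsigma)\) to coefficients. The function name and the sample input (a hypothetical reading of intersection data, chosen only to reproduce the first polynomial of Example 3.19 below) are ours.

```python
from collections import Counter

def q_intersection(dec_biindices, int_biindices):
    """Assemble the q-intersection of (3.15) as a Counter mapping the
    exponent pair (rho, sigma) of q1^rho * q2^sigma to its coefficient."""
    poly = Counter()
    # an intersection at a decoration with bi-index rho + sigma*X
    # contributes the single monomial q1^rho * q2^sigma
    for (rho, sigma) in dec_biindices:
        poly[(rho, sigma)] += 1
    # an interior intersection carries the extra factor (1 + q1^{-1} q2)
    for (rho, sigma) in int_biindices:
        poly[(rho, sigma)] += 1
        poly[(rho - 1, sigma + 1)] += 1
    return poly

# hypothetical data: three decoration intersections with bi-indices
# 1, X and 3 - 2X give q1 + q2 + q1^3 q2^{-2}
print(q_intersection([(1, 0), (0, 1), (3, -2)], []))
```

Setting \(q_{1}=q_{2}=1\) returns \(|{\cap_{\triangle}}|+2|{\cap_{\mathbf{S}^{\circ}_{\triangle}}}|=2\operatorname{Int}_{\mathbf{S}_{\triangle}}\), matching the remark above.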
### Left \(\mathfrak{q}\)-deformations as \(\mathfrak{q}\)-intersections
Recall that \(\mathbf{D}_{3}\) is a disk and \(\triangle=\{z_{\infty},z_{*},z_{0}\}\). By Lemma 3.3, we can label simple closed arcs by \(\overline{\mathbb{Q}}\) as follows. We fix two initial bigraded simple closed arcs and denote them \(\widehat{\eta}_{0}\) and \(\widehat{\eta}_{\infty}\) such that \(\operatorname{ind}_{z_{*}}(\widehat{\eta}_{\infty},\widehat{\eta}_{0})=1\) (see Figure 13).
**Construction 3.15**.: We define a map
\[\begin{split}\widehat{\eta}:\overline{\mathbb{Q}}&\rightarrow\widehat{\operatorname{CA}}(\mathbf{D}_{3})\\ &\frac{r}{s}\mapsto\widehat{\eta}_{\frac{r}{s}},\end{split} \tag{3.16}\]
as follows. For any rational \(\frac{r}{s}\in\mathbb{Q}^{+}\), by Lemma/Definition 2.3 we know that it can be uniquely written as
\[\frac{r}{s}=\frac{p}{q}\oplus\frac{u}{v}.\]
We iteratively define that
\[\widehat{\eta}_{\frac{r}{s}}:=B_{\eta_{\frac{u}{v}}}(\widehat{\eta}_{\frac{p}{q}})=\widehat{\eta}_{\frac{p}{q}}\wedge\widehat{\eta}_{\frac{u}{v}}, \tag{3.17}\]
noticing that \(\operatorname{Int}_{\mathbf{D}_{3}}(\eta_{\frac{p}{q}},\eta_{\frac{u}{v}})=\frac{1}{2}\). For the negative case, we set \(\widehat{\eta}_{-\infty}:=\widehat{\eta}_{\infty}\) and define
\[\widehat{\eta}_{-\frac{r}{s}}:=B_{\eta_{-\frac{p}{q}}}(\widehat{\eta}_{-\frac {u}{v}}). \tag{3.18}\]
Thus, we get some bigraded closed arcs in \(\widehat{\operatorname{CA}}(\mathbf{D}_{3})\) which are \(\widehat{\eta}_{\frac{r}{s}}[\varrho+\varsigma\mathbb{X}]\), where \(\varrho,\varsigma\in\mathbb{Z}\) and \(\frac{r}{s}\in\overline{\mathbb{Q}}\).
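For computations it is convenient to make the decomposition \(\frac{r}{s}=\frac{p}{q}\oplus\frac{u}{v}\) explicit. The following sketch (the helper name is ours) recovers the Farey parents of a positive rational in lowest terms by solving \(us-rv=1\); the integer \(l(\frac{p}{q},\frac{u}{v})\) of Lemma/Definition 2.3 is not recomputed here.

```python
from math import gcd

def farey_parents(r, s):
    """Return (p, q, u, v) with r/s = (p+u)/(q+v) and u*q - p*v = 1,
    i.e. the unique decomposition r/s = p/q (+) u/v into Farey parents."""
    assert r >= 1 and s >= 1 and gcd(r, s) == 1
    # solve u*s - r*v = 1 with 0 < u <= r; then p = r - u, q = s - v
    u = pow(s, -1, r) if r > 1 else 1
    v = (u * s - 1) // r
    return r - u, s - v, u, v

print(farey_parents(3, 2))  # (1, 1, 2, 1), i.e. 3/2 = 1/1 (+) 2/1
```

Note that \(uq-pv=u(s-v)-(r-u)v=us-rv=1\), so the output indeed satisfies the condition of Setting 3.16 below.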
Let \(\frac{r}{s}\in\mathbb{Q}^{+}\) with \(\frac{r}{s}=\frac{p}{q}\oplus\frac{u}{v}\). By Definition 3.11, the grading of the new arc \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) inherits the grading of \(\widehat{\eta}_{\frac{p}{q}}\). That is, for any bigraded arc \(\widehat{\sigma}\) intersecting \(\widehat{\eta}_{\frac{p}{q}}\) and \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) at \(p_{1},p_{2}\in\mathbf{S}^{\circ}\) respectively (see Figure 12), we have
\[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat {\eta}_{\frac{p}{q}\oplus\frac{u}{v}})=0,\quad\operatorname{ind}_{p_{1}}^{ \mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\eta}_{\frac{p}{q}})=\operatorname{ ind}_{p_{2}}^{\mathbb{Z}^{2}}(\widehat{\sigma},\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}), \tag{3.19}\]
where \(z\in\triangle\) is the common endpoint of \(\widehat{\eta}_{\frac{p}{q}}\) and \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\). The second equation in (3.19) follows from the fact that \(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) and \(\widehat{\eta}_{\frac{u}{v}}\) form a contractible triangle.
From Proposition 3.12, we know that these three simple closed arcs \(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) and \(\widehat{\eta}_{\frac{u}{v}}\) (take \(\widehat{\tau}=\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}=\widehat{\eta}_{\frac{u}{v}}\) in Figure 11) satisfy
\[\operatorname{ind}_{v_{2}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}}, \widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}})+\operatorname{ind}_{v_{1}}^{ \mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}},\widehat{\eta}_{ \frac{u}{v}})+\operatorname{ind}_{v_{3}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{ \frac{u}{v}},\widehat{\eta}_{\frac{p}{q}})=1, \tag{3.20}\]
where \(v_{1},v_{2},v_{3}\) are the corresponding intersecting decorations.
We fix the following setting.
**Setting 3.16**.: Recall that in Lemma/Definition 2.3, any fraction \(\frac{r}{s}\in\mathbb{Q}^{+}\) can be uniquely written as
\[\frac{r}{s}=\frac{p}{q}\oplus\frac{u}{v},\]
with \(\frac{p}{q},\frac{u}{v}\in\overline{\mathbb{Q}^{\geq 0}},uq-pv=1\) and an associated integer \(l(\frac{p}{q},\frac{u}{v})\). The corresponding arcs in \(\widehat{\operatorname{CA}}(\mathbf{D}_{3})\) of these fractions are \(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}, \widehat{\eta}_{\frac{u}{v}}\), where \(\widehat{\eta}_{\frac{p}{q}}\) and \(\widehat{\eta}_{\frac{u}{v}}\) intersect at only one decoration \(z\) in \(\triangle\). We do not distinguish \(\frac{r}{s}\) and \(\frac{p}{q}\oplus\frac{u}{v}\) in the following.
**Lemma 3.17**.: _For any two arcs \(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{u}{v}}\) in \(\widehat{\operatorname{CA}}(\mathbf{D}_{3})\) as in Setting 3.16, the bi-index at their common decoration is of the form \(l(1-\mathbb{X})\) where \(l\in\mathbb{N}\), i.e._
\[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat {\eta}_{\frac{u}{v}})=l(1-\mathbb{X}), \tag{3.21}\]
_except the special case_
\[\operatorname{ind}_{z_{*}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{0},\widehat{\eta }_{\infty})=\mathbb{X}-1. \tag{3.22}\]
_Here \(l=l(\frac{p}{q},\frac{u}{v})\) in Lemma/Definition 2.3._
Proof.: For the special case, we know that
\[\operatorname{ind}_{z_{*}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{0},\widehat{\eta }_{\infty})=\mathbb{X}-\operatorname{ind}_{z_{*}}^{\mathbb{Z}^{2}}(\widehat{ \eta}_{\infty},\widehat{\eta}_{0})=\mathbb{X}-1,\]
from (3.11). In general, we prove the lemma by induction on \(l\). For initial bigraded simple closed arcs \(\widehat{\eta}_{0},\widehat{\eta}_{\frac{1}{1}}\) and \(\widehat{\eta}_{\infty}\) in \(\widehat{\operatorname{CA}}(\mathbf{D}_{3})\), we have \(\operatorname{ind}_{z_{0}}(\widehat{\eta}_{0},\widehat{\eta}_{\frac{1}{1}})= \operatorname{ind}_{z_{\infty}}(\widehat{\eta}_{\frac{1}{1}},\widehat{\eta }_{\infty})=0\) and thus the lemma holds obviously. We assume that (3.21) holds for \(l\) and consider the \(l+1\) case. For any \(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{u}{v}}\) in \(\widehat{\operatorname{CA}}(\mathbf{D}_{3})\) in Setting 3.16, we assume that they intersect at only one decoration \(v_{3}\) with
\[\operatorname{ind}_{v_{3}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}}, \widehat{\eta}_{\frac{u}{v}})=l(1-\mathbb{X}), \tag{3.23}\]
Figure 12. The bigrading inherited from braid twist
where \(l\in\mathbb{N}\). By (3.11), we have
\[\operatorname{ind}_{v_{3}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{u}{v}},\widehat{\eta}_{\frac{p}{q}})=(l+1)\mathbb{X}-l.\]
Since the grading of \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) inherits the grading of \(\widehat{\eta}_{\frac{p}{q}}\), we have
\[\operatorname{ind}_{v_{2}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}})=0. \tag{3.24}\]
By (3.20), we deduce that
\[\begin{split}\operatorname{ind}_{v_{1}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}},\widehat{\eta}_{\frac{u}{v}})&=1-0-[(l+1)\mathbb{X}-l]\\ &=(l+1)-(l+1)\mathbb{X}.\end{split} \tag{3.25}\]
Finally, combining (3.24) and (3.25), the lemma holds for the two new pairs of arcs, which completes the induction.
**Theorem 3.18**.: _For any rational number \(\frac{r}{s}\in\overline{\mathbb{Q}}\), we have_
\[\left[\frac{r}{s}\right]_{\mathfrak{q}}^{\flat}=\frac{\varepsilon\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{r}{s}},\widehat{\eta}_{0})}{\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{r}{s}},\widehat{\eta}_{\infty})}\Big{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}}, \tag{3.26}\]
_corresponding to the left \(\mathfrak{q}\)-deformation of \(\frac{r}{s}\), where_
\[\varepsilon=\begin{cases}q_{1}^{-1},&\frac{r}{s}\geq 0,\\ -1,&\frac{r}{s}<0.\end{cases}\]
_In particular, for \(\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}\), we have_
\[\left\{\begin{array}{ll}\overline{\mathbf{R}}_{\mathfrak{q}}^{\flat}(\frac{r}{s})&=q_{1}^{-1}\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{r}{s}},\widehat{\eta}_{0})\big{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}},\\ \overline{\mathbf{S}}_{\mathfrak{q}}^{\flat}(\frac{r}{s})&=\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{r}{s}},\widehat{\eta}_{\infty})\big{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}}.\end{array}\right. \tag{3.27}\]
Proof.: For \(0\) and \(\infty\), we have
\[\operatorname{Int}_{z_{*}}^{\mathfrak{q}}(\widehat{\eta}_{0},\widehat{\eta}_ {\infty})=q_{1}^{-1}q_{2}\text{ and }\operatorname{Int}_{z_{*}}^{\mathfrak{q}}(\widehat{\eta}_{\infty},\widehat{ \eta}_{0})=q_{1}\]
by definition. We set
\[\overline{\mathbf{R}}_{\mathfrak{q}}^{\flat}(0)=q_{2}-q_{1}\text{ and }\overline{\mathbf{S}}_{\mathfrak{q}}^{\flat}(\infty)=1-q_{1}^{-1}q_{2}.\]
We prove the non-negative case by induction using Setting 3.16. At the starting step, the fractions in the theorem are \(\frac{q_{1}^{-1}q_{2}-1}{q_{1}^{-1}q_{2}}\) and \(\frac{1}{1-q_{1}^{-1}q_{2}}\), which coincide with the left \(\mathfrak{q}\)-deformations of \(0\) and \(\infty\) respectively when \(\mathfrak{q}=q_{1}^{-1}q_{2}\). We assume that the formula (3.26) holds for \(\frac{p}{q},\frac{u}{v}\in\overline{\mathbb{Q}^{\geq 0}}\) in Setting 3.16 which satisfy
\[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat {\eta}_{\frac{u}{v}})=l(1-\mathbb{X}),\]
where \(l=l(\frac{p}{q}\oplus\frac{u}{v})\in\mathbb{N}\) and \(z\) is the common endpoint of \(\widehat{\eta}_{\frac{p}{q}}\) and \(\widehat{\eta}_{\frac{u}{v}}\). We use the \(\mathbb{Z}^{2}\)-covering \(\widetilde{\Sigma_{\Delta}}\) to compute the \(\mathfrak{q}\)-intersection between \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) and \(\widehat{\eta}_{0}\) (the case of \(\widehat{\eta}_{\infty}\) is similar). We lift the arcs in \(\widehat{\operatorname{CA}}(\mathbf{D}_{3})\) to lines in \(\widetilde{\Sigma_{\Delta}}\). Then \(\widehat{\eta}_{0}\) becomes a series of horizontal lines which pass through \(z_{*}\) and \(z_{0}\). The topological triangle \(T\) in \(\mathbf{D}_{3}\) bounded by \(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{u}{v}}\) and \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) becomes a triangle \(\widetilde{T}\) in \(\mathbb{R}^{2}\), up to translation (or reflection). We may assume that the vertices of \(\widetilde{T}\) are \(\widetilde{z},\widetilde{z}+(p,q),\widetilde{z}+(r,s)\), corresponding to the vertices \(z^{\prime},z,z^{\prime\prime}\) of \(T\). The area of \(\widetilde{T}\) equals \(\frac{1}{2}\) by the relation \(uq-pv=1\). By Pick's theorem, there are no decorations or punctures in the interior of \(\widetilde{T}\). The intersections between \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) and \(\widehat{\eta}_{0}\) consist of two parts (see Figure 14):
* intersections below and on \(a\), which are inherited from \(\widehat{\eta}_{\frac{p}{q}}\), and
* intersections above \(a\), which are induced from \(\widehat{\eta}_{\frac{u}{v}}\).
For the first case, if the two intersections are both either in the interior or at decorations, the bi-indices are the same. For the second case, the two bi-indices differ by \(\operatorname{ind}_{z^{\prime\prime}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}},\widehat{\eta}_{\frac{u}{v}})=(l+1)\cdot(1-\mathbb{X})\). Thus we have
\[\operatorname{Int}_{\mathbf{D}_{3}\setminus\{p\}}^{\mathfrak{q}}(\widehat{ \eta}_{\frac{p}{q}\oplus\frac{u}{v}},\widehat{\eta}_{0})=\operatorname{Int}_{ \mathbf{D}_{3}\setminus\{z\}}^{\mathfrak{q}}(\widehat{\eta}_{\frac{p}{q}}, \widehat{\eta}_{0})+q_{1}^{l+1}q_{2}^{-l-1}\cdot\operatorname{Int}_{\mathbf{D }_{3}\setminus\{z\}}^{\mathfrak{q}}(\widehat{\eta}_{\frac{u}{v}},\widehat{ \eta}_{0}). \tag{3.28}\]
For the intersection on \(a\), by inheritance, we have bi-index given directly by
\[\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\eta}_{0},\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}})=\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{0},\widehat{\eta}_{\frac{p}{q}}). \tag{3.29}\]
Hence we have
\[\begin{split}\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}},\widehat{\eta}_{0})&=1-\operatorname{ind}_{p}^{\mathbb{Z}^{2}}(\widehat{\eta}_{0},\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}})=1-\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{0},\widehat{\eta}_{\frac{p}{q}})\\ &=1-\mathbb{X}+\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{0}).\end{split}\]
Thus this interior intersection contributes the bi-indices \((1-\mathbb{X})+\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{0})\) and \(\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{0})\) to the \(\mathfrak{q}\)-intersection, via the factor \((1+q_{1}^{-1}q_{2})\) in (3.15).
On the other hand, we have
\[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{u} {v}},\widehat{\eta}_{0}) =\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{ p}{q}},\widehat{\eta}_{0})-\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{ \frac{p}{q}},\widehat{\eta}_{\frac{u}{v}})\] \[=[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{ p}{q}},\widehat{\eta}_{0})+(1-\mathbb{X})]-[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}( \widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\frac{u}{v}})+(1-\mathbb{X})]\] \[=[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{ p}{q}},\widehat{\eta}_{0})+(1-\mathbb{X})]-(l+1)\cdot(1-\mathbb{X}).\]
Thus, we deduce that
\[\operatorname{Int}_{z}^{\mathfrak{q}}(\widehat{\eta}_{\frac{p}{q} \oplus\frac{u}{v}},\widehat{\eta}_{0})=\operatorname{Int}_{z}^{\mathfrak{q}}( \widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{0})+q_{1}^{l+1}q_{2}^{-l-1}\cdot \operatorname{Int}_{z}^{\mathfrak{q}}(\widehat{\eta}_{\frac{u}{v}},\widehat{ \eta}_{0}). \tag{3.30}\]
Therefore, we have
\[\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{p}{q} \oplus\frac{u}{v}},\widehat{\eta}_{0})=\operatorname{Int}^{\mathfrak{q}}( \widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{0})+q_{1}^{l+1}q_{2}^{-l-1}\cdot \operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{u}{v}},\widehat{\eta} _{0}), \tag{3.31}\]
which coincides with \(\overline{\mathbf{R}}_{\mathfrak{q}}^{\flat}(\frac{r}{s})\) after multiplication by \(q_{1}^{-1}\), once we take \(\mathfrak{q}=q_{1}^{-1}q_{2}\). Similarly, we deduce that
\[\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{p}{q} \oplus\frac{u}{v}},\widehat{\eta}_{\infty})=\operatorname{Int}^{\mathfrak{q}}( \widehat{\eta}_{\frac{p}{q}},\widehat{\eta}_{\infty})+q_{1}^{l+1}q_{2}^{-l-1} \cdot\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{u}{v}},\widehat{ \eta}_{\infty}). \tag{3.32}\]
If we take \(q_{1}^{-1}q_{2}=\mathfrak{q}\), the fraction we obtain in the theorem coincides with the left \(\mathfrak{q}\)-deformation of \(\frac{r}{s}=\frac{p}{q}\oplus\frac{u}{v}\), which completes the proof.
**Example 3.19**.: We give examples of \(\frac{3}{2}\) and \(-2\) (see Figure 13) for the left \(\mathfrak{q}\)-deformations. We compute that
\[\left\{\begin{array}{ll}\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{ \frac{3}{2}},\widehat{\eta}_{0})&=q_{1}+q_{2}+q_{1}^{3}q_{2}^{-2},\\ \operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{3}{2}},\widehat{\eta}_ {\infty})&=1+q_{1}^{2}q_{2}^{-2};\end{array}\right.\]
and
\[\left\{\begin{array}{ll}\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{-\frac{2}{1}},\widehat{\eta}_{0})&=q_{1}^{3}q_{2}^{-2}+q_{1},\\ \operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{-\frac{2}{1}},\widehat{\eta}_{\infty})&=q_{2}.\end{array}\right.\]
By Theorem 3.18, we have
\[\left[\frac{3}{2}\right]_{\mathfrak{q}}^{\flat}=\frac{q_{1}^{-1} \operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{3}{2}},\widehat{\eta}_ {0})}{\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{3}{2}},\widehat{ \eta}_{\infty})}\bigg{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}}=\frac{\mathfrak{q}^{3}+ \mathfrak{q}^{2}+1}{\mathfrak{q}^{2}+1},\]
and
\[\Big{[}-\frac{2}{1}\Big{]}_{\mathfrak{q}}^{\flat}=-\frac{\operatorname{Int}^{ \mathfrak{q}}(\widehat{\eta}_{-\frac{2}{1}},\widehat{\eta}_{0})}{\operatorname{ Int}^{\mathfrak{q}}(\widehat{\eta}_{-\frac{2}{1}},\widehat{\eta}_{\infty})}\bigg{|}_{ \mathfrak{q}=q_{1}^{-1}q_{2}}=-\frac{\mathfrak{q}^{2}+1}{\mathfrak{q}^{3}}.\]
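For the reader's convenience, the substitutions \(\mathfrak{q}=q_{1}^{-1}q_{2}\) in these two computations unwind as

\[\left[\frac{3}{2}\right]_{\mathfrak{q}}^{\flat}=\frac{q_{1}^{-1}(q_{1}+q_{2}+q_{1}^{3}q_{2}^{-2})}{1+q_{1}^{2}q_{2}^{-2}}=\frac{1+\mathfrak{q}+\mathfrak{q}^{-2}}{1+\mathfrak{q}^{-2}}=\frac{\mathfrak{q}^{3}+\mathfrak{q}^{2}+1}{\mathfrak{q}^{2}+1},\qquad\Big{[}-\frac{2}{1}\Big{]}_{\mathfrak{q}}^{\flat}=-\frac{q_{1}^{3}q_{2}^{-2}+q_{1}}{q_{2}}=-(\mathfrak{q}^{-3}+\mathfrak{q}^{-1})=-\frac{\mathfrak{q}^{2}+1}{\mathfrak{q}^{3}}.\]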
### Right \(\mathfrak{q}\)-deformations as \(\mathfrak{q}\)-intersections
In this subsection, we add a finite set \(\mathbf{M}\) of _(open) marked points_ to \(\partial\mathbf{S}\) satisfying \(|\mathbf{M}|=|\bigtriangleup|\), and obtain a decorated marked surface (or _DMS_ for short). We still denote the DMS by \(\mathbf{S}_{\bigtriangleup}\). An arc \(c\) is called _open_ if \(c(0)\) and \(c(1)\) are in \(\mathbf{M}\) and \(c\) has no self-intersections in \(\mathbf{S}_{\bigtriangleup}^{\circ}\). We say that two open arcs do not cross each other if they have no intersections in \(\mathbf{S}_{\bigtriangleup}^{\circ}\).
We also have bigraded open arcs as before. For an open arc \(\gamma\), we define the _\(\mathbb{Z}^{2}\)-graded \(\mathfrak{q}\)-intersection_ between a lift \(\widehat{\gamma}\) of \(\gamma\) and \(\widehat{\tau}\in\widehat{\operatorname{CA}}(\mathbf{S}_{\bigtriangleup})\) to be
\[\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma},\widehat{\tau})=\sum_{ \varrho,\varsigma\in\mathbb{Z}}q_{1}^{\varrho}q_{2}^{\varsigma}\cdot \operatorname{Int}_{\mathbf{S}_{\bigtriangleup}}^{\varrho+\varsigma\mathbb{X} }(\widehat{\gamma},\widehat{\tau}). \tag{3.33}\]
Note that we have \(\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma},\widehat{\tau})\mid_{q_{1}=q _{2}=1}=\operatorname{Int}_{\mathbf{S}_{\bigtriangleup}}(\gamma,\tau)\). We define a special class of open (bigraded) arcs.
**Definition 3.20**.: An _open full formal arc system_ \(\mathbf{A}=\{\gamma_{1},\cdots,\gamma_{n}\}\) of a DMS \(\mathbf{S}_{\bigtriangleup}\) is a collection of pairwise non-crossing open arcs which divides the surface \(\mathbf{S}_{\bigtriangleup}\) into polygons, called _\(\mathbf{A}\)-polygons_, such that each \(\mathbf{A}\)-polygon contains exactly one decoration. We call \(\sigma\in\operatorname{CA}(\mathbf{S}_{\bigtriangleup})\) the _dual_ to \(\gamma_{i}\) if \(\gamma_{i}\) intersects it exactly once and \(\gamma_{j}\) does not intersect it for any \(j\neq i\). Denote by \(s_{i}\) the dual to \(\gamma_{i}\) and set \(\mathbf{A}^{*}=\{s_{1},\cdots,s_{n}\}\).
Let \(\widehat{\gamma}_{1},\cdots,\widehat{\gamma}_{n},\widehat{s}_{1},\cdots, \widehat{s}_{n}\) be their bigraded lifts with
\[\operatorname{ind}^{\mathbb{Z}^{2}}(\widehat{\gamma}_{i},\widehat{s}_{j})= \delta_{ij}. \tag{3.34}\]
We add three open marked points to the boundary of \(\mathbf{D}_{3}\) in Section 3.3. Let \(\gamma_{0}\), \(\gamma_{\infty}\) be two open arcs which form an open arc system in \(\mathbf{D}_{3}\) and intersect \(\eta_{0}\) and \(\eta_{\infty}\) transversally exactly once, respectively. Let \(\widehat{\gamma}_{0},\widehat{\gamma}_{\infty}\) be their respective bigraded lifts, which satisfy
\[\operatorname{ind}^{\mathbb{Z}^{2}}(\widehat{\gamma}_{\infty},\widehat{\eta}_ {\infty})=\operatorname{ind}^{\mathbb{Z}^{2}}(\widehat{\gamma}_{0},\widehat{ \eta}_{0})=0. \tag{3.35}\]
We draw them as blue arcs in Figure 13.
**Theorem 3.21**.: _For any rational number \(\frac{r}{s}\in\overline{\mathbb{Q}}\), we have_
\[\Big{[}\frac{r}{s}\Big{]}_{\mathfrak{q}}^{\sharp}=\left.\frac{\varepsilon \operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{\infty},\widehat{\eta}_{ \frac{r}{s}})}{\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat {\eta}_{\frac{r}{s}})}\right|_{\mathfrak{q}=q_{1}^{-1}q_{2}}, \tag{3.36}\]
_corresponding to the right \(\mathfrak{q}\)-deformation of \(\frac{r}{s}\), where_
\[\varepsilon=\begin{cases}1,&\frac{r}{s}\geq 0,\\ -q_{1}^{-1},&\frac{r}{s}<0,\end{cases}\]
_and the numerator and denominator are polynomials in \(\mathbb{Z}[q_{1}^{-1}q_{2}]\). In particular, for \(\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}\), we have_
\[\left\{\begin{array}{ll}\mathbf{R}_{\mathfrak{q}}^{\sharp}(\frac{r}{s})&=\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{\infty},\widehat{\eta}_{\frac{r}{s}})\big{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}},\\ \mathbf{S}_{\mathfrak{q}}^{\sharp}(\frac{r}{s})&=\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat{\eta}_{\frac{r}{s}})\big{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}}.\end{array}\right. \tag{3.37}\]
Proof.: The theorem follows in the same way as the left version; we prove the non-negative case by induction using Setting 3.16. For the starting case, we have
\[\operatorname{Int}(\widehat{\gamma}_{\infty},\widehat{\eta}_{0}) =\operatorname{Int}(\widehat{\gamma}_{0},\widehat{\eta}_{\infty}) =0,\] \[\operatorname{Int}(\widehat{\gamma}_{\infty},\widehat{\eta}_{ \infty}) =\operatorname{Int}(\widehat{\gamma}_{0},\widehat{\eta}_{0})=q_{1}^{0 }q_{2}^{0}=1.\]
Thus, they coincide with the right \(\mathfrak{q}\)-deformation of \(0\) and \(\infty\) respectively. We assume that the formula (3.36) holds for \(\frac{p}{q},\frac{u}{v}\in\overline{\mathbb{Q}^{\geq 0}}\) in Setting 3.16 which satisfy
\[\operatorname{ind}_{z}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}},\widehat {\eta}_{\frac{u}{v}})=l(1-\mathbb{X}),\]
where \(l=l(\frac{p}{q}\oplus\frac{u}{v})\in\mathbb{N}\). Similarly to the left version, the intersections between \(\widehat{\gamma}_{0}\) and \(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}}\) consist of two parts. One is inherited from \(\widehat{\eta}_{\frac{p}{q}}\), with the same bi-indices; the other is induced from \(\widehat{\eta}_{\frac{u}{v}}\), with bi-indices differing by \(-\operatorname{ind}_{z^{\prime\prime}}^{\mathbb{Z}^{2}}(\widehat{\eta}_{\frac{p}{q}\oplus\frac{u}{v}},\widehat{\eta}_{\frac{u}{v}})=(l+1)\cdot(\mathbb{X}-1)\). Notice that these intersections all lie in the interior, which simplifies the computation. Thus, we have
\[\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat{\eta}_{\frac {p}{q}\oplus\frac{u}{v}})=\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_ {0},\widehat{\eta}_{\frac{p}{q}})+q_{1}^{-l-1}q_{2}^{l+1}\cdot\operatorname{ Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat{\eta}_{\frac{u}{v}}), \tag{3.38}\]
which coincides with the denominator of the right \(\mathfrak{q}\)-deformation of \(\frac{p}{q}\oplus\frac{u}{v}\) if we take \(q_{1}^{-1}q_{2}=\mathfrak{q}\). Similarly, we deduce that
\[\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{\infty},\widehat{\eta}_{ \frac{p}{q}\oplus\frac{u}{v}})=\operatorname{Int}^{\mathfrak{q}}(\widehat{ \gamma}_{\infty},\widehat{\eta}_{\frac{p}{q}})+q_{1}^{-l-1}q_{2}^{l+1}\cdot \operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{\infty},\widehat{\eta}_{ \frac{u}{v}}). \tag{3.39}\]
Thus, we finish the proof.
**Example 3.22**.: We continue the examples of \(\frac{3}{2}\) and \(-2\) (see Figure 13) for the right \(\mathfrak{q}\)-deformations. We compute that
\[\left\{\begin{array}{ll}\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_ {\infty},\widehat{\eta}_{\frac{3}{2}})&=1+q_{1}^{-1}q_{2}+q_{1}^{-2}q_{2}^{2}, \\ \operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat{\eta}_{\frac{ 3}{2}})&=1+q_{1}^{-1}q_{2};\end{array}\right.\]
and
\[\left\{\begin{array}{ll}\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_ {\infty},\widehat{\eta}_{-\frac{2}{1}})&=1+q_{1}^{-1}q_{2},\\ \operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat{\eta}_{-\frac{ 2}{1}})&=q_{1}^{-3}q_{2}^{2}.\end{array}\right.\]
By Theorem 3.21, we have
\[\left[\frac{3}{2}\right]_{\mathfrak{q}}^{\sharp}=\frac{\operatorname{Int}^{ \mathfrak{q}}(\widehat{\gamma}_{\infty},\widehat{\eta}_{\frac{3}{2}})}{ \operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat{\eta}_{\frac{ 3}{2}})}\bigg{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}}=\frac{\mathfrak{q}^{2}+ \mathfrak{q}+1}{\mathfrak{q}+1},\]
and
\[\left.\left[-\frac{2}{1}\right]_{\mathfrak{q}}^{\sharp}=-\frac{q_{1}^{-1} \operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{\infty},\widehat{\eta}_{- \frac{2}{1}})}{\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{0},\widehat{ \eta}_{-\frac{2}{1}})}\right|_{\mathfrak{q}=q_{1}^{-1}q_{2}}=-\frac{\mathfrak{q} +1}{\mathfrak{q}^{2}}.\]
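For the reader's convenience, the second substitution unwinds as

\[-\frac{q_{1}^{-1}\big(1+q_{1}^{-1}q_{2}\big)}{q_{1}^{-3}q_{2}^{2}}=-\big(q_{1}^{2}q_{2}^{-2}+q_{1}q_{2}^{-1}\big)=-(\mathfrak{q}^{-2}+\mathfrak{q}^{-1})=-\frac{\mathfrak{q}+1}{\mathfrak{q}^{2}},\]

with \(\mathfrak{q}=q_{1}^{-1}q_{2}\); note that the sign \(\varepsilon=-q_{1}^{-1}\) enters only in the negative case.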
### Combinatorial properties via \(\mathfrak{q}\)-intersections
Next, we give a topological explanation of some properties from [10] via \(\mathfrak{q}\)-intersections. We draw \(\gamma_{0}\) and \(\gamma_{\infty}\) as foliations in the branched double cover \(\Sigma_{\Delta}\), which intersect \(C_{0}\) and \(C_{\infty}\) exactly once, respectively (see Figure 15). We have the following corollary.
**Corollary 3.23** ([10]).: _For any rational number \(\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}\), its right and left \(\mathfrak{q}\)-deformations satisfy the following properties._
**Positivity:**: _The polynomials_ \(\mathbf{R}_{\mathfrak{q}}^{\sharp}(r/s),\mathbf{S}_{\mathfrak{q}}^{\sharp}(r/ s),\mathbf{R}_{\mathfrak{q}}^{\flat}(r/s)\) _and_ \(\mathbf{S}_{\mathfrak{q}}^{\flat}(r/s)\) _have positive integer coefficients._
**Specialization:**: _If we take_ \(\mathfrak{q}=1\)_, we have_
\[\left\{\begin{array}{ll}\mathbf{R}_{\mathfrak{q}}^{\sharp}(r/s)|_{ \mathfrak{q}=1}&=\mathbf{R}_{\mathfrak{q}}^{\flat}(r/s)|_{\mathfrak{q}=1}=r, \\ \mathbf{S}_{\mathfrak{q}}^{\sharp}(r/s)|_{\mathfrak{q}=1}&=\mathbf{S}_{ \mathfrak{q}}^{\flat}(r/s)|_{\mathfrak{q}=1}=s.\end{array}\right. \tag{3.40}\]
Proof.: The positivity follows from Theorem 3.21, Theorem 3.18 and the fact that intersection numbers are non-negative.
For specialization, we consider the branched double covering \(\Sigma_{\Delta}\) of \(\mathbf{D}_{3}\). Then the closed arc \(\eta_{\frac{r}{s}}\) becomes the simple closed curve \(C_{\frac{r}{s}}\), whose preimage under \(\widetilde{\pi}\) is a line with slope \(\frac{r}{s}\), on \(\Sigma_{\Delta}\) through the corresponding decorations which are endpoints of \(\eta_{\frac{r}{s}}\). When we take \(q_{1}=q_{2}=1\), the \(\mathfrak{q}\)-intersection degenerates to the usual intersection number. By the construction above, we have
\[\operatorname{Int}_{\mathbf{D}_{3}}(\eta_{\frac{r}{s}},\eta_{0}) =\frac{1}{2}\cdot\operatorname{Int}_{\Sigma_{\Delta}}(C_{\frac{r} {s}},C_{0})=r,\quad\operatorname{Int}_{\mathbf{D}_{3}}(\eta_{\frac{r}{s}}, \eta_{\infty})=\frac{1}{2}\cdot\operatorname{Int}_{\Sigma_{\Delta}}(C_{\frac{r }{s}},C_{\infty})=s;\] \[\operatorname{Int}_{\mathbf{D}_{3}}(\gamma_{\infty},\eta_{\frac{r }{s}}) =\frac{1}{2}\cdot\operatorname{Int}_{\Sigma_{\Delta}}(\gamma_{\infty },C_{\frac{r}{s}})=r,\quad\operatorname{Int}_{\mathbf{D}_{3}}(\gamma_{0},\eta_ {\frac{r}{s}})=\frac{1}{2}\cdot\operatorname{Int}_{\Sigma_{\Delta}}(\gamma_{0},C_{\frac{r}{s}})=s.\]
Therefore, the result follows from Theorem 3.18 and Theorem 3.21.
**Example 3.24**.: We notice that in the example of \(\frac{3}{2}\), \(\eta_{\frac{3}{2}}\) hits \(\gamma_{\infty}\) three times and hits \(\gamma_{0}\) twice in \(\mathbf{S}_{\triangle}\), which implies that \(\mathbf{R}_{\mathfrak{q}}^{\sharp}(3/2)|_{\mathfrak{q}=1}=3\) and \(\mathbf{S}_{\mathfrak{q}}^{\sharp}(3/2)|_{\mathfrak{q}=1}=2\).
## 4. Categorification
### Ginzburg algebra and derived categories
**Definition 4.1** ([Ke, IQ]).: Let \(Q=(Q_{0},Q_{1})\) be a finite quiver with vertex set \(Q_{0}=\{1,2,\dots,n\}\) and arrow set \(Q_{1}\). The _Ginzburg Calabi-Yau-\(\mathbb{X}\) ddg algebra_ \(\Gamma_{\mathbb{X}}Q:=(\mathbf{k}\overline{Q},d)\) is defined as follows. We define a \(\mathbb{Z}\oplus\mathbb{Z}\mathbb{X}\)-graded quiver \(\overline{Q}\) with the same vertex set \(Q_{0}\) and the following arrows:
* original arrows \(a:i\to j\in Q_{1}\) with degree \(0\);
* opposite arrows \(a^{*}:j\to i\in Q_{1}\) associated to \(a\in Q_{1}\) with degree \(2-\mathbb{X}\);
* a loop \(e_{i}^{*}\) for each \(i\in Q_{0}\) with degree \(1-\mathbb{X}\), where \(e_{i}\) is the idempotent at \(i\).
Let \(\mathbf{k}\overline{Q}\) be a \(\mathbb{Z}\oplus\mathbb{Z}\mathbb{X}\)-graded path algebra of \(\overline{Q}\), and define a differential \(d:\mathbf{k}\overline{Q}\to\mathbf{k}\overline{Q}\) of degree \(1\) by
* \(da=da^{*}=0\) for \(a\in Q_{1}\);
* \(de_{i}^{*}=e_{i}\big{(}\sum_{a\in Q_{1}}(aa^{*}-a^{*}a)\big{)}e_{i}\).
We denote by \(\mathcal{D}_{\mathbb{X}}(Q):=\operatorname{pvd}\Gamma_{\mathbb{X}}Q\) the perfectly valued derived category of \(\Gamma_{\mathbb{X}}Q\), which is the same as the finite-dimensional derived category of \(\Gamma_{\mathbb{X}}Q\). We consider the \(A_{2}\) case, where \(A_{2}\) is the quiver with vertex set \(\{1,2\}\) and an arrow \(1\to 2\), and the corresponding category is denoted by \(\mathcal{D}_{\mathbb{X}}(A_{2})\).
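For concreteness, in the \(A_{2}\) case (with the single arrow \(a\colon 1\to 2\)) the graded quiver \(\overline{A_{2}}\) has generators with degrees

\[\deg a=0,\qquad\deg a^{*}=2-\mathbb{X},\qquad\deg e_{1}^{*}=\deg e_{2}^{*}=1-\mathbb{X},\]

and the differential \(de_{i}^{*}=e_{i}(aa^{*}-a^{*}a)e_{i}\) picks out at each vertex \(i\) the unique loop among \(aa^{*},a^{*}a\) based there (which of the two survives at which vertex depends on the composition convention, which we do not fix here). Both loops have degree \(2-\mathbb{X}\), consistent with \(d\) having degree \(1\).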
### Rational case via spherical objects
In this subsection, we aim to find spherical objects in a certain category which correspond to rational numbers, and to express their hom spaces via right and left \(\mathfrak{q}\)-deformations. We particularly consider the case of the Calabi-Yau-\(\mathbb{X}\) category of the \(A_{2}\) quiver. Recall that a triangulated category \(\mathcal{D}\) is called _Calabi-Yau-\(\mathbb{X}\)_ if for any objects \(L,M\) in \(\mathcal{D}\), we have a natural isomorphism
\[\operatorname{Hom}_{\mathcal{D}}(L,M)\cong D\operatorname{Hom}_{\mathcal{D}}( M,L[\mathbb{X}]),\]
where \(D=\operatorname{Hom}(-,\mathbf{k})\) is the dual functor and \(\mathbf{k}\) is an algebraically closed field. In particular, \(\mathcal{D}_{\mathbb{X}}(A_{2})\) is a Calabi-Yau-\(\mathbb{X}\) category with a distinguished auto-equivalence
\[\mathbb{X}:\mathcal{D}_{\mathbb{X}}(A_{2})\to\mathcal{D}_{\mathbb{X}}(A_{2}).\]
**Definition 4.2** ([IQZ]).: For any \(M,N\in\mathcal{D}\), we define the _bigraded Hom_ as
\[\operatorname{Hom}^{\mathbb{Z}^{2}}(M,N):=\bigoplus_{\varrho,\varsigma\in \mathbb{Z}}\operatorname{Hom}_{\mathcal{D}}(M,N[\varrho+\varsigma\mathbb{X}]),\]
and its \(\mathfrak{q}\)-dimension as
\[\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(M,N)\colon=\sum_{ \varrho,\varsigma\in\mathbb{Z}}q_{1}^{\varrho}q_{2}^{\varsigma}\cdot\dim \operatorname{Hom}_{\mathcal{D}}(M,N[\varrho+\varsigma\mathbb{X}]). \tag{4.1}\]
When \(M=N\), \(\operatorname{Hom}^{\mathbb{Z}^{2}}(M,M)\) becomes a \(\mathbb{Z}^{2}\)-graded algebra, called the Ext-algebra of \(M\) and denoted by \(\operatorname{Ext}^{\mathbb{Z}^{2}}(M,M)\).
By definition, we directly have
\[\dim\operatorname{Hom}^{\mathbb{Z}^{2}}(M,N)=\dim^{\mathfrak{q}}\operatorname {Hom}^{\mathbb{Z}^{2}}(M,N)\mid_{q_{1}=q_{2}=1}. \tag{4.2}\]
**Definition 4.3** ([10]).: An object \(S\) is called \(\mathbb{X}\)-_spherical_ if \(\operatorname{Hom}^{\bullet}(S,S)=\mathbf{k}\oplus\mathbf{k}[-\mathbb{X}]\).
For any spherical object \(S\) in an Calabi-Yau-\(\mathbb{X}\) category \(\mathcal{D}\), there is an associated auto-equivalence, namely the _twist functor_\(\phi_{S}:\mathcal{D}\to\mathcal{D}\), defined by
\[\phi_{S}(X)=\operatorname{Cone}(S\otimes\operatorname{Hom}^{\bullet}(S,X)\to X)\]
with inverse
\[\phi_{S}^{-1}(X)=\operatorname{Cone}(X\to S\otimes\operatorname{Hom}^{- \bullet}(X,S))[-1].\]
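For example, applying \(\phi_{S}\) to an \(\mathbb{X}\)-spherical object \(S\) itself, a standard computation gives

\[\phi_{S}(S)=\operatorname{Cone}\big(S\otimes(\mathbf{k}\oplus\mathbf{k}[-\mathbb{X}])\to S\big)=\operatorname{Cone}\big(S\oplus S[-\mathbb{X}]\to S\big)\cong S[1-\mathbb{X}],\]

since the identity component of the evaluation map cancels the first summand.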
By [ST, Lemma 2.11], we have the formula
\[\phi_{\psi(M)}=\psi\circ\phi_{M}\circ\psi^{-1} \tag{4.3}\]
for any spherical object \(M\) and any automorphism \(\psi\) in \(\operatorname{Aut}\mathcal{D}\). We define \(\widehat{\operatorname{Sph}}(\Gamma_{\mathbb{X}}A_{2})\) to be the set of all spherical objects in \(\mathcal{D}_{\mathbb{X}}(A_{2})\) which are simples in some hearts (cf. [KQ, Section 10]). Let
\[\operatorname{Sph}(\Gamma_{\mathbb{X}}A_{2}):=\widehat{\operatorname{Sph}}( \Gamma_{\mathbb{X}}A_{2})/\langle[1],[\mathbb{X}]\rangle.\]
We define \(\operatorname{ST}(A_{2})\) to be the subgroup of \(\operatorname{Aut}\mathcal{D}_{\mathbb{X}}(A_{2})\) generated by \(\phi_{S}\) for any \(S\in\widehat{\operatorname{Sph}}(\Gamma_{\mathbb{X}}A_{2})\).
Let \(\mathbf{D}_{3}\) be the disk with three decorations as before. There are reachable spherical objects, up to the shifts \([1]\) and \([\mathbb{X}]\), in \(\mathcal{D}_{\mathbb{X}}(A_{2})\) corresponding to rational numbers. We have a categorification of Lemma 3.4 as follows.
**Proposition 4.4** ([10, SS4]).: _There is a bijection \(X\) and an isomorphism \(\iota\) which fit into the following:_
(4.4)
_sending \(\widehat{\eta}_{\pm\frac{r}{s}}\) to \(X_{\pm\frac{r}{s}}\) and \(B_{\eta_{\pm\frac{r}{s}}}\) to \(\phi_{X_{\pm\frac{r}{s}}}\), satisfying_
\[X_{B_{\eta_{\frac{u}{v}}}(\widehat{\eta}_{\frac{p}{q}})}=\phi_{X_{\frac{u}{v}} }(X_{\frac{p}{q}}), \tag{4.5}\]
_and_
\[X_{B_{\eta_{-\frac{p}{q}}}(\widehat{\eta}_{-\frac{u}{v}})}=\phi_{X_{-\frac{p} {q}}}(X_{-\frac{u}{v}}) \tag{4.6}\]
_for \(\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}\). Hence, (3.21) translates to the triangle_
\[X_{\frac{p}{q}}\longrightarrow X_{\frac{p}{q}\oplus\frac{u}{v}}\longrightarrow X_{\frac{u}{v}}[(l+1)(1-\mathbb{X})]\longrightarrow X_{\frac{p}{q}}[1], \tag{4.7}\]
_where \(l=l(\frac{p}{q}\oplus\frac{u}{v})\) is the integer in Setting 3.16._
In fact, when we draw \(\{X_{\frac{r}{s}}\}_{\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}}\) on the weighted Farey graph in the right picture of Figure 16, a weight \(\mathfrak{q}^{l}\), \(l\geq 0\), indicates a morphism of degree \(l(1-\mathbb{X})\) between the two spherical objects it connects.
The iterative construction behind Proposition 4.4 is as follows. Let \(X_{0},X_{\infty}\) be two spherical objects which are simples in some canonical heart with \(\operatorname{Ext}^{1}(X_{\infty},X_{0})\neq 0\). Let \(X_{\frac{1}{1}}=\phi_{X_{\infty}}(X_{0})\), and we deduce that \(X_{\frac{1}{1}}\) is also a spherical object. We have a triangle
\[X_{0}\longrightarrow X_{\frac{1}{1}}\longrightarrow X_{\infty}\longrightarrow X_{0}[1]\]
by construction. We assume that the triangle in (4.7) holds for \(l\) and consider the \(l+1\) case. By Calabi-Yau-\(\mathbb{X}\) duality, we have non-zero morphisms \(X_{\frac{p}{q}\oplus\frac{u}{v}}\to X_{\frac{p}{q}}[\mathbb{X}]\) and \(X_{\frac{u}{v}}\to X_{\frac{p}{q}\oplus\frac{u}{v}}[(l+1)\mathbb{X}-l]\). Hence we can extend them to triangles:
\[X_{\frac{p}{q}}\longrightarrow Y\longrightarrow X_{\frac{p}{q}\oplus\frac{u}{v}}[1-\mathbb{X}]\longrightarrow X_{\frac{p}{q}}[1]\]
and
\[X_{\frac{p}{q}\oplus\frac{u}{v}}\longrightarrow Y^{\prime}\longrightarrow X_{\frac{u}{v}}[(l+1)(1-\mathbb{X})]\longrightarrow X_{\frac{p}{q}\oplus\frac{u}{v}}[1],\]
where the new spherical objects are \(Y=\phi_{X_{\frac{p}{q}\oplus\frac{u}{v}}[1-\mathbb{X}]}(X_{\frac{p}{q}})=\phi_{X_{\frac{p}{q}\oplus\frac{u}{v}}}(X_{\frac{p}{q}})\) and \(Y^{\prime}=\phi_{X_{\frac{u}{v}}[(l+1)(1-\mathbb{X})]}(X_{\frac{p}{q}\oplus\frac{u}{v}})=\phi_{X_{\frac{u}{v}}}(X_{\frac{p}{q}\oplus\frac{u}{v}})\). Thus we construct the spherical objects associated to non-negative rational numbers; the negative case is similar.
**Theorem 4.5** ([10, Lemma 3.4, Proposition 4.6]).: _For any \(\widehat{\eta},\widehat{\eta}^{\prime}\in\widehat{\operatorname{CA}}(\mathbf{ D}_{3})\) satisfying \(\operatorname{Int}_{\mathbf{D}_{3}^{\circ}}(\widehat{\eta},\widehat{\eta}^{\prime})=0\), and \(\widehat{\gamma}_{i}\in\mathbf{A}\), we have_
\[\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(P_{i},X_{\widehat{ \eta}})=\operatorname{Int}^{\mathfrak{q}}(\widehat{\gamma}_{i},\widehat{\eta}), \tag{4.8}\]
_and_
\[\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(X_{\widehat{\eta}},X_ {\widehat{\eta}^{\prime}})=\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}, \widehat{\eta}^{\prime}), \tag{4.9}\]
_where \(P_{i}\) is the indecomposable projective module corresponding to \(\widehat{\gamma}_{i}\)._
By Theorem 3.18, Theorem 3.21 and Theorem 4.5, we have the following corollaries directly.
**Corollary 4.6**.: _For any rational number \(\frac{r}{s}\in\mathbb{Q}\setminus\{0\}\), the fraction_
\[\frac{\varepsilon\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(X_{ \frac{r}{s}},X_{0})}{\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(X_ {\frac{r}{s}},X_{\infty})}\bigg{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}} \tag{4.10}\]
Figure 16. The categorification
_corresponds to the left \(\mathfrak{q}\)-deformation of \(\frac{r}{s}\), where_
\[\varepsilon=\begin{cases}q_{1}^{-1},&\frac{r}{s}\geq 0,\\ -1,&\frac{r}{s}<0.\end{cases}\]
**Corollary 4.7**.: _For any rational number \(\frac{r}{s}\in\overline{\mathbb{Q}}\), the fraction_
\[\frac{\varepsilon\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(P_{ \infty},X_{\frac{r}{s}})}{\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2 }}(P_{0},X_{\frac{r}{s}})}\bigg{|}_{\mathfrak{q}=q_{1}^{-1}q_{2}} \tag{4.11}\]
_corresponds to the right \(\mathfrak{q}\)-deformation of \(\frac{r}{s}\), where \(P_{0}\) and \(P_{\infty}\) are the corresponding indecomposable projectives satisfying that \(\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(P_{i},X_{j})=\delta_{ i,j},i,j\in\{0,\infty\}\). Here_
\[\varepsilon=\begin{cases}1,&\frac{r}{s}\geq 0,\\ -q_{1}^{-1},&\frac{r}{s}<0.\end{cases}\]
## 5. Applications
### Reduction to single grading as foliations
Let \(N\geq 2\) be an integer. We collapse the double grading \(\Lambda\) on \(\mathbf{S}_{\triangle}\) to a single grading \(\lambda\), which is a line field (or foliation) in \(\mathbb{P}T\mathbf{S}_{\triangle}\), by setting \(\mathbb{X}=N\). More precisely, a double grading \((a,b)\in\mathbb{Z}\oplus\mathbb{Z}\mathbb{X}\) collapses into \(a+bN\in\mathbb{Z}\). The foliations in such cases are given by quadratic differentials
\[(z^{3}+az+b)^{N-2}\mathrm{d}z^{\otimes 2}\]
on \(\mathbb{CP}^{1}\) with real blow-up at \(\infty\), cf. Figure 17 for \(N=2,3,4\) and \(\mathbf{S}_{\triangle}=\mathbf{D}_{3}\). Notice that the foliations come from quadratic differentials (cf. [I, BQS]). Then \(q_{2}=q_{1}^{N}\) and the \(\mathfrak{q}\)-intersection formula (3.15) reduces to
\[\operatorname{Int}^{\mathfrak{q}}(\widehat{\sigma},\widehat{\tau})=\sum_{k\in\mathbb{Z}}q_{1}^{k}\cdot\operatorname{Int}_{\triangle}^{k}(\widehat{\sigma},\widehat{\tau})+(1+q_{1}^{N-1})\sum_{k\in\mathbb{Z}}q_{1}^{k}\cdot\operatorname{Int}_{\mathbf{S}_{\triangle}^{\circ}}^{k}(\widehat{\sigma},\widehat{\tau}). \tag{5.1}\]
for \(\widehat{\sigma},\widehat{\tau}\in\widehat{\operatorname{CA}}(\mathbf{S}_{ \triangle})\). Moreover, \(\mathfrak{q}=q_{1}^{N-1}\) in Theorem 3.18 and Theorem 3.21. When \(N=2\), \(\mathfrak{q}=q_{1}\) and no specialization is required.
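Explicitly, under \(q_{2}=q_{1}^{N}\) each monomial collapses as \(q_{1}^{\varrho}q_{2}^{\varsigma}=q_{1}^{\varrho+N\varsigma}\) and \(1+q_{1}^{-1}q_{2}=1+q_{1}^{N-1}\), so the single-graded counts in (5.1) arise from (3.15) by grouping bi-indices according to the collapsed degree:

\[\operatorname{Int}_{?}^{k}(\widehat{\sigma},\widehat{\tau})=\sum_{\varrho+N\varsigma=k}\operatorname{Int}_{?}^{\varrho+\varsigma\mathbb{X}}(\widehat{\sigma},\widehat{\tau}),\qquad ?\in\{\triangle,\mathbf{S}_{\triangle}^{\circ}\}.\]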
Figure 17. The foliations of the CY-2,3,4 case
### Relation with BBL's results
In Definition 4.1, if we replace \(\mathbb{X}\) by an integer \(N\geq 2\), we obtain the \(N\)-Ginzburg dg algebra \(\Gamma_{N}Q\) and the corresponding Calabi-Yau-\(N\) category \(\mathcal{D}_{N}(Q)\). That is, there is a projection
\[\pi_{N}:\Gamma_{\mathbb{X}}Q\to\Gamma_{N}Q \tag{5.2}\]
collapsing the double grading \(\mathbb{Z}\oplus\mathbb{ZX}\) into \(\mathbb{Z}\) by setting \(\mathbb{X}=N\), as above. It induces a functor \(\pi_{N}:\mathcal{D}_{\mathbb{X}}(Q)\to\mathcal{D}_{N}(Q).\) We consider the case when \(Q=A_{2}\). For \(\frac{r}{s}\in\overline{\mathbb{Q}^{\geq 0}}\), we claim that
\[\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(P_{?},X_{\frac{r}{s}})=\sum_{k\in\mathbb{Z}}m(\frac{r}{s},k)q_{1}^{-k}q_{2}^{k},\]
where \(m(\frac{r}{s},k)\) is the number of occurrences of \(X_{?}[k-k\mathbb{X}]\) in the HN-filtration of \(X_{\frac{r}{s}}\), \(?\in\{0,\infty\}\). This follows naturally from the fact that
\[\dim^{\mathfrak{q}}\operatorname{Hom}^{\mathbb{Z}^{2}}(P_{i},X_{j}[\varrho+ \varsigma\mathbb{X}])=\delta_{i,j}\delta_{\varrho,0}\delta_{\varsigma,0} \tag{5.3}\]
where \(i,j\in\{0,\infty\}\), together with induction on \(l(\frac{r}{s})\).
Consider again the case \(N=2\), where \(\mathfrak{q}=q_{1}\), and write \(q_{1}=q\). In [BBL], they define two functionals
\[\operatorname{occ}_{q},\overline{\operatorname{hom}}_{q}:\widehat{ \operatorname{Sph}}^{\mathbb{Z}}(\Gamma_{2}A_{2})\times\widehat{\operatorname {Sph}}^{\mathbb{Z}}(\Gamma_{2}A_{2})\to\mathbb{Z}[q^{-1},q],\]
where \(\widehat{\operatorname{Sph}}^{\mathbb{Z}}(\Gamma_{2}A_{2})\) is the set of spherical objects in \(\mathcal{D}_{2}(A_{2})\). The former, \(\operatorname{occ}_{q}(X_{?},X_{\frac{r}{s}})\), counts the occurrences of \(X_{?}\) in the HN-filtration of \(X_{\frac{r}{s}}\) for \(?\in\{0,\infty\}\). By (5.3), we deduce that
\[\operatorname{occ}_{q}(X_{?},X)=\dim^{\mathfrak{q}}\operatorname{Hom}^{ \mathbb{Z}^{2}}(P_{?},X)\mid_{q_{1}^{-1}q_{2}=q^{-1}}.\]
The latter one is
\[\overline{\operatorname{hom}}_{q}(L,M):=\begin{cases}q^{k}(q^{-2}-q^{-1}),&M \cong L[k],\\ \sum_{k\in\mathbb{Z}}\dim\operatorname{Hom}(L,M[k])q^{-k},&\text{otherwise}. \end{cases}\]
Recall that in Remark 2.5, the \(q\)-deformations for negative rational numbers are defined as:
\[\left[-\frac{r}{s}\right]_{q}^{*}:=-q^{-1}\left[\frac{r}{s}\right]_{q^{-1}}^{ *},\]
where \(\frac{r}{s}\in\mathbb{Q}^{+}\cup\{\infty\}\) and \(*\in\{\sharp,\flat\}\).
**Corollary 5.1**.: _When specializing \(\mathbb{X}=2\), the formulae (4.11) and (4.10) in Corollary 4.7 and Corollary 4.6 become the formulae in [BBL, Theorem 3.7, Theorem 3.8] respectively. Notice that the condition \(X\geq 0\) corresponds to our \(X_{-\frac{r}{s}}\) with \(\frac{r}{s}\geq 0\)._
### Grothendieck group interpretation
Recall that \(\mathcal{D}_{\mathbb{X}}(A_{2})\) is a Calabi-Yau-\(\mathbb{X}\) category. The Grothendieck group \(\operatorname{K}(\mathcal{D}_{\mathbb{X}}(A_{2}))\) admits a basis \(\{[X_{0}],[X_{\infty}]\}\) and is a \(\mathbb{Z}[q_{1}^{\pm 1},q_{2}^{\pm 1}]\)-module under the action
\[q_{1}^{l}q_{2}^{k}\cdot[E]:=[E[-l-k\mathbb{X}]]. \tag{5.4}\]
We have the following result.
**Proposition 5.2**.: _For any \(\frac{r}{s}\in\overline{\mathbb{Q}}\), we have_
\[[X_{\frac{r}{s}}]=\boldsymbol{R}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{r}{s})[X_{0}]+ \boldsymbol{S}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{r}{s})[X_{\infty}], \tag{5.5}\]
_where \(\boldsymbol{R}^{\sharp}_{q_{1}^{-1}q_{2}}\) \((\)resp. \(\boldsymbol{S}^{\sharp}_{q_{1}^{-1}q_{2}}\)\()\) is a polynomial in \(q_{1}^{-1}q_{2}\), obtained by taking \(\mathfrak{q}=q_{1}^{-1}q_{2}\) in \(\boldsymbol{R}^{\sharp}_{\mathfrak{q}}\) \((\)resp. \(\boldsymbol{S}^{\sharp}_{\mathfrak{q}}\)\()\)._
Proof.: We only prove the non-negative case by induction on \(l(\frac{r}{s})\). For the initial case, it holds for \(X_{0}\) and \(X_{\infty}\) obviously. We assume that it holds for \(X_{\frac{p}{q}}\) and \(X_{\frac{u}{v}}\) where \(\frac{p}{q},\frac{u}{v}\) are in Setting 3.16. For \(X_{\frac{p}{q}\oplus\frac{u}{v}}\), we have
\[\begin{split}[X_{\frac{p}{q}\oplus\frac{u}{v}}]&=[X _{\frac{p}{q}}]+[X_{\frac{u}{v}}[(l+1)(1-\mathbb{X})]]\\ &=[X_{\frac{p}{q}}]+q_{1}^{-l-1}q_{2}^{l+1}[X_{\frac{u}{v}}]\\ &=\boldsymbol{R}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{p}{q})[X_{0}]+ \boldsymbol{S}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{p}{q})[X_{\infty}]+q_{1}^{-l- 1}q_{2}^{l+1}\big{(}\boldsymbol{R}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{u}{v})[X_ {0}]+\boldsymbol{S}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{u}{v})[X_{\infty}]\big{)} \\ &=\boldsymbol{R}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{p}{q}\oplus \frac{u}{v})[X_{0}]+\boldsymbol{S}^{\sharp}_{q_{1}^{-1}q_{2}}(\frac{p}{q}\oplus \frac{u}{v})[X_{\infty}],\end{split} \tag{5.6}\]
which implies the result.
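For instance, combining Proposition 5.2 with Example 3.22, one finds

\[[X_{\frac{3}{2}}]=(1+q_{1}^{-1}q_{2}+q_{1}^{-2}q_{2}^{2})[X_{0}]+(1+q_{1}^{-1}q_{2})[X_{\infty}],\]

which at \(q_{1}=q_{2}=1\) recovers the class \(3[X_{0}]+2[X_{\infty}]\), in accordance with the Specialization property of Corollary 3.23.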
### Relation to Jones polynomials for rational case
For every rational number \(\frac{r}{s}>1\), there is an associated rational (two-bridge) knot \(C(\frac{r}{s})\) in the sense of [LS]. The _Jones polynomial_ of \(C(\frac{r}{s})\) is defined via the skein relation in [LS], which is denoted by
\[V_{\frac{r}{s}}(t)\in t^{\frac{1}{2}}\mathbb{Z}[t,t^{-1}]\cup\mathbb{Z}[t,t^{- 1}].\]
When we take \(\mathfrak{q}=-t^{-1}\) and normalize by the leading term, we get a polynomial \(J_{\frac{r}{s}}(\mathfrak{q})\in\mathbb{Z}[\mathfrak{q}]\), which is called the _normalized Jones polynomial_. There is a corollary directly from [BBL, Theorem A.3] and Theorem 3.18.
**Corollary 5.3**.: _For every rational \(\frac{r}{s}>1\), we have \(J_{\frac{r}{s}}(\mathfrak{q})=|\operatorname{Int}^{\mathfrak{q}}(\widehat{\eta}_{\frac{r}{s}},\widehat{\eta}_{0})\mid_{\mathfrak{q}=q_{1}^{-1}q_{2}}|\), where \(|\cdot|\) denotes this normalization._
|
2309.07189 | Learning From Drift: Federated Learning on Non-IID Data via Drift
Regularization | Federated learning algorithms perform reasonably well on independent and
identically distributed (IID) data. They, on the other hand, suffer greatly
from heterogeneous environments, i.e., Non-IID data. Despite the fact that many
research projects have been done to address this issue, recent findings
indicate that they are still sub-optimal when compared to training on IID data.
In this work, we carefully analyze the existing methods in heterogeneous
environments. Interestingly, we find that regularizing the classifier's outputs
is quite effective in preventing performance degradation on Non-IID data.
Motivated by this, we propose Learning from Drift (LfD), a novel method for
effectively training the model in heterogeneous settings. Our scheme
encapsulates two key components: drift estimation and drift regularization.
Specifically, LfD first estimates how different the local model is from the
global model (i.e., drift). The local model is then regularized such that it
does not fall in the direction of the estimated drift. In the experiment, we
evaluate each method through the lens of the five aspects of federated
learning, i.e., Generalization, Heterogeneity, Scalability, Forgetting, and
Efficiency. Comprehensive evaluation results clearly support the superiority of
LfD in federated learning with Non-IID data. | Yeachan Kim, Bonggun Shin | 2023-09-13T09:23:09Z | http://arxiv.org/abs/2309.07189v1 | # _Learning From Drift_: Federated Learning on Non-IID Data via Drift Regularization
###### Abstract
Federated learning algorithms perform reasonably well on independent and identically distributed (IID) data. They, on the other hand, suffer greatly from heterogeneous environments, i.e., Non-IID data. Despite the fact that many research projects have been done to address this issue, recent findings indicate that they are still sub-optimal when compared to training on IID data. In this work, we carefully analyze the existing methods in heterogeneous environments. Interestingly, we find that regularizing the classifier's outputs is quite effective in preventing performance degradation on Non-IID data. Motivated by this, we propose Learning from Drift (LfD), a novel method for effectively training the model in heterogeneous settings. Our scheme encapsulates two key components: drift estimation and drift regularization. Specifically, LfD first estimates how different the local model is from the global model (i.e., drift). The local model is then regularized such that it does not fall in the direction of the estimated drift. In the experiment, we evaluate each method through the lens of the five aspects of federated learning, i.e., Generalization, Heterogeneity, Scalability, Forgetting, and Efficiency. Comprehensive evaluation results clearly support the superiority of LfD in federated learning with Non-IID data.
## 1 Introduction
With the increasing privacy concerns, transmitting privacy-sensitive data (e.g., browsing history, electronic health records, or data containing intellectual property) to outside of local networks makes the training on different sources further difficult. To address the above challenge, federated learning enables multiple parties (i.e., regions, devices, and users) to cooperatively train a neural model without sending the local data between participants [16, 17, 18, 19].
FedAvg [18] is the most widely used method in practice. In this method, a centralized server distributes an initial model to participants, and they perform a local optimization on their local data. After the optimization, the trained models are aggregated on the server by averaging the trained parameters from different participants. However, as the local models1 are trained to fit the local data distribution instead of the global data distribution, heterogeneity of data distribution (i.e., Non-IID data) degrades the performance of the federated learning [16, 17, 18, 19, 20]. The promising way to overcome such degradation is to constrain the local optimization using the global model which is typically treated as more reliable and generalized than the local models [16]. However, recent works find that these approaches still suffer performance degradation and show no advantages over FedAvg in several heterogeneity settings [14, 15].
Footnote 1: In this paper, we use the terms _locally trained model_ and _local model_ interchangeably. Similarly, we do not distinguish between the terms _aggregated model_ and _global model_.
To further understand the performance degradation, we perform an in-depth analysis about the robustness against client drift when constraining outputs [16, 15] and parameters [16, 17] of different layers on the settings of heterogeneous federated learning. Interestingly, we observe that regularizing the classifier's outputs, which is not the target in most works, is quite effective to prevent the drift while constraining others still suffer from the client drift.
Motivated by our upfront analysis, we propose a novel method to prevent client drift in heterogeneous environments, coined Learning from Drift (LfD). Unlike previous works, in which the model is constrained to generate the same features and parameters as the global model, our scheme focuses on the prediction difference on the same data between the local and the global model (we call this difference the _drift_), which roughly captures how the local model differs from the global model over the categorical distribution. Based on that, the model is trained with the drift regularization, which enforces the training model not to fall in the drift direction.
We compare LfD with strong baselines through the lens of five important factors for federated learning in heterogeneous environments: generalization, heterogeneity, scalability, catastrophic forgetting, and efficiency. Comprehensive results show that LfD effectively prevents client drift while yielding strong performance for federated learning. For example, compared to strong baselines, LfD achieves state-of-the-art performance and shows nearly the same performance as training on the total dataset in several heterogeneous settings. In summary, our contributions include the following:
* We perform an in-depth analysis of the client drift when constraining different parts of the local model. Interestingly, we find that constraining the classifier's output is the most effective way to prevent the client drift, while other targets still suffer from the drift (Section 3).
* Based on our analysis, we propose a federated learning algorithm that is robust against diverse heterogeneous settings by explicitly estimating the drift and regularizing local models over the prediction space (Section 4).
* We compare the proposed method with strong baselines via the five important aspects of federated learning and observe that LfD achieves the best performance over diverse heterogeneous settings and datasets (Section 5).
## 2 Problem setup
**Federated Learning.** We assume there is a central server that can transmit and receive messages from \(K\) client devices in federated learning. Each client \(k\in[K]\) has its local data \(\mathcal{D}_{k}\) which consists of \(N_{k}\) training instances in the form of input features \(\mathbf{x}\) and its corresponding label \(y\). The objective for federated learning is to learn a single model that performs well on the distributed dataset without sharing the data between clients. To this end, the server first sends a global model parameterized by \(\mathbf{\omega}_{t}\) to each client where subscript \(t\) indicates the current communication round, and the clients update the received model by optimizing the following local objective:
\[\min_{\mathbf{\omega}_{t}^{k}}\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D}_{k}}[ \mathcal{L}(\mathbf{x},y;\mathbf{\omega}_{t}^{k})] \tag{1}\]
where \(\mathcal{L}\) is the algorithm-dependent loss function, and \(\mathbf{\omega}_{t}^{k}\) indicates the locally optimized model initialized by the global model \(\mathbf{\omega}_{t}\). After the local optimization, the server aggregates the local models \(\mathbf{\omega}_{t}^{i}\) where \(i\in\{1,2,...K\}\) to update the global model by weighted averaging the models:
\[\mathbf{\omega}_{t+1}=\sum_{k=1}^{K}p_{k}\mathbf{\omega}_{t}^{k} \tag{2}\]
where the weight \(p_{k}\) is typically determined by the number of local samples relative to the entire dataset, i.e., \(p_{k}=|\mathcal{D}_{k}|/\sum_{j=1}^{K}|\mathcal{D}_{j}|\). The server and participants repeat the above procedure until the global model converges.
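For concreteness, one round of this protocol (Eqs. (1) and (2)) can be sketched in a few lines of PyTorch. This is a minimal illustration assuming a generic classification model and standard data loaders; the function and variable names are ours, not from any released code.

```python
# Minimal sketch of one FedAvg communication round (Eqs. (1)-(2)).
import copy
import torch

def fedavg_round(global_model, clients, local_epochs=10):
    """clients: list of DataLoaders, one per client k holding D_k."""
    local_states, sizes = [], []
    for data_loader in clients:
        local = copy.deepcopy(global_model)              # initialize with w_t
        opt = torch.optim.SGD(local.parameters(), lr=0.01, momentum=0.9)
        for _ in range(local_epochs):                    # local optimization, Eq. (1)
            for x, y in data_loader:
                opt.zero_grad()
                torch.nn.functional.cross_entropy(local(x), y).backward()
                opt.step()
        local_states.append(local.state_dict())
        sizes.append(len(data_loader.dataset))           # |D_k|
    total = sum(sizes)
    # Weighted averaging, Eq. (2): w_{t+1} = sum_k p_k * w_t^k with p_k = |D_k| / total
    avg = {name: sum((n / total) * s[name] for n, s in zip(sizes, local_states))
           for name in local_states[0]}
    global_model.load_state_dict(avg)
    return global_model
```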
## 3 Global constraint on Federated Learning
One of the widely-used approaches to prevent client drift is to constrain the model not to deviate from the global model. To understand how these constraints affect the training model in federated learning on Non-IID data, we perform an analysis by conducting experiments in heterogeneous environments. The setting includes ten clients with heterogeneously distributed CIFAR-10 data (see Section 5.1 for more information). Here, we constrain the model by minimizing the distance between the training local model and the global model, and the minimized targets are feature vectors and parameters, which are the representative targets in recent studies2 [18, 19, 20]. The targets are further decomposed by their positions in the network architecture, namely classifier, header3, and feature extractor. We trace the output logits' similarity and prediction divergence to the global model during the local optimization to see the effects of the above constraints. For the similarity measure, we use centered kernel alignment (CKA) [17] similarity, and we calculate the prediction divergence based on the Kullback-Leibler divergence (KLD).
Footnote 2: For the feature constraint, the targets are the outputs from each component. For the parameter constraints, the targets are the trainable parameters of each layer.
Footnote 3: Header indicates the projection layers located between the feature extractor and the classifier (i.e., the last layer). Headers are usually used in contrastive learning [1].
Figure 1 reports the above measures during the local optimization. We first observe that the feature constraint encourages the training model to generate features more similar to the global model than the parameter constraint does. When it comes to the divergence in the prediction, both constraints progressively deviate from the global model (i.e., increasing KL divergence), revealing that the model still suffers from the drift. However, it is noticeable that constraining the logits from the classifier makes the model deviate less from the global model, whereas regularizing the parameters of the classifier does not exhibit significant differences compared to other targets.
Figure 1: CKA similarities and KL divergences during the local optimization with different global constraints.
Since one might suspect that constraining the logits preserves the global knowledge at the cost of learning the local data, we evaluate the federated learning performance of each constraint in Table 1. From the evaluations, we first observe that the performance improves as the regularization targets become deeper. In particular, we experimentally find that constraining the logits from the classifier is the most effective way to prevent the drift compared to other targets. Our finding is aligned with the recent work demonstrating that deeper layers are more vulnerable to the client drift [11].
## 4 Learning from Drift (LfD)
In this work, we propose a novel federated learning algorithm, coined Learning from Drift (LfD), for effectively training a model in heterogeneous environments. At a high level, LfD works in two major steps: drift estimation and drift regularization. In the first phase, LfD explicitly estimates the drift between the global model and the locally trained models on the logit space (Section 4.1). Afterward, the model in each client is trained with the regularization, which leads the model not to fall in the drift direction during the local optimization (Section 4.2). Figure 2 shows the overview of our method.
### Drift Estimation by Prediction Discrepancy
Based on our upfront analysis, we quantify the degree of the client drift by estimating prediction discrepancy between locally trained models (i.e., \(\mathbf{\omega}_{t-1}^{k}\)) in the previous communication and their aggregated model (i.e., \(\mathbf{\omega}_{t}\)). The probability distribution over classes can be used to represent the prediction, and the drift for input \(x\) is defined as follows:
\[f_{D}^{y_{i}}(x)=\log(\sigma(f_{P}^{y_{i}}(x)))-\log(\sigma(f_{G}^{y_{i}}(x)))~{},\forall y_{i}\in\mathcal{Y} \tag{3}\]
where \(\sigma(\cdot)\) is the softmax function, and \(f_{P}^{y_{i}}(\cdot),f_{G}^{y_{i}}(\cdot)\) are the logits of the local model trained in the previous communication round and of its aggregated model (i.e., global model) for the class \(y_{i}\), respectively. The estimated drift indicates how confident the local model is compared to the global model for each class. If the local model reveals more confidence4 than the global model on the given input \(x\), a large drift is estimated, which can be interpreted as the local model being overfitted to the sample, since the global model's confidence is more reliable and generalized [12].
Footnote 4: We define the confidence as the predicted probability to the specific class (e.g., ground-truth).
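Eq. (3) amounts to a per-class log-probability gap and can be computed directly from the two models' logits; the sketch below assumes both sets of logits are available for the same mini-batch.

```python
# Sketch of the drift estimation in Eq. (3): log sigma(f_P) - log sigma(f_G).
import torch

@torch.no_grad()
def estimate_drift(logits_local_prev, logits_global):
    """Both inputs: (batch, num_classes) logits; returns f_D of Eq. (3)."""
    log_p_local = torch.log_softmax(logits_local_prev, dim=-1)   # log sigma(f_P)
    log_p_global = torch.log_softmax(logits_global, dim=-1)      # log sigma(f_G)
    return log_p_local - log_p_global   # positive entries = local over-confidence
```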
To prevent the drift, it is important to precisely estimate the drift by comparing the confidences between models. However, the confidences of the local models and the global model are not directly comparable due to the inconsistency between the magnitudes of the softmax weights and features. The softmax function for the prediction can be represented with the magnitudes of, and the angles between, the input features and classifier weights:

\[\sigma(f^{y_{i}}(x;\omega))=\frac{e^{f^{y_{i}}(x)}}{\sum_{j=1}^{|\mathcal{Y}|}e^{f^{y_{j}}(x)}}=\frac{e^{W_{i}^{T}u}}{\sum_{j=1}^{|\mathcal{Y}|}e^{W_{j}^{T}u}}=\frac{e^{\|W_{i}\|\|u\|\cos\theta_{i}}}{\sum_{j=1}^{|\mathcal{Y}|}e^{\|W_{j}\|\|u\|\cos\theta_{j}}}, \tag{4}\]

where \(\theta_{i}\) is the intersection angle between the pre-activated features \(u\) and the classifier weights \(W_{i}\) for the class \(y_{i}\). As can be seen from the above equation, the magnitudes of the features and the classifier weights affect the confidence of the prediction, effectively acting as different temperatures. Different temperatures between the global and local classifiers result in different confidences for the same prediction, making precise estimation of the drift difficult. Furthermore, the differences are more evident in heterogeneous environments, because this setting biases the classifier weights toward the majority classes [14].

\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Feature constraint & Parameter constraint \\ \hline No constraint & \multicolumn{2}{c}{69.1 \(\pm\) 0.6} \\ \hline \hline All & 67.3 \(\pm\) 0.8 (\(\downarrow\) 1.8) & 68.6 \(\pm\) 0.2 (\(\downarrow\) 0.5) \\ Feature extractor & 68.9 \(\pm\) 0.3 (\(\downarrow\) 0.2) & 69.5 \(\pm\) 0.3 (\(\uparrow\) 0.4) \\ Header & 69.6 \(\pm\) 0.5 (\(\uparrow\) 0.5) & 69.3 \(\pm\) 0.9 (\(\uparrow\) 0.2) \\ Classifier & 70.8 \(\pm\) 0.4 (\(\uparrow\) 1.7) & 69.7 \(\pm\) 0.5 (\(\uparrow\) 0.6) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Top-1 Test accuracy (%) of the methods with global constraints. The model without constraints is the same as FedAvg.

Figure 2: Overview of the proposed method (LfD). Here, the client has an imbalanced data distribution where the samples for _dog_ are the majority while the samples for _cat_ are the minority. In this case, LfD estimates that the client reveals over-confidence for the class _dog_ and under-confidence for the class _cat_. Afterward, LfD regularizes the local model to predict the same input with slightly lower confidence for _dog_ and higher confidence for _cat_ so as to align with the global model.
To estimate the drift precisely and comparably, we constrain the magnitudes by normalizing the classifier weights and features during the local optimization.
\[\hat{u}=\frac{u}{\|u\|},\hat{W}_{i}=\frac{W_{i}}{\|W_{i}\|},\forall i\in \mathcal{Y}, \tag{5}\]
Based on the normalized features and classifier weights, the softmax function can be represented as:
\[\sigma(f^{y_{i}}(x_{i};\omega))=\frac{e^{\cos\theta_{i}}}{\sum_{k=1}^{| \mathcal{Y}|}e^{\cos\theta_{k}}}, \tag{6}\]
However, as the logits are limited to the cosine range, the model may converge slowly during training. We thus give a margin to the ground-truth class to encourage the model to converge faster under the normalization constraint.
\[\sigma(f^{y_{i}}(x_{i};\omega))=\frac{e^{(\cos\theta_{i}-m)/\tau}}{e^{(\cos \theta_{i}-m)/\tau}+\sum_{k=1,k\neq i}^{|\mathcal{Y}|}e^{\cos\theta_{k}/\tau}}, \tag{7}\]
where \(m\in[0,1]\) is the margin hyper-parameter, and \(\tau\) indicates the temperature. The margin encourages the training model to predict the ground-truth label while keeping a margin \(m\) to the other classes on the logit5. This strategy allows the drift to be estimated more accurately without slowing down convergence.
Footnote 5: Note that we use the margin only in the training phase.
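A minimal sketch of the normalized classifier with margin in Eqs. (5)-(7) is given below; the default margin \(m=0.15\) and the temperature values follow the implementation details in Section 5.1, while the function and variable names are illustrative.

```python
# Sketch of the normalized cosine classifier with margin, Eqs. (5)-(7).
import torch
import torch.nn.functional as F

def cosine_margin_logits(features, weight, labels=None, m=0.15, tau=0.1):
    """features: (B, d), weight: (num_classes, d); returns logits for Eq. (7)."""
    # Normalize features and classifier weights, Eq. (5); logits become cos(theta), Eq. (6)
    cos = F.normalize(features, dim=-1) @ F.normalize(weight, dim=-1).t()
    if labels is not None:  # training only: subtract margin m at the ground-truth class
        onehot = F.one_hot(labels, num_classes=cos.size(1)).to(cos.dtype)
        cos = cos - m * onehot
    return cos / tau        # softmax over these logits realizes Eq. (7)
```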
### Drift Regularization
After estimating the drift for each sample, the local optimization is performed such that the model does not fall in the drift direction. To this end, the model should be trained in the direction opposite to the estimated drift. We control the learning direction of the local optimization by regularizing the model with an auxiliary label containing the reverse direction of the estimated drift. We define the regularization term \(\mathcal{R}\) to prevent the drift as follows:
\[\mathcal{R}^{y_{i}}(x_{i})=-\hat{y_{i}}\cdot\log(\sigma(f^{y_{i}}(x_{i}))) \text{ where }\hat{y_{i}}=\sigma(-f_{D}^{y_{i}}(x_{i})) \tag{8}\]
We derive the reverse direction of the drift by taking the negative form of the estimated drift \(f_{D}^{y_{i}}(x_{i})\) and normalizing it by the softmax function. With this regularization, the local objective of each client can be formulated as:
\[\mathcal{L}(\mathcal{D},\omega)=-\sum_{(x,y)\in\mathcal{D}_{k}}\sum_{i=1}^{| \mathcal{Y}|}(y_{i}\log\sigma(f^{y_{i}}(x))+\mathcal{R}^{y_{i}}(x)) \tag{9}\]
where the first term is the cross-entropy with the ground-truth label, and the second term is the regularization with the auxiliary label. As the auxiliary label \(\hat{y_{i}}\) provides supervision for every class, unlike the one-hot ground-truth, the objective can be decomposed according to the ground-truth class and the other classes.
\[\mathcal{L}(x,\omega)=\begin{cases}-(1+\hat{y_{i}})\cdot\log\sigma(f^{y_{i}}(x ))&\text{ if }y_{i}=1\\ -\hat{y_{i}}\cdot\log\sigma(f^{y_{i}}(x))&\text{ otherwise.}\end{cases} \tag{10}\]
For the ground-truth class, the regularization gives upward weights to the loss. The degree of the weights is determined by the confidence of the local model compared to the global model. The upward weights are set to low values if the local model reveals overconfidence. In contrast, higher upward weights are applied to samples for which the global model has higher confidence than the local model so as to avoid deviating from the global model. Figure 3 shows the prediction confidence of the global model, the local model, and the local model regularized by LfD. It shows that the global and local models have different confidence, especially for the minority classes of the local data distribution. However, LfD effectively regularizes the local model to have similar confidence to the global model.
For the other classes, the regularization \(\mathcal{R}\) works similarly to knowledge distillation [10] or label smoothing [13] by providing non-zero weights to other classes. It prevents the local model from forgetting inter-class similarities learned from the global model. This is particularly important when the client data distribution does not cover all classes in global data distribution. In the experiment, we show that the term prevents the model from forgetting the learned knowledge of the global model.
After the model is trained with LfD, the local models are sent to the central server for the aggregation (i.e., Eq. (2)). We provide the overall process of LfD in Algorithm 1 in the supplementary material (Section A).
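Putting Eqs. (3), (8), and (9) together, the local objective can be sketched as follows; `estimate_drift` refers to the Eq. (3) sketch above, and the logits of the previous local model and the global model are assumed to be precomputed with gradients disabled.

```python
# Sketch of the LfD local loss, Eqs. (8)-(9): cross-entropy plus drift regularization.
import torch
import torch.nn.functional as F

def lfd_loss(logits, labels, logits_local_prev, logits_global):
    log_p = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_p, labels)                            # first term of Eq. (9)
    drift = estimate_drift(logits_local_prev, logits_global)  # f_D, Eq. (3)
    aux = F.softmax(-drift, dim=-1)                           # y_hat = sigma(-f_D), Eq. (8)
    reg = -(aux * log_p).sum(dim=-1).mean()                   # R, Eq. (8)
    return ce + reg
```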
Figure 3: Confidence differences between the global model, the local model, and the local model with LfD. This shows that training with LfD encourages the local model to have similar confidence to the global model.
## 5 Evaluations
### Experimental Setup
**Federated Simulation** We mainly consider the FedML benchmark [14] to evaluate each method in the federated learning scenario, _i.e._, CIFAR-10 [15], CIFAR-100 [16], and CINIC-10 [1]. We also validate our method on other domains to extend our findings, i.e., AGNews [11] for natural language processing and BindingDB [12] for biology. To simulate federated learning, we randomly split the training samples of each dataset into \(K\) batches (i.e., the number of clients, 10 by default), and assign one training batch to each client. As we are interested in the Non-IID setting, we use the Dirichlet distribution to generate the Non-IID data partition among participants. Specifically, we sample \(p_{ik}\sim Dir_{K}(\beta)\) and allocate a \(p_{ik}\) proportion of the instances of class \(i\) to client \(k\), where \(Dir(\beta)\) is the Dirichlet distribution with a concentration parameter \(\beta\) (0.5 by default); a smaller \(\beta\) yields a more heterogeneous data distribution.
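The Dirichlet partition can be reproduced with a short routine like the following; it is a sketch of the standard recipe, not the exact benchmark code.

```python
# Sketch of the Dirichlet Non-IID partition: a Dir(beta) proportion of each
# class's samples is allocated to every client (beta = 0.5 by default).
import numpy as np

def dirichlet_partition(labels, num_clients=10, beta=0.5, seed=0):
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        p = rng.dirichlet(np.full(num_clients, beta))      # p_{ck} ~ Dir_K(beta)
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)  # split points per client
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx   # client_idx[k] = sample indices of client k
```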
**Baselines and Implementations** We consider comparing the test accuracies of the representative federated learning algorithms: FedAvg [16], FedProx [12], FedAvgM [12], and state-of-the-art method MOON [12]. We carefully choose the best hyper-parameters used in FedProx (_i.e.,_ weighting factor \(\mu\) for euclidean distance), FedAvgM (_i.e.,_\(\beta\) momentum factor) and MOON (_i.e.,_\(\tau\) temperature and \(\mu\) weighting factor for contrastive learning) through the validation set. For our method, we set the temperature as 0.1 (CIFAR-10, CINIC-10), 0.05 (CIFAR-100) for all softmax functions6 and set the margin as 0.15 by default.
Footnote 6: We use the small temperature due to the normalization.
We use a simple 2-layer CNN with a 2-layer MLP projection network for CIFAR-10. For CIFAR-100 and CINIC-10, we adopt ResNet-18 [14]. For a fair comparison, we use the same networks and augmentation for all methods. For the optimization, we use the SGD optimizer with a learning rate of 0.01 for all approaches. The SGD weight decay is set to 0.00001 and the SGD momentum is set to 0.9. The batch size is set to 128. The number of local epochs is set to 300 for Union. The number of local epochs is set to 10 for all federated learning approaches unless explicitly specified. Each result is averaged over three trials, and all methods are implemented with PyTorch 1.7 on NVIDIA A100 GPUs.
### Experimental Results
We evaluate each method from the following viewpoints to see the strengths of the proposed method in federated learning:
* **Generalization**: The federated learning algorithm should generalize well to diverse domains and datasets.
* **Heterogeneity**: The federated learning algorithm should be robust to different levels of heterogeneity, _i.e.,_ class skewness and training data distribution.
* **Scalability**: In reality, the number of clients is highly variable depending on the application. Moreover, there is no guarantee that all participants join every update step due to communication loss or battery issues. Therefore, the federated learning method should work well with a large number of clients and with variation in the clients participating in the global update.
* **Forgetting**: One of the challenges for federated learning with Non-IID data is the discrepancy between class distributions, which interferes with the convergence of the global model. To mitigate this effect, the local model should maintain the knowledge learned from the global model during the local optimization.
* **Efficiency**: Since the communication between clients and the server is the main source of energy consumption [17, 1], it is important that federated learning methods achieve strong accuracy with a small number of communications.
**Generalization** We first evaluate whether each method can be generalized to different domains and datasets. We conduct the experiments on three different domains, i.e., image (CIFAR-10/100), drug discovery (BindingDB), and natural language (AGNews), and the evaluation results are tabulated in Table 2. It can be observed that the model learned by LfD yields strong accuracies on all domains (e.g., LfD achieves state-of-the-art performance on three out of four datasets), revealing that explicitly regularizing the local model based on the estimated drift is quite effective. In particular, the models with global constraints, i.e., FedProx and MOON, often fail to achieve a great advantage over FedAvg. In contrast, LfD consistently improves the accuracy by a large margin over FedAvg regardless of domains and datasets. This further confirms the necessity of learning from the drift rather than relying solely on global constraints.
**Heterogeneity** We examine whether each method still works well under different levels of heterogeneity and shifts in the training data distribution. To this end, we adjust the concentration parameter \(\beta\) to increase the level of heterogeneity (i.e., \(\beta\in\{0.5,0.1,0.05\}\)) and perform the experiment on CINIC-10, which is constructed from ImageNet [18] and CIFAR-10. As the samples from CINIC-10 are not drawn from an identical distribution, we can naturally evaluate how sensitive each method is to the distribution shift.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Method & CIFAR-10 & CIFAR-100 & BindingDB & AGNews \\ \hline Union & 74.5 \(\pm\) 0.9 & 70.9 \(\pm\) 0.4 & 89.5 \(\pm\) 1.2 & 90.1 \(\pm\) 0.7 \\ \hline \hline FedAvg & 69.1 \(\pm\) 0.6 & 64.0 \(\pm\) 0.3 & 85.7 \(\pm\) 0.8 & 85.6 \(\pm\) 1.2 \\ FedAvgM & 69.9 \(\pm\) 1.1 & 64.1 \(\pm\) 0.9 & 86.5 \(\pm\) 1.5 & 86.1 \(\pm\) 0.8 \\ FedProx & 68.6 \(\pm\) 0.9 & 63.1 \(\pm\) 0.7 & 86.1 \(\pm\) 0.5 & 86.9 \(\pm\) 0.5 \\ MOON & 71.2 \(\pm\) 0.5 & 64.3 \(\pm\) 0.8 & **89.0**\(\pm\) 0.3 & 85.4 \(\pm\) 0.4 \\ LfD (ours) & **74.1**\(\pm\) 0.4 & **69.4**\(\pm\) 0.5 & 88.1 \(\pm\) 0.4 & **88.4**\(\pm\) 0.5 \\ \hline \hline \end{tabular}
\end{table}
Table 2: (Generalization) Top-1 Test Accuracy (%) on CIFAR-10, CIFAR-100, BindingDB and AGNews. Best and second best results are highlighted in **boldface** and underlined, respectively.
We tabulate the evaluation results in Table 3. We see that the state-of-the-art method, MOON, starts to lose accuracy considerably as the level of heterogeneity increases. Specifically, when the local dataset is highly skewed (i.e., \(\beta\) = 0.05), MOON shows roughly 10% lower accuracy than FedAvg. A similar observation is reported in recent work Luo et al. (2021). In contrast, LfD consistently yields higher accuracy than the other baselines. For the case where the training distribution is shifted, i.e., CINIC-10, we can see LfD still works well, revealing that LfD is robust to both different levels and types of heterogeneity.
**Scalability** We evaluate the scalability to verify that each method can be extended to more realistic scenarios, i.e., a larger number of clients and randomly active clients. For the evaluation, we increase the number of clients to 50 and 100, i.e., \(K\in\{50,100\}\), while decreasing the ratio of active clients7 to 25% and 50%. The overall results are shown in Table 4. It can be observed that increasing the number of clients and reducing the percentage of active clients degrade the performance of all methods, because these adjustments can be factors that increase the heterogeneity. However, LfD outperforms all baselines in all settings, demonstrating that LfD performs well in more realistic scenarios.
Footnote 7: We denote _active client_ as the clients who participate in the aggregation in Eq. (2).
**Forgetting** The global model usually has better knowledge of the global distribution Li et al. (2021). However, the learned knowledge can be forgotten after the local optimization, as training makes the model fit the local distribution. Such a forgetting phenomenon is called _catastrophic forgetting_, and it is widely known that forgetting leads to slow convergence and worse performance in federated learning Li et al. (2018); Zhao et al. (2018). To estimate how much each method forgets or maintains the learned knowledge, we calculate the accuracy for existing and absent classes before and after the local optimization and quantify the forgetting by defining the learning performance (LP) as:
\[LP(\mathcal{D}^{test};\omega_{t}^{k},\omega_{t})=\frac{1}{|\mathcal{Y}|}\sum_{y _{i}\in\mathcal{Y}}\frac{Acc(\mathcal{D}^{test}(y_{i}),\omega_{t}^{k})}{Acc( \mathcal{D}^{test}(y_{i}),\omega_{t})} \tag{11}\]
where \(\mathcal{D}^{test}(y_{i})\) is the test dataset that contains samples belonging to the class \(y_{i}\), and \(Acc(\cdot)\) is the accuracy of the given classifier on the given dataset. \(LP(\cdot)\) is a relative metric estimating how the categorical accuracy changes after the local optimization: a value larger than one indicates improved accuracy after the optimization, whereas a value lower than one indicates forgetting. Based on this metric, we select a specific client who has only half of the categories and estimate \(LP\) for the existing and absent categories. Table 5 and Figure 4 show the LP performance and its dynamics during the local optimization, respectively8. From the table, the accuracy for the existing categories is improved after the optimization for all baselines. However, the baselines completely forget the discriminative ability for the absent classes. Interestingly, LfD maintains the learned knowledge to some extent without seeing any samples of the corresponding classes. The results can be explained by the distillation effect of the drift regularization in Eq. (8).
Footnote 8: Here, we use the trained models from each method that have nearly similar test accuracy to the global model.
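The LP metric of Eq. (11) reduces to an average of per-class accuracy ratios; a minimal sketch, assuming the per-class accuracies of the local and global models have already been computed:

```python
# Sketch of the learning-performance metric LP, Eq. (11).
import numpy as np

def learning_performance(per_class_acc_local, per_class_acc_global, eps=1e-8):
    """Inputs: dicts mapping class -> accuracy on that class's test samples."""
    ratios = [per_class_acc_local[c] / (per_class_acc_global[c] + eps)
              for c in per_class_acc_global]
    return float(np.mean(ratios))   # > 1: improvement, < 1: forgetting
```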
**Efficiency** Figure 5 reports the test accuracy over different numbers of communication rounds and local optimization steps. The baselines lose accuracy between 5 and 30 epochs due to the client drift, which is commonly observed in other studies [11, 12]. These results validate that LfD can achieve strong accuracy with small communication and computation costs.
## 6 Related work
In this section, we mainly review the federated learning algorithms dealing with Non-IID data because it is one of the most challenging settings that degrade the performance and slow down the convergence [11, 2].
### Global Regularization
While the FedAvg method works well with IID data, the performance significantly degrades with increasing heterogeneity in the data distribution [11]. To prevent this adverse effect, one mainstream approach is to constrain the local optimization by maximizing agreement with the global model. SCAFFOLD [13] corrects the local optimization by introducing _control variates_ from the global model. FedProx [11] regularizes the network by minimizing the euclidean distance to the global model. However, recent work [11] shows that those methods have little or even no advantage over FedAvg and proposes MOON, which adopts contrastive learning to increase agreement with the global model based on projected head vectors. FedDyn [1] changes the local optimization to ensure that the local optimum on each client is asymptotically consistent with the stationary points of the global objective.
### Model Aggregation
Instead of naively averaging the model weights, some works revise the aggregation step on the orchestration server. FedAvgM [12] updates the aggregated model with server momentum. FedMA [14] mitigates the model heterogeneity by matching similar neurons (e.g., convolutional filters and hidden states in LSTMs) between clients and averaging them to build the global model. Similarly, Fed\({}^{2}\)[13] aligns similar features by model structure adaptation and feature-paired averaging on similarly functioning neurons.
### Data Sharing and Generation
Sharing the local data between clients is prohibited in the federated learning scenario. One practice is to share public data [20, 11] or unlabeled and synthesized data points [10]. Instead of sharing or generating input points, [12] estimates the feature statistics and augments the features following the global distribution to calibrate the classifier. These works are promising but carry potential risks for privacy preservation.
## 7 Conclusion
In this work, we have analyzed the existing federated learning algorithms that employ the global model to prevent client drift. We have observed that regularizing the outputs on the logit space is most effective, whereas constraining other features and parameters does not provide much improvement over FedAvg. Based on this upfront analysis, we propose a federated learning algorithm, coined Learning from Drift (LfD). LfD explicitly estimates the client drift over the logit space and regularizes the local model to learn in the inverse direction of the estimated drift. In the experiments, we have evaluated our method and strong baselines with regard to five perspectives (i.e., Generalization, Heterogeneity, Scalability, Forgetting, and Efficiency). Comprehensive evaluation results clearly show that LfD effectively prevents client drift and achieves state-of-the-art results on experiments with diverse heterogeneous settings.
Figure 4: Test accuracy (%) for existing (Top) and absent (Bottom) classes during the local optimization. This shows that only LfD maintains the knowledge for the absent classes.
Figure 5: Test accuracy over the different number of communication rounds (Top) and local optimization steps (Bottom). |
2309.11933 | Fully Transformer-Equipped Architecture for End-to-End Referring Video
Object Segmentation | Referring Video Object Segmentation (RVOS) requires segmenting the object in
video referred by a natural language query. Existing methods mainly rely on
sophisticated pipelines to tackle such cross-modal task, and do not explicitly
model the object-level spatial context which plays an important role in
locating the referred object. Therefore, we propose an end-to-end RVOS
framework completely built upon transformers, termed \textit{Fully
Transformer-Equipped Architecture} (FTEA), which treats the RVOS task as a mask
sequence learning problem and regards all the objects in video as candidate
objects. Given a video clip with a text query, the visual-textual features are
yielded by encoder, while the corresponding pixel-level and word-level features
are aligned in terms of semantic similarity. To capture the object-level
spatial context, we have developed the Stacked Transformer, which individually
characterizes the visual appearance of each candidate object, whose feature map
is decoded to the binary mask sequence in order directly. Finally, the model
finds the best matching between mask sequence and text query. In addition, to
diversify the generated masks for candidate objects, we impose a diversity loss
on the model for capturing more accurate mask of the referred object. Empirical
studies have shown the superiority of the proposed method on three benchmarks,
e.g., FETA achieves 45.1% and 38.7% in terms of mAP on A2D Sentences (3782
videos) and J-HMDB Sentences (928 videos), respectively; it achieves 56.6% in
terms of $\mathcal{J\&F}$ on Ref-YouTube-VOS (3975 videos and 7451 objects).
Particularly, compared to the best candidate method, it has a gain of 2.1% and
3.2% in terms of P$@$0.5 on the former two, respectively, while it has a gain
of 2.9% in terms of $\mathcal{J}$ on the latter one. | Ping Li, Yu Zhang, Li Yuan, Xianghua Xu | 2023-09-21T09:47:47Z | http://arxiv.org/abs/2309.11933v1 | # Fully Transformer-Equipped Architecture for End-to-End Referring Video Object Segmentation
###### Abstract
Referring Video Object Segmentation (RVOS) requires segmenting the object in video referred by a natural language query. Existing methods mainly rely on sophisticated pipelines to tackle such a cross-modal task, and do not explicitly model the object-level spatial context, which plays an important role in locating the referred object. Therefore, we propose an end-to-end RVOS framework completely built upon transformers, termed _Fully Transformer-Equipped Architecture_ (FTEA), which treats the RVOS task as a mask sequence learning problem and regards all the objects in video as candidate objects. Given a video clip with a text query, the visual-textual features are yielded by the encoder, while the corresponding pixel-level and word-level features are aligned in terms of semantic similarity. To capture the object-level spatial context, we have developed the Stacked Transformer, which individually characterizes the visual appearance of each candidate object, whose feature map is decoded to the binary mask sequence in order directly. Finally, the model finds the best matching between mask sequence and text query. In addition, to diversify the generated masks for candidate objects, we impose a diversity loss on the model for capturing a more accurate mask of the referred object. Empirical studies have shown the superiority of the proposed method on three benchmarks, e.g., FTEA achieves 45.1% and 38.7% in terms of mAP on A2D Sentences (3782 videos) and J-HMDB Sentences (928 videos), respectively; it achieves 56.6% in terms of \(\mathcal{J}\&\mathcal{F}\) on Ref-YouTube-VOS (3975 videos and 7451 objects). Particularly, compared to the best candidate method, it has a gain of 2.1% and 3.2% in terms of P\(@\)0.5 on the former two, respectively, while it has a gain of 2.9% in terms of \(\mathcal{J}\) on the latter one.
keywords: Video object segmentation, stacked transformer, diverse object mask, vision-language alignment
## 1 Introduction
Referring Video Object Segmentation (RVOS) bridges the semantic gap between text description and video content by yielding the pixel-level object mask sequence that matches the referred target in a sentence (_a.k.a._, text query or referring expression). Generally, RVOS has widespread application fields, such as human-robot interaction [44] and language-based video editing [15]. Unlike popular semi-supervised video object segmentation [21; 11], which provides the first-frame mask, no pixel labeling is given for RVOS and visual-linguistic cross-modal understanding is required, which makes it much more challenging.
**Research Objectives**. We have two primary goals: 1) matching the objects with the text query by cross-modal feature alignment; 2) locating the pixel regions of candidate objects by learning promising feature representation which captures the spatiotemporal pixel relations among frames.
Essentially, RVOS requires understanding the scene and the objects in video while matching objects with the text query. The challenge is that there is usually more than one object, which requires modeling the relative positions and action interactions of the objects. However, most existing methods regard RVOS as a pixel-wise classification problem, classifying each pixel of the input video into two categories, namely target or background. Hence, the non-referred objects may be treated as background, adding difficulty to modeling the object spatial relations. Here, we argue that RVOS is an object-wise rather than pixel-wise classification problem. For example, there are a person and a kangaroo in Figure 1, and the referring expression is "_a person walking behind the kangaroo_"; thus the model needs to capture both the person and the kangaroo at the same time while modeling the relations of the two objects, and then segment the target object (person), i.e., the subject of the referring sentence. Existing pixel-wise methods [16; 46; 50] fail to model the spatial relations at the object level, thus degrading the performance.
To overcome the above limitation, we rethink the RVOS as an object-wise classification problem, and adopt the paradigm of mask classification [10] to construct a novel object-wise framework. The basic idea is to generate binary masks of multiple candidate objects, and then select the best candidate object mask as the final prediction. As shown in Figure 1, the video and the referring expression involve two candidate objects, "person" and "kangaroo". Our method aims at generating masks for all the objects and finally selecting the binary mask corresponding to "person" as the target mask. In the mask classification paradigm, the goal is to capture all candidate objects in video, while effectively understanding the potential relations among candidate objects.
To locate the pixel regions of candidate objects, a natural way is to obtain their features individually. However, existing methods [4; 46] adopt convolution and upsampling operations to decode frame features into masks, where the convolution only captures the spatial context locally to obtain image-level features; this lacks global relation modeling and cannot independently characterize each candidate object. Thus, we propose a transformer-based object decoding network to globally capture the spatial context and obtain independent candidate object feature maps.
To this end, we propose a Fully Transformer-Equipped Architecture (FTEA) for the RVOS task, whose overall pipeline is illustrated in Figure 1. Given a video and a text query (left top), FTEA outputs the mask sequence matching the referred target (person in purple at the left bottom). FTEA is composed of Visual Encoder, Text Encoder, Cross-Modal Alignment module, and Mask Decoder. Unlike previous complex pipelines [33; 63; 50; 46] unifying both CNNs (Convolutional Neural Networks) [6; 47] and RNNs (Recurrent Neural Networks) [12; 19], our framework is completely dependent upon efficient transformers, including visual and text feature encoding. More importantly, we propose the Stacked Transformer based Mask Decoder to decode frame feature maps to mask sequences by designing two modules, i.e., Stacked Attention (SA) and Stacked Feed Forward Network (SFFN). It employs a progressive object learning strategy for capturing the object-level spatial context, and utilizes dynamic convolution with a diversity loss imposed on candidate object kernels, in order to make candidate object masks diverse. In Mask Decoder, one linear layer follows the Stacked Transformer to join the dynamic convolution with candidate object kernels to produce the corresponding masks. The referred object matches the mask sequence with the highest sum of referring scores, and these scores come from the Cross-Modal Alignment module.
Figure 1: Overall pipeline of our model. It adopts an end-to-end framework completely built upon transformers, and the Stacked Transformer with diversity loss is developed for decoding the mask sequence of the referred object (person).
The former SA module takes low-level feature maps and cross-modal alignment features as input, trying to capture appearance cues, such as edges and contours, and thus locate the region of interest related to the referred object. To obtain independent candidate object feature maps, SA applies candidate object kernels to the pixel-wise features of the image, and the object kernels encode object-level properties such as location or action. The latter SFFN module, placed behind SA, divides the feature map channels into groups, whose number equals the number of candidate objects, resulting in group sparsity of object relations during group linear mapping. Each group of feature maps is learned to reveal the latent pattern of one candidate object. Due to the sparsely group-wise strategy, the computational cost is largely reduced compared to the densely channel-wise way adopted by common convolution.
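The group linear mapping of SFFN can be realized with a grouped \(1\times 1\) convolution, as in the sketch below; the sizes are illustrative, and the point is that each of the \(K\) groups only mixes the channels of its own candidate object, cutting the weight count by roughly a factor of \(K\) relative to a dense mapping.

```python
# Sketch of SFFN's group linear mapping via a grouped 1x1 convolution.
import torch
import torch.nn as nn

K, alpha = 16, 4                       # candidate objects and channels per candidate
sffn = nn.Conv1d(in_channels=alpha * K, out_channels=alpha * K,
                 kernel_size=1, groups=K)  # each group maps only its own object's channels

x = torch.randn(5, alpha * K, 40 * 40)  # (T, channels, H2*W2) decoding feature, toy sizes
y = sffn(x)                             # same shape; ~K x fewer weights than a dense layer
```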
In the mask classification paradigm, the number of candidate object masks is often much greater than that of the objects in video, and multiple candidate masks may be connected to the same object while some possible candidates are neglected. To deal with this issue, as shown in the right bottom part of Figure 1, we impose the diversity loss on candidate object kernels to make them dissimilar to each other. This encourages generating diverse candidate masks, such that the objects in video can all be covered as far as possible. Here, the candidate object kernels are learned from visual-text features by the Cross-Modal Alignment module.
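The exact form of the diversity loss is specified later in the paper; purely as an illustration of the idea, one plausible variant penalizes the pairwise cosine similarity among the \(K\) candidate object kernels:

```python
# Illustrative (assumed) pairwise-dissimilarity penalty on candidate object kernels;
# not the paper's exact diversity loss.
import torch
import torch.nn.functional as F

def diversity_loss(kernels):             # kernels: (K, C0) candidate object kernels
    z = F.normalize(kernels, dim=-1)
    sim = z @ z.t()                       # pairwise cosine similarities
    off_diag = sim - torch.eye(len(z), device=z.device)
    return off_diag.abs().sum() / (len(z) * (len(z) - 1))
```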
The main contributions of this work are highlighted as:
* We propose an end-to-end RVOS framework completely built upon transformers, i.e., Fully Transformer-Equipped Architecture, termed FTEA.
* We develop the stacked attention mechanism to model object-level spatial context, and the stacked FFN to reduce model parameters when obtaining independent candidate object feature maps by group linear mapping.
* We introduce the diversity loss on candidate object kernels to diversify candidate object masks.
* Extensive experiments were carried out on three benchmarks to validate the effectiveness of our FTEA approach.
The remainder of this paper is structured as follows. Section 2 reviews some closely related works and Section 3 introduces our FTEA framework for RVOS task. Then, we report experimental results on three benchmarks to show the superiority of our method in Section 4 followed by the discussion in Section 5. Finally, this work is concluded in Section 6.
## 2 Related Work
### Referring Video Object Segmentation
Different from conventional VOS [45; 20; 28] and semantic segmentation [42], RVOS aims to segment the pixel-level target object in video, given a natural language description, i.e., text query or referring expression. This desires visual-linguistic understanding, and establishes the relation between the referred object and its corresponding pixel region of video.
In RVOS, dynamic convolution [16], as a typical method, introduces dynamic kernels conditioned on text features and convolves them with visual features to obtain object masks. While simple and efficient, it fails to capture the pixel-level spatial context well, since the convolution kernels heavily depend upon the referring expressions. To overcome this drawback, Wang _et al._[50] proposed Context Modulated Dynamic Networks (CMDA) to generate dynamic kernels by using both visual and linguistic features, such that the object discriminant ability can be enhanced. Besides, several works adopt the attention mechanism, which not only globally encodes video frames and referring expressions, but also models the fine-grained semantic relations, including word-word, pixel-pixel, and word-pixel relations. For example, Wang _et al._[51] designed the vision-guided language attention to reduce the linguistic variation of the text query and the language-guided vision attention to obtain the object region related to the text query. Since the vanilla attention mechanism does not encode pixel-wise object position [49], Ning _et al._[41] developed the Polar Relative Positional Encoding (PRPE) mechanism to represent the object spatial relation described by the text query. To characterize more accurate object appearance, Ye _et al._[63] employed cross-modal self-attention to capture both the pixel-level and the word-level relations of linguistic and visual features. To leverage temporal coherence among video frames and obtain consistent object masks, Seo _et al._[46] introduced memory attention to model pixel-wise correlation across frames. Furthermore, Hui _et al._[22] utilized a cross-modal attention module to enable the interaction between linguistic and visual features at both frame-level and video clip-level to obtain more robust object representations. In addition to vanilla dot product attention, Liu _et al._[33] proposed a temporal graph reasoning strategy on top of cross-modal attention maps to highlight the target object. Furthermore, some works take advantage of both dynamic convolution and the attention mechanism, e.g., Botach _et al._[4] adopted the popular Transformer blocks [49] with attention to obtain more informative object-level features while using dynamic convolution to produce object masks. Following this work, Wu _et al._[54] treated the language as queries, which are transformed into dynamic kernels for yielding more robust object features, but it is computationally expensive. Differently, Bellver _et al._[3] used frame-level features to obtain final object masks for better efficiency, but failed to capture temporal relations across frames, leading to inconsistent object prediction. Thereafter, Kazakos _et al._[25] tried to generate synthetic referring expressions to improve the model generalization ability given different text queries. Instead of using convolution or transformer, Mcintosh _et al._[39] encoded both the video and the text input in the form of capsules [18] to obtain more effective cross-modal object representations.
Apart from the above end-to-end approaches, other RVOS methods [59; 29; 13] adopt the multi-stage pipeline, i.e., ensemble learning. For example, Chen _et al._[8] estimated the initial object location using object proposals, which are derived from offline-trained instance segmentation model [17]; Liang _et al._[29] introduced a more complex but powerful pipeline, which is composed of two instance segmentation models [7; 48] and one VOS model [62] to produce more accurate masks. Besides, Ding _et al._[13] proposed the referring expression comprehension and segmentation model [38] and one VOS model [62] to obtain pixel-level object regions. In addition, Yang _et al._[58] tried to align the video content with the textual query in a fine-grained manner to alleviate the semantic asymmetry problem, while Yang _et al._[61] conducted intra-modal and inter-modal joint learning for video referring segmentation without the aid of object detection or category-specific pixel labeling.
### Transformers
Transformer was proposed by Vaswani _et al._[49] as an attention-based building block for the sequence-to-sequence machine translation task. In recent, transformer has gained much popularity in natural language processing, and also been successfully applied to computer vision, such as object detection [5; 64], visual tracking [57; 9], semantic segmentation [10; 2], and video captioning [27]. Here we discuss several typical transformers.
Vision Transformer (ViT) [14] shows CNNs are unnecessary for image classification and a pure Transformer applied to sequences of image patches works as well. To handle large variations of visual entities
and the high resolution of image pixels, Liu _et al._[35] presented a hierarchical Transformer with Shifted windows (Swin) for the attention mechanism, and Swin Transformer has been adapted to video recognition [36]. Transformers can be used not only as vision classifiers, but also in object detection. For example, DETR (Detection with Transformers) [5] introduces transformers into object detection by employing a set of object queries as candidates, which are fed into the Transformer decoder to obtain a final set of detection predictions. In addition, VisTR (Video instance segmentation with Transformers) [52] extends the DETR framework to the video instance segmentation (VIS) [60] task and tackles VIS in an end-to-end manner with parallel sequence decoding.
Recently, transformers have exhibited strong power in semantic segmentation. For instance, Maskformer [10] is a transformer based model which addresses both semantic and panoptic segmentation tasks using the mask classification paradigm. This paradigm disentangles the image partitioning and classification aspects of segmentation, which leverages both semantic-level and instance-level characteristics. Our model adopts a similar paradigm but uses a different architecture that employs stacked transformers with an imposed diversity constraint. Inspired by this, we also use mask classification to distinguish objects in video as candidates, and utilize the associated referring scores conditioned on the text query to predict the target object. Besides, previous transformer-based models still use convolution building blocks to decode pixel-wise features into masks, failing to fully exploit the transformer advantage. This motivates us to further develop a transformer based mask decoder for deriving more discriminant object-wise features.
## 3 Our Method
The methodology section first covers the problem definition with the model architecture overview, and then elaborates the details of each component in the FTEA framework, followed by the loss function descriptions in the end.
### Problem Definition
For a video sequence with \(n\) frames, each frame is accompanied by a text query \(\mathcal{E}=\{w_{s}\}_{s=1}^{S}\) with \(S\) words, and the RVOS task aims to produce a series of frame-wise binary segmentation masks of the target object referred in the text query. As a common practice, the frames are randomly sampled from one video sequence to form a clip in each epoch during training, and the number of selected frames is called _temporal length_ or _window size_. Thus, for a video clip \(\mathcal{V}=\{\mathbf{I}_{t}\in\mathbb{R}^{H\times W\times 3}\,|\,t=1,2,\dots,T\}\) with \(T\) frames sampled from the video sequence, where \(\mathbf{I}_{t}\) denotes the \(t\)-th RGB frame with height \(H\), width \(W\), and three channels, the goal is to obtain the corresponding \(T\) binary segmentation masks \(\hat{\mathcal{P}}=\{\mathbf{\hat{P}}_{t}\in\{0,1\}^{H\times W}\,|\,t=1,2,\ldots,T\}\), where \(\mathbf{\hat{P}}_{t}\) denotes the segmentation mask of the \(t\)-th frame.
Figure 2: Overview of our Fully Transformer-Equipped Architecture (FTEA) framework for RVOS task.
### The FTEA Framework
Previous end-to-end approaches [16; 46; 63; 8] commonly formulate RVOS as a per-pixel binary classification problem, applying a classification loss to each output pixel. However, when handling a text query involving multiple objects, per-pixel binary classification cannot well capture multiple objects, since all objects except the referred one are viewed as background. Hence, we introduce an alternative paradigm called mask classification [10], which predicts a set of binary masks, each associated with a single class prediction. Throughout this paper, we regard all appearing objects in video as candidate objects and attempt to produce binary masks for them all. For each binary mask, we use a referring score, _a.k.a._, confidence score, to indicate whether the object is referred in the text query and visible in video.
Given a video clip \(\mathcal{V}\), an RVOS model \(\Phi(\cdot)\) will produce a set of mask sequences \(\tilde{\mathcal{P}}^{(k)}\) for \(K\) candidate objects and the corresponding set of referring scores \(\tilde{\mathcal{R}}^{(k)}\) associated with each object mask, i.e., \(\Phi(\mathcal{V})\rightarrow\{(\tilde{\mathcal{P}}^{(k)},\tilde{\mathcal{R}}^ {(k)})\}_{k=1}^{K}\). Here, \(\tilde{\mathcal{P}}^{(k)}\) contains \(T\) object masks \(\tilde{\mathbf{P}}_{t}^{(k)}\in\{0,1\}^{H\times W}\), and \(\tilde{\mathcal{R}}^{(k)}=\{\tilde{r}_{k,t}\}_{t=1}^{T}\) contains \(T\) probability values \(\tilde{r}_{k,t}\), i.e., confidence score, indicating whether the \(k\)-th candidate object is referred and visible in the \(t\)-th frame. Usually, the number of candidate objects \(K\) is set to be much larger than the number of objects \(N\) in video. Next, the mask sequence with the highest confidence score is selected to produce final masks of the referred object.
In principle, we utilize transformer [49] as building block to construct our RVOS model, which is termed as Fully Transformer Equipped Architecture (FTEA), whose entire framework is depicted in Figure 2. It contains four main components, i.e., Visual Encoder that extracts multi-scale spatiotemporal features, Text Encoder that extracts compact text features, Cross-Modal Alignment module that aligns visual and text features, and Mask Decoder that produces final prediction by decoding object-level features into masks using newly developed Stacked Transformer.
The working mechanism of FTEA is briefly described as follows: Firstly, the video clip and the text query are respectively fed into Visual Encoder and Text Encoder to obtain visual features and text features, which are then concatenated into visual-textual features. Next, the transformer based Cross-Modal Alignment module is used to capture the frame-wise global visual-linguistic context, including pixel-pixel spatial relations, pixel-word semantic relations, and word-word semantic relations, resulting in the visual-text alignment features with candidate object kernels and referring scores. Then, the stacked transformer serves as the main component of Mask Decoder, which progressively learns object-level features that are decoded into \(K\) mask sequences using candidate object kernels. Finally, the relevant mask sequences are selected for supervision during training, according to referring scores and mask quality in terms of the Dice coefficient [40], while the mask sequence with the highest referring score is picked as the final prediction during inference.
### Visual and Text Encoder
For a video clip \(\mathcal{V}=\{\mathbf{I}_{t}\in\mathbb{R}^{H\times W\times 3}\}_{t=1}^{T}\), we use the transformer based visual encoder [36] to generate pixel-level high, middle, and low-resolution appearance feature maps, i.e., \(\mathbf{F}_{1/4}\in\mathbb{R}^{T\times H_{1}W_{1}\times C_{1}}\), \(\mathbf{F}_{1/8}\in\mathbb{R}^{T\times H_{2}W_{2}\times C_{2}}\), \(\mathbf{F}_{1/16}\in\mathbb{R}^{T\times H_{3}W_{3}\times C_{3}}\), where \(C_{1}=96,C_{2}=192,C_{3}=384\) are channel numbers and \(H_{1}W_{1}=\frac{H}{4}\cdot\frac{W}{4}\), \(H_{2}W_{2}=\frac{H}{8}\cdot\frac{W}{8}\), \(H_{3}W_{3}=\frac{H}{16}\cdot\frac{W}{16}\) are the feature vector lengths of the down-sampled feature maps. For text query \(\mathcal{E}=\{w_{s}\}_{s=1}^{S}\) with \(S\) words, we adopt transformer based linguistic model RoBERTa (Robustly optimized BERT approach) [34], to extract the word-level text feature \(\mathbf{Y}\in\mathbb{R}^{S\times C^{\prime}}\) (\(C^{\prime}=768\)). Then, the low-resolution appearance feature map \(\mathbf{F}_{1/16}\) and text feature \(\mathbf{Y}\) are linearly projected to the shared data space with dimension \(C\) (\(C=256\)). Next, the projected low-resolution appearance feature map is flattened and concatenated with the projected text feature to produce the visual-text feature tensor \(\mathbf{X}\in\mathbb{R}^{T\times(H_{3}W_{3}+S)\times C}\).
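The encoder-side fusion can be sketched as follows, with toy spatial sizes; the linear projections to the shared \(C=256\) space and the concatenation along the sequence axis mirror the description above, while the variable names are ours.

```python
# Sketch of visual-text feature fusion: project F_{1/16} and RoBERTa word
# features to the shared C=256 space, then concatenate per frame.
import torch
import torch.nn as nn

T, S, C = 5, 7, 256
proj_v = nn.Linear(384, C)     # C3 = 384 for the 1/16-scale visual features
proj_t = nn.Linear(768, C)     # C' = 768 for RoBERTa word features

F_1_16 = torch.randn(T, 20 * 20, 384)   # (T, H3*W3, C3), toy spatial size
Y = torch.randn(S, 768)                 # (S, C') word-level text features
X = torch.cat([proj_v(F_1_16),
               proj_t(Y).unsqueeze(0).expand(T, -1, -1)], dim=1)  # (T, H3*W3+S, C)
```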
### Cross-Modal Alignment
The cross-modal alignment module mainly consists of vanilla transformer encoders and transformer decoders. These transformer blocks adopt scaled dot-product attention and linear layers. The former attends to the region of interest in each frame by encoding the visual-text feature \(\mathbf{X}\), while the latter learns a set of candidate object queries (i.e., object feature vectors) using the output features of the former, which is similar to the object queries in DETR [5].
For transformer encoders (three here), each one has a standard architecture and consists of multi-head self-attention module and feed forward network (FFN). Following [5], we add the fixed sine spatial positional encoding to the appearance feature map of each frame while no positional encoding is used for text features. For memory efficiency, we divide the temporal dimension into \(T^{\prime}\) groups for parallel computing. Its output is the visual-text alignment feature \(\mathbf{X}^{\prime}\in\mathbb{R}^{T\times(H_{3}W_{3}+S)\times C}\).
For transformer decoders (three here), we introduce \(K\) candidate object queries to represent candidate objects of each frame, as in [52]. Specifically, we define a tensor \(\mathbf{O}\in\mathbb{R}^{T\times K\times C}\) to accommodate candidate object queries. The queries share the weights across frames and learn to attend to the same candidate object in video. After that, we feed the candidate object query \(\mathbf{O}\) with visual-text alignment feature \(\mathbf{X}^{\prime}\) to the vanilla transformer decoder, in order to globally reason about all candidate objects together and thus model the semantic relations of object-word and object-pixel pairs. Then, we compute the attentive candidate object query tensor \(\mathbf{O}^{\prime}\in\mathbb{R}^{T\times K\times C}\), i.e.,
\[\mathbf{O}^{\prime}=\text{LN}(\text{mh-attn}(\mathbf{O}\mathbf{W}^{\prime}_{query },\mathbf{X}^{\prime}\mathbf{W}^{\prime}_{key},\mathbf{X}^{\prime}\mathbf{W}^ {\prime}_{value})+\mathbf{O}), \tag{1}\]
where \(\{\mathbf{W}^{\prime}_{query},\mathbf{W}^{\prime}_{key},\mathbf{W}^{\prime}_{value}\}\in\mathbb{R}^{C\times C}\) are learnable parameter matrices, and \(\mathbf{O}^{\prime}\) retrieves relevant object-level semantics from the visual-text alignment feature \(\mathbf{X}^{\prime}\) by exploring their pair-wise relations. Next, the attentive candidate object queries are fed to one FFN to obtain the first hidden feature \(\mathbf{O}_{1}\in\mathbb{R}^{T\times K\times C}\), which goes through the subsequent two transformer decoders with the same visual-text alignment feature \(\mathbf{X}^{\prime}\) to obtain finer hidden features \(\{\mathbf{O}_{2},\mathbf{O}_{3}\}\in\mathbb{R}^{T\times K\times C}\). Concatenating these hidden features yields the hidden feature of candidate objects \(\mathbf{H}=[\mathbf{O}_{1};\mathbf{O}_{2};\mathbf{O}_{3}]\in\mathbb{R}^{3\times T\times K\times C}\), where \([~{};~{}]\) denotes the concatenation. This results in coarse-to-fine hidden features for learning finer details of objects.
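Eq. (1) is a standard cross-attention step and maps directly onto stock PyTorch modules; the sketch below processes one frame group and treats the projections inside `nn.MultiheadAttention` as the \(\mathbf{W}^{\prime}\) matrices.

```python
# Sketch of the cross-attention step in Eq. (1): O' = LN(mh-attn(O, X', X') + O).
import torch
import torch.nn as nn

C, K, L = 256, 16, 407                       # channels, object queries, H3*W3 + S
attn = nn.MultiheadAttention(C, num_heads=8, batch_first=True)
ln = nn.LayerNorm(C)

O = torch.randn(1, K, C)                     # candidate object queries (one frame group)
Xp = torch.randn(1, L, C)                    # visual-text alignment feature X'
O_attn, _ = attn(query=O, key=Xp, value=Xp)  # mh-attn(O W'_q, X' W'_k, X' W'_v)
O_prime = ln(O_attn + O)                     # residual connection + LayerNorm, Eq. (1)
```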
To incorporate object-level spatial location with pixel-level visual features, inspired by [16; 50], we propose to use the dynamic convolution strategy to produce the mask sequence for each candidate object. Particularly, two linear layers with ReLU in between are applied to the hidden feature \(\mathbf{H}\) to derive several dynamic kernels \(\{\mathbf{Z}^{(1)},\mathbf{Z}^{(2)},\mathbf{Z}^{(3)}\}\in\mathbb{R}^{T\times K\times C_{0}}\), where \(C_{0}=8\). Since the hidden feature is learned from the attentive candidate object query \(\mathbf{O}^{\prime}\) to finely represent candidate objects, the dynamic kernels are also called candidate object kernels. These dynamic kernels are then fed into the following mask decoder, for progressively decoding pixel-wise object features into \(K\) candidate object mask sequences.
To match the masks with the referred object, we introduce a referring score \(\tilde{r}_{k,t}\) for each candidate object mask, a probability produced by passing the hidden feature of the candidate objects \(\mathbf{H}\) through a linear layer with a sigmoid function. We thus obtain the referring score matrix \(\tilde{\mathbf{R}}\in\mathbb{R}^{T\times K}\), whose entries are between 0 and 1.
### Mask Decoder
Previous works [46; 4] usually adopt a convolution-based Feature Pyramid Network (FPN) [30] to decode pixel-wise feature maps into masks. However, convolution only aggregates pixel-level features from a local region of the feature map, and such features cannot well reflect rich semantics, e.g., when a car, a person, and a bicycle all appear in one image. This calls for modeling the object-level appearance patterns that inherently embed latent semantics. Hence, to model object-level spatial relations among candidate objects, we introduce a progressive object learning scheme by designing the Stacked Transformer, which consists of two modules, i.e., _Stacked Attention_ and _Stacked FFN_. They _stack_ the object-level semantic features of the candidate objects sequentially along the channel dimension in order to avoid mask confusion.
In detail, the candidate object kernels from the cross-modal alignment module are fed into the Stacked Transformer to incorporate object-level context. Similar to FPN, the Stacked Transformer exploits rich appearance cues by fusing the high-resolution feature maps derived from the visual encoder across the frames of a video. We adopt two cascaded Stacked Transformers to build our mask decoder. To produce pixel-wise masks, we remove the text part of the visual-text alignment feature \(\mathbf{X}^{\prime}\in\mathbb{R}^{T\times(H_{3}W_{3}+S)\times C}\) to obtain the visual alignment feature \(\tilde{\mathbf{X}}\in\mathbb{R}^{T\times H_{3}W_{3}\times C}\).
For the first Stacked Transformer, we take the visual alignment feature \(\tilde{\mathbf{X}}\), middle-resolution appearance feature \(\mathbf{F}_{1/8}\in\mathbb{R}^{T\times H_{2}W_{2}\times C_{2}}\), and the first candidate object kernel \(\mathbf{Z}^{(1)}\in\mathbb{R}^{T\times K\times C_{0}}\) as input, and it outputs middle-resolution decoding feature \(\mathbf{X}_{1/8}\in\mathbb{R}^{T\times H_{2}W_{2}\times\alpha K}\), where \(\alpha\) (set to 4) is the feature channel number of each candidate object. For memory efficiency, we split the temporal dimension into \(T\) groups for parallel computing. Thus, we denote the visual alignment feature as \(\tilde{\mathbf{X}}=\{\tilde{\mathbf{x}}_{t}\}_{t=1}^{T}\), middle-resolution appearance feature as \(\mathbf{F}_{1/8}=\{\mathbf{f}_{t}\}_{t=1}^{T}\), and the first candidate object kernel as \(\mathbf{Z}^{(1)}=\{\mathbf{z}_{t}\}_{t=1}^{T}\), where \(\tilde{\mathbf{x}}_{t}\in\mathbb{R}^{H_{3}W_{3}\times C}\), \(\mathbf{f}_{t}\in\mathbb{R}^{H_{2}W_{2}\times C_{2}}\), \(\mathbf{z}_{t}\in\mathbb{R}^{K\times C_{0}}\). For the second Stacked Transformer, we employ middle-resolution decoding feature \(\mathbf{X}_{1/8}\), high-resolution appearance feature map \(\mathbf{F}_{1/4}\), and the second candidate object kernel \(\mathbf{Z}^{(2)}\) as input to produce high-resolution decoding feature \(\mathbf{X}_{1/4}\in\mathbb{R}^{T\times H_{1}W_{1}\times\alpha^{\prime}K}\), where \(\alpha^{\prime}\) is set to 2.
**Stacked Attention**. We first feed each visual alignment feature and middle-resolution appearance feature into linear layers to produce a 3-tuple \((query,key,value)\) as:
\[\mathbf{q}_{t}^{\prime\prime}=\mathbf{f}_{t}\mathbf{W}_{query}^{\prime\prime},\tilde{\mathbf{k}}_{t}=\tilde{\mathbf{x}}_{t}\mathbf{W}_{key}^{\prime\prime},\tilde{\mathbf{v}}_{t}=\tilde{\mathbf{x}}_{t}\mathbf{W}_{value}^{\prime\prime}, \tag{2}\]
where \(\mathbf{W}_{query}^{\prime\prime}\in\mathbb{R}^{C_{2}\times K}\), \(\mathbf{W}_{key}^{\prime\prime}\in\mathbb{R}^{C\times K}\) and \(\mathbf{W}_{value}^{\prime\prime}\in\mathbb{R}^{C\times\alpha K}\) are learnable parameters; \(\mathbf{q}_{t}^{\prime\prime}\in\mathbb{R}^{H_{2}W_{2}\times K}\), \(\tilde{\mathbf{k}}_{t}\in\mathbb{R}^{H_{3}W_{3}\times K}\) and \(\tilde{\mathbf{v}}_{t}\in\mathbb{R}^{H_{3}W_{3}\times\alpha K}\) denote the initial query, key and value in attention mechanism.
Then, we incorporate object-level semantics into the pixel-wise features using the candidate object kernel, leading to the candidate object weight matrix \(\mathbf{m}_{t}\in\mathbb{R}^{H_{2}W_{2}\times K}\) as
\[\mathbf{m}_{t}=\sigma(\text{Upsample}1(\mathbf{z}_{t}*(\tilde{\mathbf{x}}_{t} \mathbf{W}_{0}))), \tag{3}\]
where \(\mathbf{W}_{0}\in\mathbb{R}^{C\times C_{0}}\) is a learnable parameter matrix; Upsample1\((\cdot)\) denotes bilinear interpolation that upsamples the resolution of the input feature map from \(H_{3}\times W_{3}\) to \(H_{2}\times W_{2}\); \(\sigma(\cdot)\) denotes a sigmoid function; '\(*\)' denotes a \(1\times 1\) dynamic convolution operation, which can be viewed as a linear projection using the candidate object kernel \(\mathbf{z}_{t}\) as the projection matrix. We apply an element-wise product \(\odot\) between the query \(\mathbf{q}_{t}^{\prime\prime}\) and the weight matrix \(\mathbf{m}_{t}\) to enhance the object awareness of the query as
\[\tilde{\mathbf{q}}_{t}=\mathbf{q}_{t}^{\prime\prime}\odot\mathbf{m}_{t}\in \mathbb{R}^{H_{2}W_{2}\times K}, \tag{4}\]
which has \(K\) feature channels, each reflecting the visual appearance of one candidate object. These channels are stacked in order to form a feature bank, so that the object-level spatial context can be utilized to capture finer appearance cues. Next, the query \(\tilde{\mathbf{q}}_{t}\) interacts with the initial key and value derived from the visual alignment feature, using cross attention, i.e.,
\[\text{Att}(\tilde{\mathbf{q}}_{t},\tilde{\mathbf{k}}_{t},\tilde{\mathbf{v}}_{t })=\text{Softmax}(\frac{\tilde{\mathbf{q}}_{t}\tilde{\mathbf{k}}_{t}^{T}}{ \sqrt{K}})\tilde{\mathbf{v}}_{t}, \tag{5}\]
where \(\text{Softmax}(\cdot)\) denotes softmax function and the output is the initial attentive object feature \(\mathbf{\hat{x}}_{t}\in\mathbb{R}^{H_{2}W_{2}\times\alpha K}\).
Essentially, the query \(\tilde{\mathbf{q}}_{t}\) encodes fine details (edge and texture) and coarse object-level spatial locations, the key \(\tilde{\mathbf{k}}_{t}\) encodes the coarse appearance feature of each candidate object and finer object-level locations, and the value \(\tilde{\mathbf{v}}_{t}\) enriches the object-level semantics. Using the dot-product attention in Eq. (5), object-level appearance cues can be retrieved from the visual alignment feature and fused with finer details, resulting in a more accurate object mask sequence for each candidate object. In addition, a residual connection is used to produce the attentive object feature \(\hat{\mathbf{x}}_{t}^{\prime}\) by
\[\hat{\mathbf{x}}_{t}^{\prime}=\hat{\mathbf{x}}_{t}+\text{Upsample1}(\tilde{ \mathbf{v}}_{t})\in\mathbb{R}^{H_{2}W_{2}\times\alpha K}. \tag{6}\]
This takes into account the upsampled value \(\tilde{\mathbf{v}}_{t}\), which enriches visual semantics of candidate objects.
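Putting Eqs. (2)-(6) together, a per-frame Stacked Attention step may be sketched as follows; the resolutions are placeholders, and the bilinear `upsample` helper stands in for Upsample1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C, C2, C0, K, alpha = 256, 512, 8, 50, 4
H3, W3, H2, W2 = 10, 18, 20, 36               # placeholder resolutions

Wq = nn.Linear(C2, K, bias=False)             # W''_query
Wk = nn.Linear(C, K, bias=False)              # W''_key
Wv = nn.Linear(C, alpha * K, bias=False)      # W''_value
W0 = nn.Linear(C, C0, bias=False)

def upsample(m, h_in, w_in, h_out, w_out):
    """Bilinear upsampling of a (h_in*w_in, D) map to (h_out*w_out, D)."""
    m = m.T.reshape(1, -1, h_in, w_in)
    m = F.interpolate(m, (h_out, w_out), mode="bilinear", align_corners=False)
    return m.flatten(2).squeeze(0).T

x_t = torch.randn(H3 * W3, C)                 # visual alignment feature
f_t = torch.randn(H2 * W2, C2)                # middle-resolution appearance
z_t = torch.randn(K, C0)                      # candidate object kernel

q, k, v = Wq(f_t), Wk(x_t), Wv(x_t)                             # Eq. (2)
m_t = torch.sigmoid(upsample(W0(x_t) @ z_t.T, H3, W3, H2, W2))  # Eq. (3)
q_t = q * m_t                                                   # Eq. (4)
att = torch.softmax(q_t @ k.T / K ** 0.5, dim=-1) @ v           # Eq. (5)
x_hat = att + upsample(v, H3, W3, H2, W2)                       # Eq. (6)
```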
**Stacked FFN.** To preserve the channel order of the object-level features, we propose a group-wise multi-layer perceptron as an implementation of the Feed Forward Network [49], termed Stacked FFN (SFFN). Since each perceptron group reveals the visual semantics of one candidate object, the obtained object-level features are sequentially stacked in a feature pool. Specifically, we divide the attentive object feature \(\hat{\mathbf{x}}_{t}^{\prime}\) into \(K\) groups along the channel dimension, with each group assigned \(\alpha\) channels.
Besides, we employ group-wise convolution to design two linear layers to capture object-level appearance independently. Meanwhile, layer normalization [1] and residual connection are used to produce the feature \(\hat{\mathbf{x}}_{t}^{\prime\prime}\), i.e.,
\[\hat{\mathbf{x}}_{t}^{\prime\prime}=\text{LN}(\text{SFFN}(\text{LN}(\hat{\mathbf{x}}_{t}^{\prime}))+\hat{\mathbf{x}}_{t}^{\prime})\in\mathbb{R}^{H_{2}W_{2}\times\alpha K}, \tag{7}\]
which acts as the middle-resolution decoding feature of the first Stacked Transformer for each frame. Then, we concatenate the \(T\) per-frame output features to obtain \(\mathbf{X}_{1/8}\in\mathbb{R}^{T\times H_{2}W_{2}\times\alpha K}\). Similarly, we obtain \(\mathbf{X}_{1/4}\) using the second Stacked Transformer.
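The group-wise perceptron of Eq. (7) can be realized with grouped \(1\times 1\) convolutions, so that the \(\alpha\) channels of each candidate object are processed independently; a minimal sketch (the hidden width is our assumption):

```python
import torch
import torch.nn as nn

K, alpha, H2W2 = 50, 4, 20 * 36

class StackedFFN(nn.Module):
    """Group-wise MLP: each of the K channel groups is mapped independently."""
    def __init__(self, K, alpha, expansion=4):
        super().__init__()
        hidden = alpha * expansion
        self.fc1 = nn.Conv1d(alpha * K, hidden * K, kernel_size=1, groups=K)
        self.fc2 = nn.Conv1d(hidden * K, alpha * K, kernel_size=1, groups=K)
        self.act = nn.ReLU()

    def forward(self, x):                     # x: (B, H2W2, alpha*K)
        y = x.transpose(1, 2)                 # channels first for Conv1d
        y = self.fc2(self.act(self.fc1(y)))
        return y.transpose(1, 2)

sffn = StackedFFN(K, alpha)
ln1, ln2 = nn.LayerNorm(alpha * K), nn.LayerNorm(alpha * K)
x = torch.randn(1, H2W2, alpha * K)           # attentive object feature x'_t
out = ln2(sffn(ln1(x)) + x)                   # Eq. (7)
```

The grouped convolution keeps the \(K\) channel groups separate, which is what preserves the stacking order of the candidate objects.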
After passing through the two cascaded Stacked Transformers, we apply a \(1\times 1\) dynamic convolution using the third candidate object kernel \(\mathbf{Z}^{(3)}\) on the linearly projected high-resolution decoding feature \(\mathbf{X}_{1/4}\) to obtain \(K\) candidate object mask sequences, i.e.,
\[\{\tilde{\mathcal{P}}^{(k)}\}_{k=1}^{K}=\sigma(\text{Upsample2}(\mathbf{Z}^{(3 )}*(\mathbf{X}_{1/4}\mathbf{W}_{\text{proj}}))), \tag{8}\]
where \(\mathbf{W}_{\text{proj}}\in\mathbb{R}^{\alpha^{\prime}K\times C_{0}}\) is a learnable parameter; Upsample2\((\cdot)\) denotes bilinear interpolation that upsamples the resolution of the input feature map from \(H_{1}\times W_{1}\) to \(H\times W\); \(\tilde{\mathcal{P}}^{(k)}=\{\tilde{\mathbf{P}}_{k,t}\in\mathbb{R}^{H\times W}\}_{t=1}^{T}\) is the mask sequence of the \(k\)-th candidate object, so there are \(K\) candidate object masks for each frame. Besides, we convert the referring score matrix \(\tilde{\mathbf{R}}\in[0,1]^{T\times K}\) from Sec. 3.4 into the referring score set \(\{\tilde{\mathcal{R}}^{(k)}\}_{k=1}^{K}\), where \(\tilde{\mathcal{R}}^{(k)}=\{\tilde{r}_{k,t}\}_{t=1}^{T}\).
We associate masks and referring scores accordingly to form pair-wise predictions for the \(k\)-th candidate object as
\[\hat{\mathcal{Y}}^{(k)}=(\tilde{\mathcal{P}}^{(k)},\tilde{\mathcal{R}}^{(k)}). \tag{9}\]
### Loss Function
We adopt the sum of the Dice loss [40] and the Focal loss [31], i.e., \(\mathcal{L}_{\text{mask}}\), to supervise the mask prediction at the frame level, while using the binary cross entropy loss \(\mathcal{L}_{\text{ref}}\) to supervise the referring score prediction. Details are given below.
To select the best-matched candidate object mask sequence, we generate the ground-truth referring score \(r_{k^{\prime},t}\) according to the ground-truth mask indicating the referred object in each frame. When the referred object is visible, its value is set to 1, otherwise to 0. In some cases, there is more than one text query for a video, i.e., more than one referred object. For the ground-truth referring score set \(\{\mathcal{R}^{(k^{\prime})}\}_{k^{\prime}=1}^{K}\), where \(\mathcal{R}^{(k^{\prime})}=\{r_{k^{\prime},t}\in\{0,1\}\}_{t=1}^{T}\), we pad it with \(\emptyset\) to fill the missing slots. The symbol \(\emptyset\) denotes that the ground-truth mask is unavailable for the corresponding candidate object. Similarly, the ground-truth mask sequence set is denoted as \(\{\mathcal{P}^{(k^{\prime})}\}_{k^{\prime}=1}^{K}\), where \(\mathcal{P}^{(k^{\prime})}=\{\mathbf{P}_{k^{\prime},t}\in\{0,1\}^{H\times W}\}_{t=1}^{T}\). Then, the ground-truth sequence tuple for the \(k^{\prime}\)-th candidate object is
\[\mathcal{Y}^{(k^{\prime})}=(\mathcal{P}^{(k^{\prime})},\mathcal{R}^{(k^{\prime} )}). \tag{10}\]
To find a matching between ground-truth mask sequences and candidate object mask sequences, we apply pair-wise matching cost function [4] as
\[\begin{split}\mathcal{C}_{\text{match}}(\hat{\mathcal{Y}}^{(k)}, \mathcal{Y}^{(k^{\prime})})&=\lambda_{\text{dice}}\mathcal{C}_{ \text{dice}}(\tilde{\mathcal{P}}^{(k)},\mathcal{P}^{(k^{\prime})})\\ &+\lambda_{\text{ref}}\mathcal{C}_{\text{ref}}(\tilde{\mathcal{R }}^{(k)},\mathcal{R}^{(k^{\prime})}),\end{split} \tag{11}\]
where \(\lambda_{\text{dice}}>0\) and \(\lambda_{\text{ref}}>0\) are hyper-parameters, the first term \(\mathcal{C}_{\text{dice}}(\cdot)\) supervises the \(k\)-th candidate mask sequence using the \(k^{\prime}\)-th ground-truth mask sequence by averaging negative Dice Coefficients [40] of each corresponding mask pair per time instance, and the second term \(\mathcal{C}_{\text{ref}}(\cdot)\) supervises the predicted referring score using the corresponding ground-truth sequence as
\[\mathcal{C}_{\text{ref}}(\tilde{\mathcal{R}}^{(k)},\mathcal{R}^{(k^{\prime})} )=-\frac{1}{T}\sum_{t=1}^{T}r_{k^{\prime},t}\cdot\tilde{r}_{k,t}. \tag{12}\]
According to the cost function in Eq. (11), we find the optimal assignment with the minimum cost by the Hungarian algorithm [26], and calculate the loss function only for the candidate objects assigned a nonempty ground truth. For \(N\) referred objects, we define the new index of the candidate object as \(k^{\prime\prime}=\delta(k)\in\{1,\ldots,N\}\), where \(\delta(\cdot)\) denotes the optimal Hungarian assignment.
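As a sketch, the optimal assignment under Eq. (11) can be computed with SciPy's Hungarian solver over a \(K\times N\) cost matrix; the random costs below are placeholders for \(\mathcal{C}_{\text{dice}}\) and \(\mathcal{C}_{\text{ref}}\):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

K, N = 50, 2                         # candidates vs. referred objects
lambda_dice, lambda_ref = 5.0, 5.0

C_dice = np.random.rand(K, N)        # placeholder for C_dice(P~^(k), P^(k'))
C_ref = np.random.rand(K, N)         # placeholder for C_ref(R~^(k), R^(k'))
cost = lambda_dice * C_dice + lambda_ref * C_ref   # Eq. (11)

rows, cols = linear_sum_assignment(cost)  # Hungarian algorithm [26]
# rows[i] is the candidate matched to ground-truth object cols[i];
# the losses below are computed only over these matched pairs.
```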
Besides, we adopt the Dice loss [40] and Focal loss [31] to supervise the mask prediction as
\[\begin{split}\mathcal{L}_{\text{mask}}(\tilde{\mathcal{P}}^{(k^{\prime\prime})},\mathcal{P}^{(k^{\prime})})&=\sum_{t=1}^{T}(\lambda_{\text{dice}}\mathcal{L}_{\text{dice}}(\tilde{\mathbf{P}}_{k^{\prime\prime},t},\mathbf{P}_{k^{\prime},t})\\ &\quad+\lambda_{\text{focal}}\mathcal{L}_{\text{focal}}(\tilde{\mathbf{P}}_{k^{\prime\prime},t},\mathbf{P}_{k^{\prime},t})),\end{split} \tag{13}\]
where \(\lambda_{\text{focal}}>0\) is a hyper-parameter, \(\mathcal{L}_{\text{dice}}(\cdot)\) denotes the Dice loss, which calculates the negative Dice Coefficient for every mask pair, and \(\mathcal{L}_{\text{focal}}(\cdot)\) is a cross entropy loss that can alleviate the class imbalance problem at the pixel level by focusing on hard pixels.
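For completeness, a per-mask-pair sketch of the two terms in Eq. (13), using standard Dice and binary focal formulations (the smoothing constant and focal parameters are our assumptions):

```python
import torch

def dice_loss(pred, target, eps=1.0):
    """pred in [0,1] after sigmoid, target binary; both (H, W)."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def focal_loss(pred, target, gamma=2.0, alpha=0.25):
    """Binary focal loss that focuses training on hard pixels."""
    p_t = pred * target + (1 - pred) * (1 - target)
    a_t = alpha * target + (1 - alpha) * (1 - target)
    return (a_t * (1 - p_t) ** gamma * -torch.log(p_t.clamp_min(1e-8))).mean()

pred = torch.rand(320, 576)                    # predicted mask P~_{k'',t}
target = (torch.rand(320, 576) > 0.5).float()  # ground-truth mask P_{k',t}
l_mask = 5.0 * dice_loss(pred, target) + 2.0 * focal_loss(pred, target)
```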
Besides, we utilize the binary cross entropy loss to supervise the referring score prediction, i.e.,
\[\mathcal{L}_{\text{ref}}(\tilde{\mathcal{R}}^{(k^{\prime\prime})},\mathcal{R}^{(k^{\prime})})=-\lambda_{\text{ref}}\sum_{t=1}^{T}r_{k^{\prime},t}\log(\tilde{r}_{k^{\prime\prime},t}). \tag{14}\]
Following [5, 4], we downweight the loss values of negative ("unreferred") candidate objects by a factor of 10 to account for class imbalance.
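Under these conventions, the referring supervision can be sketched as a weighted binary cross entropy over all candidates, with unmatched candidates targeted to zero and down-weighted by the factor of 10 (the full BCE form, including the negative term that Eq. (14) leaves implicit, is our reading of the text):

```python
import torch

def referring_loss(pred, gt, matched, lam_ref=5.0, eps=1e-8):
    """pred, gt: (K, T) predicted / target referring scores; matched: (K,)
    bool mask of candidates assigned a ground truth by the matcher.
    Negative ('unreferred') candidates are down-weighted by a factor of 10."""
    bce = -(gt * torch.log(pred.clamp_min(eps))
            + (1 - gt) * torch.log((1 - pred).clamp_min(eps)))
    weight = matched.float() + 0.1 * (~matched).float()
    return lam_ref * (weight.unsqueeze(1) * bce).sum(dim=1).mean()

pred = torch.rand(50, 8)                        # predicted scores r~_{k,t}
gt = torch.zeros(50, 8); gt[3] = 1.0            # candidate 3 matched, visible
matched = torch.zeros(50, dtype=torch.bool); matched[3] = True
loss = referring_loss(pred, gt, matched)
```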
In fact, the Dice loss and the Focal loss cannot well model object-level relations. However, our model adopts the mask classification paradigm, which produces a number of candidate object mask sequences, of which the matched one is selected as the final prediction. Naturally, we hope each produced mask depicts a unique candidate object, and that different candidate objects are characterized by distinct masks. Hence, to capture as many objects as possible in the video, we introduce a diversity loss on the candidate object kernels to yield diverse candidate object masks, i.e.,
\[\mathcal{L}_{\text{div}}(\mathbf{Z}^{(1)},\mathbf{Z}^{(2)},\mathbf{Z}^{(3)})= \sum_{t=1}^{T}\sum_{j=1}^{3}||\mathbf{Z}_{j,t}\mathbf{W}_{\text{div}}\mathbf{ Z}_{j,t}^{T}-\mathbf{I}||_{F}+||\mathbf{W}_{\text{div}}||_{1}, \tag{15}\]
where \(||\cdot||_{1}\) denotes \(\ell_{1}\)-norm, \(||\cdot||_{F}\) denotes Frobenius norm, \(\mathbf{W}_{\text{div}}\in\mathbb{R}^{C_{0}\times C_{0}}\) is a learnable parameter, \(\mathbf{I}\in\mathbb{R}^{C_{0}\times C_{0}}\) is an identity matrix, and \(\mathbf{Z}_{j,t}\in\mathbb{R}^{K\times C_{0}}\) denotes the \(j\)-th candidate object kernels of the \(t\)-th frame, i.e., the \(C_{0}\)-dimensional feature representation of \(K\) candidate objects.
Specifically, we first apply a linear mapping with the learnable matrix \(\mathbf{W}_{\text{div}}\) to the candidate object kernel \(\mathbf{Z}_{j,t}\), followed by a matrix multiplication between the projected object kernel and the original one, resulting in pair-wise similarity scores of the candidate objects. We subtract an identity matrix to ignore the self-similarity of each candidate object, and apply the \(\ell_{1}\)-norm to the weight matrix \(\mathbf{W}_{\text{div}}\) to prevent over-fitting. By minimizing the normalized similarity scores, the model is forced to learn different object-level features and thus produce diverse candidate object kernels. In this way, diverse masks can be generated by applying the dynamic convolution with these diverse candidate object kernels, which further helps to obtain accurate masks of the referred object.
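Eq. (15) translates directly into a few lines; a sketch, with \(\mathbf{W}_{\text{div}}\) as the learnable projection:

```python
import torch
import torch.nn as nn

T, K, C0 = 8, 50, 8
W_div = nn.Parameter(torch.eye(C0))      # learnable C0 x C0 projection

def diversity_loss(kernels, W_div):
    """kernels: the three (T, K, C0) candidate object kernels Z^(1..3)."""
    I = torch.eye(kernels[0].shape[1])
    loss = torch.zeros(())
    for Z in kernels:                                      # j = 1, 2, 3
        sim = Z @ W_div @ Z.transpose(1, 2)                # (T, K, K) scores
        loss = loss + torch.linalg.norm(sim - I, dim=(1, 2)).sum()  # Frobenius
    return loss + W_div.abs().sum()                        # l1 term on W_div

Z1, Z2, Z3 = (torch.randn(T, K, C0) for _ in range(3))
l_div = diversity_loss([Z1, Z2, Z3], W_div)
```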
Therefore, the final objective function of our FTEA model is formulated as
\[\mathcal{L}=\mathcal{L}_{\text{mask}}+\mathcal{L}_{\text{ref}}+\lambda_{\text {div}}\mathcal{L}_{\text{div}}, \tag{16}\]
where \(\lambda_{\text{div}}>0\) is a hyper-parameter for balancing the contribution of the diversity term.
## 4 Experiments
All the experiments were conducted on a machine with two NVIDIA TITAN RTX graphics cards for training and inference, and our model was implemented using PyTorch 1.10, Python 3.9, and CUDA 11.1.
### Data Sets
We conduct extensive experiments on three datasets: A2D Sentences [16], J-HMDB Sentences [16], and Ref-YouTube-VOS [46]. Some statistics of these datasets are listed in Table 1, and details are given below.
**A2D Sentences**[16] is extended from the Actor-Action database [55] by adding textual descriptions to each video. It contains 3782 videos annotated with 8 action classes and 6655 sentences in total. For each video, 3 to 5 frames are annotated with pixel-wise segmentation masks. There are 3036 training videos and 746 test videos.
**J-HMDB Sentences**[16], extended from J-HMDB database [23], contains 21 different actions, 928 videos and corresponding 928 sentences. For each video, there are frame-wise 2D articulated human puppet masks. All actors are humans and one natural language query is annotated to describe the performed action.
**Ref-YouTube-VOS**[46] is extended from the YouTube-VOS database [56]. It contains 3975 videos, 7451 objects, and 27899 text expressions at both the first-frame and full-video levels. The first-frame expressions only describe the target object in the first frame, while the full-video expressions describe it throughout the whole video. Only the subset with the more challenging full-video expressions is publicly available; it is split into training, validation, and test sets with 3471, 202, and 305 videos, respectively. Since ground-truth annotations are only provided for training and the test server is now closed, we upload our mask predictions for the validation set to the competition server1 to derive results.
Footnote 1: [https://competitions.codalab.org/competitions/29139](https://competitions.codalab.org/competitions/29139)
\begin{table}
\begin{tabular}{l|c c c|c c} \hline \hline Dataset & Train & Val & Test & Class & Sentence \\ \hline A2D Sentences[55] & 3036 & - & 746 & 8 & 6655 \\ J-HMDB Sentences [16] & - & - & 928 & 21 & 928 \\ Ref-YouTube-VOS[46] & 3471 & 202 & 305 & 94 & 27899 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics of the datasets.
### Evaluation Metrics
Following previous works [4; 16], we train our model on the A2D training set while evaluating the segmentation performance on the A2D test set and the entire J-HMDB Sentences dataset. We adopt _Overall IoU_ (Intersection over Union), _Mean IoU_, _Precision@\(\zeta\)_, and _mAP_ (mean Average Precision) as the pixel-wise evaluation criteria.
Among them, _Overall IoU_ denotes the ratio between the total intersection and the total union area over all test samples; _Mean IoU_ denotes the average value of IoU over all test samples; _Precision@\(\zeta\)_ is the percentage of test samples whose IoU scores are higher than a threshold \(\zeta\in\{0.5,0.6,0.7,0.8,0.9\}\); _mAP_ calculates the mean precision averaged over a group of IoU thresholds [0.5:0.05:0.95] by using the API2 implementation of the object detection benchmark Microsoft COCO [32].
Footnote 2: [https://github.com/cocodataset/cocoapi](https://github.com/cocodataset/cocoapi)
For Ref-YouTube-VOS, we follow [46] to adopt several evaluation metrics, including _Region Similarity_ (\(\mathcal{J}\)), _Contour Accuracy_ (\(\mathcal{F}\)), and their average score (\(\mathcal{J}\&\mathcal{F}\)). Here, \(\mathcal{J}\) denotes the average value of IoU scores over all test samples, and \(\mathcal{F}\) calculates the average F1 scores of the contour points over all test samples.
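The IoU-based criteria reduce to a few lines; a sketch over binary numpy masks, where `preds` and `gts` are lists of per-sample masks:

```python
import numpy as np

def evaluate(preds, gts, thresholds=(0.5, 0.6, 0.7, 0.8, 0.9)):
    inters = np.array([np.logical_and(p, g).sum() for p, g in zip(preds, gts)])
    unions = np.array([np.logical_or(p, g).sum() for p, g in zip(preds, gts)])
    ious = inters / np.maximum(unions, 1)
    overall_iou = inters.sum() / max(unions.sum(), 1)   # total inter / total union
    mean_iou = ious.mean()                              # average IoU (also J)
    precision = {z: (ious > z).mean() for z in thresholds}  # Precision@zeta
    return overall_iou, mean_iou, precision
```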
### Implementation Details
In the default configuration, we set the temporal length \(T\) (_a.k.a._, window size or frame number) of one clip to 8, and each video is divided into many equal-length clips. Note that the above datasets provide sampled frames for each video and we use them as is; if the remaining frames at the end of a video are fewer than the temporal length, we use them all. For A2D Sentences and Ref-YouTube-VOS, the batch size is set to 4 and 2, respectively, on 2 TITAN RTX 24GB GPUs. Due to limited GPU resources, both the batch size and the temporal length were set to the maximum possible. The segmentation performance could be further boosted if the temporal length were increased, as indicated in [4].
We adopt the tiny version of the Video Swin Transformer [36] pretrained on Kinetics-400 [24] as the visual encoder to extract pixel-level features for the input video clips. Following [4], we use the last-stage features from the visual encoder as the input of the cross-modal alignment module, and there are three spatial strides (_a.k.a._, down-sampling rates), 4, 8, and 16, for obtaining the features. To obtain per-frame features, the single temporal down-sampling layer is removed from the visual encoder by setting the kernel and stride of its 3D convolution to size \(1\times 4\times 4\). Besides, we adopt the base version of RoBERTa (Robustly optimized BERT approach) [34] implemented by Hugging Face [53] as the text encoder to extract word-level features; this encoder is a transformer-based language representation model. The weights of the cross-modal alignment module are randomly initialized with Xavier initialization. Additive dropout of 0.1 is applied after every multi-head attention and FFN before layer normalization in the cross-modal alignment module. For the spatial positional encoding in the cross-modal alignment module, we adopt a 2D case [43] instead of the original one from the transformer [49]. In addition, we set the number of candidate objects to \(K=50\), which is much larger than the actual number of objects in one video.
**Training**. For A2D Sentences, we feed the model \(T\) frames with the annotated target frame in the middle, and each frame is downsampled to \(320\times 576\). For Ref-YouTube-VOS, we feed the model \(T\) consecutive annotated frames, and each frame is resized to \(360\times 640\). The loss hyper-parameters are empirically set as \(\lambda_{\text{dice}}=5\), \(\lambda_{\text{ref}}=5\), \(\lambda_{\text{focal}}=2\), and \(\lambda_{\text{div}}=0.07\). We utilize AdamW (Adam with decoupled Weight decay) [37] as the optimizer with the weight decay set to \(10^{-4}\). We also apply gradient clipping with a maximal gradient norm of 0.1. To improve position awareness, we randomly flip the input frames horizontally and swap direction-related words in the corresponding text queries accordingly. A learning rate \(\eta\) of \(10^{-4}\) is used for the cross-modal alignment module and the mask decoder, and \(5\times 10^{-5}\) for the visual encoder. The parameters of the text encoder are kept frozen for efficiency. We train our model on A2D Sentences for 70 epochs with a learning rate drop by a factor of 2.5 after the first 50 epochs. For Ref-YouTube-VOS, the model is trained for 30 epochs with a
learning rate drop by a factor of 2.5 after the first 20 epochs. Note that we do not utilize the time-consuming pretraining process on static images, such as the Microsoft COCO database [32], which could further boost the segmentation performance. To avoid overly lengthy training, we adopt an early stopping strategy by setting the maximum number of epochs to 70 and 30 for A2D Sentences and Ref-YouTube-VOS, respectively.
**Inference**. For A2D Sentences and J-HMDB Sentences, each frame is resized so that the short side has at least 320 pixels and the long side has at most 576 pixels; for Ref-YouTube-VOS, we apply the same resize configuration as in training. To obtain the mask sequence of the target object, we identify the index with the highest referring score summation, i.e., \(k^{*}=\operatorname*{arg\,max}_{k=1,\dots,K}\sum_{t=1}^{T}\tilde{r}_{k,t}\), and then retrieve the corresponding mask sequence.
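The selection rule is a sum over time followed by an argmax; as a sketch:

```python
import torch

T, K, H, W = 8, 50, 360, 640
ref_scores = torch.rand(T, K)            # referring scores r~_{k,t}
masks = torch.rand(K, T, H, W)           # K candidate object mask sequences

k_star = ref_scores.sum(dim=0).argmax()  # argmax_k sum_t r~_{k,t}
target_masks = masks[k_star]             # mask sequence of the referred object
```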
### Comparison with State-of-the-art Methods
We compare our FTEA approach with several state-of-the-art RVOS methods on the benchmarks. The results on A2D Sentences and J-HMDB Sentences are recorded in Table 2, and the results on Ref-YouTube-VOS are shown in Table 3. The best record is in boldface and the second best is underlined. Note that the model trained on A2D Sentences is directly used for segmenting all samples in J-HMDB Sentences. The results of the alternative methods are taken from their original papers, and we use the results of MTTR [4] with the same temporal length of 8 per video clip. Note that we do not compare with the expensive model ReferFormer [54], since it requires 8 NVIDIA V100 32GB GPUs, which are unavailable to most researchers. Generally, our FTEA method achieves consistently better segmentation performance than several strong baselines. Discussions of the comparison records are given below.
**A2D Sentences**. From the left part of Table 2, FTEA outperforms the SOTA alternatives across all evaluation metrics. In particular, FTEA has obvious gains of 2.1% and 2.0% over the best candidate MTTR on P@0.5 and P@0.8, respectively. Meanwhile, our method obtains a performance improvement of 1% over MTTR in terms of both Overall IoU and Mean IoU. Both MTTR and FTEA adopt dynamic convolution and attention mechanisms, and we attribute the performance gains to the fact that FTEA employs the newly developed stacked transformer and imposes the diversity constraint on the objective function, which help the model fuse multi-level semantics to find the correct object among the candidates while providing more accurate pixel-level segmentation. Besides, methods without dynamic convolution, such as CMPC-V [33], perform much worse than MTTR and FTEA, which demonstrates the effectiveness of generating a group of convolution kernels conditioned on different text queries and video frames.
\begin{table}
\begin{tabular}{l c c|c c c c|c c c|c c c c|c c c|c} \hline \multirow{3}{*}{Method} & \multirow{3}{*}{Year} & \multirow{3}{*}{DC} & \multirow{3}{*}{Att} & \multicolumn{6}{c|}{A2D Sentences} & \multicolumn{6}{c}{J-HMDB Sentences} \\ \cline{4-14} & & & & \multicolumn{4}{c|}{Precision} & & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c|}{IoU} & \multicolumn{4}{c|}{Precision} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c}{IoU} \\ \cline{4-14} & & & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean \\ \hline AAVS[16] & 2018 & ✓ & 50.0 & 37.6 & 23.1 & 9.4 & 0.4 & 21.5 & 55.1 & 42.6 & 71.2 & 51.8 & 26.4 & 3.0 & 0.0 & 26.7 & 55.5 & 57.0 \\ ACGA[51] & 2019 & ✓ & 55.7 & 45.9 & 31.9 & 16.0 & 2.0 & 27.4 & 60.1 & 49.0 & 75.6 & 56.4 & 28.7 & 3.4 & 0.0 & 28.9 & 57.6 & 58.4 \\ VT-Caps.[39] & 2020 & & 52.6 & 45.0 & 34.5 & 20.7 & 3.6 & 30.3 & 56.8 & 46.0 & 67.7 & 51.3 & 28.3 & 5.1 & 0.0 & 26.1 & 55.5 & 57.0 \\ PRPE[41] & 2020 & ✓ & 63.4 & 57.9 & 48.3 & 32.2 & 8.3 & 38.8 & 66.1 & 52.9 & 69.1 & 57.2 & 31.9 & 6.0 & **0.1** & 29.4 & - & - \\ CMDy[50] & 2020 & ✓ & 60.7 & 52.5 & 40.5 & 23.5 & 4.5 & 33.3 & 62.3 & 53.1 & 74.2 & 58.7 & 31.6 & 4.7 & 0.0 & 30.1 & 55.4 & 57.6 \\ Hui et al.[22] & 2021 & ✓ & 65.4 & 58.9 & 49.7 & 33.3 & 9.1 & 39.9 & 66.2 & 56.1 & 78.3 & 63.9 & 37.8 & 7.6 & 0.0 & 33.5 & 59.8 & 60.4 \\ Ye et al.[63] & 2021 & ✓ & 48.7 & 43.1 & 35.8 & 23.1 & 5.2 & - & 61.8 & 43.2 & 76.4 & 62.5 & 38.9 & 9.0 & 0.0 & - & 62.8 & 58.1 \\ CMPC-V[33] & 2022 & ✓ & 65.5 & 59.2 & 50.6 & 34.2 & 9.8 & 40.4 & 65.3 & 57.3 & 81.3 & 65.7 & 37.1 & 7.0 & 0.0 & 34.2 & 61.6 & 61.7 \\ MTTR[4] & 2022 & ✓ & ✓ & 72.1 & 68.4 & 60.7 & 45.6 & 16.4 & 44.7 & 70.2 & 61.8 & 91.0 & 81.5 & 57.0 & 14.4 & **0.1** & 36.6 & 67.4 & 67.9 \\ \hline FTEA (Ours) & ✓ & ✓ & **74.2** & **70.2** & **62.4** & **47.8** & **17.1** & **45.1** & **71.2** & **62.8** & **94.2** & **83.8** & **59.2** & **15.5** & **0.1** & **38.7** & **70.1** & **69.5** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison with SOTA methods on A2D Sentences and J-HMDB Sentences. DC:Dynamic Convolution; Att:Attention.
**J-HMDB Sentences**. From the right part of Table 2, FTEA obtains largely better segmentation results in comparison with the other methods. For example, it achieves a 3.2% gain on P@0.5, a 2.1% gain on mAP, and a 2.7% gain in terms of Overall IoU, compared to the strongest baseline MTTR [4]. In other words, a more obvious performance boost is observed on J-HMDB Sentences than on A2D Sentences. The former has 21 action classes while the latter has only 8, which suggests that our method can better handle videos with more actions. This also corroborates that the fully transformer-equipped architecture exhibits clear advantages for the referring video object segmentation task. Note that all methods get very poor performance on P@0.9, which might be because the model is trained on A2D Sentences and the ground-truth masks in J-HMDB Sentences are generated by a coarse human puppet model, leading to some inaccuracy.
**Ref-YouTube-VOS**. As can be seen from Table 3, our approach achieves the most promising performance among several SOTA methods, including the ensemble approach [13] trained on additional datasets. For instance, the ensemble method [13] trails ours by 2% on Contour Accuracy (\(\mathcal{F}\)), while MTTR is 1.9% lower in terms of \(\mathcal{J}\&\mathcal{F}\). Note that MTTR with a star superscript indicates results reproduced using the released GitHub code with a clip temporal length of 8. FTEA not only surpasses the strong baseline MTTR, but also significantly outperforms the ensemble method (non end-to-end), which is the second best. This demonstrates that the stacked transformer design and the imposed diversity loss contribute positive performance gains, even on a benchmark with thousands of objects and text expressions.
### Ablation Studies
We conduct ablation studies on the benchmark test sets to examine the influence of each component in FTEA, including the stacked transformer and the diversity loss. Unless stated otherwise, we only change the component to be examined while keeping the others fixed.
**Components of FTEA**. We first evaluate the effectiveness of the stacked transformer and the diversity loss, and the results are shown in Tables 4, 5, and 6. Here, the baseline model is MTTR [4], which uses a convolution-based Feature Pyramid Network (FPN) [30] as the mask decoder and is trained without the diversity loss. DL and ST
\begin{table}
\begin{tabular}{c c|c c c c c|c c c} \hline \hline \multirow{2}{*}{Baseline} & \multirow{2}{*}{ST} & \multirow{2}{*}{DL} & \multicolumn{4}{c|}{Precision} & \multicolumn{2}{c|}{mAP} & \multicolumn{2}{c}{IoU} \\ \cline{3-10} & & & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean \\ \hline ✓ & & & 72.1 & 68.4 & 60.7 & 45.6 & 16.4 & 44.7 & 70.2 & 61.8 \\ ✓ & ✓ & & 73.9 & **70.2** & 61.8 & 46.9 & 16.1 & 44.9 & 70.6 & 62.7 \\ ✓ & & ✓ & 73.2 & 69.2 & 62.1 & 46.8 & 16.9 & 44.8 & 70.7 & 62.5 \\ ✓ & ✓ & ✓ & **74.2** & **70.2** & **62.4** & **47.8** & **17.1** & **45.1** & **71.2** & **62.8** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of FTEA components on A2D Sentences test set. ST: Stacked Transformer; DL: Diversity Loss.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Method & Year & DC & Att & Visual Enc. & Text Enc. & \(\mathcal{J}\) & \(\mathcal{F}\) & \(\mathcal{J}\&\mathcal{F}\) \\ \hline URVOS[46] & 2020 & ✓ & ResNet50 & Word emb. & 45.3 & 49.2 & 47.2 \\ CMPC-V[33] & 2022 & ✓ & I3D & GloVe & 45.6 & 49.3 & 47.5 \\ SynthRef[25] & 2021 & & & DeepLabv3 & BERT & 39.5 & - & - \\ Ding et al.[13] & 2021 & ✓ & ResNeSt & GRU & 53.7 & 56.0 & 54.8 \\ MTTR[4] & 2022 & ✓ & ✓ & Swin-T & RoBERTa & - & - & 54.6 \\ MTTR*[4] & 2022 & ✓ & ✓ & Swin-T & RoBERTa & 52.1 & 55.2 & 53.6 \\ \hline FTEA (Ours) & & ✓ & ✓ & Swin-T & RoBERTa & **55.0** & **58.0** & **56.5** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparisons on Ref-YouTube-VOS validation set.
denote Diversity Loss and Stacked Transformer, respectively. On the one hand, when using our designed stacked transformer as the mask decoder, P@0.5 is upgraded from 72.1% to 73.9% on A2D Sentences, and from 91.0% to 93.6% on J-HMDB Sentences. Simultaneously, Mean IoU is boosted by 0.9% on A2D Sentences and 1.6% on J-HMDB Sentences. This explicitly shows the merits of the stacked architecture. On the other hand, when adding the diversity loss during baseline model training on A2D Sentences and J-HMDB Sentences, there is a performance gain of 1.4% and 1.8% on P@0.5, respectively. This verifies that the diversity loss encourages diversity among the candidate object masks, so as to alleviate the problem of similar appearance among candidate objects. When unifying the two components in the fully transformer-equipped framework, we obtain cumulative performance gains on A2D Sentences and J-HMDB Sentences, e.g., 2.1% and 3.2% on P@0.5, respectively. In addition, even larger performance gaps are observed on Ref-YouTube-VOS, as in Table 6. This shows the additive positive influences of the two components on the referring segmentation performance.
**Stacked Transformer**. We examine different modules available to substitute for the Stacked Transformer as the mask decoder, and the results are shown in Table 7 and Table 8. We keep all other settings fixed while only changing the building block of the Mask Decoder (MD). Compared with the commonly used FPN [30], adding Stacked Attention (SA) enhances the performance of the Feed Forward Network (FFN), because it exploits the object-level spatial context by applying the candidate object kernels to the pixel-wise feature maps. Further adopting the Stacked FFN brings additional improvements in segmentation performance, because the Stacked FFN incorporates a group-wise multi-layer perceptron to learn the appearance attributes of each candidate object independently. Moreover, due to the group sparsity of the group-wise linear mapping, the number of model parameters in the Stacked Transformer is only one-fifth of that of the previously used Feature Pyramid Network. In addition, we explore the three components SA, FFN, and SFFN individually for an in-depth analysis in Table 7. As shown in the bottom group, SA brings the most significant gain and achieves 44.8% in terms of mAP, while using only SFFN or FFN degrades the performance considerably. Meanwhile, SFFN performs a bit better than FFN with far fewer parameters. This further validates the advantage of cooperatively unifying Stacked Attention and the Stacked Feed Forward Network. Hence, we can conclude that SA plays the most important role in the success of the Stacked Transformer module.
**Diversity loss variant**. To explore the effect of the norm imposed on the diversity term, we show the results of using the \(\ell_{1}\)-norm, \(\ell_{2}\)-norm, and Frobenius norm in Tables 9 and 10. The first row (Base) shows
\begin{table}
\begin{tabular}{c c|c c c} \hline \hline Baseline & ST & DL & \(\mathcal{J}\) & \(\mathcal{F}\) & \(\mathcal{J}\)\&\(\mathcal{F}\) \\ \hline ✓ & & & 52.1 & 55.2 & 53.6 \\ ✓ & ✓ & & 53.6 (+1.5) & 56.8 (+1.6) & 55.2 (+1.6) \\ ✓ & & ✓ & 53.3 (+1.2) & 56.5 (+1.3) & 54.9 (+1.3) \\ ✓ & ✓ & ✓ & **55.0** (+2.9) & **58.0** (+2.8) & **56.5** (+2.9) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Ablation study of components on Ref-YouTube-VOS.
\begin{table}
\begin{tabular}{c c c|c c c c c|c c c} \hline \hline \multirow{2}{*}{Baseline} & \multirow{2}{*}{ST} & \multirow{2}{*}{DL} & \multicolumn{4}{c|}{Precision} & \multicolumn{2}{c|}{mAP} & \multicolumn{2}{c}{IoU} \\ \cline{5-10} & & & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean \\ \hline ✓ & & & 91.0 & 81.5 & 57.0 & 14.4 & 0.1 & 36.6 & 67.4 & 67.9 \\ ✓ & ✓ & & 93.6 & 83.4 & 58.8 & 15.2 & 0.1 & 38.0 & 69.6 & 69.5 \\ ✓ & & ✓ & 93.3 & 83.0 & 58.2 & 15.0 & 0.1 & 37.7 & 68.8 & 68.9 \\ ✓ & ✓ & ✓ & **94.2** & **83.8** & **59.2** & **15.5** & 0.1 & **38.7** & **70.1** & **69.5** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ablation study of components on J-HMDB Sentences.
the result of the diversity loss without any normalization term. The different norms are applied to the learnable matrix \(\mathbf{W}_{\text{div}}\), which projects the candidate object kernels into a subspace where the pair-wise similarity among all candidate objects is computed. From the table, using the \(\ell_{1}\)-norm achieves the best performance. A possible reason is that there are some redundant candidate objects in the produced mask sequences, and the \(\ell_{1}\)-norm introduces a sparsity constraint into the diversity loss that reduces this redundancy, so as to alleviate over-fitting during model optimization.
**Diversity loss hyper-parameter**. To examine the contribution of the diversity loss term on the A2D Sentences test set, we vary the hyper-parameter \(\lambda_{\text{div}}\) from 0.01 to 0.09 with an interval of 0.02, and the results are shown in Table 11. As can be seen from the table, the performance goes up in the beginning, peaks
\begin{table}
\begin{tabular}{l l|l l l l|l|l l l} \hline \multirow{2}{*}{ST} & \multirow{2}{*}{\#Para.} & \multicolumn{4}{c|}{Precision} & \multicolumn{2}{c|}{mAP} & \multicolumn{2}{c}{IoU} \\ \cline{3-10} & & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean \\ \hline FPN & 1.0M & 73.2 & 69.2 & 62.1 & 46.8 & 16.9 & 44.7 & 70.7 & 62.5 \\ SA+FFN & 0.5M & 73.6 & 69.2 & 62.2 & 47.1 & 17.0 & 44.9 & 70.8 & 62.6 \\ SA+SFFN & **0.2M** & **74.2** & **70.2** & **62.4** & **47.8** & **17.1** & **45.1** & **71.2** & **62.8** \\ \hline SA & 0.1M & 73.5 & 69.1 & 61.4 & 44.9 & 15.5 & 44.8 & 70.7 & 62.0 \\ SFFN & 0.05M & 71.7 & 66.4 & 56.5 & 40.1 & 12.3 & 41.0 & 68.4 & 59.9 \\ FFN & 0.4M & 70.8 & 65.6 & 56.1 & 39.4 & 12.7 & 40.0 & 67.7 & 59.2 \\ \hline \end{tabular}
\end{table}
Table 7: Ablation study of Stacked Transformer on A2D-Sentences test set. FPN: Feature Pyramid Network; SA: Stacked Attention; SFFN: Stacked Feed Forward Network.
\begin{table}
\begin{tabular}{l|l l l l l|l l|l l} \hline \multirow{2}{*}{\(\mathcal{L}_{\text{div}}\)} & \multicolumn{4}{c|}{Precision} & \multicolumn{2}{c|}{mAP} & \multicolumn{2}{c}{IoU} \\ \cline{2-9} & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean \\ \hline Base & 73.6 & 69.8 & 62.3 & 47.1 & 16.8 & 44.7 & 70.9 & 62.4 \\ + \(\ell_{1}\) & **74.2** & **70.2** & **62.4** & **47.8** & **17.1** & **45.1** & **71.2** & **62.8** \\ + \(\ell_{2}\) & 73.2 & 69.2 & **62.4** & **47.8** & 16.9 & 44.7 & 70.6 & 62.5 \\ + Fro & 72.6 & 69.1 & 62.2 & 46.7 & 16.5 & 44.6 & 70.2 & 62.0 \\ \hline \end{tabular}
\end{table}
Table 9: Ablation study of diversity loss variants on A2D Sentences.
at 0.07, and then goes down. So we set \(\lambda_{\text{div}}\) to 0.07 throughout all experiments. Unlike the Dice loss [40] and the Focal loss [31], which compute the loss of each candidate object individually, our diversity loss is applied to pairs of candidate objects to compute their similarity, resulting in larger loss values. Hence the balancing coefficient \(\lambda_{\text{div}}\) is small.
**Candidate object number**. To examine the influence of the candidate object number, we vary \(K\) from 10 to 90 with a gap of 20 and show the segmentation results on the A2D Sentences test set in Table 12. As observed in the table, the performance rises by 4.8% when \(K\) changes from 10 to 50 and drops by 3.3% from 50 to 90, in terms of mAP. Neither too many candidate objects nor too few achieve good performance. Hence, we choose \(K=50\), since the performance saturates at this point. This phenomenon stems from the fact that the actual number of objects in a video is much smaller than the number of candidate objects, and the model gets trapped in local minima when the candidate object number becomes very large. Moreover, a number that is too small does not allow the model to capture the diverse patterns of the different candidate objects in the video.
\begin{table}
\begin{tabular}{l|c c c c|c|c c c} \hline \multirow{2}{*}{\(\lambda_{\text{div}}\)} & \multicolumn{4}{c|}{Precision} & \multicolumn{2}{c|}{mAP} & \multicolumn{2}{c}{IoU} \\ \cline{2-9} & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean \\ \hline
0.01 & 71.7 & 67.8 & 60.8 & 46.1 & 16.8 & 43.7 & 70.0 & 61.6 \\
0.03 & 72.1 & 68.3 & 61.1 & 46.3 & 16.4 & 44.0 & 70.3 & 61.6 \\
0.05 & 73.6 & 69.1 & 61.7 & 46.8 & 16.1 & 44.6 & 70.2 & 62.0 \\
0.07 & **74.2** & **70.2** & **62.4** & **47.8** & **17.1** & **45.1** & **71.2** & **62.8** \\
0.09 & 72.7 & 68.7 & 60.3 & 46.3 & 16.4 & 44.2 & 69.5 & 61.3 \\ \hline \end{tabular}
\end{table}
Table 11: Ablation study of hyper-parameter \(\lambda_{\text{div}}\) on A2D Sentences.
\begin{table}
\begin{tabular}{l|c c c c|c c|c c} \hline \multirow{2}{*}{\(K\)} & \multicolumn{4}{c|}{Precision} & \multicolumn{2}{c|}{mAP} & \multicolumn{2}{c}{IoU} \\ \cline{2-9} & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.5:0.95 & Overall & Mean \\ \hline
10 & 72.1 & 67.4 & 59.3 & 43.2 & 13.5 & 40.3 & 69.4 & 61.0 \\
30 & 72.7 & 67.7 & 60.3 & 45.0 & 14.5 & 41.6 & 69.9 & 61.7 \\
50 & **74.2** & **70.2** & **62.4** & **47.8** & **17.1** & **45.1** & **71.2** & **62.8** \\
70 & 73.2 & 69.2 & 61.9 & 47.2 & 16.5 & 44.8 & 70.2 & 62.5 \\
90 & 73.1 & 68.8 & 60.9 & 46.0 & 15.3 & 41.8 & 69.8 & 62.1 \\ \hline \end{tabular}
\end{table}
Table 12: Ablation study of candidate object number on A2D Sentences.
Figure 3: Qualitative results on Ref-YouTube-VOS under adverse scenarios. Mask and text colors indicate the corresponding object in the video clip.
### Qualitative Analysis
To qualitatively demonstrate the robustness of our FTEA method, we illustrate the segmentation results of three randomly selected video clips from the Ref-YouTube-VOS validation set under adverse scenarios in Figure 3. From this figure, we can see that our model successfully locates the text-referred objects while producing accurate masks in several challenging video scenes, such as object occlusion (e.g., the lady's hands before the goose in the top row), severe appearance changes (e.g., the sportsman's body pose variation in the middle row), and partial appearance missing (e.g., a large part of the giraffe is unseen in the bottom row). This might be because our method can well capture the object-level spatial context by using the fully transformer-equipped framework with the additionally imposed diversity loss term.
Moreover, to give an intuitive visual comparison, we randomly selected two video clips from the Ref-YouTube-VOS validation set and show the segmentation results in Figure 4. For both the left and right clips in this figure, the first row shows the sampled frames of the video clip, the three middle rows show the results of the baseline models URVOS [46], CMPC-V [33], and MTTR [4], respectively, and the bottom row is the result of our FTEA model. Since the ground-truth masks are unavailable, we take the results of FTEA as the reference and highlight the false positive pixels in red and the false negative pixels in yellow for the baseline models. As vividly shown, our method produces smoother contours of the giant panda and the white car, while the baseline models may mistakenly treat some surrounding background pixels as part of the object. For example, ours can well handle the car's appearance variation, where the baselines do not perform satisfactorily. This can be attributed to the fact that our method adopts a progressive object learning strategy, which finds the target object from a group of candidate objects via stacked transformers.
In addition, we show several failure cases on randomly selected video clips from the A2D Sentences test set. As shown in Fig. 5, the bird's mouth is hard to segment in the top clip, since the mouth is very small and the body shares a similar color with the background; the cat mask at the beginning of the middle clip is obscure due to very poor lighting conditions; and the masks of the bird and the toy are confused in the bottom clip, where the two objects overlap. These cases point to issues to be addressed in our future research.
Figure 4: Qualitative comparison between the baselines and our model on Ref-YouTube-VOS validation set. (a) Video frames; (b) URVOS [46]; (c) CMPC-V [33]; (d) MTTR [4]; (e) FTEA (Ours).
## 5 Discussion
Unlike traditional VOS tasks, RVOS requires segmenting the objects referred to by natural language queries, which demands bridging the semantic gap between video and language. Essentially, we should model the pixel-wise spatiotemporal relations among the frames in a video, and align the target area with the referred object. Existing methods usually regard RVOS as a pixel-wise classification problem that classifies pixels into foreground (objects) or background (scenes), failing to adequately consider the spatial relations of objects. Instead, we treat RVOS as an object-wise classification problem, i.e., capturing the appearances of the referred objects simultaneously and thus modeling their spatial relations.
For candidate object localization, existing methods often adopt convolution and upsampling to decode the features into masks. While such frame features capture the local context, they fail to model global relations. This inspires us to develop a transformer-based network that decodes the features into object masks by capturing the global spatial context. Moreover, our framework adopts stacked transformers throughout, for encoding both the visual and the text features, whose effectiveness has been demonstrated by the above experiments.
In addition, the number of candidate object masks is much larger than the number of objects in a video, so one object may be aligned with multiple candidate masks, a fact that remains under-explored in previous works. We therefore impose the diversity loss on the candidate objects to make them as diverse as possible, so that the objects in the video can be maximally identified. The effectiveness of this idea has been validated by the ablation records in Tables 9 and 10.
In fact, our transformation block, i.e., the Stacked Transformer, can be applied to other pixel-wise segmentation tasks, while the diversity loss can be applied in DETR-like architectures for other tasks, such as object detection. From a practical viewpoint, our RVOS model can be used in video editing, autonomous driving, and
Figure 5: Failure cases of our model on A2D Sentences test set.
human-robot interaction. For example, it facilitates the production of customized films by quickly segmenting the designated object or person from thousands of source videos.
## 6 Conclusion
This paper has proposed a fully transformer-equipped architecture for referring video object segmentation, termed FTEA, which can be trained in an end-to-end manner. Unlike existing methods, which fail to capture long-range spatial-temporal relations across video frames well, our model uses transformers throughout for visual and text feature extraction, cross-modal alignment, and mask decoding. In particular, we developed stacked transformers for composing the mask decoder to better capture object-level spatial context, which is beneficial for identifying the target object among the candidate objects. Furthermore, we imposed a diversity regularization term on the objective function to promote diverse object masks, such that as many objects in the video as possible are taken into account by the model. To examine the performance of our method, we conducted extensive experiments on three benchmark databases, and the results clearly validate the superiority of our FTEA model on the end-to-end RVOS task.
However, there are still some limitations of our method. First, it may fail to identify the complete contours of the referred objects in some adverse scenarios, such as similar appearance of object and background, poor lighting conditions, and object overlap. Second, it still requires large computational resources for training and inference, which limits its widespread application. In the future, we will explore ways to facilitate inferring language relations among multiple objects with similar appearance, and attempt to compress the model by adopting knowledge distillation or pruning strategies.
|
2305.00533 | Guaranteed Evader Detection in Multi-Agent Search Tasks using Pincer
Trajectories | Assume that inside an initial planar area there are smart mobile evaders
attempting to avoid detection by a team of sweeping searching agents. All
sweepers detect evaders with fan-shaped sensors, modeling the field of view of
real cameras. Detection of all evaders is guaranteed with cooperative sweeping
strategies, by setting requirements on sweepers' speed, and by carefully
designing their trajectories. Assume the smart evaders have an upper limit on
their speed which is a-priori known to the sweeping team. An easier task for
the team of sweepers is to confine evaders to the domain in which they are
initially located. The sweepers accomplish the confinement task if they move
sufficiently fast and detect evaders by applying an appropriate search
strategy. Any given search strategy results in a minimal sweeper's speed in
order to be able to detect all evaders. The minimal speed guarantees the
ability of the sweeping team to confine evaders to their original domain, and
if the sweepers move faster they are able to detect all evaders that are
present in the region. We present results on the total search time for a novel
pincer-movement based search protocol that utilizes complementary trajectories
along with adaptive sensor geometries for any even number of pursuers. | Roee M. Francos, Alfred M. Bruckstein | 2023-04-30T17:12:14Z | http://arxiv.org/abs/2305.00533v1 | # Guaranteed Evader Detection in Multi-Agent Search Tasks using Pincor Trajectories
###### Abstract
Assume that inside an initial planar area there are smart mobile evaders attempting to avoid detection by a team of sweeping searching agents. All sweepers detect evaders with fan-shaped sensors, modeling the field of view of real cameras. Detection of all evaders is guaranteed with cooperative sweeping strategies, by setting requirements on sweepers' speed, and by carefully designing their trajectories. Assume the smart evaders have an upper limit on their speed which is a-priori known to the sweeping team. An easier task for the team of sweepers is to confine evaders to the domain in which they are initially located. The sweepers accomplish the confinement task if they move sufficiently fast and detect evaders by applying an appropriate search strategy. Any given search strategy results in a minimal sweeper's speed in order to be able to detect all evaders. The minimal speed guarantees the ability of the sweeping team to confine evaders to their original domain, and if the sweepers move faster they are able to detect all evaders that are present in the region. We present results on the total search time for a novel pincer-movement based search protocol that utilizes complementary trajectories along with adaptive sensor geometries for any even number of pursuers.
## I Introduction
**Motivation**. The goal of this research is to provide efficient "must-win" search strategies for a team of \(n\) identical sweeping agents that must guarantee detection of an unknown number of smart evaders initially located inside a given circular region of radius \(R_{0}\), while minimizing the search time. The evaders move, trying to escape the initial region; they have a maximal speed \(V_{T}\), known to the sweepers, and no turning constraints.
A smart evader is one that detects and responds to the motions of searchers by performing optimal evasive maneuvers to avoid interception. A smart evader is assumed to have full knowledge of the search strategy of the sweeping team. Possessing such knowledge enables smart evaders to plan their movements in a way that maximizes the time required for the pursuing sweepers to detect them. Guaranteed detection of all evaders implies that for all particular choices of escape strategies smart evaders may implement in response to the motions of the searchers, they will all eventually be detected.
Agents gather information only from their sensors, and evaders in a sweeper's field of view are immediately detected. There can be many evaders, and they can be located at any point in the interior of the circular region at the beginning of the search process. Importantly, in contrast to most of the recent literature on pursuit-evasion problems, the sweepers do not have any information regarding evaders' locations outside of their sensing range, nor regarding the total number of evaders they must detect. All sweepers move at a speed \(V_{s}>V_{T}\) (measured at the center of the sensor that represents a sweeper's center of mass) and detect evaders using fan-shaped sensors with a given half-angle denoted by \(\alpha\) and a length of \(2r\). Finding an efficient algorithm requires that, throughout the sweep, the footprint of the sweepers' sensors maximally overlaps the evader region (the region where evaders may possibly be) in order to detect as many evaders as possible.
The sweeping team consists of an even number of agents, referred to as sweepers, that act as sensors and sweep the region until all evaders are detected. The search is performed by pairs of agents sweeping toward each other for the purpose of entrapping all evaders. Cooperation among agents that scan adjacent sectors and sweep toward each other enables them to entrap all evaders, regardless of the evaders' adversarial motions.
Since the sweepers have no additional knowledge about the evaders' whereabouts, and cannot know whether all evaders have already been found at some intermediate point in time, the search is continued until the whole region has been searched.
Additional information, such as the number of evaders and their exact locations, provided to the sweeper team in advance could reduce the termination time by utilizing this knowledge and organizing the entire search process differently. However, this is not the focus of this paper. Therefore, the resulting search times for a circular environment can be seen as an upper bound on the search time, resulting from the lack of specific information about evader locations.
We chose to analyze the performance of the system with a fan-shaped sensor, as this type of sensor is highly common in many sensing and scanning applications, from optical to radar and sonar. Fan-shaped sensors with a variable half-angle resemble actual pinhole camera visual sensors of a given aperture. Furthermore, a fan-shaped sensor with a larger area can reduce the critical speed and sweep time compared to approaches that use linear sensors, while avoiding the use of unrealistic sensors such as circular sensors, which are assumed to detect evaders at all angles around a searcher.
The considered protocol may be implemented in a \(2D\) environment, in which the actual agents travel on a plane, or as a three-dimensional search, where the sweepers are drone-like agents that fly over the evader area at different heights.
**Guaranteed evader detection in practical robotic applications.** A wide range of real-world tasks that are nowadays carried out by human-controlled machines are expected to be taken over by partially autonomously operated robots in the near future. Search and rescue missions, airborne surveillance applications, various monitoring tasks for security applications, wildlife tracking, fire control, as well as
inspection tasks in hazardous zones can all benefit from the theoretical and experimental results developed in this work. The combination of the proposed search protocol and sensor choice enables nearly optimal cooperation between agents and allows the deployment of multi-robot teams with superior performance.
For the mentioned applications, guaranteeing success in the worst-case scenario ensures success in all simpler scenarios as well. This approach is often used in real-world settings where full state information is not available and performance guarantees must be maintained.
The searching agents considered in this work do not assume knowledge of the number of evaders present in the region, their locations, or their escape plans, and despite this they are able to detect all of them. This work is therefore of prime theoretical and practical importance, since in many pursuit-evasion games the searching team does not have the complete information about its opposing team that is often assumed in previous papers.
Since multi-agent pursuit-evasion search protocols are mainly executed by UAV teams, the sweepers fly over the environment containing the evaders; investigating issues such as obstacles is therefore not the focus of this work. Obstacles limit the movements and locations of ground-moving evaders, and their presence thus assists the searching team by limiting the escape options of evaders; hence obstacles do not impact our "worst-case" analysis.
The mentioned protocols can use a vast suite of onboard sensors to detect evaders, depending on the domain of application. Visual sensors such as cameras have the benefit of high resolution and low weight; detecting evaders with cameras therefore requires a smaller battery than sensing modalities such as radar, which increase the weight of the payload and hence limit the duration of the search mission due to increased energy consumption. Actual detection of evaders can utilize a vast number of computer-vision detection algorithms such as [1, 2].
Since for our use case the preferred detection modality is a camera, we extend and generalize previous works on guaranteed detection of smart targets to accommodate such sensors. Previous works used circular and linear sensors; however, since both offer simplistic approximations of the area detected by actual searching robot teams, we extend and generalize state-of-the-art results on guaranteed detection of smart evaders that use pincer sweeps between searching pairs to search teams equipped with sensors modelling actual visual detectors. The obtained results are insensitive to the locations of evaders or their numbers. The proposed protocols can be applied in other convex environments as well, with slight modifications to the explored sweeping strategies.
**Overview of related research.** Several interesting search problems originated in the Second World War due to the need to design patrol strategies for aircraft aiming to detect ships or submarines in the English Channel; see [3]. Patrolling a corridor with multi-agent teams whose goal is to ensure detection and interception of smart evaders was investigated in [4], and optimally proven strategies were provided in [5]. A somewhat related, discrete version of the problem was investigated in [6], which focuses on a dynamic variant of the cooperative cleaners problem: several simple agents must clean a connected region of contaminated pixels on a grid, where contamination spreads to neighboring pixels at a given rate.
In [7, 8], Bressan et al. investigate optimal strategies for the real-time construction of barriers aimed at containing and confining the spread of fire from a given initial area of the plane. The authors are interested in determining the minimal possible barrier construction speed that enables confinement of the fire, and in determining optimality conditions for confinement strategies.
A non-escape search procedure, for evaders initially located in a convex region of the plane out of which they may move, is investigated in [9], where a cooperative progressing spiral-in algorithm performed by several agents with disk-shaped sensors in a leader-follower formation is proposed. In [10], McGee et al. investigate guaranteed search patterns for smart evaders that have no maneuverability restrictions except an upper limit on their speed. The agents' sensor detects evaders within a disk-shaped area around the searcher's location, and search patterns consisting of spiral and linear sections are considered. In [11], Hew investigates search for smart evaders using concentric arc trajectories, with agents having disk-shaped sensors similar to those used in [10]; the aim of the search protocol is to detect submarines in a channel or in a half-plane.
Another set of related problems are pursuit-evasion games, where the pursuers' objective is to detect evaders and the evaders' objective is to avoid the pursuers. Pursuit-evasion games include combinations of single- and multiple-pursuer and -evader scenarios. These types of problems are addressed in the context of perimeter defense games by Shishika et al. in [12, 13], with a focus on utilizing cooperation between pursuers to improve the defense strategy. In [12], implicit cooperation between pairs of defenders that move in a "pincer movement" is used to intercept intruders before they enter a planar convex region. In [14], pursuit-evasion problems involving multiple pursuers and multiple evaders (MPME) are studied. Pursuers and evaders are all assumed to be identical, and pursuers follow either a constant-bearing or a pure-pursuit strategy. The problem is simplified by adopting a dynamic divide-and-conquer approach, where at every time instant each evader is assigned to a set of pursuers based on the instantaneous positions of all the players: the original MPME problem is decomposed into a sequence of simpler multiple-pursuer single-evader (MPSE) problems by classifying, using Apollonius circles, whether a pursuer is relevant or redundant for each evader, and only the relevant pursuers participate in the MPSE pursuit of each evader.
Recent surveys on pursuit-evasion problems are [15, 16, 17]. In [15], a taxonomy of search problems is presented; the paper highlights algorithms and results arising from different assumptions on searchers, evaders and environments, and discusses potential field applications for these approaches. The authors focus on pursuit-evasion games that are directly connected to robotics, rather than on the differential games that are the focus of the other cited surveys. [16] surveys pursuit problems in which \(1\) pursuer versus \(2\) evaders, or \(2\) pursuers versus \(1\) evader, are formulated as dynamic games and solved with general methods of zero-sum differential games. In [17], the authors present a recent survey on pursuit-evasion differential games and classify the papers according to the number of participating players: single-pursuer single-evader (SPSE), MPSE, single-pursuer multiple-evaders (SPME) and MPME. In [18], a two-player differential game in which a pursuer aims to capture an evader before it escapes a circular region is investigated. In [19], a border-defense differential game is investigated, in which \(M\) pursuers cooperate in order to optimally catch \(N\) evaders before they reach the border of the region and escape.
**Contributions:** We present a comprehensive theoretical and numerical analysis of trajectories, critical speeds and search times for a team of \(n\) cooperative sweeping agents equipped with fan-shaped sensors with a variable half-angle, whose mission is to guarantee detection of all smart evaders that are initially located inside a given circular region, out of which they may move in order to escape the pursuing sweeping agents.
* We present a novel pincer search strategy utilizing pincer sweeps and complementary sensor geometries to improve the detection capabilities of the search team.
* We develop analytic formulas for a search protocol applicable to any even number of sweepers with fan-shaped sensors of a given half-angle. Such sensors model the actual field-of-view of visual sensors, enabling the applicability of the established results in real-world search scenarios.
* We extend state-of-the-art multi-pursuer multi-evader literature to scenarios with arbitrarily large numbers of evaders, where very limited information is available to the pursuers, yet they are able to optimally cooperate in order to successfully complete their mission.
The research performed in this paper extends and generalizes previous results on linear and circular sensors, such as [10, 20], to fan-shaped sensors with an arbitrary angle: at the limit, a linear sensor is a special case of a fan-shaped sensor with an angle of \(0\), and a circular or disk-shaped sensor is a fan-shaped sensor with an angle of \(2\pi\).
Hence the analysis performed in this work provides a significant theoretical milestone, generalizing previous results and allowing their application to practical robotic vision-based search tasks.
**Numerical Evaluation.** The theoretical analysis is complemented by simulation experiments in MATLAB and NetLogo that verify the theoretical results, highlight the performance of search strategies with different numbers of sweepers and sensors, and illustrate them graphically in the figures embedded throughout the text and in the accompanying video.
**Paper Organization.** The paper is organized as follows: Section \(2\) describes the motivation and setting for pincer-based search strategies. Section \(3\) presents an optimal lower bound on the sweepers' speed that is independent of the chosen sweep protocol. Section \(4\) provides an analytical analysis of critical speeds and sweep times, accompanied by numerical results. In the last section, conclusions are given and future research directions are discussed.
## II Pincer-Based Search Problem Formulation
Inherently, the complete search time of the region depends on the sweeping protocol the sweeping team implements. The protocols we propose aim to reduce the radius of the circle bounding the evader region after each full sweep. This guarantees complete elimination of the potential area where evaders may be located.
At the beginning of the search, we assume the entire area of the sweepers' sensors is inside the evader region. This implies that the full length of the central line of the sweepers' sensors (see the light green lines that depict this part of the sensor in Fig. \(1\)) is inside the evader region, i.e., a footprint of length \(2r\). The sweepers' sensors are shown in green in Fig. \(1\) as well. The blue circle on the light green line depicts the center of the sensor of the clockwise-sweeping agent (and therefore its center of mass). \(\alpha\) denotes the half-angle of the sweepers' fan-shaped sensors.
If we were to distribute sweepers equally along the boundary of the initial evader region, and have them move in the same direction, potential escape from the points adjacent to the starting locations of the sweepers might occur. To enable sweepers to succeed in the task while having the lowest possible critical speed, we propose that pairs of sweepers move out in opposite directions along the boundary of the evader region and sweep in a pincer movement instead of implementing a protocol where all sweepers move in the same direction along the boundary.
The proposed strategy can be applied with any even number of sweepers. Each sweeper is responsible for an angular sector of the evader region whose size is inversely proportional to the number of sweepers. The sweepers' fields-of-view are initially positioned in pairs, back-to-back. One sweeper in each pair moves counter-clockwise while the other moves clockwise. Once the sweepers meet, i.e. their sensors are again superimposed at a meeting point, they switch the directions in which they move; directions are changed every time sweepers "bump" into each other.
It is worth emphasizing that once a sweeper leaves a location that has been cleared of evaders, other evaders may attempt to reach this location again. Therefore, the proposed sweep protocol must ensure that no evader strategy enables escape, even if evaders wait at the edge of a cleared location and start their escape immediately after a sweeper leaves it.
Sweepers implementing pincer movements solve the problem of the evader region's spread from the "most dangerous points". Evaders located at these points have the maximum time to escape throughout the sweepers' movements. Consequently, if evaders attempting to escape from these locations are detected, evaders attempting to escape from all other locations are detected as well. When a pair of sweepers travelling in a pincer movement finishes sweeping its allocated section of the environment (an angle of \(\frac{2\pi}{n}-\gamma\), explained in detail in the next section), provided they move at a speed exceeding the critical speed, the spread of the evader region originating from these points is less than \(2r\). The sweepers then advance towards the center of the evader region by the margin between \(2r\) and the spread of the evader region during this motion.
Since a pair of sweepers begins its sweep with the footprints of the sensors "back-to-back", the evader region's points that must be considered to guarantee detection of all evaders are located at the inner tips of the central parts of the sweepers' sensors, closest to the center of the evader region, and not at points on the boundary of the evader region.
If all sweepers move in the same direction following their equally spaced placement around the region, the points that need to be considered for limiting the region's spread are located on the boundary of the evader region. This in turn results in higher critical speeds for sweepers implementing same-direction sweeps, and the higher critical speeds also result in longer sweep times for same-direction protocols compared to pincer-based methods.
In [21], a quantitative analysis comparing critical speeds and sweep times between pincer-based search protocols and same-direction protocols operated by the same teams of sweepers is carried out for agents with different sensors and sweep protocols than the ones considered in this work. The analysis in [21] indicates that the critical speeds and sweep times for agents employing same-direction sweeps are indeed higher than the pincer-based ones. We assume this applies in this setting as well and plan to analytically compare these search methodologies in future work.
The proposed search pattern uses spiral scans, related to the sweep pattern suggested in [10]. In order to have maximal sensor footprint intersecting the evader region throughout the sweep, the proposed search pattern aims to track the expanding evader region's "wavefront".
Simulations demonstrating the evolution of the search strategies are generated using NetLogo software [22] and shown in Fig. \(2\). Green areas are locations that are free from evaders and red areas indicate locations where potential evaders may still be located. Fig. \(2\) shows the cleaning progress of the evader region when \(4\) sweepers employ the proposed sweep protocol.
**Comparison to related research.** In [23], the confinement and complete detection tasks for a line formation of agents, or alternatively for a single agent with a linear sensor, are analyzed. The presented approach ensures detection of all evaders; however, the complete detection time and the critical speeds required for the sweeping formation to succeed are inferior to the results achieved in this work, since no pincer sweeps are performed and since the line formation performs simple circular motion without tracking the expanding wavefront of potential evader locations, as is done here. In [20], teams of agents perform pincer sweep search strategies with linear sensors. These two early works assume linear sensors rather than the fan-shaped sensors assumed in this paper; hence this paper can be regarded as a generalization of the results of [20] to fan-shaped sensors with a given half-angle that model the field-of-view of actual cameras, allowing applicability of the results to real robotic search and surveillance missions.
While automated discovery of search policies relies on knowledge of the players' locations throughout the search process to plan the trajectories of pursuers and evaders (as assumed in [12, 13]), our setting bypasses the need for such extensive and often unrealistic knowledge of the environment, and for perfect communication between members of the same team, using a simple and efficient strategy. Furthermore, in contrast to [12, 13], we do not assume a finite number of evaders or that each sweeper can intercept only a single evader, and we guarantee interception of all evaders.
Fig. 1: Initial placement of \(2\) sweepers employing the spiral pincer sweep strategy.
Fig. 2: Swept areas and evader region status at different times in a scenario where \(4\) agents employ the spiral pincer sweep process and \(\alpha=30^{\circ}\). (a) shows the status at the beginning of the first cycle, (b) midway through the first cycle, (c) at the beginning of the second cycle, and (d) toward the end of the second cycle.
In contrast to our work, references [10, 11] use a disk-shaped sensor with a radius of \(r\) and do not calculate the detection times of all evaders. Although references [12, 13] also use pincer movements between pairs of defenders, they have a different objective, protecting an initial region from invaders, whereas our goal is to detect all evaders that may spread from the interior of the region. Furthermore, these works are concerned with devising policies for intercepting as many intruders as possible, relying on the assumptions that the number of intruders is finite and that each defender can intercept only a single intruder, rather than guaranteeing interception of all evaders or intruders, as is the aim of our work.
Contrary to the work reported in references [18, 19], we do not assume that the number of evaders is known to the pursuers, nor that the members of each party have full access to the locations of the members of the opposing party and use this information to plan their actions. From our point of view, such information is unrealistic for search applications, where determining the locations of the opposing team members is at the heart of the problem; such knowledge would require very sophisticated sensing capabilities when searching large regions. Furthermore, we solve the assignment of pursuers to evaders elegantly by assigning sweeper pairs based on their geometric location. This simple assignment rule alleviates the need for communicating synchronized and precise location information between the searchers.
## III Lower Bound on the Speed of Sweepers with Fan-Shaped Sensors
We present an optimal lower bound on the speed of sweepers equipped with fan-shaped sensors. The bound is independent of the implemented sweeping protocol and serves as a benchmark against which the performance of different search strategies is compared, via a metric we refer to as the critical speed.
The largest number of evaders is detected when the entire fan-shaped sensor overlaps the evader region. For a fan-shaped sensor of length \(2r\) and half-angle \(\alpha\), the maximal rate of detecting evaders must exceed the minimal expansion rate of the evader region; otherwise, no feasible sweeping protocol can ensure the detection of all evaders.
The lower bound is derived for a team of \(n\) identical sweepers. The smallest sweeper speed that ensures the maximal detection rate exceeds the minimal expansion rate is based purely on geometric properties of the evader region, the dimensions of the sweepers' sensors, and the evaders' maximal speed. This lower bound on the speed is denoted by \(V_{LB}\) and is given by
\[V_{LB}=\frac{\pi R_{0}V_{T}}{nr} \tag{1}\]
An analogous lower bound holds for line sensors; its proof follows the derivation of such a bound for a sweeper's critical speed in [20]. Similar results for sweepers with circular sensors are given in [10].
If sweepers move at speeds exceeding the critical speed, they can implement a suitable sweep protocol that shrinks the evader region, allowing them to sweep around a smaller region in the next iteration. The critical speed is therefore calculated based on the sweep around the initial and largest radius: if the sweepers succeed in confining evaders to the initial evader region, they will surely succeed as the evader region decreases throughout the search protocol (since their speed stays constant).
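To make the bound concrete, the following minimal sketch evaluates (1); Python and the specific numerical values are our illustrative choices (the paper's own experiments use MATLAB and NetLogo):

```python
import math

def lower_bound_speed(R0, VT, n, r):
    """Protocol-independent lower bound V_LB = pi * R0 * VT / (n * r) from (1)."""
    return math.pi * R0 * VT / (n * r)

# Hypothetical configuration: region radius 100, sensor half-length r = 10,
# evader speed V_T = 1, and a team of n = 4 sweepers.
print(lower_bound_speed(R0=100.0, VT=1.0, n=4, r=10.0))  # approx. 7.854
```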
## IV Spiral Pincer Sweep Strategies For Sweepers with Fan-Shaped Sensors
### _The Critical Speed_
In order for the sweeper team to efficiently scan the region, the intersection between the sweepers' sensors and the evader region should be maximal. Implementing spiral trajectories, in which the sweepers' sensors track the expanding evader region's wavefront, allows the sweeping team to achieve nearly optimal efficiency, as proven in this section.
We consider a scenario in which a sweeping team consists of \(n\) sweepers, where \(n\) is even. All sweepers have fan-shaped sensors with finite visibility that model the sensor geometry of real cameras. The sweepers' sensors have a diameter of \(2r\) and a half-angle of \(\alpha\), as shown in Fig. 1. Selecting sweepers with complementing sensor geometries, and having the sweepers move in opposite directions, prevents evaders from devising an escape strategy that leverages the gap between the sweepers' sensors, since the fields-of-view of each sweeping pair are tangent to each other. These considerations highlight the benefits of combining pincer sweeps with complementary sensor geometries between sweeping pairs in order to achieve maximal evader detection performance.
Fig. 1 shows the initial setting of the problem when \(2\) sweepers perform the search. The symmetry between the trajectories of adjacent searching pairs prevents potential escape of evaders from the point \(P=(0,R_{0})\), the "most dangerous point" from which evaders may attempt to escape, by considerations similar to those proven in [23].
Hence, a sweeper's critical speed depends solely on the time required for it to sweep its allocated angular sector. Denote by \(\gamma\) the offset angle of the center of the sweeper from the center of the region at the beginning of the sweep process; see Fig. 1 for a geometric illustration. The relation between \(\gamma\) and \(\alpha\) is obtained by applying the law of cosines to the triangles depicted in Fig. 1 and is given by
\[4r^{2}={R_{0}}^{2}+(R_{0}-2r+r\cos\alpha)^{2}-2R_{0}\left(R_{0}-2r+r\cos\alpha \right)\cos\gamma. \tag{2}\]
Rearranging terms yields,
\[\gamma=\arccos\left(\frac{2{R_{0}}^{2}+2R_{0}r\left(\cos\alpha-2\right)+r^{2}\cos\alpha\left(\cos\alpha-4\right)}{2R_{0}\left(R_{0}+r\cos\alpha-2r\right)}\right) \tag{3}\]
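As a sanity check on the algebra, the short sketch below (Python; the parameter values are hypothetical) evaluates (3) and verifies that the resulting \(\gamma\) indeed satisfies the law-of-cosines relation (2):

```python
import math

def gamma_offset(R0, r, alpha):
    """Initial offset angle gamma from (3)."""
    ca = math.cos(alpha)
    num = 2 * R0**2 + 2 * R0 * r * (ca - 2) + r**2 * ca * (ca - 4)
    den = 2 * R0 * (R0 + r * ca - 2 * r)
    return math.acos(num / den)

# Verify (3) against the law-of-cosines form (2) for hypothetical values.
R0, r, alpha = 100.0, 10.0, math.radians(30)
g = gamma_offset(R0, r, alpha)
b = R0 - 2 * r + r * math.cos(alpha)
assert abs(4 * r**2 - (R0**2 + b**2 - 2 * R0 * b * math.cos(g))) < 1e-6
```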
Therefore, in each sweep agents traverse an angle of \(\frac{2\pi}{n}-\gamma\). After the sweepers complete searching their allocated sector, and only if they move at a sufficient speed, they advance together toward the center of the evader region. If the search is planar, sweepers switch sweeping directions following the inward movement toward the center. If the search is \(3\)-dimensional, sweepers first move together toward the center of the region and only afterwards exchange the sector they sweep with their pincer partner. Following this motion, they commence sweeping an evader region bounded by a circle with a smaller radius.
Sweepers begin their spiral motion with the outer tip of the central line of their fan-shaped sensor tangent to the evader region's boundary. To keep this tip tangent to the evader region, sweepers move with an angular offset of \(\phi\) to the normal of the evader region (at each point) throughout their sweep. \(\phi\) is the angle between the outer tip of the central line of a sweeper's sensor and the normal of the evader region at the meeting point between the evader region and the tip of the central line of the sensor farthest from the center of the evader region (see the light green line depicting this part of the sensor in Fig. 1). \(\phi\) depends on the ratio between the sweeper and evader speeds (see Fig. 1 for its depiction). The sweepers move at a constant angle \(\phi\) to the normal of the evader region in order to preserve the evader region's circular shape and to keep as much of their sensor footprint inside the evader region as possible at all times, thus detecting a maximal number of evaders. \(\phi\) is given by
\[\sin\phi=\frac{V_{T}}{V_{s}} \tag{4}\]
Because spiral sweeping preserves the evader region's circular shape, the isoperimetric inequality implies that the length of the curve bounding the evader region is minimal, and therefore the time required for the sweepers to sweep around it is minimal as well. Denote by \(\theta_{s}\) the sweeper's angular position; its angular speed is given by
\[\frac{d\theta_{s}}{dt}=\frac{V_{s}\cos\phi}{R_{s}(t)}=\frac{\sqrt{{V_{s}}^{2} -{V_{T}}^{2}}}{R_{s}(t)} \tag{5}\]
Integrating (5) with the initial and final sweep times of the angular sector as the integral's limits yields
\[\int_{0}^{t_{\theta}}\dot{\theta}\left(\zeta\right)d\zeta=\int_{0}^{t_{\theta }}\frac{\sqrt{{V_{s}}^{2}-{V_{T}}^{2}}}{V_{T}\zeta+R_{0}-r}d\zeta \tag{6}\]
and the solution for \(\theta\left(t_{\theta}\right)\) obtained from (6) is
\[\theta\left(t_{\theta}\right)=\frac{\sqrt{{V_{s}}^{2}-{V_{T}}^{2}}}{V_{T}}\ln \left(\frac{V_{T}t_{\theta}+R_{0}-r}{R_{0}-r}\right) \tag{7}\]
Exponentiating (7) yields
\[\left(R_{0}-r\right)\exp\left(\frac{V_{T}\theta(t_{\theta})}{\sqrt{{V_{s}}^{ 2}-{V_{T}}^{2}}}\right)=V_{T}t_{\theta}+R_{0}-r=R_{s}(t_{\theta}) \tag{8}\]
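Equations (7)-(8) fully determine the sweeper's spiral trajectory in polar coordinates. A minimal sketch (Python; parameter values are hypothetical illustrations, not values from the paper):

```python
import math

def spiral_trajectory(t, R0, r, VT, Vs):
    """Polar coordinates (R_s, theta) of a sweeper's sensor center at time t
    during a spiral sweep, per (7) and (8)."""
    R_s = VT * t + R0 - r                                     # (8)
    theta = (math.sqrt(Vs**2 - VT**2) / VT
             * math.log((VT * t + R0 - r) / (R0 - r)))        # (7)
    return R_s, theta

# Sample the first sweep with hypothetical parameters.
for t in (0.0, 5.0, 10.0):
    print(spiral_trajectory(t, R0=100.0, r=10.0, VT=1.0, Vs=10.0))
```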
The time required for a sweeper to scan its allocated angular section corresponds to sweeping an angle of \(\theta=\frac{2\pi}{n}-\gamma\) around the region. While the sweeper performs this motion, the evader region may expand by at most \(2r\) beyond its initial radius; otherwise, the sweepers will not be able to detect potential evaders attempting to escape from the region. Allowing a spread of at most \(2r\) is a simplification that assumes the evaders do not spread while the sweepers progress toward the center of the evader region. Since this assumption is not realistic, we address the general case after solving the simplified setting. To guarantee that no potential evader escapes the sweepers following a sweep of \(\frac{2\pi}{n}-\gamma\), we must demand that
\[R_{0}+r\geq R_{s}(t_{\frac{2\pi}{n}-\gamma}) \tag{9}\]
Define,
\[\lambda\overset{\Delta}{=}\exp\left(\frac{\left(\frac{2\pi}{n}-\gamma\right)V_ {T}}{\sqrt{{V_{s}}^{2}-{V_{T}}^{2}}}\right) \tag{10}\]
Replacing \(R_{s}(t_{\frac{2\pi}{n}-\gamma})\) with the expression derived for the trajectory of the sweeper's center results in \(R_{0}+r\geq\left(R_{0}-r\right)\lambda\). Hence, to guarantee that no potential evader escapes undetected by the sweeper team, the sweepers' speed must satisfy
\[V_{S}\geq V_{T}\sqrt{\frac{\left(\frac{2\pi}{n}-\gamma\right)^{2}}{\left(\ln \left(\frac{R_{0}+r}{R_{0}-r}\right)\right)^{2}}+1} \tag{11}\]
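A direct implementation of the simplified bound (11) (a Python sketch; the value of \(\gamma\) would be obtained from (3), and the numbers below are hypothetical):

```python
import math

def simplified_critical_speed(R0, r, n, VT, gamma):
    """Simplified critical speed from (11); ignores evader spread during
    the sweepers' inward motion."""
    theta = 2 * math.pi / n - gamma
    return VT * math.sqrt((theta / math.log((R0 + r) / (R0 - r)))**2 + 1)

# gamma = 0.175 rad is a hypothetical value, as if computed from (3).
print(simplified_critical_speed(R0=100.0, r=10.0, n=4, VT=1.0, gamma=0.175))
```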
As mentioned earlier, in order to accommodate the expansion of the evader region during the inward motion of sweepers, the critical speed in (11) must be modified. When sweepers move inwards after finishing their sweep, they must meet the evader region's expanding wavefront, which moves outwards from every point in the evader region at speed \(V_{T}\), at the previous radius \(R_{0}\). Incorporating this constraint into the solution for the critical speed guarantees that no evaders escape during the sweepers' inward motion as well. Denote by \(T_{c}\) the duration of the first sweep, so that \(V_{T}T_{c}\) is the evader region's expansion throughout that sweep. For no evader to escape detection, the inequality \(V_{T}T_{c}\leq\frac{2rV_{s}}{V_{s}+V_{T}}\) must hold; at the critical speed it holds with equality. Replacing \(T_{c}\) with \(\frac{\left(R_{0}-r\right)\left(\lambda-1\right)}{V_{T}}\) results in
\[\left(R_{0}-r\right)\left(\lambda-1\right)=\frac{2rV_{s}}{V_{s}+V_{T}} \tag{12}\]
**Theorem 1**.: _For a spiral pincer sweep protocol with \(n\) sweepers in which \(n\) is even, the critical speed, \(V_{s}\), that allows the sweeping team to confine all evaders to their original domain is computed from,_
\[V_{s}=V_{T}\frac{1}{1-\frac{2r}{\left(R_{0}-r\right)\left(\lambda-1\right)}} \tag{13}\]
The critical speed is computed numerically by applying the Newton-Raphson method, with the simplified critical speed of (11) serving as the initial guess. In the remainder of the article, the critical speed considered is that of Theorem 1; it thus accounts for the expansion of the evader region during the sweepers' advancement toward the center of the region.
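A sketch of this computation is given below (Python; the finite-difference derivative inside Newton's method is our own implementation choice, and all numerical values are hypothetical):

```python
import math

def critical_speed(R0, r, n, VT, gamma, tol=1e-12, max_iter=50):
    """Solve (12) for the critical speed V_s by the Newton-Raphson method,
    starting from the simplified critical speed (11)."""
    lam = lambda Vs: math.exp((2 * math.pi / n - gamma) * VT
                              / math.sqrt(Vs**2 - VT**2))             # (10)
    f = lambda Vs: (R0 - r) * (lam(Vs) - 1) - 2 * r * Vs / (Vs + VT)  # (12)
    theta = 2 * math.pi / n - gamma
    Vs = VT * math.sqrt((theta / math.log((R0 + r) / (R0 - r)))**2 + 1)  # (11)
    for _ in range(max_iter):
        h = 1e-7 * Vs
        step = f(Vs) / ((f(Vs + h) - f(Vs - h)) / (2 * h))  # Newton update
        Vs -= step
        if abs(step) < tol:
            break
    return Vs

print(critical_speed(R0=100.0, r=10.0, n=4, VT=1.0, gamma=0.175))
```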
### _Analytical Sweep Time Analysis_
**Theorem 2**.: _For a team of \(n\) sweepers implementing the pincer sweep protocol, the time required to detect all evaders in the region and reduce the evader region's area to \(0\) is given by the sum of the inward advancement times after all sweeps, \(T_{in}(n)\), and the sum of all the spiral traversal times, \(T_{spiral}(n)\). Hence,_
\[T(n)=T_{in}(n)+T_{spiral}(n) \tag{14}\]
Proof.: Denote by \(\Delta V>0\) the excess speed of a sweeper above the critical speed, so the sweeper's speed is \(V_{s}=V_{c}+\Delta V\). Let \(\theta\left(t_{\theta}\right)\) denote the angle of a sweeper with respect to the center of the evader region. At the start of each sweep, the center of a sweeper's sensor is located at a distance of \(R_{i}-r\) from the center of the evader region. \(\theta\left(t_{\theta}\right)\) is calculated in (7); replacing \(R_{0}\) with \(R_{i}\) yields
\[\theta\left(t_{\theta}\right)=\frac{\sqrt{{V_{s}}^{2}-{V_{T}}^{2}}}{{V_{T}}} \ln\left(\frac{{V_{T}t_{\theta}+R_{i}-r}}{{R_{i}-r}}\right) \tag{15}\]
Denote by \(T_{spiral_{i}}\) the time required for a sweeper to sweep an angle of \(\theta\left(t_{\theta}\right)=\frac{2\pi}{n}-\gamma_{i}\). This time is obtained by rearranging terms in (15) and equals
\[T_{spiral_{i}}=\frac{\left(R_{i}-r\right)\left(\lambda-1\right)}{{V_{T}}} \tag{16}\]
If sweepers move at speeds exceeding the critical speed appropriate for the scenario, denote by \(\delta_{i}(\Delta V)\) the distance each sweeper advances toward the center of the evader region. Following this inward motion, the evader region shrinks and is contained inside a smaller circle of radius \(R_{i+1}=R_{i}-\delta_{i}(\Delta V)\). Hence \(\delta_{i}(\Delta V)\) equals
\[\delta_{i}(\Delta V)=2r-{V_{T}}{T_{spiral_{i}}}\;,\;0\leq\delta_{i}(\Delta V) \leq 2r \tag{17}\]
The number of sweepers, the half-angle of their sensors, and the sweep cycle number (the number of sweeps already completed by the sweeping team around the evader region) all influence the distance sweepers are able to advance toward the center of the evader region after completing a sweep. If the evader region did not expand during the sweepers' inward motion, then \(\delta_{i}(\Delta V)\) would be
\[\delta_{i}(\Delta V)=2r-\left(R_{i}-r\right)\left(\lambda-1\right) \tag{18}\]
Here \(\Delta V\) again denotes the sweeper's excess speed above the critical speed, and \(i\) denotes the sweep cycle number, starting from sweep number \(0\). As mentioned in the development of the critical speed, the time required for sweepers to advance toward the center of the evader region, up to the point at which their sensors fully overlap the evader region, depends on the relative speed between the sweepers' inward motion and the evader region's outward expansion. Hence, after finishing a sweep, the sweepers advance toward the center of the evader region by a distance of
\[\delta_{i_{eff}}(\Delta V)=\delta_{i}(\Delta V)\left(\frac{{V_{s}}}{{V_{s}}+{ V_{T}}}\right) \tag{19}\]
Following the completion of a sweep around the region, sweeper pairs progress together toward the evader region's center. During inward motions, sweepers move at speed \(V_{s}\) up to the point at which their sensors are again fully over the evader region's expanding wavefront, whereupon they begin their next spiral sweep. As a worst-case assumption, the sweepers' sensors detect evaders only while the sweepers perform spiral motions; throughout inward motions no detection occurs, while the evader region continues to expand due to the motion of evaders. In the video accompanying the paper, the time it takes the sweepers to advance inwards is taken into account and dictates the new radius of the evader region after the sweep. Hence, after an inward advancement the evader region is within a circle whose radius is
\[R_{i+1}=R_{i}-\delta_{i}(\Delta V)\left(\frac{{V_{s}}}{{V_{s}}+{V_{T}}}\right) \tag{20}\]
Define \(\widetilde{R}_{i}=R_{i}-r\). Using \(\widetilde{R}_{i}\) allows us to solve analytically for the number of sweeps required to complete the search of the entire evader region. Substituting \(\delta_{i}(\Delta V)\) into (20) yields the difference equation
\[\widetilde{R}_{i+1}=\widetilde{R}_{i}\left(\frac{{V_{T}}+{V_{s}}\lambda}{{V_{s }}+{V_{T}}}\right)-\frac{2r{V_{s}}}{{V_{s}}+{V_{T}}} \tag{21}\]
Denote the coefficients of (21) by \(c_{1}=-\frac{2rV_{s}}{V_{s}+V_{T}}\) and \(c_{2}=\frac{V_{T}+V_{s}\lambda}{V_{s}+V_{T}}\). Hence,
\[\widetilde{R}_{i+1}=c_{2}\widetilde{R}_{i}+c_{1} \tag{22}\]
Denote by \(R_{N}\) the radius of the circle bounding the evader region once it has shrunk to be within a circle of radius at most \(2r\). Calculating \(R_{N}\) is possible only after the number of sweeps around the region, \(N_{n}\), is known. We therefore use the estimate \(\widehat{R}_{N}=2r\) of \(R_{N}\) to enable the calculation of \(N_{n}\). The number of sweeps required to shrink the evader region to within a circle of radius \(\widehat{R}_{N}=2r\) is
\[N_{n}=\left\lceil\frac{\ln\left(\frac{r(3-\lambda)}{{R_{0}}(1-\lambda)+r(1+ \lambda)}\right)}{\ln\left(\frac{{V_{T}}+{V_{s}}\lambda}{{V_{s}}+{V_{T}}} \right)}\right\rceil \tag{23}\]
The ceiling operator is used since the number of sweeps must be an integer; sweepers complete sweep number \(N_{n}\) even if the evader region's radius drops below \(2r\) during the final spiral sweep. Hence, to facilitate the calculation of \(N_{n}\), we assume the last sweep occurs when the evader region is within a circle of radius \(\widehat{R}_{N}=2r\). Denote by \(T_{in_{i}}\) the duration of each inward motion; it is a function of the sweep cycle number and equals
\[T_{in_{i}}=\frac{\delta_{i_{eff}}(\Delta V)}{{V_{s}}}=\frac{2r-\widetilde{R}_{ i}\left(\lambda-1\right)}{{V_{s}}+{V_{T}}} \tag{24}\]
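The sketch below (Python, hypothetical values) iterates the radius recursion (22), collects the per-cycle inward times (24), and checks the resulting number of cycles against the closed form (23):

```python
import math

def sweep_cycles(R0, r, n, Vs, VT, gamma):
    """Iterate (22) until the evader region fits in a circle of radius 2r;
    return the list of per-cycle inward times (24).  Requires Vs above the
    critical speed of Theorem 1, otherwise the loop never terminates."""
    lam = math.exp((2 * math.pi / n - gamma) * VT / math.sqrt(Vs**2 - VT**2))
    c1 = -2 * r * Vs / (Vs + VT)
    c2 = (VT + Vs * lam) / (Vs + VT)
    Rt, times = R0 - r, []
    while Rt > r:                       # i.e. while R_i > 2r
        times.append((2 * r - Rt * (lam - 1)) / (Vs + VT))   # (24)
        Rt = c2 * Rt + c1                                    # (22)
    # The closed-form sweep count (23) should match the iteration count.
    N = math.ceil(math.log(r * (3 - lam) / (R0 * (1 - lam) + r * (1 + lam)))
                  / math.log(c2))
    assert N == len(times)
    return times

print(sweep_cycles(R0=100.0, r=10.0, n=4, Vs=10.0, VT=1.0, gamma=0.175))
```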
Denote the sum of all inward advancement times, up to the point at which the evader region is within a circle of radius at most \(2r\), by \(\widetilde{T}_{in}(n)\), i.e. \(\widetilde{T}_{in}(n)=\sum\limits_{i=0}^{N_{n}-2}T_{in_{i}}\). During inward motions the sweepers' sensors are not fully inside the evader region, and parts of them cover locations that are already free of evaders; we therefore assume that sweepers do not detect evaders until they complete their inward motion and begin sweeping again. The total sweep time until the evader region is within a circle of radius at most \(2r\) is the sum of the total spiral sweep times and the total inward advancement times. Therefore,
\[\widetilde{T}(n)=\widetilde{T}_{in}(n)+\widetilde{T}_{spiral}(n) \tag{25}\]
The sum of all inward motion times until the evader region is reduced to be within a circle having a radius less than or equal to \(2r\) is,
\[\begin{array}{l}\widetilde{T}_{in}(n)=\sum\limits_{i=0}^{N_{n}-2}T_{in_{i}}=\frac{2r}{V_{s}+V_{T}}+\frac{R_{0}-r}{V_{s}}+\frac{2r\left(V_{T}+V_{s}\lambda\right)}{V_{s}\left(V_{s}+V_{T}\right)\left(1-\lambda\right)}\\ -\left(\frac{V_{T}+V_{s}\lambda}{V_{s}+V_{T}}\right)^{N_{n}-1}\frac{R_{0}\left(1-\lambda\right)+r\left(1+\lambda\right)}{V_{s}\left(1-\lambda\right)}\end{array} \tag{26}\]
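The closed form (26) can be cross-checked against direct summation of (24); a self-contained sketch (Python, hypothetical values):

```python
import math

def check_total_inward_time(R0, r, n, Vs, VT, gamma):
    """Compare direct summation of (24) over sweeps 0..N_n-2 with the
    closed form (26)."""
    lam = math.exp((2 * math.pi / n - gamma) * VT / math.sqrt(Vs**2 - VT**2))
    c1 = -2 * r * Vs / (Vs + VT)
    c2 = (VT + Vs * lam) / (Vs + VT)
    N = math.ceil(math.log(r * (3 - lam) / (R0 * (1 - lam) + r * (1 + lam)))
                  / math.log(c2))                               # (23)
    Rt, direct = R0 - r, 0.0
    for _ in range(N - 1):                                      # i = 0..N_n-2
        direct += (2 * r - Rt * (lam - 1)) / (Vs + VT)          # (24)
        Rt = c2 * Rt + c1                                       # (22)
    closed = (2 * r / (Vs + VT) + (R0 - r) / Vs
              + 2 * r * (VT + Vs * lam) / (Vs * (Vs + VT) * (1 - lam))
              - c2 ** (N - 1) * (R0 * (1 - lam) + r * (1 + lam))
                / (Vs * (1 - lam)))                             # (26)
    assert math.isclose(direct, closed, rel_tol=1e-9)
    return closed

print(check_total_inward_time(R0=100.0, r=10.0, n=4, Vs=10.0, VT=1.0,
                              gamma=0.175))
```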
During the final inward motion, sweepers advance toward the region's center and place the inner tips of the central parts of their sensors at the center of the evader region. Afterwards, the sweepers perform a final circular sweep and complete the detection of all evaders in the region. The time required to perform the final inward motion is denoted by \(T_{in\_last}(n)\) and is given by \(T_{in\_last}(n)=\frac{R_{N}}{V_{s}}\), where
\[R_{N}=-\frac{2r}{1-\lambda}+{c_{2}}^{N_{n}}\left(\frac{R_{0}\left(1-\lambda \right)+r\left(1+\lambda\right)}{1-\lambda}\right) \tag{27}\]
Substituting the exact value of \(R_{N}\) from (27) into \(T_{in\_last}(n)\) yields
\[T_{in\_last}(n)=-\frac{2r}{V_{s}\left(1-\lambda\right)}+{c_{2}}^{N_{n}}\left(\frac{R_{0}\left(1-\lambda\right)+r\left(1+\lambda\right)}{V_{s}\left(1-\lambda\right)}\right) \tag{28}\]
The time required to sweep around radius \(\widetilde{R}_{i}\) is obtained by multiplying \(\widetilde{R}_{i}\) by \(\frac{\lambda-1}{V_{T}}\). Hence, multiplying (21) by \(\frac{\lambda-1}{V_{T}}\) yields the sweep-time difference equation
\[T_{i+1}=c_{2}T_{i}+c_{3} \tag{29}\]
where \(c_{3}=\frac{-2rV_{s}\left(\lambda-1\right)}{\left(V_{s}+V_{T}\right)V_{T}}\). The total spiral sweep time, up to the point at which the evader region is within a circle of radius at most \(2r\), is
\[\widetilde{T}_{spiral}(n)=\frac{T_{0}-c_{2}T_{N_{n}-1}+\left(N_{n}-1\right)c_{ 3}}{1-c_{2}} \tag{30}\]
Substituting the coefficients into (30) yields
\[\begin{array}{l}\widetilde{T}_{spiral}(n)=\frac{\left(r-R_{0}\right)\left(V_{s}+V_{T}\right)}{V_{T}V_{s}}-\frac{2r\left(V_{T}+V_{s}\lambda\right)}{V_{T}V_{s}\left(1-\lambda\right)}+\frac{2r\left(N_{n}-1\right)}{V_{T}}\\ -\left(\frac{V_{T}+V_{s}\lambda}{V_{s}+V_{T}}\right)^{N_{n}}\left(\frac{\left(V_{s}+V_{T}\right)\left(R_{0}\left(\lambda-1\right)-r\left(\lambda+1\right)\right)}{V_{T}V_{s}\left(1-\lambda\right)}\right)\end{array} \tag{31}\]
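Analogously, (31) can be cross-checked by summing the recursion (29) directly (Python sketch, hypothetical values):

```python
import math

def check_total_spiral_time(R0, r, n, Vs, VT, gamma):
    """Compare direct summation of the sweep-time recursion (29) over sweeps
    0..N_n-1 with the closed form (31)."""
    lam = math.exp((2 * math.pi / n - gamma) * VT / math.sqrt(Vs**2 - VT**2))
    c2 = (VT + Vs * lam) / (Vs + VT)
    c3 = -2 * r * Vs * (lam - 1) / ((Vs + VT) * VT)
    N = math.ceil(math.log(r * (3 - lam) / (R0 * (1 - lam) + r * (1 + lam)))
                  / math.log(c2))                               # (23)
    T, direct = (R0 - r) * (lam - 1) / VT, 0.0                  # T_0 from (16)
    for _ in range(N):                                          # i = 0..N_n-1
        direct += T
        T = c2 * T + c3                                         # (29)
    closed = ((r - R0) * (Vs + VT) / (VT * Vs)
              - 2 * r * (VT + Vs * lam) / (VT * Vs * (1 - lam))
              + 2 * r * (N - 1) / VT
              - c2 ** N * (Vs + VT) * (R0 * (lam - 1) - r * (lam + 1))
                / (VT * Vs * (1 - lam)))                        # (31)
    assert math.isclose(direct, closed, rel_tol=1e-9)
    return closed

print(check_total_spiral_time(R0=100.0, r=10.0, n=4, Vs=10.0, VT=1.0,
                              gamma=0.175))
```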
### _The End-game_
Following the sweepers' completion of sweep number \(N_{n}-1\), they progress toward the center of the evader region until the inner tips of the central parts of their sensors are positioned at the center. After this inward motion, the sweepers must complete a set of maneuvers, referred to as the end-game, in order to ensure detection of all smart evaders. A final circular sweep around radius \(r\) is required to complete the detection of all evaders and finish the search mission. The sweepers can successfully detect all evaders during this last circular sweep only if their speed is sufficiently high: in this last sweep they do not track the expanding wavefront of the remaining evader region with a spiral trajectory, but instead perform a considerably less efficient circular sweep, and they must still catch all evaders. This last motion cannot be an outward spiral because the tips of the central parts of the sweepers' sensors must always be positioned at the center of the evader region, ensuring that no evaders remain at or near the center. Since the critical speed for a spiral sweep is lower than that of a circular trajectory, the sweepers can implement the last circular sweep immediately after spiral sweep number \(N_{n}-1\) only if their speeds are sufficiently high and satisfy the inequality
\[2r\geq V_{T}T_{last}+V_{T}T_{{}_{in}last}+R_{N} \tag{32}\]
Satisfying (32) guarantees detection of all evaders. Prior to the last sweep, the evader region is within a circle of radius \(R_{N}\) with \(0<R_{N}\leq 2r\). \(R_{N}\) can also be expressed as \(R_{N}=r\left(2-\varepsilon\right)\); hence \(\varepsilon=\frac{2r-R_{N}}{r}\), with \(0\leq\varepsilon<2\). The final circular sweep takes place once the sweepers have progressed to the region's center and placed the lower tips of the central parts of their sensors at the center of the evader region. The last circular sweep spans an angle of \(\frac{2\pi}{n}-2\alpha\) around a region contained within a circle of radius \(r\) about the center of the evader region. The time required for sweepers to perform this motion is
\[T_{last}(n)=\left(\frac{2\pi}{n}-2\alpha\right)\frac{r}{V_{s}} \tag{33}\]
Denote the smallest \(\varepsilon\) that satisfies (32) by \(\varepsilon_{c}\). To perform the last circular sweep immediately after spiral sweep number \(N_{n}-1\), inequality (32) implies that \(\varepsilon\geq\varepsilon_{c}=\frac{2V_{T}\left(\pi-n\left(\alpha-1\right)\right)}{n\left(V_{T}+V_{s}\right)}\). Consequently, implementing a circular sweep immediately after spiral sweep number \(N_{n}-1\) is possible only if
\[V_{s}\geq\frac{2V_{T}\left(\pi-n\left(\alpha-1\right)\right)-n\varepsilon_{c}V _{T}}{n\varepsilon_{c}} \tag{34}\]
If \(R_{N}\geq r\left(2-\varepsilon_{c}\right)\), or equivalently,
\[R_{N}\geq\frac{2rn\left(V_{T}+V_{s}\right)-2V_{T}r\left(\pi-n\left(\alpha-1 \right)\right)}{n\left(V_{T}+V_{s}\right)} \tag{35}\]
then the sweepers' speed does not suffice to rule out a feasible escape trajectory for potential evaders. Hence, if (35) holds, or (34) does not, the sweepers must implement one additional spiral sweep, starting with the lower tips of the central parts of their sensors at the center of the evader region; this spiral sweep starts when the center of each sweeper is at a distance of \(r\) from the center of the region. Denote by \(T_{l}(n)\) the time required for this sweep, given in (36) below. Denote by \(\eta\) a characteristic function that takes only the values \(1\) or \(0\): if the additional spiral sweep must be implemented, then \(\eta=1\) and \(T_{l}(n)\) is added to the total sweep time; otherwise \(\eta=0\). Hence,
\[T_{l}(n)=\frac{r\left(\lambda-1\right)}{V_{T}} \tag{36}\]
\[T_{spiral}(n)=\widetilde{T}_{spiral}(n)+T_{last}(n)+\eta T_{l}(n) \tag{37}\]
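The end-game decision logic can be summarized in a few lines; in the sketch below (Python), \(R_{N}\) and \(\lambda\) are assumed to come from the preceding computations, \(\alpha\) is in radians, and the function name is our own:

```python
import math

def end_game(R_N, r, n, alpha, Vs, VT, lam):
    """Decide whether the final circular sweep (33) suffices (eta = 0) or an
    additional spiral sweep (36) is needed first (eta = 1), per (32)-(37)."""
    eps = (2 * r - R_N) / r
    eps_c = 2 * VT * (math.pi - n * (alpha - 1)) / (n * (VT + Vs))
    eta = 0 if eps >= eps_c else 1
    T_last = (2 * math.pi / n - 2 * alpha) * r / Vs             # (33)
    T_l = r * (lam - 1) / VT                                    # (36)
    # Return eta and the end-game addition to T_spiral, cf. (37).
    return eta, T_last + eta * T_l
```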
Denote by \(T_{in_{f}}(n)\) the sweepers' inward advancement time corresponding to the spread of evaders originating from the center of the evader region at the beginning of the last spiral sweep, who had a time of \(T_{l}(n)\) to spread at speed \(V_{T}\). Therefore, \(T_{in_{f}}(n)\) is given by \(T_{in_{f}}(n)=\frac{T_{l}(n)V_{T}}{V_{s}}\). The total inward advancement time \(T_{in}(n)\) therefore equals
\[T_{in}(n)=\widetilde{T}_{in}(n)+T_{in\_last}(n)+\eta T_{in_{f}}(n) \tag{38}\]
or, alternatively,
\[T_{in}(n)=\widetilde{T}_{in}(n)+\frac{R_{N}}{V_{s}}+\frac{\eta r\left(\lambda -1\right)}{V_{s}} \tag{39}\]
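Finally, the pieces assemble into the total detection time of Theorem 2. The sketch below steps through the sweep cycles numerically, combining (3), (10), (16), (22)-(24), (33), (36) and (39); it is a simulation of the protocol's timeline under the stated worst-case assumptions, not the authors' reference implementation, and all numerical values are hypothetical:

```python
import math

def total_search_time(R0, r, n, alpha, Vs, VT):
    """Total time T(n) = T_in(n) + T_spiral(n) of (14), assembled by stepping
    through the sweep cycles; Vs must exceed the critical speed of Theorem 1."""
    ca = math.cos(alpha)
    gamma = math.acos((2 * R0**2 + 2 * R0 * r * (ca - 2)
                       + r**2 * ca * (ca - 4))
                      / (2 * R0 * (R0 + r * ca - 2 * r)))        # (3)
    lam = math.exp((2 * math.pi / n - gamma) * VT
                   / math.sqrt(Vs**2 - VT**2))                   # (10)
    c1, c2 = -2 * r * Vs / (Vs + VT), (VT + Vs * lam) / (Vs + VT)
    N = math.ceil(math.log(r * (3 - lam) / (R0 * (1 - lam) + r * (1 + lam)))
                  / math.log(c2))                                # (23)
    T_spiral, T_in, Rt = 0.0, 0.0, R0 - r
    for i in range(N):
        T_spiral += Rt * (lam - 1) / VT                          # (16)
        if i < N - 1:
            T_in += (2 * r - Rt * (lam - 1)) / (Vs + VT)         # (24)
        Rt = c2 * Rt + c1                                        # (22)
    R_N = Rt + r                        # final bounding radius estimate
    T_in += R_N / Vs                    # last inward motion, cf. (28)
    eps_c = 2 * VT * (math.pi - n * (alpha - 1)) / (n * (VT + Vs))
    eta = 0 if (2 * r - R_N) / r >= eps_c else 1                 # (34)-(35)
    T_spiral += (2 * math.pi / n - 2 * alpha) * r / Vs           # (33)
    T_spiral += eta * r * (lam - 1) / VT                         # (36)-(37)
    T_in += eta * r * (lam - 1) / Vs                             # (39)
    return T_in + T_spiral                                       # (14)

print(total_search_time(R0=100.0, r=10.0, n=4, alpha=math.radians(30),
                        Vs=10.0, VT=1.0))
```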
### _Experimental Results_
Fig. 3 presents a numerical analysis of the total sweep times required to detect all evaders in the region for different even numbers of sweepers (\(2\) to \(16\)). The total sweep time is the sum of the spiral sweep and inward advancement times.
## V Conclusions and Future Research Directions
This research studies the problem of guaranteed detection of smart mobile evaders by a team of sweeping agents equipped with fan-shaped sensors that act as visual detectors. Evaders are initially located inside a known circular environment without physical barriers preventing them from attempting to escape. An algorithm that guarantees detection of all evaders, using any even number of sweepers that perform pincer sweeps between searching pairs, is developed and analytically proven. Numerical and illustrative simulations in MATLAB and NetLogo demonstrate the performance of the proposed algorithm.
While this work focuses on the rather simple circular environment, the concept of spiral pincer movements based on pairs of sweepers can be extended and generalized to more complex environments with different geometric layouts. A future research direction is therefore to generalize the results to environments with different geometries.
An additional future extension is to analyze the critical speeds and sweep times obtained by teams of sweepers with fan-shaped sensors that employ same-direction sweeps. We expect such methods to yield degraded performance compared to the pincer-based protocols developed in this work; analyzing them will enable a precise quantification of the performance improvement achieved by the developed pincer-based search protocols over their same-direction alternatives.
|